TIME-AWARE MULTIWAY ADAPTIVE FUSION NETWORK FOR TEMPORAL KNOWLEDGE GRAPH QUESTION ANSWERING
Yonghao Liu [email protected]
Centre for Natural Language Processing
Meituan Inc
BeijingChina
♦ ♣
Di Liang [email protected]
Centre for Natural Language Processing
Meituan Inc
BeijingChina
Fang Fang ♦♠♦ ♣
Sirui Wang [email protected]
Department of Automation
Tsinghua University
BeijingChina
♠
Wei Wu
Department of Automation
Tsinghua University
BeijingChina
Centre for Natural Language Processing
Meituan Inc
BeijingChina
Rui Jiang [email protected]
Department of Automation
Tsinghua University
BeijingChina
Index Terms - Temporal knowledge graph question answering, knowledge graph question answering, natural language processing
Knowledge graphs (KGs) have received increasing attention due to their wide applications in natural language processing. However, their use in temporal question answering (QA) has not been well explored. Most existing methods are built on pre-trained language models, which may not be capable of learning the temporal-specific representations of entities required by the temporal KGQA task. To alleviate this problem, we propose a novel Time-aware Multiway Adaptive (TMA) fusion network, inspired by the step-by-step reasoning behavior of humans. For each given question, TMA first extracts the relevant concepts from the KG and then feeds them into a multiway adaptive module to produce a temporal-specific representation of the question. This representation can be incorporated with the pre-trained KG embedding to generate the final prediction. Empirical results verify that the proposed model achieves better performance than state-of-the-art models on the benchmark dataset. Notably, the Hits@1 and Hits@10 results of TMA on the complex questions of the CronQuestions dataset improve by an absolute 24% and 10% over the best-performing baseline. Furthermore, we show that TMA's adaptive fusion mechanism provides interpretability by analyzing the proportion of information in question representations.
INTRODUCTION
Knowledge graph question answering (KGQA) is a core technique in many natural language processing applications, such as search and recommendation [1,2,3]. Among the several branches of KGQA, temporal KGQA is a recently emerging direction that has shown great potential in real-world practice. There are critical differences between traditional KGQA and temporal KGQA, summarized as follows: (I) Temporal KGQA involves more complex semantic information. Unlike traditional KGs, which are constructed from tuples of the form (subject, predicate, object)1, temporal KGs attach an additional timestamp, so each tuple has the form (subject, predicate, object, time duration). One example is (Barack Obama, position held, President of USA, 2008, 2016), representing that Barack Obama held the position of President of the USA from 2008 to 2016. (II) Temporal KGQA is expected to generate answers of more diverse types. Unlike regular KGQA, whose answers are always entities, the answer of temporal KGQA can be either an entity (e.g., Barack Obama) or a timestamp (e.g., 2008, 2016). These differences make temporal KGQA much more challenging to solve, as it often requires additional temporal reasoning compared to traditional KGQA tasks.

♦ Equal contribution. 1 Some researchers refer to predicates as relations; the two are equivalent.
To solve the above problem, the limited literature either decomposes the given question into non-temporal and temporal sub-questions to answer [4], or directly combines a pre-trained language model with the temporal KG to generate answers [5]. These methods achieve satisfactory performance on questions with simple-entity or simple-time templates (refer to Table 1 for examples), but fail on questions with complex templates (e.g., those constructed with Before/After). We argue that the current state-of-the-art methods have not solved well, and may not even be aware of, the following challenges, which we address in this paper:
Q1: How can we capture the implicit or explicit temporal information in the question to specialize the question representation? Most existing methods directly feed the question into a pre-trained language model to obtain the question embedding. These approaches over-rely on information about the entity involved in the given question and ignore the temporal constraint. Take the question depicted in Fig. 1 as an example: the Google search engine ignores the time constraint "before Dani Alves" and treats "Dani Alves" alone as the query, which leads to wrong answers.
Q2: How can we effectively incorporate the relevant knowledge of temporal KGs into the question representation? Temporal KGs contain rich temporal information that can promote understanding of the given question. For instance, for the question in Fig. 1, one can extract the quadruple (Dani Alves, position held, captain of Brazil, 2019, 2022) from the temporal KG using the entity Dani Alves as the query. Unfortunately, much prior research uses KGs solely for querying the answer rather than for enriching the question representation.
To this end, we propose a Time-aware Multiway Adaptive (TMA) fusion network for temporal KGQA. Specifically, for a given question, we first select the relevant knowledge (i.e., Subject-Predicate-Object triples, abbreviated SPO) of the entity in the question from the temporal KG, which addresses (Q1). We then adopt multiway attention to perform matching between the question and the SPOs. Next, we design an adaptive fusion mechanism to incorporate the SPO information into the question representation, which allows the question embedding to encode the relevant knowledge from the temporal KG, corresponding to (Q2). Finally, we generate the final predictions by feeding the pre-trained temporal KG embeddings together with the question embeddings into a Multi-Layer Perceptron (MLP) module.
The main contributions of this work can be summarized as follows. First, we systematically discuss the feasibility of explicitly integrating SPO information into the question for solving temporal KGQA and propose a novel framework called the Time-aware Multiway Adaptive (TMA) fusion network. Second, we develop a new multiway matching module to capture the temporal information in the question, whose outputs are then fed into a novel adaptive fusion module that incorporates the relevant knowledge from the KG into the question representation. Finally, extensive experiments on temporal datasets demonstrate the superiority of our model over other competitive methods. Notably, on the CronQuestions dataset, the largest temporal KGQA dataset, the Hits@1 and Hits@10 results of TMA on complex questions improve by 24% and 10% over the best-performing baselines.
RELATED WORK
In this section, we briefly review related work on temporal knowledge graph embedding and temporal QA methods.
Temporal Knowledge Graph Embedding
Knowledge graph embedding (KGE) [6] aims to embed entities and relations into a low-dimensional continuous vector space, thus facilitating downstream tasks such as knowledge graph completion [7], relation extraction and classification [8,9,10] and semantic matching [11,12,13,14]. However, these methods are designed for non-temporal KGs and are unsuitable for temporal KGs. Recently, several methods have been proposed to shift the learning capabilities of such models to temporal KGs. In [15], the authors combine timestamp embeddings with the score function, which is the first attempt to apply TransE [6] to temporal KGs. Later, HyTE [16] leverages time information in the entity-relation space by assigning a corresponding hyperplane to each timestamp. Afterwards, TComplEx [17] uses canonical tensor decomposition to further extend ComplEx [18] to the temporal setting.
Temporal QA Methods
Recently, several approaches [4,19] have been proposed to solve this task. TEQUILA [4] decomposes and rewrites each question into temporal and non-temporal sub-questions and then adopts constrained reasoning over time intervals to obtain the desired answers. This work also presents a dataset (TempQuestions) dedicated to temporal KGQA, whose KG is derived from Freebase. EXAQT [19], the first end-to-end temporal QA system, extracts question-specific subgraphs from the KG and employs relational graph convolutional networks to obtain updated entity and relation embeddings. Remarkably, to further promote the field of temporal KGQA, a temporal QA dataset called CronQuestions [5] has been released, which is more comprehensive than previous benchmarks. Its authors also introduce a model, CronKGQA, which combines temporal KG embeddings with pre-trained language models and achieves relatively satisfactory performance compared to the other baselines referred to in that work. However, the aforementioned methods either rely on hand-crafted rules to tackle temporal questions or only handle simple question reasoning, and they perform uncompetitively on complex questions with temporal constraints. In contrast, our model does not need hand-crafted rules while still achieving desirable results in reasoning over complex multi-hop questions.
PROBLEM DEFINITION AND BACKGROUND
In this section, we introduce the definition of the task and relevant background. Temporal KGQA. Given a natural language question and a temporal KG $\mathcal{G} = (\mathcal{E}, \mathcal{P}, \mathcal{T}, \Upsilon)$, the task of temporal KGQA is to find a suitable entity $e \in \mathcal{E}$ or timestamp $t \in \mathcal{T}$ that answers the question accurately. Here, $\mathcal{E}$ denotes the set of entities, $\mathcal{P}$ the set of predicates, and $\mathcal{T}$ the set of timestamps. $\Upsilon$ contains the facts of the KG, i.e., tuples of the form $(s, p, o, t)$, where $s, o \in \mathcal{E}$ are the subject and object, respectively, $p \in \mathcal{P}$ is the predicate, and $t \in \mathcal{T}$ is the timestamp. For example, the question $\vartheta$ "Who was the President of Italy in 2008?" can be transformed into the form $(?, \vartheta, \text{Italy}, 2008)$. Temporal KG embedding aims to learn low-dimensional embeddings $e_s, e_p, e_o, e_t \in \mathbb{R}^d$ for each $s, o \in \mathcal{E}$, $p \in \mathcal{P}$, $t \in \mathcal{T}$. Generally, we can define a score function $\phi(\cdot)$ based on semantic similarity to learn these embeddings: a valid fact $\upsilon = (s, p, o, t) \in \Upsilon$ should score much higher than an invalid fact $\upsilon' = (s', p', o', t') \notin \Upsilon$, that is, $\phi(e_s, e_p, e_o, e_t) > \phi(e_{s'}, e_{p'}, e_{o'}, e_{t'})$.
TComplEx [17] is a semantic matching algorithm specific to temporal KGs, extending ComplEx [18]. Concretely, it defines entity, relation and timestamp embeddings in complex space, and its score function is

$$\phi(e_s, e_p, \bar{e}_o, e_t) = \mathrm{Re}(\langle e_s, e_p \odot e_t, \bar{e}_o \rangle) \qquad (1)$$

where $\mathrm{Re}(\cdot)$ takes the real part, $\langle \cdot \rangle$ denotes the multi-linear product operation, $\odot$ is the element-wise product, $\bar{e}_o$ is the complex conjugate of $e_o$, and $e_s, e_p, e_o, e_t \in \mathbb{C}^d$ are complex-valued embeddings. TComplEx has become a prevailing method for inferring missing facts due to this learning paradigm; we therefore employ it to generate KG embeddings in this work.
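To make the score function concrete, the following is a minimal PyTorch sketch of Eq. (1); the function name, tensor shapes and toy data are our own illustrative choices rather than the paper's implementation.

```python
import torch

def tcomplex_score(e_s, e_p, e_o, e_t):
    # Eq. (1): Re(<e_s, e_p * e_t, conj(e_o)>) with element-wise complex products.
    # Inputs: complex tensors of shape (batch, d); output: real tensor (batch,).
    return torch.real(torch.sum(e_s * (e_p * e_t) * torch.conj(e_o), dim=-1))

# Toy usage with batch size 2 and embedding dimension d = 4.
d = 4
e_s, e_p, e_o, e_t = (torch.randn(2, d, dtype=torch.cfloat) for _ in range(4))
scores = tcomplex_score(e_s, e_p, e_o, e_t)  # shape (2,)
```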
METHODS
In this section, we present the details of the proposed TMA. Its framework is illustrated in Fig. 2. Below, we elaborate on the critical components that make up the framework.
SPO Selector
We design the SPO selector inspired by Sentence-BERT [20]. As shown in Fig. 2 (b), it is a standard two-tower architecture in which the DNN is BERT. The tokenized question is fed to BERT to obtain token embeddings $Q$. The SPO information from the temporal KG goes through the same operation to obtain SPO embeddings $S$. Concretely,

$$Q = \mathrm{BERT}([\mathrm{CLS}] + \text{question} + [\mathrm{SEP}]), \quad S = \mathrm{BERT}([\mathrm{CLS}] + \langle \mathrm{SPO} \rangle + [\mathrm{SEP}]) \qquad (2)$$
where $Q \in \mathbb{R}^{(n+1)\times d}$, in which $n$ is the number of tokens and $d$ is the hidden dimension of the last layer of BERT (i.e., $d = 768$), and $S \in \mathbb{R}^{c\times d}$, where $c$ is the number of tokens of the SPO. We take the [CLS] embeddings (i.e., $q_{[\mathrm{CLS}]}$ and $q^s_{[\mathrm{CLS}]}$) as the final question embedding and SPO embedding. Finally, we apply cosine similarity to the question and SPO representations to compute the matching score:
$$\mathrm{score}(q_{[\mathrm{CLS}]}, q^s_{[\mathrm{CLS}]}) = \frac{q_{[\mathrm{CLS}]}^{\top} q^s_{[\mathrm{CLS}]}}{\lVert q_{[\mathrm{CLS}]} \rVert \, \lVert q^s_{[\mathrm{CLS}]} \rVert} \qquad (3)$$
where score is a scalar. The top ten scored SPOs are selected as candidate information.
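As an illustration of the two-tower selection step, here is a hedged sketch using the HuggingFace transformers library; the model checkpoint, the SPO serialization format and the example SPO strings are our own assumptions, not prescribed by the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def cls_embedding(texts):
    # Returns the [CLS] embedding for each input text, shape (len(texts), 768).
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = encoder(**batch)
    return out.last_hidden_state[:, 0]  # [CLS] token

question = "Who was the captain of Brazil before Dani Alves?"
spos = ["Dani Alves | position held | captain of Brazil | 2019 | 2022",
        "Dani Alves | member of sports team | Brazil | 2006 | 2022"]  # illustrative SPO strings

q = cls_embedding([question])                               # (1, 768)
s = cls_embedding(spos)                                     # (num_spos, 768)
scores = torch.nn.functional.cosine_similarity(q, s)        # Eq. (3), shape (num_spos,)
top = scores.topk(k=min(10, len(spos))).indices             # keep the ten top-scoring SPOs
```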
Multiway & Adaptive Fusion
Previous studies [21,22] demonstrate the effectiveness of word-level attention in sentence pair modeling. In the multiway attention module, as shown in Fig. 2 (c), different attention mechanisms are used to compare the question and the SPOs from different perspectives. For a given question, we embed it with Eq. 2, excluding the [CLS] token, i.e., $Q = [q_1, q_2, \dots, q_n]$. For the ten selected SPOs, we take the [CLS] token of each SPO and concatenate them together, i.e., $P = [S_1, S_2, \dots, S_m]$ ($m$ is the number of selected SPOs). Then, the candidate SPOs are matched against the word at each position $k$ of the question, formulated as follows:
$$\bar{p}^{*}_{k} = \Phi^{*}(P, q_k; W^{*}) \qquad (4)$$
where $\bar{p}^{*}_{k}$ is the weighted-sum representation of the SPOs specified by $q_k$, computed by the attention function $\Phi^{*}$ parameterized by $W^{*}$, in which $* \in \{\mathrm{cat}, \mathrm{dot}, \mathrm{min}\}$ denotes concat attention, dot attention and minus attention, respectively. More precisely, the three attention mechanisms are defined as follows. Concat Attention:
$$h^k_j = v_{\mathrm{cat}}^{\top} \tanh(W_{\mathrm{cat}}[q_k, S_j]), \quad \alpha^k_i = \exp(h^k_i) \Big/ \sum_{j=1}^{m} \exp(h^k_j), \quad \bar{p}^{\mathrm{cat}}_k = \sum_{i=1}^{m} \alpha^k_i S_i \qquad (5)$$
Dot Attention:
$$h^k_j = v_{\mathrm{dot}}^{\top} \tanh(W_{\mathrm{dot}}(q_k \odot S_j)), \quad \alpha^k_i = \exp(h^k_i) \Big/ \sum_{j=1}^{m} \exp(h^k_j), \quad \bar{p}^{\mathrm{dot}}_k = \sum_{i=1}^{m} \alpha^k_i S_i \qquad (6)$$
Minus Attention:
$$h^k_j = v_{\mathrm{min}}^{\top} \tanh(W_{\mathrm{min}}(q_k - S_j)), \quad \alpha^k_i = \exp(h^k_i) \Big/ \sum_{j=1}^{m} \exp(h^k_j), \quad \bar{p}^{\mathrm{min}}_k = \sum_{i=1}^{m} \alpha^k_i S_i \qquad (7)$$
Next, to obtain the attention-based question representation $\bar{Q}^{*}$, we aggregate the matching information $\bar{p}^{*}_k$ together with the word representation $q_k$ via concatenation, i.e., $\bar{q}^{*}_k = [q_k, \bar{p}^{*}_k]$. Finally, a linear transformation is applied to the concatenated representations that fuse the SPO information, i.e.,
$$Q_{\mathrm{final}} = W[\bar{Q}^{\mathrm{cat}}, \bar{Q}^{\mathrm{dot}}, \bar{Q}^{\mathrm{min}}] = [\hat{q}_1, \dots, \hat{q}_n].$$
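The multiway matching of Eqs. (4)-(7) and the final linear fusion can be sketched as follows in PyTorch; module names, dimensions and the omission of masking and dropout are our own simplifications of the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiwayAttention(nn.Module):
    """Concat/dot/minus attention over SPO vectors, Eqs. (4)-(7)."""
    def __init__(self, d):
        super().__init__()
        self.w_cat = nn.Linear(2 * d, d)
        self.w_dot = nn.Linear(d, d)
        self.w_min = nn.Linear(d, d)
        self.v = nn.ParameterDict({k: nn.Parameter(torch.randn(d))
                                   for k in ("cat", "dot", "min")})
        self.proj = nn.Linear(3 * 2 * d, d)  # fuses the three [q_k, p*_k] concatenations

    def attend(self, scores, S):
        # scores: (n, m) unnormalized; S: (m, d). Returns weighted sums, (n, d).
        return F.softmax(scores, dim=-1) @ S

    def forward(self, Q, S):
        # Q: (n, d) question token embeddings; S: (m, d) selected SPO embeddings.
        n, m = Q.size(0), S.size(0)
        Qe = Q.unsqueeze(1).expand(n, m, -1)  # (n, m, d)
        Se = S.unsqueeze(0).expand(n, m, -1)  # (n, m, d)
        h_cat = torch.tanh(self.w_cat(torch.cat([Qe, Se], dim=-1))) @ self.v["cat"]  # (n, m)
        h_dot = torch.tanh(self.w_dot(Qe * Se)) @ self.v["dot"]
        h_min = torch.tanh(self.w_min(Qe - Se)) @ self.v["min"]
        p = [self.attend(h, S) for h in (h_cat, h_dot, h_min)]                # three (n, d)
        fused = torch.cat([torch.cat([Q, pk], dim=-1) for pk in p], dim=-1)   # (n, 6d)
        return self.proj(fused)                                               # Q_final: (n, d)

# Toy usage: 12 question tokens, 10 candidate SPOs, hidden size 768.
Q_final = MultiwayAttention(768)(torch.randn(12, 768), torch.randn(10, 768))
```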
Similarly, each SPO can be matched against the question by performing the same multiway operation and linear transformation in the other direction, yielding the updated SPO representation $\hat{S}_i$. Adaptive Fusion: To make the question representation more time-aware, as shown in Fig. 2 (d), we use a gate mechanism to adaptively fuse the temporal information from the SPOs:
$$\bar{S} = \tanh\Big(W_{\hat{S}}\,\frac{1}{m}\sum_{i=1}^{m}\hat{S}_i + b_{\hat{S}}\Big), \quad g_i = \sigma\big(W_g(\hat{q}_i \cdot \bar{S})\big), \quad q_{\mathrm{new}} = g_i\,\hat{q}_i + (1 - g_i)\,\bar{S} \qquad (8)$$
where $\sigma$ denotes the nonlinear activation function and $q_{\mathrm{new}}$ is the final embedding of each word in the question.
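A minimal sketch of the adaptive fusion gate of Eq. (8) is given below; since the gating input is partially garbled in this copy, we interpret $\hat{q}_i \cdot \bar{S}$ as an element-wise product, which is an assumption.

```python
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Gated fusion of question words with pooled SPO information, Eq. (8)."""
    def __init__(self, d):
        super().__init__()
        self.w_s = nn.Linear(d, d)   # W_S and b_S for the pooled SPO summary
        self.w_g = nn.Linear(d, 1)   # W_g producing per-word gates

    def forward(self, Q_hat, S_hat):
        # Q_hat: (n, d) updated question words; S_hat: (m, d) updated SPOs.
        s_bar = torch.tanh(self.w_s(S_hat.mean(dim=0)))   # (d,) pooled SPO summary
        g = torch.sigmoid(self.w_g(Q_hat * s_bar))        # (n, 1) per-word gates
        return g * Q_hat + (1 - g) * s_bar                # q_new: (n, d)
```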
Answer Prediction
First, we obtain two embeddings, $q_{\mathrm{ent}}$ and $q_{\mathrm{time}}$, by projecting $q_{\mathrm{new}}$; they are specific to entities and timestamps, respectively. Then we compute an entity score and a timestamp score with an entity scoring function and a time scoring function. Entity scoring function: As mentioned in Section 3, we score each entity $\hat{e} \in \mathcal{E}$ using the score function $\phi(\cdot)$:

$$\phi_{\mathrm{ent}}(\hat{e}) = \mathrm{Re}(\langle e_s, q_{\mathrm{ent}} \odot e_t, e_{\hat{e}} \rangle) \qquad (9)$$

where $s$ and $t$ are the entity and the timestamp extracted from the question, and $e_s$ and $e_t$ are the corresponding embeddings computed by the pre-trained TComplEx method. Note that if $s$ or $t$ does not exist, we use a dummy entity or timestamp instead. Time scoring function: We first extract the subject $s$ and the object $o$ from the given question. Then the score of each timestamp $\hat{t} \in \mathcal{T}$ is computed as follows.
$$\phi_{\mathrm{time}}(\hat{t}) = \mathrm{Re}(\langle e_s, q_{\mathrm{time}} \odot e_{\hat{t}}, e_o \rangle) \qquad (10)$$
Finally, the scores for all entities and timestamps are concatenated, and the answer probabilities are computed with a softmax layer over the combined scores. We adopt the cross-entropy loss as the objective function:
$$\mathcal{L} = -\sum_{i}^{N}\big[y_i \log(\hat{y}_i) + (1 - y_i)\log(1 - \hat{y}_i)\big] \qquad (11)$$
where $y_i$ is the ground-truth label and $\hat{y}_i$ is the predicted label.
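Putting the scoring functions together, a hedged sketch of answer prediction follows; the conjugate placement mirrors Eq. (1), and all variable names are illustrative rather than the paper's.

```python
import torch
import torch.nn.functional as F

def answer_scores(q_ent, q_time, e_s, e_t, e_o, E, T):
    """Score all candidate entities and timestamps, Eqs. (9)-(10).

    q_ent, q_time: complex question projections, shape (d,).
    e_s, e_t, e_o: embeddings of the question's subject/timestamp/object, (d,).
    E: all entity embeddings, (num_entities, d); T: all timestamp embeddings,
    (num_times, d). All tensors are complex-valued (torch.cfloat).
    """
    ent = torch.real((e_s * (q_ent * e_t)).unsqueeze(0) * torch.conj(E)).sum(-1)
    tim = torch.real((e_s * q_time).unsqueeze(0) * T * torch.conj(e_o)).sum(-1)
    return torch.cat([ent, tim])  # combined scores over entities and timestamps

# Answer probabilities over the joint entity/timestamp space:
# probs = F.softmax(answer_scores(...), dim=-1)
```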
EXPERIMENTS
Dataset. To validate the effectiveness of our proposed model on the temporal KGQA task, we employ a benchmark dataset called CronQuestions, which has been widely used in previous work [5].
Baselines. To demonstrate the superiority of TMA, we select two types of baseline models for comparison: (I) pre-trained language models, including BERT, RoBERTa, T5 and KnowBERT; and (II) KG embedding-based approaches, including the variants of Entities as Experts (i.e., T-EaE-add and T-EaE-replace), EmbedKGQA [9] and CronKGQA [5].
RESULTS AND ANALYSIS
Model Performance
We evaluate the performance of TMA and other competitive models on the temporal KGQA dataset CronQuestions. Table 1 shows the results of the different baselines in terms of Hits@1 and Hits@10 across question types and answer types. Our proposed model achieves state-of-the-art performance on all types of questions and answers under both Hits@1 and Hits@10, which illustrates its powerful representation capability. Remarkably, the Hits@1 and Hits@10 results of TMA on complex questions improve by 24% and 10%, respectively, over the second best-performing model. One plausible reason is that complex reasoning requires a better understanding of the question representation, and TMA can incorporate relevant temporal-specific information from the KG thanks to the design of the SPO selector and the multiway & adaptive fusion modules. Furthermore, we find that methods based only on large-scale pre-trained language models (i.e., BERT, RoBERTa, T5 and KnowBERT) perform significantly worse than KG embedding-based methods (i.e., EmbedKGQA, T-EaE and CronKGQA). This indicates that it is difficult to capture time-aware question embeddings with pre-trained language models alone. Meanwhile, T5 is better than the other pre-trained language models since it contains more trainable parameters.
Ablation Study
To evaluate the contribution of each module in our framework, we perform extensive ablation experiments. The experimental results are shown in Table 3. The SPO Selector can select the SPO triples from the temporal KG relevant to the semantics of the question. When we remove the SPO Selector, the performance drops to 0.726, indicating that the candidate SPOs are critical for this task.
The multiway attention module is composed of three components. When we remove concat attention, dot attention and minus attention, the performance drops to 0.759, 0.745 and 0.768, respectively. Concat attention, which is often employed in retrieval QA, brings significant improvements on "Entity" questions, probably because it facilitates the fusion between entities and provides more detailed entity alignment information. In addition, minus attention brings significant improvements on "Time" questions; a possible reason is that it can explicitly align the differences between entities and time, thus providing better underlying features for adaptive fusion.
Finally, we remove the adaptive fusion module, which is equivalent to directly fusing the SPO information with the original semantic information. The performance of TMA on simple reasoning drops by nearly 9%, which indicates that adaptive fusion can effectively integrate the two different kinds of information.
CONCLUSION
The temporal KGQA task suffers from the problem that existing models are not capable of learning temporal-specific embeddings of entities. We propose a method called TMA, which explicitly fuses SPO information into question representations via a select-match-fuse-predict paradigm. This makes the obtained question embeddings more temporal-specific and improves the model's robustness. Extensive experiments on the CronQuestions dataset verify the effectiveness of TMA.
Fig. 1. An example of querying a complex question in Google search. The two search results are wrong; the correct answer is Neymar (search conducted on March 15, 2022). The results are wrong because the search focuses on the entity (Dani Alves) but fails to capture and utilize the implicit temporal information of "before Dani Alves".

Fig. 2. The overall framework of our model (best viewed in color).
Table 1. Performance of baselines and our methods on the CronQuestions dataset (H@1 = Hits@1, H@10 = Hits@10).

| Model | H@1 Overall | H@1 Complex | H@1 Simple | H@1 Entity | H@1 Time | H@10 Overall | H@10 Complex | H@10 Simple | H@10 Entity | H@10 Time |
|---|---|---|---|---|---|---|---|---|---|---|
| BERT | 0.071 | 0.086 | 0.052 | 0.077 | 0.06 | 0.213 | 0.205 | 0.225 | 0.192 | 0.253 |
| RoBERTa | 0.07 | 0.086 | 0.05 | 0.082 | 0.048 | 0.202 | 0.192 | 0.215 | 0.186 | 0.231 |
| KnowBERT | 0.07 | 0.083 | 0.051 | 0.081 | 0.048 | 0.201 | 0.189 | 0.217 | 0.185 | 0.23 |
| T5-3B | 0.081 | 0.073 | 0.091 | 0.088 | 0.067 | - | - | - | - | - |
| EmbedKGQA | 0.288 | 0.286 | 0.29 | 0.411 | 0.057 | 0.672 | 0.632 | 0.725 | 0.85 | 0.341 |
| T-EaE-add | 0.278 | 0.257 | 0.306 | 0.313 | 0.213 | 0.663 | 0.614 | 0.729 | 0.662 | 0.665 |
| T-EaE-replace | 0.288 | 0.257 | 0.329 | 0.318 | 0.231 | 0.678 | 0.623 | 0.753 | 0.668 | 0.698 |
| CronKGQA | 0.647 | 0.392 | 0.987 | 0.699 | 0.549 | 0.884 | 0.802 | 0.992 | 0.898 | 0.857 |
| TMA (ours) | 0.784 | 0.632 | 0.987 | 0.792 | 0.743 | 0.943 | 0.904 | 0.995 | 0.947 | 0.936 |
Table 2. Hits@1 for different reasoning-type questions.

| Model | Before/After | First/Last | Time Join | Simple Entity | Simple Time | All |
|---|---|---|---|---|---|---|
| EmbedKGQA | 0.199 | 0.324 | 0.223 | 0.421 | 0.087 | 0.288 |
| T-EaE-add | 0.256 | 0.285 | 0.175 | 0.296 | 0.321 | 0.278 |
| T-EaE-replace | 0.256 | 0.288 | 0.168 | 0.318 | 0.346 | 0.288 |
| CronKGQA | 0.288 | 0.371 | 0.511 | 0.988 | 0.985 | 0.647 |
| TMA (ours) | 0.581 | 0.627 | 0.675 | 0.988 | 0.987 | 0.784 |

We compare our model against various KG-based methods on different categories of questions in terms of Hits@1 and summarize the results in Table 2. As the table shows, the proposed TMA obtains better performance than all the KG-based methods, especially on complex questions. TMA's performance on "Before/After", "First/Last" and "Time Join" questions consistently achieves 30%, 25% and 16% improvements, respectively, over CronKGQA. This phenomenon further verifies our idea of explicitly incorporating SPO information into the question to learn temporal-specific question representations for temporal KGQA tasks.
Table 3. Results of the component ablation experiment (Hits@1).

| Model | Overall | Complex | Simple | Entity | Time |
|---|---|---|---|---|---|
| TMA | 0.784 | 0.632 | 0.987 | 0.792 | 0.743 |
| w/o SPO Selector | 0.726 | 0.584 | 0.916 | 0.736 | 0.707 |
| w/o Concat Attention | 0.759 | 0.628 | 0.934 | 0.769 | 0.739 |
| w/o Dot Attention | 0.745 | 0.617 | 0.914 | 0.771 | 0.732 |
| w/o Minus Attention | 0.768 | 0.630 | 0.952 | 0.789 | 0.728 |
| w/o Adaptive Fusion | 0.736 | 0.613 | 0.899 | 0.742 | 0.724 |
[1] X. Huang, J. Zhang, D. Li, and P. Li, "Knowledge graph embedding based question answering," in Proceedings of the ACM International Conference on Web Search and Data Mining, 2019, pp. 105-113.
[2] Y. Xian, Z. Fu, S. Muthukrishnan, G. de Melo, and Y. Zhang, "Reinforcement knowledge graph reasoning for explainable recommendation," in Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval, 2019, pp. 285-294.
[3] R. Guan, Y. Liu, X. Feng, and X. Li, "Vpalg: Paper-publication prediction with graph neural networks," in Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 2021, pp. 617-626.
[4] Z. Jia, A. Abujabal, R. Saha Roy, J. Strötgen, and G. Weikum, "Tequila: Temporal question answering over knowledge bases," in Proceedings of the ACM International Conference on Information & Knowledge Management, 2018, pp. 1807-1810.
[5] A. Saxena, S. Chakrabarti, and P. Talukdar, "Question answering over temporal knowledge graphs," in Proceedings of the Annual Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing, 2021, pp. 6663-6676.
[6] A. Bordes, N. Usunier, A. Garcia-Duran, J. Weston, and O. Yakhnenko, "Translating embeddings for modeling multi-relational data," in Advances in Neural Information Processing Systems, 2013.
[7] H. Sun, T. Bedrax-Weiss, and W. Cohen, "Pullnet: Open domain question answering with iterative retrieval on knowledge bases and text," in Proceedings of the Conference on EMNLP and IJCNLP, 2019, pp. 2380-2390.
[8] Y. Liu, R. Guan, F. Giunchiglia, Y. Liang, and X. Feng, "Deep attention diffusion graph neural networks for text classification," in Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021, pp. 8142-8152.
[9] A. Saxena, A. Tripathi, and P. Talukdar, "Improving multi-hop question answering over knowledge graphs using knowledge base embeddings," in Proceedings of the Annual Meeting of the Association for Computational Linguistics, 2020, pp. 4498-4507.
[10] C. Xue, D. Liang, S. Wang, W. Wu, and J. Zhang, "Dual path modeling for semantic matching by perceiving subtle conflicts," arXiv preprint arXiv:2302.12530, 2023.
[11] S. Wang, D. Liang, J. Song, Y. Li, and W. Wu, "Dabert: Dual attention enhanced bert for semantic matching," in Proceedings of the 29th International Conference on Computational Linguistics, 2022, pp. 1645-1654.
[12] J. Song, D. Liang, R. Li, Y. Li, S. Wang, M. Peng, W. Wu, and Y. Yu, "Improving semantic matching through dependency-enhanced pre-trained model with adaptive fusion," in Findings of the Association for Computational Linguistics: EMNLP 2022, 2022, pp. 45-57.
[13] D. Liang, F. Zhang, W. Zhang, Q. Zhang, J. Fu, M. Peng, T. Gui, and X. Huang, "Adaptive multi-attention network incorporating answer information for duplicate question detection," in Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, 2019, pp. 95-104.
[14] D. Liang, F. Zhang, Q. Zhang, and X.-J. Huang, "Asynchronous deep interaction network for natural language inference," in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019, pp. 2692-2700.
[15] T. Jiang, T. Liu, T. Ge, L. Sha, B. Chang, S. Li, and Z. Sui, "Towards time-aware knowledge graph completion," in Proceedings of the International Conference on Computational Linguistics, 2016, pp. 1715-1724.
[16] S. S. Dasgupta, S. N. Ray, and P. Talukdar, "Hyte: Hyperplane-based temporally aware knowledge graph embedding," in Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2018, pp. 2001-2011.
[17] T. Lacroix, G. Obozinski, and N. Usunier, "Tensor decompositions for temporal knowledge base completion," in International Conference on Learning Representations, 2020.
[18] T. Trouillon, J. Welbl, S. Riedel, É. Gaussier, and G. Bouchard, "Complex embeddings for simple link prediction," in International Conference on Machine Learning, 2016, pp. 2071-2080.
[19] Z. Jia, S. Pramanik, R. Saha Roy, and G. Weikum, "Complex temporal question answering on knowledge graphs," in Proceedings of the ACM International Conference on Information & Knowledge Management, 2021, pp. 792-802.
[20] N. Reimers and I. Gurevych, "Sentence-bert: Sentence embeddings using siamese bert-networks," in Proceedings of the Conference on EMNLP and IJCNLP, 2019, pp. 3982-3992.
[21] T. Rocktäschel, E. Grefenstette, K. M. Hermann, T. Kočiský, and P. Blunsom, "Reasoning about entailment with neural attention," arXiv preprint arXiv:1509.06664, 2015.
[22] C. Tan, F. Wei, W. Wang, W. Lv, and M. Zhou, "Multiway attention networks for modeling sentence pairs," in International Joint Conference on Artificial Intelligence, 2018, pp. 4411-4417.
AutoGrow: Automatic Layer Growing in Deep Convolutional Networks
Wei Wen [email protected]
Duke University
University of Nevada -Reno
Duke University
Feng Yan [email protected]
Duke University
University of Nevada -Reno
Duke University
Hai Li [email protected]
Duke University
University of Nevada -Reno
Duke University
We propose AutoGrow to automate depth discovery in Deep Neural Networks (DNNs): starting from a shallow seed architecture, AutoGrow grows new layers if the growth improves the accuracy; otherwise, the growth stops and the network depth is discovered. Residual and plain blocks are used as growing sub-modules to study DNNs with and without shortcuts. We propose generic growing and stopping policies to minimize the human effort spent on the search for optimal depth. Our experiments show that, applying the same policy to different tasks, AutoGrow can always discover network depth effectively and achieve state-of-the-art accuracy on various datasets: MNIST, FashionMNIST, SVHN, CIFAR10, CIFAR100 and ImageNet. Compared to Neural Architecture Search (NAS), which often designs a gigantic search space and consumes tremendous resources, AutoGrow lies at the other end of the research spectrum: it focuses on efficient depth discovery and reduces the growing and searching time to a level similar to that of training a single DNN. Thus, AutoGrow is able to scale up to large datasets such as ImageNet. Our study also reveals that previous Network Morphism is sub-optimal for increasing layer depth. Finally, we demonstrate that AutoGrow enables the training of deeper plain networks, which has been problematic even with Batch Normalization.
Introduction
Layer depth is one of the decisive factors in the success of Deep Neural Networks (DNNs). For example, image classification accuracy keeps improving as the depth of network models grows [16,30,33,11,14]. Although shallow networks cannot ensure high accuracy, DNNs composed of too many layers may suffer from over-fitting and convergence difficulty in training. How to obtain the optimal depth for a DNN still remains mysterious. For instance, ResNet-152 [11] uses 3, 8, 36 and 3 residual blocks under output sizes of 56 × 56, 28 × 28, 14 × 14 and 7 × 7, respectively, which do not show an obvious quantitative relation. In practice, people usually rely on brute-force trial and test to obtain the depth of a network: they first design a DNN with a specific depth, then train and evaluate the network on a given dataset, and finally change the depth and repeat the procedure until the accuracy meets the requirement. Besides the high computational cost induced by this iteration, the trial-and-test iterations must be repeated whenever the dataset changes. In this paper, we propose AutoGrow, which can automate depth discovery across different datasets while achieving state-of-the-art accuracy. Previously, VggNet [30] and DropIn [31] explored methods of adding new layers onto shallower DNNs; Network Morphism [36,35,5] increased layer depth while preserving the function of the shallow net. Their objective was enabling or accelerating the training of deeper DNNs, whereas we focus on automating depth discovery. More distinctly, all the above works need to manually determine the number and locations of new layers, and new layers are usually added only once. In contrast, AutoGrow automatically learns the number and locations of new layers and activates multiple growing/morphing steps.
Neural Architecture Search (NAS) [39,40] aims to search for DNNs by exploring a gigantic search space. Such an approach, however, often requires tremendous computing resources. To speed up the search process, Network Morphism [36,35,5] has been integrated into NAS using a random sampler [8] or a reinforcement learning agent [3,4]. However, NAS is naturally slow and is not yet able to support large-scale tasks like ImageNet. AutoGrow lies at the other end of the research spectrum, where the search space is limited to depth but the searching time is as short as training a single DNN, demonstrating substantially enhanced scalability. We also explored the integration of Network Morphism into AutoGrow for growing layers. Interestingly, simple random initialization outperforms complex Network Morphism. We visualize the optimization trajectory in the parameter space and show that the initialization given by Network Morphism is inadequate for training a deeper net. This sheds light on the possibility that Network Morphism may give sub-optimal signals for NAS. Figure 1 illustrates a simple example of AutoGrow. It starts from the shallowest backbone network and gradually grows sub-modules1; the growth stops once a stopping policy is satisfied. We study multiple growing policies and surprisingly find that a simple Periodic Policy surpasses the complicated Morphing Policy based on Network Morphism. Unlike previous wisdom [8,3,4], we find that it is more effective to grow sub-modules before a shallow net converges, because a fully converged shallow net is an inadequate initialization for training a deeper net. To tackle this, we avoid full convergence during the growing by using (1) a constant large learning rate; (2) random initialization of new sub-modules; and (3) a short interval between growths. Our contributions are:
• We propose AutoGrow2 to automate DNN layer growing and depth discovery. With the same set of hyper-parameters, it adapts network depth to various datasets, including MNIST, FashionMNIST, SVHN, CIFAR10 and CIFAR100. AutoGrow can also discover shallower DNNs when the dataset is a subset. • AutoGrow demonstrates high efficiency and scales up to ImageNet, because the layer growing is as fast as training a single DNN. On ImageNet, it discovers new ResNets with a better trade-off between accuracy and computation complexity. • We study Network Morphism in AutoGrow and visualize the optimization trajectory in the parameter space. Our experiments imply that Network Morphism is sub-optimal for training deeper networks. • AutoGrow is able to train deeper plain DNNs, which has been problematic even with Batch Normalization.
Related Work
Neural Architecture Search (NAS) [39] and neural evolution [25,1,32,21,28] can search network architectures including layer depth, but with very long search times. For example, on the CIFAR10 dataset, NAS used 22,400 Nvidia K40 GPU hours [39] and neural evolution [28] took 60,000 GPU hours [8] to perform the search. To accelerate NAS, one-shot models [29,26,2], DARTS [22] and NAS with a Transferable Cell [40,20] were proposed. The search time reduces dramatically but is still long from a practical perspective. For example, 2,000 Nvidia P100 GPU hours were used to search a transferable cell (a.k.a. sub-module) [40]. It is also very challenging to deploy these methods on larger datasets such as ImageNet. Moreover, the DNN depth in one-shot models, DARTS and Transferable-Cell-based NAS has to be predefined. In contrast, we aim to learn the depth of DNNs, and our AutoGrow can scale up to ImageNet thanks to its short depth-learning time, which is as efficient as training a single DNN.
Network Morphism [36,35,5,8,3,4] refers to approaches that morph a smaller DNN into a wider or deeper DNN while keeping the same loss function as the smaller DNN. It was proposed to enable or accelerate the training of wider or deeper DNNs. Network Morphism is orthogonal to our work, and we integrate it into our AutoGrow framework for each growth. The corresponding experiments show that simple random initialization outperforms Network Morphism. We hypothesize that Network Morphism gives an inadequate initialization but noise helps to escape it. We visualize the optimization trajectory and confirm that this hypothesis holds.
In addition to architecture search, which requires training many DNNs from scratch, there are also many studies on learning neural structures within a single training run. Structure pruning [37,18,17,27,12,24,23,7,13,10] and growing [9,38,27,7] were proposed for different goals, such as efficient inference [37,18,17,12,24,23,7,13,10], lifelong learning [38] and model adaptation [9,27]. However, those works fix the network depth and limit structure learning to the existing layers. Optimization over a DNN with fixed depth is easier, as the skeleton architecture is known. AutoGrow operates in a scenario where the DNN depth is unknown and hence must seek the optimal depth. A growing sub-network was utilized for semi-supervised learning [34], where the new network fully converged before the next growth; AutoGrow targets supervised learning and grows the network before the new network converges. We find that growing ahead of convergence is important to avoid accuracy loss and early stopping of the growth.
AdaNet [6] can adapt neural networks to different datasets. It designs a pool of sub-networks and selects the final network as a combination of some sub-networks that minimizes the loss function. The number of combinations explodes exponentially with the pool size, so the search time can be very long. As a consequence, the largest problem reported in AdaNet is only a binary classification task simplified from CIFAR10.

AutoGrow - A Depth Growing Algorithm

Figure 1 gives an overview of the proposed AutoGrow. In this paper, we use network, sub-network, sub-module and layer to describe the architecture hierarchy. A typical sub-network is composed of sub-modules, which have the same output size. A typical sub-module (e.g. a residual block) is an elementary building block composed of a few layers. In this section, we rigorously formulate a generic version of AutoGrow, which will be materialized later. A deep convolutional network $g(X_0)$ is a cascade of sub-networks composed as
$$g(X_0) = l\big(f_{M-1}(f_{M-2}(\cdots f_1(f_0(X_0)) \cdots))\big), \qquad (1)$$
where $X_0$ is an input image, $M$ is the number of sub-networks, $l(\cdot)$ is a loss function following a classifier, and $X_{i+1} = f_i(X_i)$ is a sub-network that operates on an input image or a feature tensor $X_i \in \mathbb{R}^{c_i \times h_i \times w_i}$ and outputs $X_{i+1}$. Here, $c_i$ is the number of channels, and $h_i$ and $w_i$ are spatial dimensions. $f_i(X_i)$ is a simplified notation of $f_i(X_i; W_i)$, where $W_i$ is the set of sub-modules' parameters within the $i$-th sub-network. Thus $\mathbb{W} = \{W_i : i = 0 \dots M-1\}$ denotes the whole set of parameters in the DNN. A sub-network $f_i(X_i; W_i)$ includes one or multiple sub-modules, and a sub-module contains one or more layers. To facilitate growing, the following properties are supported within a sub-network: (1) the first sub-module usually reduces the size of the input feature maps, e.g., using pooling or convolution with a stride; and (2) all sub-modules in a sub-network maintain the same output size. As such, our framework can support popular networks, including VggNet-like plain networks [30], GoogLeNet [33], ResNets [11] and DenseNets [14]. In this paper, we select ResNets and VggNet-like nets as representatives of DNNs with and without shortcuts, respectively. Algorithm 1 describes the AutoGrow algorithm. In brief, AutoGrow starts with the shallowest net, where every sub-network has only one sub-module for spatial dimension reduction. AutoGrow loops over all sub-networks in sequence; for each sub-network, it keeps stacking a new sub-module until the growth cannot improve the accuracy further. The details of our method are materialized in the following subsections.
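To make the network/sub-network/sub-module hierarchy concrete, here is a minimal PyTorch sketch of a growable cascade; the class names, the `plain_block` sub-module and the channel widths are our own illustrative choices, not the authors' released code.

```python
import torch.nn as nn

def plain_block(in_ch, out_ch, stride):
    # Plain sub-module: convolution + Batch Normalization + ReLU.
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
                         nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

class SubNetwork(nn.Module):
    """A sub-network f_i: one dimension-reduction sub-module plus grown sub-modules."""
    def __init__(self, in_ch, out_ch, block):
        super().__init__()
        self.block, self.out_ch = block, out_ch
        # The first sub-module reduces spatial size (stride 2) and sets the width.
        self.blocks = nn.ModuleList([block(in_ch, out_ch, stride=2)])

    def grow(self):
        # Stack one new sub-module that preserves the output size.
        self.blocks.append(self.block(self.out_ch, self.out_ch, stride=1))

    def forward(self, x):
        for b in self.blocks:
            x = b(x)
        return x

class GrowableNet(nn.Module):
    """g(X_0): a cascade of M sub-networks followed by a classifier."""
    def __init__(self, block=plain_block, widths=(64, 128, 256, 512), num_classes=1000):
        super().__init__()
        chans = (3,) + widths[:-1]
        self.subnets = nn.ModuleList(
            [SubNetwork(c_in, c_out, block) for c_in, c_out in zip(chans, widths)])
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(widths[-1], num_classes))

    def forward(self, x):
        for f in self.subnets:
            x = f(x)
        return self.head(x)
```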
Seed Shallow Networks and Sub-modules
On MNIST, FashionMNIST, SVHN, CIFAR10 and CIFAR100, we explore growing depth for four types of DNNs:
1. Basic3ResNet: the same ResNet used for CIFAR10 in [11], which has 3 residual sub-networks with output spatial sizes of 32×32, 16×16 and 8×8, respectively; 2. Basic4ResNet: a variant of ResNet with basic residual blocks (a basic residual block contains two convolutions and one shortcut) used for ImageNet in [11]. The difference is that there is only a 3×3 convolution layer before the first residual block, and there are 4 sub-networks with output spatial sizes of 32×32, 16×16, 8×8 and 4×4, respectively;
Algorithm 1: AutoGrow.
Input:
- a seed shallow network $g(X_0)$ composed of $M$ sub-networks $F = \{f_i(\cdot; W_i) : i = 0 \dots M-1\}$, where each sub-network has only one sub-module (a dimension-reduction sub-module);
- an epoch interval $K$ to check the growing and stopping policies;
- the number of fine-tuning epochs $N$ after growing.
Initialization:
- a circular linked list of sub-networks under growing: subNetList = $f_0(\cdot; W_0) \to \cdots \to f_{M-1}(\cdot; W_{M-1}) \to f_0(\cdot; W_0) \to \cdots$;
- the current growing sub-network: growingSub = subNetList.head() = $f_0(\cdot; W_0)$;
- the last grown sub-network: grownSub = None.
Process:
- while subNetList.size() > 0 do
  - train the whole network $g(X_0)$ for $K$ epochs;
  - if meetStoppingPolicy() then remove growingSub from subNetList;
  - else if meetGrowingPolicy() then stack a new sub-module $\widehat{W}$ on top of growingSub, initialize it by initializer($\widehat{W}$), and set grownSub = growingSub;
  - growingSub = the next sub-network in subNetList;
- fine-tune the discovered network $g(X_0)$ for $N$ epochs.
Output: a trained neural network $g(X_0)$ with learned depth.
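The loop of Algorithm 1 can be sketched in plain Python as below; the `train_k_epochs`, `val_accuracy` and policy callables are placeholders we introduce for illustration around the `GrowableNet` above, and the per-sub-network bookkeeping is simplified relative to the listing.

```python
from collections import deque

def autogrow(net, train_k_epochs, val_accuracy, meet_growing, meet_stopping,
             K=3, N=200):
    """Grow sub-modules until every sub-network meets the stopping policy.

    net: a GrowableNet; train_k_epochs(net, K) trains for K epochs;
    val_accuracy(net) returns validation accuracy; meet_growing/meet_stopping
    implement the policies of the following subsections over the accuracy history.
    """
    growing = deque(net.subnets)      # circular list of sub-networks under growing
    history = []
    while growing:
        train_k_epochs(net, K)
        history.append(val_accuracy(net))
        if meet_stopping(history):
            growing.popleft()         # this sub-network reached its depth
        elif meet_growing(history):
            growing[0].grow()         # stack a new (e.g., randomly initialized) sub-module
            growing.rotate(-1)        # move on to the next sub-network
    train_k_epochs(net, N)            # final fine-tuning of the discovered net
    return net
```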
3. Plain3Net: a VggNet-like plain net obtained by removing the shortcuts in Basic3ResNet; 4. Plain4Net: a VggNet-like plain net obtained by removing the shortcuts in Basic4ResNet.
In plain DNNs, a sub-module is a stack of convolution, Batch Normalization and ReLU; in residual DNNs, a sub-module is a residual block. In AutoGrow, a sub-network is a stack of all sub-modules with the same output spatial size. Unlike [11], which manually designed the depth, AutoGrow starts from a seed architecture in which each sub-network has only one sub-module and automatically learns the number of sub-modules. On ImageNet, we apply the same backbones as in [11] for the seed architectures; a seed architecture has only one sub-module under each output spatial size. For a ResNet using basic residual blocks or bottleneck residual blocks [11], we name it Basic4ResNet or Bottleneck4ResNet, respectively. Plain4Net is obtained by removing the shortcuts in Basic4ResNet.
Sub-module Initializers
Here we explain how a new sub-module $\widehat{W}$ is initialized by initializer($\widehat{W}$) in Algorithm 1. Network Morphism changes the DNN architecture while preserving the loss function via special initialization of the new layers, that is,

$$g(X_0; \mathbb{W}) = g(X_0; \mathbb{W} \cup \widehat{W}) \quad \forall X_0. \qquad (2)$$
A residual sub-module has a nice property: when stacking a residual block and initializing its last Batch Normalization layer to zeros, the function of the shallower net is preserved while the DNN is morphed into a deeper net. Thus, Network Morphism can be easily implemented by this zero initialization (ZeroInit).
In this work, all layers in $\widehat{W}$ are initialized using default randomization, except for a special treatment of the last Batch Normalization layer in a residual sub-module. Besides ZeroInit, we propose a new AdamInit for Network Morphism. In AdamInit, we freeze all parameters except the last Batch Normalization layer in $\widehat{W}$, and then use the Adam optimizer [15] to optimize that layer for at most 10 epochs, until the training accuracy of the deeper net is as good as that of the shallower one. After AdamInit, all parameters are jointly optimized. We view AdamInit as a form of Network Morphism because the training function is similar after AdamInit. We empirically find that AdamInit can usually find a solution in fewer than 3 epochs. We also study random initialization of the last Batch Normalization layer using uniform (UniInit) or Gaussian (GauInit) noise with a standard deviation of 1.0. GauInit obtains the best results in our experiments.
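A sketch of the zero and Gaussian initializers follows; it assumes the new residual block exposes its output Batch Normalization as the last `BatchNorm2d` in module order, which is our assumption about the block layout.

```python
import torch.nn as nn

def init_last_bn(block, mode="zero"):
    """Initialize the last Batch Normalization layer of a new residual sub-module.

    mode="zero"  -> ZeroInit: the block initially computes the identity
                    (Network Morphism, Eq. (2));
    mode="gauss" -> GauInit: Gaussian noise with standard deviation 1.0.
    """
    last_bn = [m for m in block.modules() if isinstance(m, nn.BatchNorm2d)][-1]
    if mode == "zero":
        nn.init.zeros_(last_bn.weight)
    elif mode == "gauss":
        nn.init.normal_(last_bn.weight, mean=0.0, std=1.0)
    nn.init.zeros_(last_bn.bias)
```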
Growing Policies
A growing policy refers to meetGrowingPolicy() in Algorithm 1. Two growing policies are studied here:
1. Morphing Growth: meetGrowingPolicy() returns true when the improvement of validation accuracy is less than τ in the last K epochs. In Morphing Growth, AutoGrow only grows when the current network converges under an optimizer. Note that similar policies have been used for efficient NAS based on Network Morphism [8,3,4]. 2. Periodic Growth: meetGrowingPolicy() always returns true, that is, the network always grows every K epochs. Therefore, K is also the growing period. In the best practice of AutoGrow, K is small, such that the network grows before it converges.

Our experiments show that Periodic Growth outperforms Morphing Growth. We hypothesize that a fully converged shallower net is an inadequate initialization for training a deeper net. We will perform experiments to prove this hypothesis and visualize the optimization trajectory to illustrate it.
Stopping Policies
A stopping policy denotes meetStoppingPolicy() in Algorithm 1. Two stopping policies are studied:
1. Morphing Stop: meetStoppingPolicy() returns true when a sub-network just grew J epochs ago but the validation accuracy improves less than τ . Morphing Stop works with Morphing Growth, indicating that the last growth is meaningless and the sub-network reaches the maximum depth. 2. Periodic Stop: meetStoppingPolicy() returns true when the validation accuracy improves less than τ in the last J epochs.
Hyper-parameters τ, J and K control the operation of AutoGrow and can be easily set up and well generalized. τ denotes the significance of accuracy improvement for classification; we simply set τ = 0.05% in all experiments. J represents how many epochs to wait for an accuracy improvement before stopping the growth of a sub-network. It is more meaningful to consider stopping once the new net has been trained to some extent, so we set J to the number of epochs T under the largest learning rate when training a baseline. K controls how frequently AutoGrow checks the policies. For Morphing Growth and Morphing Stop, we simply set K = T. For Periodic Growth and Periodic Stop, K is set to a fraction of T to enable faster growth before convergence; more importantly, K = 3 is very robust across all networks and datasets.
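For illustration, the policy checks could be implemented over a validation-accuracy history as in the sketch below; the exact bookkeeping (e.g., which window the improvement is measured over) is our interpretation of the description above.

```python
def meet_periodic_growing():
    # Periodic Growth: always grow at each K-epoch checkpoint.
    return True

def meet_periodic_stopping(val_acc_history, J, tau=0.0005):
    # Periodic Stop: stop when validation accuracy improved by less than tau
    # over the last J epochs.
    if len(val_acc_history) <= J:
        return False
    return max(val_acc_history[-J:]) - max(val_acc_history[:-J]) < tau

def meet_morphing_growing(val_acc_history, K, tau=0.0005):
    # Morphing Growth: grow only once the current net has converged, i.e. the
    # improvement over the last K epochs is below tau.
    if len(val_acc_history) <= K:
        return False
    return max(val_acc_history[-K:]) - max(val_acc_history[:-K]) < tau
```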
Experiments
In this paper, we use Basic3ResNet-2-3-2, for instance, to denote the model architecture that contains 2, 3 and 2 sub-modules in the first, second and third sub-networks, respectively. Sometimes we simplify it as 2-3-2 for convenience. AutoGrow always starts from the shallowest depth of 1-1-1 and uses the maximum validation accuracy as the metric to guide growing and stopping.
Fig. 2. Optimization trajectories (legend: Morphing, ResNet-32; learning rate decay points are marked along the trajectories).
Suboptimum of Network Morphism
In this section, we study Network Morphism itself and its integration into AutoGrow, since Network Morphism can be used to gradually morph the depth of DNNs [36,35,5,8,3,4]. When studying Network Morphism, we take the following steps: 1) train a shallower ResNet to convergence; 2) stack residual blocks on top of each sub-network to morph to a deeper net; 3) use ZeroInit or AdamInit to initialize the last Batch Normalization layers; and 4) train the deeper net in a standard way. We compare the accuracy difference ("∆") between Network Morphism and the same deep net trained from scratch. Table 1 summarizes our results: Network Morphism has a lower accuracy (negative "∆") in all cases.
We hypothesize that a converged shallower net may not be an adequate initialization. Figure 2 visualizes and compares the optimization trajectories of Network Morphism and training from scratch. In this figure, the shallower net is Basic3ResNet-3-3-3 (ResNet-20) and the deeper one is Basic3ResNet-5-5-5 (ResNet-32) in Table 1. The initializer is ZeroInit. The visualization method is extended from [19]. Points on the trajectory are evenly sampled every few epochs. To maximize the variance of the trajectory, we use PCA to project from the high-dimensional parameter space to a 2D space and use the first two Principal Components (PCs) to form the axes in Figure 2. The contours of the training loss function and the trajectory are visualized around the final minimum of the deeper net. When projecting a shallower net into the deeper net's space, zeros are padded for the parameters not existing in the deeper net. We must note that a loss increase along the trajectory does not truly represent the situation in the high-dimensional space, as the trajectory is just a projection: it is possible that the loss keeps decreasing in high dimension while appearing to behave the opposite way in the 2D space. The sharp detour at "Morphing" in Figure 2(a) indicates that the shallower net converges to a point from which the deeper net struggles to escape. In contrast, Figure 2(b) shows that the trajectory of direct optimization in the deeper space smoothly converges to a better minimum.
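A minimal sketch of the described projection, assuming each checkpoint has been flattened into a 1-D NumPy parameter vector (our reconstruction using standard NumPy/scikit-learn calls):

```python
import numpy as np
from sklearn.decomposition import PCA

def project_trajectory(checkpoints, dim_deep):
    # checkpoints: list of 1-D parameter vectors sampled every few epochs;
    # shallower-net vectors are zero-padded to the deeper net's dimension.
    X = np.stack([np.pad(w, (0, dim_deep - w.size)) for w in checkpoints])
    X = X - X[-1]                                 # center on the final minimum
    return PCA(n_components=2).fit_transform(X)   # (num_checkpoints, 2) trajectory
```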
To further validate our hypothesis, we integrate Network Morphism into AutoGrow and refer to it as m-AutoGrow, with "m-" denoting "Morphing." More specifically, we take ZeroInit or AdamInit as the sub-module initializer and use the "Morphing Growth" and "Morphing Stop" policies in Algorithm 1. To recap, in this setting, AutoGrow trains a shallower net until it converges, then grows a sub-module which is initialized by Network Morphism, and repeats the same process until there is no further accuracy improvement. Within each interval of K epochs, train(g(X0), K), a "staircase" learning rate is used: the learning rate is reset to 0.1 at the first epoch and decayed by 0.1× at epochs K/2 and 3K/4 (a sketch of this schedule follows). The results are shown in the "staircase" rows of Table 2, which illustrate that m-AutoGrow can grow a DNN multiple times and finally settle on a network. However, there are two problems: 1) the final accuracy is lower than training the found net from scratch, as indicated by "∆"; and 2) depth learning stops too early, at a relatively shallow net, while a deeper net beyond the found depth can achieve higher accuracy, as we will show in Table 6. These problems provide circumstantial evidence for the hypothesis that Network Morphism gives a bad initialization, so that AutoGrow cannot receive signals to continue growing after a limited number of growths. Figure 3(a) visualizes the trajectory of m-AutoGrow corresponding to row "2-3-6" in Table 2. Along the trajectory, there are many attempts to detour and escape the initialization from a shallower net.
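A minimal sketch of this per-interval staircase schedule, written as a function of the epoch index within the current K-epoch growing interval:

```python
def staircase_lr(epoch_in_interval, K, base_lr=0.1):
    """Reset to base_lr each interval; decay by 0.1x at K/2 and 3K/4."""
    if epoch_in_interval >= 3 * K // 4:
        return base_lr * 0.01
    if epoch_in_interval >= K // 2:
        return base_lr * 0.1
    return base_lr
```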
Ablation Study for AutoGrow Design
Based on the findings in Section 4.1, we propose the following modifications to further enhance AutoGrow, and refer to the result as p-AutoGrow, with "p-" denoting "Periodic":
1. Use a large constant learning rate for growing, i.e., 0.1 for residual networks and 0.01 for plain networks. Stochastic gradient descent with a large learning rate intrinsically introduces noise, which helps to avoid full convergence into a bad initialization when training a shallower net. Note that a staircase learning rate is still used for fine-tuning after discovering the final DNN;
2. Use random initialization (UniInit or GauInit) as noise to escape from an inadequate initialization;
3. Grow rapidly before a shallower net converges, by taking the Periodic Growth and Stop policies with a small K.
We first test the impact of the proposed modifications on m-AutoGrow. As shown in Table 2, replacing the staircase learning rate with a constant learning rate improves the accuracy of m-AutoGrow, and therefore "∆"; further replacing Network Morphism (ZeroInit or AdamInit) with a random initializer (UniInit or GauInit) results in a bigger gain. Overall, combining a constant learning rate with GauInit performs the best. Figure 3(b) visualizes the trajectory corresponding to row "2-4-3" in Table 2, which is much smoother compared to Figure 3(a). Thus, a constant learning rate and GauInit are adopted in the remaining experiments, unless we explicitly specify otherwise.
Our ablation study results for p-AutoGrow are summarized in Tables 3-5. Table 3 analyzes the impact of the growing period K. In general, K is a hyper-parameter that trades off speed and accuracy: a smaller K takes a longer learning time but discovers a deeper net, and vice versa. Our results validate the preference for faster growth (i.e., a smaller K). On CIFAR10, the accuracy plateaus at K = 3; further reducing K produces a deeper net while the accuracy gain is marginal. In the following robustness tests for p-AutoGrow, we simply select K = 3. Figures 3(c) and 3(d) visualize the trajectories of p-AutoGrow with K = 50 and K = 3. The 2D projection gives limited information to reveal the advantages of p-AutoGrow over m-AutoGrow in Figure 3(b), although the trajectory of our final p-AutoGrow in Figure 3(d) is plausibly more similar to the one of training from scratch in Figure 2(b). Moreover, our quantitative results in Table 3 show that p-AutoGrow overcomes the very-early-stop issue of m-AutoGrow in Table 2.
As a sanity check, we perform an ablation study of initializers for p-AutoGrow. The results in Table 4 further validate our choice of GauInit. The motivation of Network Morphism is to start a deeper net from a loss function that has been well optimized in a shallower net, so as not to restart the deeper net from scratch [36, 35, 5, 8, 3, 4]. In all our experiments, we find this holds even with random initialization: Figure 4 plots the convergence curves and learning process for "42-42-42" in Table 3, and even with GauInit, the loss and accuracy rapidly recover and no restart is observed. The convergence pattern in the "Growing" stage is similar to that of the "Fine-tuning" stage under the same learning rate (the initial learning rate of 0.1). Similar results on ImageNet will be shown in Figure 8.
Finally, we perform an ablation study on the initial depth of the seed network. Table 5 demonstrates that the shallowest DNN seed works as well as a deeper one. This implies that AutoGrow can appropriately stop regardless of the depth of the seed network. As the focus of this work is on depth automation, we prefer starting with the shallowest seed to avoid a manual search over seed depths.
Adaptability of AutoGrow
To verify the adaptability of AutoGrow, we use an identical configuration (p-AutoGrow with K = 3) and test over 5 datasets and 4 seed architectures. Table 6 includes the results of all 20 combinations. Figure 5 compares AutoGrow with manual search, which is obtained by training many DNNs of different depths from scratch. The results lead to the following conclusions and contributions:
1. In Table 6, AutoGrow adapts layer depth across all scenarios without any tuning while achieving state-of-the-art accuracy. Manual design needs m · n · k trials, where m and n are respectively the numbers of datasets and sub-module categories, and k is the number of trials per dataset per sub-module category.
2. For ResNets, a discovered depth (marked in Figure 5) falls at the location where accuracy saturates. This means AutoGrow discovers a near-optimal depth: a shallower depth would lose accuracy while a deeper one would gain little. The final accuracy of AutoGrow is as good as training the discovered net from scratch, as indicated by "∆" in Table 6.
3. For plain networks, there are large positive "∆"s in Table 6. This implies that the baselines fail to train very deep plain networks even using Batch Normalization, but AutoGrow enables the training of these networks.
4. K can be used to trade off accuracy and model size. As shown in Figure 5, AutoGrow discovers smaller DNNs when increasing K from 3 to 50. Interestingly, the accuracy of plain networks even increases at K = 50. This implies the possibility of improving accuracy by tuning K, although we stick to K = 3 for the generalizability study. Table 7 shows the accuracy improvement of plain networks at larger K, which is close to the corresponding ResNets' accuracy. Figure 6 visualizes loss surfaces around minima found by AutoGrow and the baselines. Intuitively, AutoGrow finds wider or deeper minima with less chaotic landscapes.
5. In Table 6, AutoGrow achieves different accuracies when using different sub-modules. The accuracy is limited by sub-module design, not by the AutoGrow framework.
Table 8 summarizes the adaptability of AutoGrow to dataset sizes; a code sketch of the protocol follows. In each set of experiments, the dataset is randomly down-sampled to 100%, 75%, 50% and 25%. For a fair comparison, K is divided by the percentage of the dataset such that the number of mini-batches between growths remains the same. As expected, our experiments show that AutoGrow adapts to shallower networks when the dataset sizes are smaller.
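A minimal sketch of the dataset-size protocol, assuming index-based down-sampling of the training set:

```python
import numpy as np

def subsample_and_adjust(n_train, K, fraction, seed=0):
    """Down-sample the training set; rescale K to keep mini-batches per growth fixed."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(n_train, int(n_train * fraction), replace=False)
    K_adjusted = max(1, round(K / fraction))  # e.g. K = 3 at 50% data -> K = 6
    return idx, K_adjusted
```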
Scaling to ImageNet and Efficiency
In small datasets, we set J in the Periodic Stop policy to the number of epochs used for training a baseline under the largest learning rate. On ImageNet, we shrink J to one third (i.e., J = 10) for earlier stopping and faster evaluation. We explore AutoGrow with K = 2 and K = 5 for both plain networks and ResNets. The results are shown in Table 9.
Figure 7: The comparison between AutoGrow and manual design [11] on ImageNet. The area of a marker is proportional to the size of the model. "basic" ("bottleneck") refers to ResNets with basic (bottleneck) residual blocks.
AutoGrow automatically finds a depth, and the resulting accuracy is higher than training the found net from scratch. The larger K = 5 enables AutoGrow to obtain a smaller DNN, trading off accuracy against model size (computation), while the smaller K = 2 achieves higher accuracy. The comparison of AutoGrow and manual design [11] in Figure 7 shows that AutoGrow achieves a better trade-off between accuracy and computation (measured by floating point operations). Table 10 summarizes the breakdown of wall-clock time in AutoGrow. The growing/searching time is as efficient as (and often more efficient than) fine-tuning the single discovered DNN. The scalability of AutoGrow comes from two intrinsic features: (1) it grows quickly with a short period K and stops immediately if no improvement is sensed; and (2) the network is small at the beginning of growing. Figure 8 plots the growing and converging curves for two DNNs in Table 10.
Figure 1: A simple example of AutoGrow.
Algorithm 1 (fragment): 14: # get the current growing sub-network; 15: f_i(·; W_i) = growingSub; 16: # stack a sub-module on top of f_i(·; W_i)
All DNN baselines are trained by SGD with momentum 0.9 using a staircase learning rate. The initial learning rate is 0.1 in ResNets and 0.01 in plain networks. On ImageNet, baselines are trained using batch size 256 for 90 epochs, with the learning rate decayed by 0.1× at epochs 30 and 60. On all other, smaller datasets, baselines are trained using batch size 128 for 200 epochs, with the learning rate decayed by 0.1× at epochs 100 and 150.
Figure 2: An optimization trajectory comparison between (a) Network Morphism and (b) training from scratch.
Figure 3: Optimization trajectories of AutoGrow, tested with Basic3ResNet on CIFAR10. (a) m-AutoGrow with a staircase learning rate and ZeroInit during growing; (b) m-AutoGrow with a constant learning rate and GauInit during growing; (c) p-AutoGrow with K = 50; and (d) p-AutoGrow with K = 3. For better illustration, the dots on the trajectory are plotted every 4, 20, 5 and 3 epochs in (a)-(d), respectively.
Figure 4: Convergence curves of p-AutoGrow on CIFAR10, with K = 3. The seed net is Basic3ResNet-1-1-1. Curves are sampled every K epochs.
Figure 5: AutoGrow vs. manual search obtained by training many baselines from scratch. The y-axis is accuracy and the x-axis is the number of parameters. The dataset is CIFAR10.
Figure 6: Loss surfaces around minima found by baselines and AutoGrow. The dataset is CIFAR10.
Figure 8: The convergence curves and growing process on ImageNet for (a) Basic4ResNet-9-3-6-4 and (b) Plain4Net-6-6-6-6 in Table 10.
Table 1: Network Morphism tested on CIFAR10.
net backbone | shallower | deeper  | initializer | accu % | ∆*
Basic3ResNet | 3-3-3     | 5-5-5   | ZeroInit    | 92.71  | -0.77
Basic3ResNet | 3-3-3     | 5-5-5   | AdamInit    | 92.82  | -0.66
Basic3ResNet | 5-5-5     | 9-9-9   | ZeroInit    | 93.64  | -0.27
Basic3ResNet | 5-5-5     | 9-9-9   | AdamInit    | 93.53  | -0.38
Basic4ResNet | 1-1-1-1   | 2-2-2-2 | ZeroInit    | 94.96  | -0.37
Basic4ResNet | 1-1-1-1   | 2-2-2-2 | AdamInit    | 95.17  | -0.16
* ∆ = (accuracy of Network Morphism) − (accuracy of training from scratch)
Table 2: Ablation study of m-AutoGrow.
dataset  | learning rate | initializer | found net† | accu % | ∆*
CIFAR10  | staircase     | ZeroInit    | 2-3-6      | 91.77  | -1.06
CIFAR10  | staircase     | AdamInit    | 3-4-3      | 92.21  | -0.59
CIFAR10  | constant      | ZeroInit    | 2-2-4      | 92.23  |  0.16
CIFAR10  | constant      | AdamInit    | 3-4-4      | 92.60  | -0.41
CIFAR10  | constant      | UniInit     | 3-4-4      | 92.93  | -0.08
CIFAR10  | constant      | GauInit     | 2-4-3      | 93.12  |  0.55
CIFAR100 | staircase     | ZeroInit    | 4-3-4      | 70.04  | -0.65
CIFAR100 | staircase     | AdamInit    | 3-3-3      | 69.85  | -0.65
CIFAR100 | constant      | ZeroInit    | 3-2-4      | 70.22  |  0.35
CIFAR100 | constant      | AdamInit    | 3-3-3      | 70.00  | -0.50
CIFAR100 | constant      | UniInit     | 4-4-3      | 70.39  |  0.36
CIFAR100 | constant      | GauInit     | 3-4-3      | 70.66  |  0.91
† Basic3ResNet
* ∆ = (accuracy of m-AutoGrow) − (accuracy of training from scratch)
Table 3: p-AutoGrow with different growing intervals K.
CIFAR10:
K  | found net† | accu %
50 | 6-5-3      | 92.95
20 | 7-7-7      | 93.26
10 | 19-19-19   | 93.46
5  | 23-22-22   | 93.98
3  | 42-42-42   | 94.27
1  | 77-76-76   | 94.30
CIFAR100:
K  | found net† | accu %
50 | 8-5-7      | 72.07
20 | 8-11-10    | 72.93
10 | 18-18-18   | 73.64
5  | 23-23-23   | 73.70
3  | 54-53-53   | 74.72
1  | 68-68-68   | 74.51
† Basic3ResNet
Table 4: p-AutoGrow under different initializers with K = 3 (CIFAR10).
initializer | found net† | accu %
ZeroInit    | 31-30-30   | 93.57
AdamInit    | 37-37-36   | 93.79
UniInit     | 28-28-28   | 93.82
GauInit     | 42-42-42   | 94.27
† Basic3ResNet
Table 5: p-AutoGrow with different seed architectures.
dataset | seed net† | found net†  | accuracy %
CIFAR10 | 1-1-1     | 42-42-42    | 94.27
CIFAR10 | 5-5-5     | 46-46-46    | 94.16
CIFAR10 | 1-1-1-1   | 22-22-22-22 | 95.49
CIFAR10 | 5-5-5-5   | 23-22-22-22 | 95.62
† Basic3ResNet or Basic4ResNet.
Table 6: The adaptability of AutoGrow to datasets.
net          | dataset      | found net   | accu % | ∆*
Basic3ResNet | CIFAR10      | 42-42-42    | 94.27  | -0.03
Basic3ResNet | CIFAR100     | 54-53-53    | 74.72  | -0.95
Basic3ResNet | SVHN         | 34-34-34    | 97.22  |  0.04
Basic3ResNet | FashionMNIST | 30-29-29    | 94.57  | -0.06
Basic3ResNet | MNIST        | 33-33-33    | 99.64  | -0.03
Basic4ResNet | CIFAR10      | 22-22-22-22 | 95.49  | -0.10
Basic4ResNet | CIFAR100     | 17-51-16-16 | 79.47  |  1.22
Basic4ResNet | SVHN         | 20-20-19-19 | 97.32  | -0.08
Basic4ResNet | FashionMNIST | 27-27-27-26 | 94.62  | -0.17
Basic4ResNet | MNIST        | 11-10-10-10 | 99.66  |  0.01
Plain3Net    | CIFAR10      | 23-22-22    | 90.82  |  6.49
Plain3Net    | CIFAR100     | 28-28-27    | 66.34  | 31.53
Plain3Net    | SVHN         | 36-35-35    | 96.79  | 77.20
Plain3Net    | FashionMNIST | 17-17-17    | 94.49  |  0.56
Plain3Net    | MNIST        | 20-20-20    | 99.66  |  0.12
Plain4Net    | CIFAR10      | 17-17-17-17 | 94.20  |  5.72
Plain4Net    | CIFAR100     | 16-15-15-15 | 73.91  | 29.34
Plain4Net    | SVHN         | 12-12-12-11 | 97.08  |  0.32
Plain4Net    | FashionMNIST | 13-13-13-13 | 94.47  |  0.72
Plain4Net    | MNIST        | 13-12-12-12 | 99.57  |  0.03
* ∆ = (accuracy of AutoGrow) − (accuracy of training from scratch)
Table 7: AutoGrow improves the accuracy of plain nets.
dataset | net                  | layer # | method            | accu %
CIFAR10 | Plain4Net-6-6-6-6    | 26      | baseline          | 93.90
CIFAR10 | Plain4Net-6-6-6-6    | 26      | AutoGrow (K = 30) | 95.17
CIFAR10 | Basic4ResNet-3-3-3-3 | 26      | baseline          | 95.33
CIFAR10 | Plain3Net-11-11-10   | 34      | baseline          | 90.45
CIFAR10 | Plain3Net-11-11-10   | 34      | AutoGrow (K = 50) | 93.13
CIFAR10 | Basic3ResNet-6-6-5   | 36      | baseline          | 93.60
Table 8: The adaptability of AutoGrow to dataset sizes.
Basic3ResNet on CIFAR10:
dataset size | found net | accu %
100%         | 42-42-42  | 94.27
75%          | 32-31-31  | 93.54
50%          | 17-17-17  | 91.34
25%          | 21-12-7   | 88.18
Basic4ResNet on CIFAR100:
dataset size | found net   | accu %
100%         | 17-51-16-16 | 79.47
75%          | 17-17-16-16 | 77.26
50%          | 12-12-12-11 | 72.91
25%          | 6-6-6-6     | 62.53
Plain3Net on MNIST:
dataset size | found net | accu %
100%         | 20-20-20  | 99.66
75%          | 12-12-12  | 99.54
50%          | 12-11-11  | 99.46
25%          | 10-9-9    | 99.33
Plain4Net on SVHN:
dataset size | found net   | accu %
100%         | 12-12-12-11 | 97.08
75%          | 9-9-9-9     | 96.71
50%          | 8-8-8-8     | 96.37
25%          | 5-5-5-5     | 95.68
Table 9: Scaling up to ImageNet.
net               | K | found net   | Top-1 | Top-5 | ∆ Top-1†
Basic4ResNet      | 2 | 12-12-11-11 | 76.28 | 92.79 | 0.43
Basic4ResNet      | 5 | 9-3-6-4     | 74.75 | 91.97 | 0.72
Bottleneck4ResNet | 2 | 6-6-6-17    | 77.99 | 93.91 | 0.83
Bottleneck4ResNet | 5 | 6-7-3-9     | 77.33 | 93.65 | 0.83
Plain4Net         | 2 | 6-6-6-6     | 71.22 | 90.08 | 0.70
Plain4Net         | 5 | 5-5-5-4     | 70.54 | 89.76 | 0.93
† ∆ = (Top-1 of AutoGrow) − (Top-1 of training from scratch)
Table 10: The efficiency of AutoGrow.
1 A sub-module can be one or more layers, e.g., a residual block.
2 Code: https://github.com/wenwei202/autogrow
References
[1] P. J. Angeline, G. M. Saunders, and J. B. Pollack. An evolutionary algorithm that constructs recurrent neural networks. IEEE Transactions on Neural Networks, 5(1):54-65, 1994.
[2] G. Bender, P.-J. Kindermans, B. Zoph, V. Vasudevan, and Q. Le. Understanding and simplifying one-shot architecture search. In International Conference on Machine Learning, pages 549-558, 2018.
[3] H. Cai, T. Chen, W. Zhang, Y. Yu, and J. Wang. Efficient architecture search by network transformation. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
[4] H. Cai, J. Yang, W. Zhang, S. Han, and Y. Yu. Path-level network transformation for efficient architecture search. arXiv preprint arXiv:1806.02639, 2018.
[5] T. Chen, I. Goodfellow, and J. Shlens. Net2Net: Accelerating learning via knowledge transfer. arXiv preprint arXiv:1511.05641, 2015.
[6] C. Cortes, X. Gonzalvo, V. Kuznetsov, M. Mohri, and S. Yang. AdaNet: Adaptive structural learning of artificial neural networks. In Proceedings of the 34th International Conference on Machine Learning, pages 874-883, 2017.
[7] X. Dai, H. Yin, and N. K. Jha. NeST: A neural network synthesis tool based on a grow-and-prune paradigm. arXiv preprint arXiv:1711.02017, 2017.
[8] T. Elsken, J.-H. Metzen, and F. Hutter. Simple and efficient architecture search for CNNs. In Workshop on Meta-Learning (MetaLearn 2017) at NIPS, 2017.
[9] J. Feng and T. Darrell. Learning the structure of deep convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 2749-2757, 2015.
[10] A. Gordon, E. Eban, O. Nachum, B. Chen, H. Wu, T.-J. Yang, and E. Choi. MorphNet: Fast & simple resource-constrained structure learning of deep networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1586-1595, 2018.
[11] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.
[12] Y. He, X. Zhang, and J. Sun. Channel pruning for accelerating very deep neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 1389-1397, 2017.
[13] G. Huang, S. Liu, L. van der Maaten, and K. Q. Weinberger. CondenseNet: An efficient DenseNet using learned group convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2752-2761, 2018.
[14] G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4700-4708, 2017.
[15] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[16] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105, 2012.
[17] V. Lebedev and V. Lempitsky. Fast ConvNets using group-wise brain damage. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2554-2564, 2016.
[18] H. Li, A. Kadav, I. Durdanovic, H. Samet, and H. P. Graf. Pruning filters for efficient ConvNets. arXiv preprint arXiv:1608.08710, 2016.
[19] H. Li, Z. Xu, G. Taylor, C. Studer, and T. Goldstein. Visualizing the loss landscape of neural nets. In Advances in Neural Information Processing Systems, pages 6391-6401, 2018.
[20] C. Liu, B. Zoph, M. Neumann, J. Shlens, W. Hua, L.-J. Li, L. Fei-Fei, A. Yuille, J. Huang, and K. Murphy. Progressive neural architecture search. In Proceedings of the European Conference on Computer Vision (ECCV), pages 19-34, 2018.
[21] H. Liu, K. Simonyan, O. Vinyals, C. Fernando, and K. Kavukcuoglu. Hierarchical representations for efficient architecture search. arXiv preprint arXiv:1711.00436, 2017.
[22] H. Liu, K. Simonyan, and Y. Yang. DARTS: Differentiable architecture search. arXiv preprint arXiv:1806.09055, 2018.
[23] Z. Liu, J. Li, Z. Shen, G. Huang, S. Yan, and C. Zhang. Learning efficient convolutional networks through network slimming. In Proceedings of the IEEE International Conference on Computer Vision, pages 2736-2744, 2017.
[24] J.-H. Luo, J. Wu, and W. Lin. ThiNet: A filter level pruning method for deep neural network compression. In Proceedings of the IEEE International Conference on Computer Vision, pages 5058-5066, 2017.
[25] R. Miikkulainen, J. Liang, E. Meyerson, A. Rawal, D. Fink, O. Francon, B. Raju, H. Shahrzad, A. Navruzyan, N. Duffy, et al. Evolving deep neural networks. In Artificial Intelligence in the Age of Neural Networks and Brain Computing, pages 293-312. Elsevier, 2019.
[26] H. Pham, M. Y. Guan, B. Zoph, Q. V. Le, and J. Dean. Efficient neural architecture search via parameter sharing. arXiv preprint arXiv:1802.03268, 2018.
[27] G. Philipp and J. G. Carbonell. Nonparametric neural networks. arXiv preprint arXiv:1712.05440, 2017.
[28] E. Real, S. Moore, A. Selle, S. Saxena, Y. L. Suematsu, J. Tan, Q. V. Le, and A. Kurakin. Large-scale evolution of image classifiers. In Proceedings of the 34th International Conference on Machine Learning, pages 2902-2911, 2017.
[29] S. Saxena and J. Verbeek. Convolutional neural fabrics. In Advances in Neural Information Processing Systems, pages 4053-4061, 2016.
[30] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[31] L. N. Smith, E. M. Hand, and T. Doster. Gradual dropin of layers to train very deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4763-4771, 2016.
[32] K. O. Stanley and R. Miikkulainen. Evolving neural networks through augmenting topologies. Evolutionary Computation, 10(2):99-127, 2002.
[33] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1-9, 2015.
[34] G. Wang, X. Xie, J. Lai, and J. Zhuo. Deep growing learning. In Proceedings of the IEEE International Conference on Computer Vision, pages 2812-2820, 2017.
[35] T. Wei, C. Wang, and C. W. Chen. Modularized morphing of neural networks. arXiv preprint arXiv:1701.03281, 2017.
[36] T. Wei, C. Wang, Y. Rui, and C. W. Chen. Network morphism. In International Conference on Machine Learning, pages 564-572, 2016.
[37] W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li. Learning structured sparsity in deep neural networks. In Advances in Neural Information Processing Systems, pages 2074-2082, 2016.
[38] J. Yoon, E. Yang, J. Lee, and S. J. Hwang. Lifelong learning with dynamically expandable networks. arXiv preprint arXiv:1708.01547, 2017.
[39] B. Zoph and Q. V. Le. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578, 2016.
[40] B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le. Learning transferable architectures for scalable image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8697-8710, 2018.
| [
"https://github.com/wenwei202/autogrow."
]
|
[
"Galaxy Dynamics from Edge-On Late-type Galaxies",
"Galaxy Dynamics from Edge-On Late-type Galaxies"
]
| [
"J J Dalcanton \nUniversity of Washington\nBox 35158098195SeattleWA\n",
"R A Bernstein \nCarnegie Observatories\n813 Santa Barbara St91101PasadenaCA\n"
]
| [
"University of Washington\nBox 35158098195SeattleWA",
"Carnegie Observatories\n813 Santa Barbara St91101PasadenaCA"
]
| [
"ASP Conference Series"
]
| We present first results of a program to study the dynamics of undisturbed bulgeless, low surface density disk galaxies in order to probe the underlying structure of dark matter halos. High resolution Hα rotation curves are combined with optical and infrared imaging to place strong limits on the halo profiles. We find noticeable variation in the shapes of the rotation curves, in contrast to previous claims. The implied density profiles are still significantly more shallow than profiles derived from most N-body simulations; unlike previous HI observations, beam-smearing cannot significantly affect this result. Based upon stellar mass profiles derived from K ′ band observations, we derive the angular momentum distribution of the stellar disk and find it to be broader than that of a uniformly rotating solid-body sphere, but remarkably consistent from galaxy-to-galaxy. Finally, based upon K ′ band surface brightness profiles, we find that low surface density disks must be significantly submaximal. Furthermore, maximal disk fits based upon Modified Newtonian Dynamics (MOND) have maximum mass-to-light ratios which are too small to be consistent with stellar population models; without the ability to significantly adjust inclination angles or infrared mass-to-light ratios, this sample presents great difficulties for MOND.
"https://export.arxiv.org/pdf/astro-ph/9910219v1.pdf"
]
| 9,157,808 | astro-ph/9910219 | 431e52bf6ab3e0c85e604df9b1f368950bfc937e |
Galaxy Dynamics from Edge-On Late-type Galaxies
1999
J J Dalcanton
University of Washington
Box 351580, Seattle, WA 98195
R A Bernstein
Carnegie Observatories
813 Santa Barbara St, Pasadena, CA 91101
Galaxy Dynamics from Edge-On Late-type Galaxies
ASP Conference Series
1999, Galaxy Dynamics: from the Early Universe to the Present
Although the internal structure of dark matter halos is an extremely important test of cosmological theories, few secure observational constraints currently exist. In disk galaxies, the structure of the halo is best explored with rotation curves. However, both the luminous and dark matter contribute significantly to the enclosed mass, disguising the dynamics of the dark matter halo, and altering its structure as well. Therefore, while rotation curves are the most sensitive dynamical indicators of a galaxy's total mass distribution, they are a poor measure of the dark matter profile alone.
More direct probes of the dark matter are provided by low surface brightness galaxies (LSBs). There is strong dynamical evidence that LSBs have low baryonic surface density, and thus the disk contributes little to the dynamics of the galaxy, and the resulting rotation curve is dominated by the dark halo; the few LSB rotation curves published to date rise remarkably slowly, becoming asymptotically flat only at several disk scale lengths (Goad & Roberts 1981; de Blok et al. 1996; Makarov et al. 1997; van Zee et al. 1997; van der Hulst et al. 1993). Furthermore, LSBs span a wide range in mass, and thus can be used to trace systematic variations in the shapes of dark matter halos as a function of mass.
Figure 1: R band image (top) and extracted rotation curve (bottom) for a galaxy from the sample. All plots have been scaled to the same horizontal scale. Open symbols represent points of substantially lower signal-to-noise. In this preliminary work, rotation speed is currently extracted as the center of the Hα line, not at the extrema as would be proper for edge-on galaxies. However, as the line widths are within ±3 km/s of the instrumental line widths, we expect little change when the proper rotation curves are extracted. Furthermore, for slowly rising rotation curves, our simulations have shown that using line centers as a measure of the rotation curve changes the rotation curve by less than 5%.
We have been pursuing a study of LSB dynamics using galaxies selected from the Flat Galaxy Catalog (Karanchetsev et al. 1993), a large sample of edge-on galaxies (a/b ≥ 7, a > 0.6 ′ ). We have selected 50 galaxies which appear to have low surface brightnesses when seen edge-on; because these galaxies are optically thin, their face-on central surface brightnesses will be even lower. We also required the galaxies to be completely bulgeless and undisturbed (i.e. no warps or gross asymmetries).
We obtained high resolution (∼ 1−1.5′′) Hα rotation curves for a subset of 35 of these galaxies. The rotation curves accurately probe the dynamics of the galaxies to very small radii, and at high resolution (0.1-0.5 kpc for the majority of our sample). An example image and rotation curve are shown in Figure 1. We have imaged the sample in B, R, and K′, and confirmed that all have extremely low surface brightnesses, in spite of having maximum rotation speeds up to 250 km/s; the median K-band surface brightness of our sample is more than 2 magnitudes per square arcsecond fainter than the median of the de Jong (1995) sample of face-on spirals. The majority of these galaxies are extremely blue (R − K < 2.5), lack dust lanes, and lie on the B-band Tully-Fisher relationship for low luminosity galaxies of Stil (1999), all of which suggests that extinction is not a significant problem for the majority of the sample. Roughly 8 galaxies which have R − K > 2.5 have been eliminated from further analysis, to alleviate any concerns about extinction affecting the rotation curves.
The addition of infrared imaging to our sample allows us to accurately subtract the mass of the stellar disk (M_*) from our measured rotation curves, given the insensitivity of the K-band mass-to-light ratio to variations in star-formation history. We note, however, that HI is the largest baryonic contribution to the observed rotation curve; we find M_HI/M_* ∼ 1-4 for our sample. The rotation curves are indeed dark matter dominated (M_dark/M_baryonic ∼ 3.5 at the last measured point).
There has been considerable attention paid in the literature to the apparent contradiction between N-body predictions of steeply rising rotation curves (cf. Navarro et al. 1997; Moore 1999; but see Kravstov et al. 1998) and HI observations of slowly rising rotation curves for dwarf and LSB galaxies (cf. de Blok et al. 1997). While there seem to be significant conflicts for dwarf galaxies, the existing data on LSB galaxies was based upon relatively low resolution HI synthesis observations, and derived dark matter core radii which were comparable to the resolution of the beam. Kravstov et al. (1998) have used the same data to argue that all LSBs have similar rotation curves and a self-similar halo profile. In the left panel of Figure 2 we plot the comparable data for our sample of high-resolution Hα rotation curves. Note that there is considerable scatter (±20%). The scatter also masks significant variations in the shape of the rotation curves, as can be seen from the density profiles derived from the rotation curves in Figure 2. Note also that the discrepancy between LSB observations and the steep cusps predicted by simulations persists at high resolution, but that the central rotation curves are somewhat steeper than the fits of Kravstov et al. (1998).
Figure 2: All extracted rotation curves are shown in the left panel, scaled by fits to the model suggested by Kravstov et al. (1998). The heavy line is the "universal" rotation curve found by Kravstov et al. Considerably more scatter is found in the high-resolution data than in the HI data plotted by Kravstov et al. (1998), where most of the data points fall within ±20% of their preferred fit. Some of this scatter is due to the low extinction of the galaxies, however (see Figure 1). The implied density profiles are plotted in the right panel. The heaviest solid line is the NFW profile, and the second heaviest line is the shallower fit of Kravstov et al. No baryonic component has been subtracted from these plots, and hence the central density profile will become shallower. No galaxies with R − K > 2.5 have been included, to avoid problems with extinction; 60% of the galaxies remaining have R − K < 2.0. Note also that the scaling by V_s and R_s in the left-hand figure masks the large variation in halo density and inner profile shape implied by the density profiles in the right-hand figure.
One common feature of models of disk galaxy formation (see Mo, this volume) is the assumption of detailed angular momentum conservation for the collapse of a sphere of gas in solid body rotation (Crampin & Hoyle 1964). We test this assumption in the left panel of Figure 3, where we plot the angular momentum distributions of the stellar disk, derived from the K′ observations and the rotation curve. The distributions are remarkably similar, although the data span a factor of nearly 5 in rotation speed. The distributions are also broader than that of a sharp-edged sphere, as would be expected for a smoother initial overdensity. Once the distribution of HI is known, we can calculate the full baryonic angular momentum distribution. A sketch of this computation follows.
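As a hedged illustration (our sketch, not the authors' code), the distribution of stellar mass over specific angular momentum j = R V(R) can be computed numerically from an exponential-disk fit with scale length h and a fitted rotation curve V(R):

```python
import numpy as np

def ang_mom_distribution(h, V, R_max=10.0, n=2000):
    R = np.linspace(1e-3, R_max, n)             # radius (e.g., kpc)
    sigma = np.exp(-R / h)                      # exponential-disk surface density
    dM = 2.0 * np.pi * R * sigma                # unnormalized mass per annulus
    j = R * V(R)                                # specific angular momentum
    order = np.argsort(j)
    mass_cdf = np.cumsum(dM[order]) / dM.sum()  # cumulative mass M(< j)
    return j[order], mass_cdf

# Example with an illustrative slowly rising rotation curve:
# j, cdf = ang_mom_distribution(h=2.0, V=lambda R: 200.0 * (1 - np.exp(-R / 3.0)))
```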
Finally, we can use the robustness of K′ mass-to-light ratios (Υ_K′) to measure the mass contribution which baryonic disks make to the overall dynamics.
Figure 3: [LEFT] Specific angular momentum distributions, based upon exponential disk fits to the K′ images and the fitted rotation curves. All curves have been scaled to j_1/2, the specific angular momentum containing half the mass, and to the same total disk mass. The heavy curve is the distribution expected for a sphere in solid body rotation, as is often assumed in disk formation models. HI has not been included. [RIGHT] Maximal mass-to-light ratios in K′ for the stellar disk under Newtonian gravity (upper) and MOND (lower). The right panels show the expected M/L for a Scalo IMF and constant star formation, for different metallicities (Bruzual & Charlot 1999). Under Newtonian gravity, disks become systematically sub-maximal with decreasing surface brightness; the maximal disk value of Υ_K′ is too high to be consistent with stellar populations. Under MOND, the stellar mass-to-light ratios are too small to be consistent with stellar population models. Including HI will make these limits more severe.
In the right panel of Figure 3, we show Υ_K′ derived from maximum disk fits to the rotation curve. The upper panel shows that disks become progressively "sub-maximal" at decreasing mass surface density. We have repeated this exercise for MOND dynamics, and find that the derived values of Υ_K′ are too low to be consistent with reasonable star formation histories and normal IMFs. The contribution from HI has not been included in these fits, and will further reduce the allowed values of Υ_K′, making the limits for MOND more stringent.
Acknowledgments. We thank the staff of Carnegie Observatories for the generous allocations of telescope time which have made this project possible, and Frank van den Bosch and Ben Weiner for interesting discussions.
References
Bruzual, G., & Charlot, S. 1999, in preparation.
Crampin, D. J., & Hoyle, F. 1964, ApJ, 140, 99.
de Blok, W. J. G., McGaugh, S. S., & van der Hulst, J. M. 1996, MNRAS, 283, 18.
Goad, J. W., & Roberts, M. S. 1981, ApJ, 250, 79.
Kravstov, A. V., Klypin, A. A., Bullock, J. S., & Primack, J. R. 1998, ApJ, 502, 48.
Makarov, D. I., Karanchentsev, I. D., Burenkov, A. N., Tyurina, N. V., & Korotkova, G. G. 1997, Astron. Lett., 23, 638.
Moore, B., Governato, F., Quinn, T., Stadel, J., & Lake, G. 1998, ApJ, 499, L5.
Navarro, J. F., Frenk, C. S., & White, S. D. M. 1997, ApJ, 490, 493.
Stil, J. 1999, Ph.D. Thesis, Kapteyn Instituut.
| []
|
[
"Springer Nature 2021 L A T E X template Joint Multivariate and Functional Modeling for Plant Traits and Reflectances",
"Springer Nature 2021 L A T E X template Joint Multivariate and Functional Modeling for Plant Traits and Reflectances"
]
| [
"Philip A White \nDepartment of Statistics\nBrigham Young University\n84602ProvoUtahUSA\n",
"Michael F Christensen [email protected] \nDepartment of Statistical Science\nDuke University\n27708DurhamUtahUSA\n",
"Henry Frye [email protected] \nDepartment of Ecology and Evolutionary Biology\nUniversity of Connecticut\n06269StorrsConnecticutUSA\n",
"Alan E Gelfand \nDepartment of Statistical Science\nDuke University\n27708DurhamUtahUSA\n",
"John A Silander [email protected] \nDepartment of Ecology and Evolutionary Biology\nUniversity of Connecticut\n06269StorrsConnecticutUSA\n"
]
| [
"Department of Statistics\nBrigham Young University\n84602ProvoUtahUSA",
"Department of Statistical Science\nDuke University\n27708DurhamUtahUSA",
"Department of Ecology and Evolutionary Biology\nUniversity of Connecticut\n06269StorrsConnecticutUSA",
"Department of Statistical Science\nDuke University\n27708DurhamUtahUSA",
"Department of Ecology and Evolutionary Biology\nUniversity of Connecticut\n06269StorrsConnecticutUSA"
]
| []
| The investigation of leaf-level traits in response to varying environmental conditions has immense importance for understanding plant ecology. Remote sensing technology enables measurement of the reflectance of plants to make inferences about underlying traits along environmental gradients. While much focus has been placed on understanding how reflectance and traits are related at the leaf-level, the challenge of modelling the dependence of this relationship along environmental gradients has limited this line of inquiry. Here, we take up the problem of jointly modeling traits and reflectance given environment. Our objective is to assess not only response to environmental regressors but also dependence between trait levels and the reflectance spectrum in the context of this regression. This leads to joint modeling of a response vector of traits with reflectance arising as a functional response over the wavelength spectrum. To conduct this investigation, we employ a dataset from a global biodiversity hotspot, the Greater Cape Floristic Region in South Africa. | null | [
"https://export.arxiv.org/pdf/2210.00409v1.pdf"
]
| 252,683,933 | 2210.00409 | 1405d2c5f5e61090c27168d241a962c9dd50f7d5 |
Joint Multivariate and Functional Modeling for Plant Traits and Reflectances
Philip A White
Department of Statistics
Brigham Young University
Provo, Utah 84602, USA
Michael F Christensen [email protected]
Department of Statistical Science
Duke University
Durham, North Carolina 27708, USA
Henry Frye [email protected]
Department of Ecology and Evolutionary Biology
University of Connecticut
Storrs, Connecticut 06269, USA
Alan E Gelfand
Department of Statistical Science
Duke University
Durham, North Carolina 27708, USA
John A Silander [email protected]
Department of Ecology and Evolutionary Biology
University of Connecticut
Storrs, Connecticut 06269, USA
conditional model validation · dimension reduction · functional data · Gaussian process convolution · Markov chain Monte Carlo · multivariate data
Introduction
Terrestrial ecosystems are reliant on the diversity and composition of plant species present in a community (O'Connor et al., 2017). Each species is comprised of a unique set of traits; these phenotypic characteristics include leaf size and shape, photosynthetic function, water content of leaves, and nutrient levels in leaves. The study of plant traits can provide insight into organismal function, plant-environment interactions, species coexistence and community dynamics, ecosystem structure and function, and biogeography and diversification. Thus, information on the diversity and abundance of plant traits can help us understand and predict complex ecological processes and responses to global change (Reich et al., 1997;Díaz and Cabido, 2001;Cadotte et al., 2011). Ecologists are particularly interested in how plant traits vary along spatial-environmental gradients since this can yield insights into the underlying principles of how communities originate and the convergence of survival strategies that plants evolved in response to their environment (Reich et al., 1999;Mcgill et al., 2006). These relationships provide a way to infer how ecosystems may change under novel environments, an important need as the rate human-driven environmental change increases (Schleuning et al., 2020).
Remote sensing provides an efficient and powerful way to measure plant traits and diversity at regional extents (Turner, 2014;Cavender-Bares et al., 2022). Image spectroscopy, which measures reflectance at high wavelength resolution across the reflectance spectrum, has been a prominent tool in predicting plant traits from remotely sensed imagery (Asner et al., 2011;Singh et al., 2015;Shiklomanov et al., 2016;Yang et al., 2016). At the leaf level, various wavelengths can be useful in predicting a suite of chemical, structural, and physiological traits (see Jacquemoud and Ustin, 2019a, and references therein) and the diversity of leaf-level reflectance within a community has been shown to be correlated with community plant diversity (Schweiger et al., 2018;Frye et al., 2021). Further, leaf-level spectra provide the basis for understanding mechanisms occurring at larger scales observed by remote sensing instruments flown aerially and in space.
There are still many gaps in our understanding of the relationship between plant optical properties and traits (Schimel et al., 2015;Jetz et al., 2016). One of these gaps is an explicit understanding of how traits and reflectance respond in tandem to environmental conditions. Because the relationship between reflectance and traits is an important crux of modern remote sensing efforts, their dependency over different environments is a major question to be explored. However, the high dimensionality of spectral data alongside the co-correlation of individual wavelengths and leaf traits poses a methodological hurdle. We offer novel joint modeling of traits and reflectances given environmental/habitat features. We consider joint modeling of two data types: multivariate continuous traits and functional reflectance data obtained at high wavelength resolution. We focus on understanding the effects of environmental regressors on plant traits and the reflectance spectrum at leaf-level, as well as the relationships between traits and reflectance, captured through correlations, under this regression. Modeling such relationships requires a multivariate and functional response to regressors, as well as a model that relates these responses. Traits may, in fact, be ordinal or categorical, e.g., the degree or state of leaf pubescence or waxiness, but consideration of such traits is beyond our scope here.
Conceptually, we can build trait/reflectance models over different taxonomic scales, e.g., family, genus, species. Here, we work at family scale to obtain the largest sample size of trait/reflectance data. This is needed in order to best understand the very large number of correlations of interest, i.e., four traits by 500 wavelength bands, across O(10^2) sites, each with individual environmental features. Thus, replicates are simply all of the observations available for the family in our database.
We remark that joint modeling of families, e.g., family level random effects, is not suitable because this implies a "global centering" of the families and borrowing strength across families. Exploratory analysis below suggests that, in our context, trait behavior across families is sufficiently different so that such shrinkage is not appropriate and that global parameters over families are not meaningful. Furthermore, different families have very different numbers of genera and species, further complicating joint interpretation. While analysis at a higher taxonomic scale is not appropriate with our data, our approach is applicable at higher scale and would enable potentially richer dependence stories.
We acknowledge that spatial dependence across sites is anticipated in joint modeling (White et al., 2022), where, with different intentions, White et al. (2022) focused on a marginal functional data model in a spatial setting. However, in attempting to obtain the needed large sample sizes for each of the families, we span regions that are disjoint and too spatially distant to employ sensible spatial modeling specifications. So, the observations are assumed to be conditionally independent. A fully spatial version, applied to an appropriate region, is a goal of future work.
Our primary contribution is jointly modeling a trait vector, T, and a functional reflectance spectrum, R, given environment/habitat features, E. Specifically, our model uses a joint multivariate and function-on-scalar regression, after which we can extract the residual association between T and R given E. We prefer a joint specification of the form [T, R|E] to a conditional-times-marginal specification, [T|R, E][R|E], since the former directly reveals how we capture association in the residuals between traits and reflectances at the replicate level. Specifically, we directly model the correlation between R and T through a joint model for the coefficients of the functional bases and the trait residuals. Our model also provides functional heterogeneity and heteroscedasticity. An illustrative sketch of this correlation mechanism follows.
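As an illustrative simulation sketch (assumed dimensions and a random covariance, not the fitted model), a joint Gaussian over the four trait residuals and the functional basis coefficients induces a full trait-by-wavelength residual cross-correlation:

```python
import numpy as np

rng = np.random.default_rng(0)
p, K = 4, 10                                   # traits; basis functions (assumed)
w = np.linspace(450, 949, 500)                 # wavelengths (nm)
knots = np.linspace(450, 949, K)
Phi = np.exp(-0.5 * ((w[:, None] - knots[None, :]) / 60.0) ** 2)  # kernel basis

A = rng.normal(size=(p + K, p + K))
Sigma = A @ A.T / (p + K)                      # a random positive-definite joint covariance
z = rng.multivariate_normal(np.zeros(p + K), Sigma, size=2000)
T_res, C = z[:, :p], z[:, p:]                  # trait residuals, basis coefficients
R_res = C @ Phi.T                              # residual reflectance curves (2000 x 500)

# Implied cross-correlation between each trait and reflectance at each wavelength:
cross_corr = np.corrcoef(np.hstack([T_res, R_res]).T)[:p, p:]   # shape (4, 500)
```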
Turning to model assessment, under our specifications, we are assuming dependence among the traits, dependence across the reflectance spectrum, and dependence between traits and reflectances. So, model comparison should be based upon conditional prediction. We demonstrate improved out-of-sample prediction under the dependence model vs. an independence model given partial information at a site, i.e., when predicting traits or reflectances which we didn't collect at the site.
Multivariate modeling in ecology is well established (see, e.g., Schliep and Hoeting, 2013; Clark et al., 2017). In particular, modeling the joint patterns of plant traits improves prediction (see Schliep et al., 2018). Functional data analysis (FDA) is widely used to represent curves with continuous domains (see Ramsay, 2005; Ramsay and Silverman, 2007, for pioneering work in the field). In general, FDA relies on representing the function through a low-rank representation (e.g., splines, wavelets, or kernels). Our challenge is relating multivariate data response specifications for traits to functional data response specifications for reflectance to allow relational inference between the responses. A minimal sketch of such a low-rank representation is given below.
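A minimal sketch of a low-rank representation for a single observed log-reflectance vector, using a Gaussian-kernel basis fit by least squares (the number of kernels and the bandwidth here are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def fit_kernel_basis(R_log, w, K=15, bw=40.0):
    """Project a 500-dim log-reflectance curve onto K Gaussian kernels."""
    knots = np.linspace(w.min(), w.max(), K)
    Phi = np.exp(-0.5 * ((w[:, None] - knots[None, :]) / bw) ** 2)
    coef, *_ = np.linalg.lstsq(Phi, R_log, rcond=None)
    return coef, Phi @ coef   # basis coefficients and the smoothed curve
```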
We use our modelling framework to elaborate upon the joint T, R, and E relationships for four families within the Greater Cape Floristic Region (GCFR) of South Africa. The GCFR is of special importance to global biodiversity as it contains two adjacent global biodiversity hotspots, the Fynbos and Succulent Karoo biomes. Such biodiversity hotspots are important to global biodiversity conservation because they are regions that not only contain large numbers of species, but also have many species that are not found anywhere else on Earth (Myers et al., 2000;Latimer et al., 2005;Born et al., 2006). The four families we focus our analysis on-the Aizoaceae, Asteraceae, Proteaceae, and Restionaceae-are iconic families found throughout the GCFR and are comprised of large species radiations (Manning and Goldblatt, 2012;Manning, 2013). We focus on four leaf traits-leaf water content, leaf mass per area, percent nitrogen, and succulence (water content/leaf area)-as these are both commonly used traits in the trait ecology and remote sensing literature and represent major evolutionary strategies among plants (see Wright et al., 2004;Jacquemoud and Ustin, 2019b, and references therein).
We continue the paper with a presentation of the dataset and an exploratory data analysis that motivates our modeling in Section 2. Based on the data characteristics demonstrated in this exploratory analysis, we present a joint model for multivariate traits and functional reflectance data in Section 3. We then offer interpretation of our results for each of the four families in Section 4 and conclude with a brief summary and potential future work.
The Dataset
We model plant trait and reflectance data at the family level, fitting each of the families Aizoaceae, Asteraceae, Proteaceae, and Restionaceae individually. These four families are generally speciose in the Greater Cape Floristic Region (GCFR) in South Africa, though the Proteaceae and Restionaceae are less prevalent in the arid regions of the GCFR (Manning and Goldblatt, 2012; Manning, 2013). In Table 1, we provide the number of times each family is observed in the dataset as well as the number of genera and species within each family. We see that the families differ dramatically in how often they appear, as well as in how many genera and species appear for each family. Although some plant families have similar spatial ranges, they are not generally co-located (see Figure 1). We analyze four continuous leaf traits, leaf water content (LWC), leaf mass per area (LMA), percent nitrogen (pN), and leaf succulence (LS), along with leaf reflectance as a function of wavelength $w \in [450, 950]$ nanometers, observed as a 500-dimensional reflectance vector at one-nanometer resolution. Leaf reflectance was measured from sun leaves collected from the top of the canopy using a USB-4000 Spectrometer (Ocean Optics, Largo, Florida, USA) with a leaf clip attachment. For further details on spectra and trait data collection, see Frye et al. (2021) and Aiello-Lammens et al. (2017), respectively. In some cases, at a site, for a given species within a family, we have more than one trait observation and/or reflectance observation. Again, viewing the sites as replicates, such observations are averaged to obtain a single T value and a single R value for the site. However, there may be multiple replicates at a site because more than one species is sampled at many sites.
We analyze traits and reflectance on the log scale, a standard transformation that improves the assumptions of our Gaussian model (discussed in Section 3 below). We define the response as $y_j = (T_j, R_j(w))$, where $T_j$ is a vector of four plant traits (on the log scale) and $R_j(w)$ is a log-reflectance function. We model $R_j(w)$ as a random function where, again, $R_j(w)$ is observed at 500 wavelengths at one nanometer (nm) spacing between 450-949 nm. For both traits and reflectance, we use $j$ to index replication within family.
To visualize the overall patterns present in the data, we plot all log reflectances by family in Figure 2, including the family-specific mean. All families show some similarities, with relatively low reflectance for blue (450-500 nm) and some red wavelengths (600-675 nm), and a local maximum around 550 nm (green). Reflectance increases in the red (around 700 nm) and remains uniformly high in the near-infrared (740-949 nm). In Figure 2, we also include box plots of the observed log plant traits for each family. Although traits and reflectances are similar across families, each family shows different amounts of heterogeneity. To jointly explain plant traits and reflectance, we use four environmental covariates in $E_j$: (i) elevation (Elevation30m), (ii) annual precipitation (Gmap), (iii) rainfall concentration (RFL CONC), and (iv) minimum average temperature in January, the peak of the austral summer (tminave 01c). Elevation data were derived from 30 m resolution digital elevation maps (JPL, 2020), while the other climate variables were taken from Schulze (1997). We also introduce the family-level plant abundance at the site of the jth replicate as an explanatory variable. Though not a customary environmental predictor, we include it in $E_j$ to play the role of a proxy for site-level environmental suitability for the family. That is, the other environmental regressors above operate at larger spatial scales, and we seek to supply a more local regressor.
As a measure of suitability, we define abundance for a family at a site as the aggregated percent cover of all species in that family at the site. Due to some misalignment between the sites in this analysis and sites with available percent cover, we estimate percent cover using ordinary kriging on the scale of log(x+1) to yield predictions on [0, ∞) as well as to deal with many zeros. Specifically, we use an exponential covariance function with parameters estimated from empirical semivariograms.
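To make the abundance construction concrete, the following is a minimal sketch of ordinary kriging with an exponential covariance on the log(x+1) scale. The function names and the sill and range values are illustrative placeholders (the paper estimates these from empirical semivariograms), and the naive expm1 back-transform ignores the usual lognormal bias correction; this is not the authors' implementation.

```python
import numpy as np

def exp_cov(d, sill, rng_par):
    # Exponential covariance: C(d) = sill * exp(-d / range)
    return sill * np.exp(-d / rng_par)

def ordinary_krige(coords_obs, z_obs, coords_new, sill=1.0, rng_par=50.0):
    """Ordinary kriging of z_obs = log(percent cover + 1) at new sites."""
    n = len(z_obs)
    d_obs = np.linalg.norm(coords_obs[:, None, :] - coords_obs[None, :, :], axis=-1)
    # Kriging system in covariance form, with a Lagrange multiplier
    # enforcing the unbiasedness constraint (weights sum to one).
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = exp_cov(d_obs, sill, rng_par)
    K[:n, n] = 1.0
    K[n, :n] = 1.0
    preds = np.empty(len(coords_new))
    for i, s0 in enumerate(coords_new):
        d0 = np.linalg.norm(coords_obs - s0, axis=1)
        k = np.append(exp_cov(d0, sill, rng_par), 1.0)
        w = np.linalg.solve(K, k)
        preds[i] = w[:n] @ z_obs
    return np.expm1(preds)  # back to the [0, inf) percent-cover scale
```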
To motivate our analysis, we calculate the empirical correlation between the environmental variables and each log trait and log reflectance for each family. We plot these correlations in Figure 3.

Fig. 3 Empirical correlations between environmental predictors and (top) themselves, (middle) log traits, and (bottom) log reflectance. The results are presented for each plant family. The labels "AI", "AS", "P", and "R" represent Aizoaceae, Asteraceae, Proteaceae, and Restionaceae, respectively.

Apart from a few exceptions, most correlations between $E_j$ and $T_j$ are weak, not surprising given only modest correlations between traits and environment within the region (Mitchell et al., 2015; Aiello-Lammens et al., 2017) and at global and other local scales (Wright et al., 2004; Wright and Sutton-Grier, 2012). Importantly, there is very little common correlation pattern shared across families, expected given the large differences in growth form and likely ecological strategies that each lineage has evolved. As with traits, examining the empirical correlations between the environmental variables and the log reflectance curves reveals little common pattern between the families. However, importantly, there are evident differences in the correlations between reflectance and environment over the wavelength spectrum, suggesting the need for wavelength-varying coefficients in functional modeling of reflectance as a response. We present the empirical correlations between log traits and log reflectance in Figure 4.

Fig. 4 Empirical correlations between traits and log reflectances for each family. The labels "AI", "AS", "P", and "R" represent Aizoaceae, Asteraceae, Proteaceae, and Restionaceae, respectively.

There are few similarities in the trait-reflectance correlations among the families. In general, the families with more data have weaker correlations between traits and reflectances. Families with more genera and species likely represent speciation "hot-beds", e.g., see Verboom et al. (2009), Pirie et al. (2016), and Mitchell et al. (2017), where we would expect that the number of species would result in greater variability of traits and reflectance within lineages. It has been shown that higher within-group trait variation dilutes trait associations (Laughlin et al., 2017; Anderegg et al., 2018), and thus we may anticipate a similar effect for trait and reflectance relationships. Again, our goal is to assess the strength of these trait/reflectance relationships while accounting for environment. Our exploratory analysis shows that the relationships between traits, reflectance, and environmental predictors (see Figures 3 and 4) differ greatly across families, both in shape and magnitude. Further, in preliminary modeling efforts, we found no benefit to modeling the families jointly. Therefore, as noted above, we model each family separately.
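For readers wishing to reproduce the exploratory summaries, a minimal sketch of the empirical correlations behind Figures 3 and 4 follows. The array names, shapes, and random stand-in data are assumptions about how one might arrange the inputs, not the authors' code.

```python
import numpy as np

def colwise_corr(A, B):
    """Pearson correlations between every column of A and every column of B."""
    A = (A - A.mean(axis=0)) / A.std(axis=0)
    B = (B - B.mean(axis=0)) / B.std(axis=0)
    return A.T @ B / A.shape[0]

# Illustrative shapes for one family: n replicates, p covariates,
# s = 4 log traits, and 500 log-reflectance wavelengths.
rng = np.random.default_rng(0)
n, p = 120, 5
E = rng.normal(size=(n, p))    # stand-in environmental covariates
T = rng.normal(size=(n, 4))    # stand-in log traits
R = rng.normal(size=(n, 500))  # stand-in log reflectances

corr_ET = colwise_corr(E, T)   # (p, 4): Figure 3 (middle)
corr_ER = colwise_corr(E, R)   # (p, 500): Figure 3 (bottom)
corr_TR = colwise_corr(T, R)   # (4, 500): Figure 4
```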
Model and Methods
The joint specification
Since we model the families individually, we need only subscript the replicates within a family. So, consider the vector $T_j$, an $s \times 1$ vector of trait responses for replicate $j$. We assume, after the log transformation, that this vector can be modeled to follow a multivariate normal distribution. With a binary or categorical response we would view the corresponding entry in $T_j$ as latent, driving the observed response. We model $T_j$ as
$$T_j = \alpha^{(T)} + B^{(T)} E_j + U_j^{(T)}. \tag{1}$$
Here, $\alpha^{(T)}$ is a vector of trait-specific intercepts, $E_j$ is a vector of environmental predictors, say $p \times 1$, for replicate $j$, and $B^{(T)}$ is an $s \times p$ matrix of trait-specific regression coefficients. The $U_j^{(T)}$ are pure errors, i.i.d. $\sim MVN(0, \Omega^{(T)})$.
. We model the reflectance as a functional response variable, observed at 500 wavelengths, with wavelengths denoted by w's. Specifically,
$$R_j(w) = \alpha^{(R)}(w) + E_j^\top \beta^{(R)}(w) + K_U(w)\, U_j^{(R)} + \psi_j(w). \tag{2}$$
Here, $\alpha^{(R)}(w) = K_\alpha(w)\, \alpha^{*(R)}$ is a wavelength-varying intercept, where dimension reduction is given through $l$ basis functions in $K_\alpha(w)$. $E_j$ is, as above, paired with the $p \times 1$ wavelength-specific coefficient vector $\beta^{(R)}(w) \equiv B^{(R)} K_\beta(w)$. That is, we imagine $K_\beta(w)$ as an $m \times 1$ vector of basis functions or convolution functions, with $B^{(R)}$ a $p \times m$ matrix of coefficients providing dimension reduction. Aggregating over wavelengths, $\beta^{(R)}_{p \times 500} = B^{(R)} K_\beta^\top$, where $K_\beta$ is $500 \times m$. Further, again using dimension reduction, the term $K_U(w)\, U_j^{(R)}$ introduces the $q \times 1$ replicate-level vector $U_j^{(R)}$ of reflectance random effects to supplement the fixed effects contribution. These random effects vectors adopt $q \ll 500$ to provide a dimension reduction for the reflectances. More will be said about the choice of $q$ below. The $K_U(w)$ can be collected into a $500 \times q$ matrix, $K_U$. Finally, the $\psi_j(w)$ provide independent wavelength-specific pure error terms with variances $\sigma^2(w)$. We model $\log(\sigma^2(w)) = K_\sigma \gamma_\sigma$ as a linear spline, as in White et al. (2022). The entire reflectance response vector for replicate $j$ becomes
$$R_j = \alpha^{(R)} + K_\beta B^{(R)\top} E_j + K_U U_j^{(R)} + \Psi_j.$$
We introduce dependence between the traits and reflectances, the T 's and the R's, through the U 's. That is, the dependence is at the replicate level.
Specifically, we assume $\bigl(U_j^{(T)\top}, U_j^{(R)\top}\bigr)^\top$ is distributed as a mean-zero multivariate normal with covariance matrix
$$\Omega = \begin{pmatrix} \Omega^{(T)} & \Omega^{(TR)} \\ \Omega^{(TR)\top} & \Omega^{(R)} \end{pmatrix}.$$
As a result, the induced covariance matrix for $\bigl(T_j^\top, R_j^\top\bigr)^\top$ becomes
$$\Sigma = \begin{pmatrix} \Omega^{(T)} & \Omega^{(TR)} K_U^\top \\ K_U \Omega^{(TR)\top} & K_U \Omega^{(R)} K_U^\top + D_\psi \end{pmatrix},$$
where $D_\psi$ is the diagonal matrix of pure-error reflectance variances. From this matrix we can extract all covariances and correlations. If $\Omega^{(TR)}$ is a matrix of zeros, then we have an independence model, which we denote as [T|E][R|E].
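As a check on the algebra, the induced covariance $\Sigma$ and the trait-reflectance correlation curves it yields can be assembled directly. The following minimal sketch uses randomly generated stand-ins for $\Omega$, $K_U$, and $D_\psi$ rather than fitted quantities.

```python
import numpy as np

# Illustrative dimensions: s traits, q reflectance random effects, 500 wavelengths.
s, q, W = 4, 22, 500
rng = np.random.default_rng(0)

# Build a valid joint covariance Omega for (U^(T), U^(R)) by construction.
A = rng.normal(size=(s + q, s + q))
Omega = A @ A.T
Omega_T, Omega_TR, Omega_R = Omega[:s, :s], Omega[:s, s:], Omega[s:, s:]

K_U = rng.normal(size=(W, q))               # stand-in for the kernel basis K_U
D_psi = np.diag(rng.uniform(0.01, 0.1, W))  # pure-error reflectance variances

# Induced covariance of (T_j, R_j), following the block structure in the text.
Sigma = np.block([
    [Omega_T,          Omega_TR @ K_U.T],
    [K_U @ Omega_TR.T, K_U @ Omega_R @ K_U.T + D_psi],
])

# Trait-reflectance correlation curves (the quantity plotted in Figure 5).
sd = np.sqrt(np.diag(Sigma))
corr_TR = Sigma[:s, s:] / np.outer(sd[:s], sd[s:])
print(corr_TR.shape)  # (4, 500): one correlation curve per trait
```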
We provide details of the dimension reduction used in Section 3.2, justify the use of the joint model through cross validation in Section 3.4 and Appendix B, and then present the results using the joint model in Section 4.
Dimension reduction details
Following White et al. (2022), we specify the low-rank functional terms, namely the wavelength-varying intercept $\alpha^{(R)}(w)$, the random effects $K_U(w)\, U_j^{(R)}$, and $\beta^{(R)}(w)$, through process convolutions, enabling a simple connection to Gaussian processes (GPs). That is, the kernels of the process convolution connect the low-rank process to the GP covariance (Higdon, 2002). For every basis, we include an intercept so that the wavelength-varying intercepts, regression coefficients, and variances have an overall centering.
We use a rich specification for the wavelength-varying intercept $\alpha^{(R)}(w)$, employing Gaussian kernels with wavelength knots spaced every 10 nm from 450-950 nm ($N_\alpha = 52$ in total, including the intercept) to obtain $K_\alpha(w)$. To specify the wavelength-varying random effects through $K_U(w)$, we use Gaussian kernels with wavelength knots spaced every 25 nm from 450-950 nm ($N_U = 22$ in total, including an intercept). To specify the wavelength-varying coefficient functions, we use Gaussian kernels with wavelength knots spaced every 100 nm from 450-950 nm ($N_\beta = 7$, including the intercept) for $K_\beta(w)$. We allow the scale parameters for the dimension-reduced coefficients to be unknown. However, we fix the bandwidth of the Gaussian kernels to be 1.5 times the kernel spacing to alleviate the well-known lack of identifiability between scale and range parameters of Gaussian process models (see, e.g., Zhang, 2004). Lastly, we use a linear spline with interior knots every 50 nm from 475-925 nm to specify $K_\sigma$ ($N_\sigma = 12$ in total). The knot spacing chosen here is motivated by a sensitivity analysis in White et al. (2022), where the simplest model is chosen that does not significantly decrease model performance.
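A minimal sketch of building these kernel bases follows. The helper name is hypothetical, but the knot spacings, the fixed bandwidth of 1.5 times the spacing, and the resulting column counts (52, 22, and 7) follow the description above.

```python
import numpy as np

def gaussian_kernel_basis(wavelengths, knot_spacing, lo=450, hi=950):
    """Process-convolution basis: an intercept column plus Gaussian
    kernels at evenly spaced knots, with bandwidth fixed at 1.5 times
    the knot spacing."""
    knots = np.arange(lo, hi + 1, knot_spacing)
    bw = 1.5 * knot_spacing
    K = np.exp(-0.5 * ((wavelengths[:, None] - knots[None, :]) / bw) ** 2)
    return np.column_stack([np.ones(len(wavelengths)), K])

w = np.arange(450, 950)                  # the 500 observed wavelengths (450-949 nm)
K_alpha = gaussian_kernel_basis(w, 10)   # intercept + 51 knots = 52 columns
K_U     = gaussian_kernel_basis(w, 25)   # intercept + 21 knots = 22 columns
K_beta  = gaussian_kernel_basis(w, 100)  # intercept + 6 knots  = 7 columns
print(K_alpha.shape, K_U.shape, K_beta.shape)
```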
Prior Distributions, Model Fitting, and Prediction
We adopt weakly informative prior distributions for all trait regression coefficients and the intercepts of the wavelength-varying parameters for reflectance. However, for the wavelength-varying intercept and regression coefficient functions, we have unknown scale parameters that shrink the low-dimensional functional bases toward zero. Overall, these priors assume that the wavelength-varying parameters receive an overall centering given by the associated intercepts. We use proper prior distributions with large variances for all variance and covariance parameters. Specifically, we use the following prior distributions:
$$\begin{aligned}
&\alpha^{*(R)}_1 \sim N(0, 10^3), \qquad \alpha^{*(R)}_j \overset{iid}{\sim} N(0, \sigma^2_\alpha),\ j = 2, \dots, N_\alpha, \\
&B^{(R)}_{k1} \overset{iid}{\sim} N(0, 10^3),\ k = 1, \dots, p, \qquad B^{(R)}_{kj} \overset{iid}{\sim} N(0, \sigma^2_{\beta_k}),\ k = 1, \dots, p;\ j = 2, \dots, N_\beta, \\
&B^{(T)}_{jk} \overset{iid}{\sim} N(0, 10^3),\ j = 1, \dots, s;\ k = 1, \dots, p, \qquad \gamma_{\sigma,j} \sim N(0, 9),\ j = 2, \dots, N_\sigma, \\
&\Omega^{-1} \sim \mathrm{Wishart}\bigl(s + N_U + 1,\ 10^{-3} I\bigr), \qquad \sigma^{-2}_\alpha \sim \mathrm{Gamma}(1, 1), \qquad \sigma^{-2}_{\beta_k} \overset{iid}{\sim} \mathrm{Gamma}(1, 1),\ k = 1, \dots, p. \tag{3}
\end{aligned}$$
Letting $\eta$ denote all model parameters, we estimate $\eta$ using Markov chain Monte Carlo (MCMC). Model fitting thus yields $M$ samples from the posterior distribution of all model parameters, $\eta_1, \dots, \eta_M$. For all parameters except $\gamma_\sigma$, we are able to use a Gibbs sampler because the posterior conditional distributions can be found in closed form. For $\gamma_\sigma$, we use a Metropolis-within-Gibbs update with a Gaussian random walk proposal distribution. We run this algorithm for 200,000 iterations, discard a burn-in period of 100,000 iterations, and, to limit memory requirements, thin the remaining 100,000 samples to 5,000 samples. During the burn-in period, we tune the acceptance rate to be between 0.2 and 0.6. Specific details of the posterior conditional distributions are provided in Appendix A.
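For concreteness, one Metropolis-within-Gibbs update for $\gamma_\sigma$ might look like the following sketch. Here log_post is a placeholder for the (unnormalized) log full conditional of $\gamma_\sigma$, and the tuning rule simply nudges the proposal scale toward the stated 0.2-0.6 acceptance window during burn-in; this is a sketch under those assumptions, not the authors' implementation.

```python
import numpy as np

def metropolis_step(gamma, log_post, step_sd, rng):
    """One Gaussian random-walk Metropolis update for gamma_sigma."""
    proposal = gamma + rng.normal(scale=step_sd, size=gamma.shape)
    # Accept with probability min(1, exp(log_post(proposal) - log_post(gamma))).
    if np.log(rng.uniform()) < log_post(proposal) - log_post(gamma):
        return proposal, True
    return gamma, False

def tune_step_sd(step_sd, accept_rate):
    """Crude burn-in tuning toward a 0.2-0.6 acceptance rate."""
    if accept_rate < 0.2:
        return step_sd * 0.9
    if accept_rate > 0.6:
        return step_sd * 1.1
    return step_sd
```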
Beyond the primary goal of inferring relationships between environment, traits, and reflectance, a further use for this model is prediction of traits or reflectance in the frequent scenario where, at a given site, measurements of either plant traits or reflectances were made but not both. In such settings, the residuals/random effects $U$ are attached to only partially observed samples. Therefore, predictions for reflectance or traits are made by conditionally predicting $U$, that is, $U^{(R)} | U^{(T)}$ or $U^{(T)} | U^{(R)}$, respectively. Under our model, these predictions rely on conditional normal theory. To illustrate this, we drop the subscript $j$ and consider prediction using a single posterior sample $\eta_m$ of all model parameters. The conditional predictions of traits and reflectance are, respectively,
$$\tilde T_m = \alpha^{(T)}_m + B^{(T)}_m E + \tilde U^{(T)}_m, \qquad \tilde R_m(w) = \alpha^{(R)}_m(w) + E^\top \beta^{(R)}_m(w) + K_U(w)\, \tilde U^{(R)}_m + \psi_m(w),$$
where $\tilde U^{(T)}_m \sim N(\mu_{T|R}, \Sigma_{T|R})$ with
$$\mu_{T|R} = \Omega^{(TR)} \bigl(\Omega^{(R)}\bigr)^{-1} U^{(R)}_m, \qquad \Sigma_{T|R} = \Omega^{(T)} - \Omega^{(TR)} \bigl(\Omega^{(R)}\bigr)^{-1} \Omega^{(TR)\top},$$
and $\tilde U^{(R)}_m \sim N(\mu_{R|T}, \Sigma_{R|T})$ with
$$\mu_{R|T} = \Omega^{(TR)\top} \bigl(\Omega^{(T)}\bigr)^{-1} U^{(T)}_m, \qquad \Sigma_{R|T} = \Omega^{(R)} - \Omega^{(TR)\top} \bigl(\Omega^{(T)}\bigr)^{-1} \Omega^{(TR)}.$$
This process is repeated for all posterior samples $\eta_1, \dots, \eta_M$, yielding a set of $M$ predictions.
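The conditional draws above are straightforward to implement. The sketch below shows the trait-given-reflectance direction for a single posterior sample, with the reverse direction obtained by swapping the roles of the blocks; variable names are illustrative.

```python
import numpy as np

def draw_U_T_given_U_R(U_R, Omega_T, Omega_TR, Omega_R, rng):
    """Sample U^(T) | U^(R) via the conditional-normal formulas:
    mu = Omega_TR Omega_R^{-1} U^(R),
    Sigma = Omega_T - Omega_TR Omega_R^{-1} Omega_TR^T."""
    mu = Omega_TR @ np.linalg.solve(Omega_R, U_R)
    Sigma = Omega_T - Omega_TR @ np.linalg.solve(Omega_R, Omega_TR.T)
    return rng.multivariate_normal(mu, Sigma)
```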
Model Comparison
We justify the joint model specification [T, R|E] by comparing conditional predictive performance using 10-fold cross-validation. Specifically, we compare the joint model to a model where traits and reflectance are independent, [T|E][R|E]. For each fold of the cross-validation, we hold out 10% of all trait vectors (jointly), as well as 10% of the reflectance spectra (the entire spectrum). Each trait vector and each reflectance spectrum is held out exactly once across the 10 folds. We adopt this comparison approach to mirror the scenario above where either plant traits or reflectances are measured but not both. Of course, other holdout schemes could be investigated.
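A sketch of how the hold-out indices might be constructed follows. The fold-assignment scheme (independent masking of trait vectors and spectra, each withheld exactly once) matches the description above, though the exact bookkeeping in the authors' code is not shown in the paper.

```python
import numpy as np

def cv_fold_labels(n, n_folds=10, seed=1):
    """Assign each replicate's trait vector and reflectance spectrum to a
    fold; each is held out exactly once across the n_folds folds, and the
    two assignments are independent, so that in a given fold some sites
    lose traits while others lose spectra."""
    rng = np.random.default_rng(seed)
    trait_folds = rng.permutation(np.arange(n) % n_folds)
    refl_folds = rng.permutation(np.arange(n) % n_folds)
    return trait_folds, refl_folds

trait_folds, refl_folds = cv_fold_labels(n=120)
# In fold k, predict traits where trait_folds == k (conditioning on the
# observed spectra) and spectra where refl_folds == k (conditioning on traits).
```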
We focus our comparison here on the Asteraceae family because it has the most data to help in estimating the proposed correlation structure. The model comparison results are summarized in Table 2, while the comparison results for the other families are in Appendix B. We compare models by using predicted root mean squared error (RMSE), mean absolute error (MAE), and the mean energy score (ES),
$$\mathrm{ES}(F, x) = \mathbb{E}_F \lVert X - x \rVert - \tfrac{1}{2}\, \mathbb{E}_F \lVert X - X' \rVert,$$
where $X$ and $X'$ are independent draws from the distribution $F$ and $x$ is a vector of hold-out values (see Gneiting and Raftery, 2007). To estimate this empirically from a set of posterior predictions $X_1, \dots, X_M$, forming $\hat F$, for a hold-out vector $x$, the energy score is calculated as
$$\mathrm{ES}(\hat F, x) = \frac{1}{M} \sum_{m=1}^M \lVert X_m - x \rVert - \frac{1}{2M^2} \sum_{m=1}^M \sum_{m'=1}^M \lVert X_m - X_{m'} \rVert.$$
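Computed from the posterior predictive draws, the empirical score takes only a few lines. This sketch assumes the $M$ draws are stacked as rows of a matrix and follows the estimator just displayed (lower values indicate better predictions).

```python
import numpy as np

def energy_score(samples, x):
    """Empirical energy score of posterior predictive draws (rows of
    `samples`, shape (M, d)) against a held-out vector x of length d."""
    M = samples.shape[0]
    term1 = np.mean(np.linalg.norm(samples - x, axis=1))
    pair_diffs = samples[:, None, :] - samples[None, :, :]
    term2 = np.sum(np.linalg.norm(pair_diffs, axis=-1)) / (2 * M**2)
    return term1 - term2  # averaged over all hold-out vectors in practice
```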
We average this score over all hold-out vectors. The ES compares multivariate predictions to multivariate quantities and is a proper scoring rule (Gneiting and Raftery, 2007). We use the ES as our primary model selection criterion when predicting all traits or all reflectances. We present MAE and RMSE for each trait individually but, for simplicity, choose to average these quantities for reflectances over all wavelengths. On the other hand, because either traits or reflectances are predicted jointly conditioning on the other, a single mean ES is given for all traits and for all reflectances, respectively. For the Asteraceae family, the joint model improves out-of-sample prediction performance for reflectances and all traits. Overall, the benefit of the joint model is much larger for reflectance than for traits, and this benefit is substantial. Based on these findings, we use the joint model to present interpretation of the results.

Results
Correlation Between Traits and Reflectance
We focus our discussion on the estimated correlations between the plant traits (log leaf water content, log leaf mass per area, log percent nitrogen, and log succulence) and the log reflectances. In Figure 5, we plot the estimated correlations between log traits and reflectance for the Asteraceae, Proteaceae, and Restionaceae families. Specifically, we plot the posterior mean and 90% credible intervals for the trait-reflectance correlation. Given the environmental variables, we observe relatively weak relationships (typically between -0.2 and 0.2) between traits and reflectance. This finding is not entirely unexpected (Laughlin et al., 2017; Pau et al., 2022; Wang et al., 2022), despite the amount of literature devoted to the strength of relationships and prediction potential between traits and reflectance within species and communities (see Jacquemoud and Ustin (2019b) for a review). Fewer studies such as ours examine how these relationships vary across species within broader lineages such as families. The fact that we observe differing T and R relationships between families aligns with our expectation that the species within these families comprise various suites of leaf traits that in turn represent different adaptive strategies and ancestral constraints. In other words, spectra represent underlying biology, a point underscored in recent literature (Meireles et al., 2020; Cavender-Bares et al., 2022; Kothari and Schweiger, 2022).
Further, we again highlight the fact that we are examining T and R relationships jointly given environment. This type of inquiry has often been done indirectly through controlled experimental manipulations (Thenot et al., 2002;Inoue and Peñuelas, 2006;Ripullone et al., 2011;Caturegli et al., 2020) or along environmental gradients (Coops et al., 2002;Asner et al., 2009). What our modelling shows is the explicit joint effects that environment has in terms of traits and reflectance. In cases of 0 or insignificant correlation we interpret that either the chosen environmental parameters have little effect on that lineage's traits (which in turn affect reflectance) or that the species within the lineage differ in their responses to the same environmental parameter resulting in low signal, i.e., the ecological fallacy. In the case of significant correlations, we observe shifts in T and R relationships that suggest lineage-wide signals indicative of ecological and evolutionary processes such as adaptation or ancestral constraints.
We observe relatively weak relationships between log reflectance and log leaf water content with one exception. For wavelengths greater than 700 nm (red and near-infrared [NIR]), we estimate that Asteraceae's reflectance is positively correlated with leaf water content. The near infrared portion of the spectrum is well known in several instances to be a signal for various forms of water content in leaves (Pu et al., 2003;Rodríguez-Pérez et al., 2007;Seelig et al., 2008) and canopies (Peñuelas et al., 1993;Penuelas et al., 1997). On the other hand, Aizoaceae, Proteaceae, and Restionaceae show no or few wavelengths where the 90% credible interval excludes 0. Note that this does not indicate that leaf water content and reflectance are unrelated in these lineages, but rather, after accounting for environment, these correlations are negligible. We suspect that the leaf water signal observed in the Asteraceae is attributable to the fact that the lineage has one of the broadest distributions extending from the arid Succulent Karoo to more mesic Fynbos.
For log leaf mass per area, we generally expected to observe the strongest correlation within the near-infrared region (NIR), i.e., wavelengths greater than 700 nm (Asner et al., 2011;Jacquemoud and Ustin, 2019c;Serbin et al., 2019). Within the NIR, some families exhibit positive correlations with reflectance (Restionaceae), while others have negative correlations with reflectance (Asteraceae). Interestingly, Proteaceae has negative correlations for wavelengths between 500 and 725 nm for leaf mass per area. This could be the result of other co-correlated traits such as photosynthetic pigments that more often affect the visible region. There is some limited evidence within species for the negative correlation in the visible region for leaf mass per area (Ourcival et al., 1999).
Nitrogen within leaves is typically linked to the visible region of the spectrum (450-700 nm) given its strong links to the pigment chlorophyll, though there are nitrogen signals found at longer wavelengths (Jacquemoud and Ustin, 2019d). For Aizoaceae and Asteraceae, the relationship between log pN and reflectance appears to be weak for all wavelengths, suggesting no lineage wide signals across the environmental range present in the study. Proteaceae shows negative correlations between log pN and reflectance for wavelengths between 500 and 725 nm. Although weak, the estimated relationship between pN and reflectance is negative for most wavelengths greater than 550 nm for Restionaceae.
Leaf succulence is calculated by dividing leaf water content by leaf area, such that succulence represents the amount of water distributed throughout the leaf. Thus, we expected results similar to those for leaf water content. Overall, our results matched these expectations, but with different significance compared to water content. Except for the Restionaceae, the families had significant relationships within the visible range (wavelengths below 700 nm), which was surprising given the typically stronger signal of water within the near-infrared range. However, previous studies have found relationships between water content and the visible range attributed to the link between plant water status and photosynthetic machinery (Thenot et al., 2002; Inoue and Peñuelas, 2006; Ripullone et al., 2011; Hmimina et al., 2014). The Aizoaceae, a lineage dominated by succulent plants, displayed significant, albeit weak, positive correlations for wavelengths below 700 nm. Asteraceae has weak negative correlations between succulence and log reflectance for wavelengths less than 700 nm but weak positive correlations for wavelengths greater than 700 nm. Proteaceae shows negative correlations between succulence and log reflectance for wavelengths less than 725 nm, but the correlations are essentially 0 otherwise. As with LMA, succulence and reflectance have very weak relationships for Restionaceae.
Effects of the environmental and abundance predictors
We report the estimated effect of the environmental predictors and abundance on reflectance and traits. As discussed in Section 2, we have centered and scaled the covariates to aid in interpretation of the results. In Figure 6, we plot the four estimated coefficient functions and associated 90% credible intervals for each family. In Figure 7, we plot the 90% credible intervals for the regression coefficients for each trait. Again, abundance is unlike the other environmental covariates in that it is not a direct driver of leaf traits and subsequent reflectances. It is viewed as a proxy for unmeasured local environment contributing to the "success" of species based on their biomass. In terms of interpretation, we re-emphasize the fact that these models treat traits and reflectance jointly. As shown in Figure 3, the reflectance spectra within each lineage have clear correlations with environmental parameters. Our results in Figure 6 are much weaker and less varied for all families, which was expected given the joint nature of the model, where more variation is likely to be attributable to leaf traits (Laughlin et al., 2017; Pau et al., 2022; Wang et al., 2022). We interpret Figure 6 as capturing the signal of family-wide spectral responses to environment that are being driven by traits that are not currently measured, e.g., pigments, leaf surface features, or other measures of leaf anatomy. We would expect the trait and environment relationships in Figure 7 to roughly match the initial correlations in Figure 3, given that it is traits that respond to environment and are subsequently manifested in the reflectance spectra. All covariate effects should be interpreted as the estimated effect of the particular environmental covariate holding all other environmental covariates constant.
Aizoaceae
For Aizoaceae, reflectances have a negative relationship with abundance, elevation, annual precipitation and temperature for all wavelengths. In contrast, Aizoaceae's reflectances have a very weak positive relationship with rainfall concentration. Although the relationships are generally weak, it is the only family whose entire reflectance spectrum is related to all covariates. We suspect that this could be a signal of overall leaf succulence given that precipitation and temperature have similar relationships and the expectation for leaves to be driven towards succulence as an adaptation to drier and hotter environments. Alternatively these family wide responses could be the result of other unmeasured traits such as those in leaf epidermal surfaces (Heim et al., 2015). In terms of traits, elevation has a significant negative relationship with leaf water content (LWC) and percent nitrogen (pN). Temperature has a positive relationship with leaf mass per area (LMA) but negative relationships with LWC and pN. The LMA and temperature relationship roughly matches expectations found by the Leaf Economic Spectrum (LES), a study that examined trait and environment relationships at a global scale for a large sample of species (Wright et al., 2004). The LES also finds that nitrogen and leaf mass per area tend to be negatively related, fitting in with our results as well.
Asteraceae
For Asteraceae, the effects of all environmental covariates are very weak for the entire reflectance spectrum, even though there are many significant estimated effects. Specifically, abundance and rainfall concentration are slightly negatively associated with log reflectance at wavelengths less than 700 nm, while annual precipitation has a weak positive relationship with reflectance for all wavelengths over 500 nm. Of the families chosen, the Asteraceae appear to have some of the weakest environmental signals for reflectance. This may be attributable to the fact that the Asteraceae is one of the most widely distributed groups across the Greater Cape Floristic Region, with a high diversity of growth forms, e.g., annuals, succulents, and geophytes, tolerating a wide range of environmental conditions (Manning and Goldblatt, 2012; Manning, 2013). This diversity would likely result in high variation of reflectance signals that could weaken relationships.
LWC is negatively associated with elevation and annual precipitation, holding other covariates constant, while LMA is positively associated with elevation, precipitation, temperature, and abundance. The latter results have mixed correspondence to previous global analyses, with global LMA having negative to insignificant relationships with precipitation (Wright et al., 2004). Percent nitrogen is negatively associated with elevation, annual precipitation, and January's minimum temperature, while it is positively associated with rainfall concentration. Leaf succulence has a weak positive relationship with temperature, while all other covariates have 90% credible intervals that include 0. This partially matches the expectation that leaves would become succulent in more arid areas.
Proteaceae
Of the four families, the log reflectance of Proteaceae has the strongest relationship with elevation and annual precipitation. For elevation, reflectance has a strong positive relationship for shorter wavelengths and a weak positive relationship for longer wavelengths. As iterated previously, we interpret the environment and reflectance relationships in Figure 6 as indicating trends in leaf traits that are unmeasured but varying across the environment. In the case of elevation and the visible region, this may be a trend in traits such as photosynthetic pigments which are strongly associated in the visible region of spectra (Jacquemoud and Ustin, 2019c). The estimated relationships between reflectance and annual precipitation is positive and significant for all wavelengths, likely representing changes in traits co-correlated with the amount of water in leaves. Reflectance has a weak negative relationship with temperature for wavelengths greater than 700 nm. Similarly, the relationship between abundance and reflectance is negative for wavelengths less than 750 nm.
In general, Proteaceae's traits have weak relationships with environmental covariates and abundance. However, LWC reveals a slightly positive relationship with temperature and negative relationship with abundance while LMA has a slightly positive relationship with rainfall concentration. In a study of the genus Protea, a prominent genus of the Proteaceae family within the Greater Cape Floristic Region, Mitchell et al. (2015) found similar results for the LWC and temperature relationship (though theirs were non-significant) and LMA and rainfall seasonality (a related measure to rainfall concentration).
Restionaceae
Although Restionaceae showed very weak correlations between traits and reflectance, reflectance is strongly connected with environmental covariates for shorter wavelengths, suggesting a shift in underlying traits associated with the visible region, e.g., photosynthetic pigments, along environmental gradients. Abundance and rainfall concentration are positively related to reflectance at short wavelengths, while both annual precipitation and temperature are negatively related to reflectance. We estimate that increases in temperature are related to decreases in reflectance for all wavelengths. For wavelengths above 525 nm, we find a negative relationship between elevation and reflectance.
Turning to traits, Restionaceae's LWC is slightly positively related to temperature and annual precipitation but appears to be negatively related to abundance and rainfall concentration. Leaf mass area has slight positive relationships with rainfall concentration and abundance. For log pN and LS, all 90% credible intervals include 0. We suspect that the lack of clear trait and environmental trends could be a result of other known environmental drivers, e.g., fire and soil fertility, that drive differing adaptive strategies within the Restionaceae (Wüest et al., 2016).
Summary and Future Work
For four plant families within the Greater Cape Floristic Region, we have presented modeling to enable assessment of the importance of environmental/habitat predictors in predicting traits and reflectance. This approach allows us to address the novel question of how trait and reflectance vary along environmental gradients. For remote sensing efforts aimed at regional and global extents, this question should be of immediate interest since it is the shifting nature of these relationships across different sets of plant functional types that reduces the generalizability of empirical models for trait prediction (Schimel et al., 2015;Kothari and Schweiger, 2022;Wang et al., 2022). Our current model presents an initial step in exploring an area that we feel has been underutilized in ecology given a lack of available statistical tools. Lastly, we have shown that joint modeling of traits and reflectances provides better conditional predictive performance than modeling them independently.
In future work, our approaches could be adapted to include discrete or categorical traits, as in Schliep and Hoeting (2013) or Clark et al. (2017). Extending the framework in White et al. (2022) to spatially model the dependence between traits and reflectance would also be of interest, possibly including shape constraints (White et al., 2021). In addition, with richer datasets, we could explore how reflectance/trait relationships vary along environmental gradients. Overall, the functional data approach in ecology is underutilized despite a plethora of ecological data that would be suitable for such analysis, e.g., spectral reflectance, organismal movement, and time series. We envision that models incorporating joint responses of both scalar and functional data will be of high value to ecological problems beyond those in the present study.
Appendix A Markov Chain Monte Carlo Details

In this section, we provide the full conditional distributions used for the Gibbs sampler. We use $\theta | \cdots$ to denote the full conditional distribution of the parameter $\theta$. For simplicity, we let $T$ be an $n \times s$ matrix of all observed traits and $R$ be an $n \times 500$ matrix of log reflectances. For traits and reflectances, respectively, we use $r^{(T)}_{-\theta}$ and $r^{(R)}_{-\theta}$ to denote the residuals when excluding the parameter $\theta$ ($\theta$ is used as a placeholder). For example, $r^{(R)}_{-\beta^{(R)}}$ is the residual when removing the environmental regression from the model. In the case of wavelength-varying parameters, we use a similar notation to indicate the exclusion of the first term in a vector (e.g., $\theta_{-1}$). In addition, we let $D_\sigma$ be the diagonal matrix with elements $\sigma^2(w)$ and $D_{\beta^{(R)}}$ the diagonal matrix with the prior variances given in (3). In the case of updates for $U_j$, we refer to terms defined in Section 3.3. Each full conditional is multivariate normal with covariance $\Sigma_\theta$ and mean $\Sigma_\theta \mu_\theta$, where
$$\begin{aligned}
\mu_{\beta_R} &= (K_\beta \otimes E)^\top \mathrm{vec}\bigl(r^{(R)}_{-\beta^{(R)}}\bigr) D_\sigma^{-1}, & \Sigma_{\beta_R} &= \bigl(K_\beta D_\sigma^{-1} K_\beta^\top \otimes E^\top E + D_{\beta^{(R)}}^{-1}\bigr)^{-1}, \\
\mu_{\beta_T} &= (I_{s \times s} \otimes E)^\top \mathrm{vec}\bigl(r^{(T)}_{-\beta^{(T)}}\bigr) \bigl(\Omega^{(T)}\bigr)^{-1}, & \Sigma_{\beta_T} &= \bigl(\bigl(\Omega^{(T)}\bigr)^{-1} \otimes E^\top E + D_{\beta^{(T)}}^{-1}\bigr)^{-1}, \\
\mu_{\alpha_R} &= K_\alpha \bigl(\mathbf{1}^\top r^{(R)}_{-\alpha^{(R)}}\bigr)^\top D_\sigma^{-1}, & \Sigma_{\alpha_R} &= \bigl(n K_\alpha D_\sigma^{-1} K_\alpha^\top + D_{\alpha^{(R)}}^{-1}\bigr)^{-1}, \\
\mu_{\alpha_T} &= \bigl(\mathbf{1}^\top r^{(T)}_{-\alpha^{(T)}}\bigr)^\top \bigl(\Omega^{(T)}\bigr)^{-1}, & \Sigma_{\alpha_T} &= \bigl(n \bigl(\Omega^{(T)}\bigr)^{-1} + D_{\alpha^{(T)}}^{-1}\bigr)^{-1}, \\
\mu_{U_j} &= K_U^\top D_\sigma^{-1} r^{(R)}_{j, -U_j} + \Sigma_{R_j|T_j}^{-1} \mu_{R_j|T_j}, & \Sigma_{U_j} &= \bigl(K_U^\top D_\sigma^{-1} K_U + \Sigma_{R_j|T_j}^{-1}\bigr)^{-1}.
\end{aligned}$$
Appendix B Model Comparison
In Tables B1-B3, we present the cross-validation results (in order) for Restionaceae, Proteaceae, and Aizoaceae. The joint model improves out-of-sample prediction performance for reflectances for all families, and this benefit is significant. On the other hand, the conditional out-of-sample prediction performance for traits depends on the family. For Asteraceae and Restionaceae, the families with the most data, prediction of plant traits benefits from joint modeling of traits and reflectance. For Proteaceae, plant trait predictions are slightly better under the independent model using the ES. However, we emphasize that the improved prediction is minimal for the independent model. For Aizoaceae, the family with the fewest data, prediction of plant traits suffers under the joint model. In summary, we find that reflectance predictions are uniformly and significantly better under the joint model for all plant families. Trait predictions are better under the joint model for two of the four families (in terms of ES) and only marginally worse for Proteaceae. We speculate that the benefit of the joint model appears when there is enough data to adequately estimate the relationship between traits and reflectance. Based on these findings, we use the joint model to present interpretation of the results.
Fig. 5 The estimated correlation between reflectance and all plant traits, provided through $\Omega$.

Fig. 6 Estimated regression coefficient functions $\beta^{(R)}(w)$ over the reflectance spectrum for all families and environmental predictors.

Fig. 7 Estimated regression coefficients $\beta^{(T)}$ for all traits, families, and environmental predictors.
Table 2 Model comparison between joint and independent models for Asteraceae.

Quantity          Model        MAE     RMSE    ES
log LMA           [T|E][R|E]   0.449   0.557   0.826
log FWC           [T|E][R|E]   0.646   0.814
log LS            [T|E][R|E]   0.533   0.706
log pN            [T|E][R|E]   0.395   0.495
log Reflectance   [T|E][R|E]   0.543   0.981   16.396
log LMA           [T, R|E]     0.398   0.517   0.658
log FWC           [T, R|E]     0.465   0.612
log LS            [T, R|E]     0.401   0.546
log pN            [T, R|E]     0.343   0.457
log Reflectance   [T, R|E]     0.166   0.244   3.703
Table B1 Model comparison between joint and independent models for Restionaceae.

Quantity          Model        MAE     RMSE    ES
log lma           [T|E][R|E]   0.354   0.431   0.585
log fwc           [T|E][R|E]   0.297   0.384
log succulence    [T|E][R|E]   0.443   0.580
log percent N     [T|E][R|E]   0.345   0.420
log Reflectance   [T|E][R|E]   0.587   1.079   17.925
log lma           [T, R|E]     0.294   0.424   0.488
log fwc           [T, R|E]     0.239   0.315
log succulence    [T, R|E]     0.333   0.439
log percent N     [T, R|E]     0.306   0.532
log Reflectance   [T, R|E]     0.162   0.260   3.900

Table B2 Model comparison between joint and independent models for Proteaceae.

Quantity          Model        MAE     RMSE    ES
log lma           [T|E][R|E]   0.246   0.322   0.397
log fwc           [T|E][R|E]   0.186   0.274
log succulence    [T|E][R|E]   0.283   0.367
log percent N     [T|E][R|E]   0.217   0.287
log Reflectance   [T|E][R|E]   0.449   0.773   13.107
log lma           [T, R|E]     0.250   0.325   0.405
log fwc           [T, R|E]     0.215   0.303
log succulence    [T, R|E]     0.279   0.361
log percent N     [T, R|E]     0.238   0.303
log Reflectance   [T, R|E]     0.139   0.224   3.432

Table B3 Model comparison between joint and independent models for Aizoaceae.

Quantity          Model        MAE     RMSE    ES
log lma           [T|E][R|E]   0.496   0.322   0.728
log fwc           [T|E][R|E]   0.409   0.274
log succulence    [T|E][R|E]   0.490   0.367
log percent N     [T|E][R|E]   0.394   0.287
log Reflectance   [T|E][R|E]   0.444   0.773   12.705
log lma           [T, R|E]     0.793   0.325   1.063
log fwc           [T, R|E]     0.601   0.303
log succulence    [T, R|E]     0.445   0.361
log percent N     [T, R|E]     0.621   0.303
log Reflectance   [T, R|E]     0.140   0.224   3.158
Acknowledgments. We thank Matthew Aiello-Lammens, Douglas Euston-Brown, Hayley Kilroy Mollmann, Cory Merow, Jasper Slingsby, Helga van der Merwe, and Adam Wilson for their contributions in the data collection and curation. Special thanks to Cape Nature and the Northern Cape Department of Environment and Nature Conservation for permission for the collection of leaf spectra and traits. Data collection efforts were made possible by funding from National Science Foundation grant DEB-1046328 to J.A. Silander. Additional support was provided by NASA through a Future Investigators in NASA Earth and Space Science and Technology (FINESST) grant award (80NSSC20K1659) to H.A. Frye and J.A. Silander.
Aiello-Lammens, M. E., Slingsby, J. A., Merow, C., Mollmann, H. K., Euston-Brown, D., Jones, C. S., and Silander Jr, J. A. (2017). Processes of community assembly in an environmentally heterogeneous, high biodiversity region. Ecography, 40(4):561-576.

Anderegg, L. D. L., Berner, L. T., Badgley, G., Sethi, M. L., Law, B. E., and HilleRisLambers, J. (2018). Within-species patterns challenge our understanding of the leaf economics spectrum. Ecology Letters, 21(5):734-744.

Asner, G. P., Martin, R. E., Ford, A. J., Metcalfe, D. J., and Liddell, M. J. (2009). Leaf chemical and spectral diversity in Australian tropical forests. Ecological Applications, 19(1):236-253.

Asner, G. P., Martin, R. E., Tupayachi, R., Emerson, R., Martinez, P., Sinca, F., Powell, G. V. N., Wright, S. J., and Lugo, A. E. (2011). Taxonomy and remote sensing of leaf mass per area (LMA) in humid tropical forests. Ecological Applications, 21(1):85-98.

Born, J., Linder, H. P., and Desmet, P. (2006). The Greater Cape Floristic Region. Journal of Biogeography, 34(1):147-162.

Cadotte, M. W., Carscadden, K., and Mirotchnick, N. (2011). Beyond species: functional diversity and the maintenance of ecological processes and services. Journal of Applied Ecology, 48(5):1079-1087.

Caturegli, L., Matteoli, S., Gaetani, M., Grossi, N., Magni, S., Minelli, A., Corsini, G., Remorini, D., and Volterrani, M. (2020). Effects of water stress on spectral reflectance of bermudagrass. Scientific Reports, 10(1):15055.

Cavender-Bares, J., Schneider, F. D., Santos, M. J., Armstrong, A., Carnaval, A., Dahlin, K. M., Fatoyinbo, L., Hurtt, G. C., Schimel, D., Townsend, P. A., Ustin, S. L., Wang, Z., and Wilson, A. M. (2022). Integrating remote sensing with ecology and evolution to advance biodiversity conservation. Nature Ecology & Evolution, 6(5):506-519.

Clark, J. S., Nemergut, D., Seyednasrollah, B., Turner, P. J., and Zhang, S. (2017). Generalized joint attribute modeling for biodiversity analysis: Median-zero, multivariate, multifarious data. Ecological Monographs, 87(1):34-56.

Coops, N., Dury, S., Smith, M.-L., Martin, M., and Ollinger, S. (2002). Comparison of green leaf eucalypt spectra using spectral decomposition. Australian Journal of Botany, 50(5):567.

Díaz, S. and Cabido, M. (2001). Vive la différence: plant functional diversity matters to ecosystem processes. Trends in Ecology & Evolution, 16(11):646-655.

Frye, H. A., Aiello-Lammens, M. E., Euston-Brown, D., Jones, C. S., Kilroy Mollmann, H., Merow, C., Slingsby, J. A., Merwe, H., Wilson, A. M., and Silander, J. A. (2021). Plant spectral diversity as a surrogate for species, functional and phylogenetic diversity across a hyper-diverse biogeographic region. Global Ecology and Biogeography, 30(7):1403-1417.

Gneiting, T. and Raftery, A. E. (2007). Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102(477):359-378.

Heim, R., Jürgens, N., Große-Stoltenberg, A., and Oldeland, J. (2015). The effect of epidermal structures on leaf spectral signatures of ice plants (Aizoaceae). Remote Sensing, 7(12):16901-16914.

Higdon, D. (2002). Space and space-time modeling using process convolutions. In Quantitative Methods for Current Environmental Issues, pages 37-56. Springer.

Hmimina, G., Dufrêne, E., and Soudani, K. (2014). Relationship between photochemical reflectance index and leaf ecophysiological and biochemical parameters under two different water statuses: towards a rapid and efficient correction method using real-time measurements. Plant, Cell & Environment, 37(2):473-487.

Inoue, Y. and Peñuelas, J. (2006). Relationship between light use efficiency and photochemical reflectance index in soybean leaves as affected by soil water content. International Journal of Remote Sensing, 27(22):5109-5114.

Jacquemoud, S. and Ustin, S. (2019a). Leaf Optical Properties. Cambridge University Press, 1 edition.

Jacquemoud, S. and Ustin, S. (2019b). Leaf optical properties. Cambridge University Press.

Jacquemoud, S. and Ustin, S. (2019c). Variation due to leaf structural, chemical, and physiological traits. In Leaf Optical Properties. Cambridge University Press, 1 edition.

Jacquemoud, S. and Ustin, S. (2019d). Variations due to leaf abiotic and biotic factors. In Leaf Optical Properties. Cambridge University Press, 1 edition.

Jetz, W., Cavender-Bares, J., Pavlick, R., Schimel, D., Davis, F. W., Asner, G. P., Guralnick, R., Kattge, J., Latimer, A. M., Moorcroft, P., Schaepman, M. E., Schildhauer, M. P., Schneider, F. D., Schrodt, F., Stahl, U., and Ustin, S. L. (2016). Monitoring plant functional diversity from space. Nature Plants, 2(3):16024.

JPL, N. (2020). NASADEM Merged DEM Global 1 arc second V001. Dataset.

Kothari, S. and Schweiger, A. K. (2022). Plant spectra as integrative measures of plant phenotypes. Journal of Ecology.

Latimer, A. M., Silander, J. A., and Cowling, R. M. (2005). Neutral ecological theory reveals isolation and rapid speciation in a biodiversity hot spot. Science, 309(5741):1722-1725.

Laughlin, D. C., Lusk, C. H., Bellingham, P. J., Burslem, D. F. R. P., Simpson, A. H., and Kramer-Walter, K. R. (2017). Intraspecific trait variation can weaken interspecific trait correlations when assessing the whole-plant economic spectrum. Ecology and Evolution, 7(21):8936-8949.

Manning, J. (2013). The Extra Cape Flora. Number 2 in Plants of the Greater Cape Floristic Region. SANBI, Pretoria.

Manning, J. and Goldblatt, P., editors (2012). Plants of the Greater Cape Floristic Region. Number 29 in Strelitzia. SANBI, Biodiversity for Life, Pretoria.

McGill, B., Enquist, B., Weiher, E., and Westoby, M. (2006). Rebuilding community ecology from functional traits. Trends in Ecology & Evolution, 21(4):178-185.

Meireles, J. E., Cavender-Bares, J., Townsend, P. A., Ustin, S., Gamon, J. A., Schweiger, A. K., Schaepman, M. E., Asner, G. P., Martin, R. E., Singh, A., Schrodt, F., Chlus, A., and O'Meara, B. C. (2020). Leaf reflectance spectra capture the evolutionary history of seed plants. New Phytologist, 228(2):485-493.

Mitchell, N., Lewis, P. O., Lemmon, E. M., Lemmon, A. R., and Holsinger, K. E. (2017). Anchored phylogenomics improves the resolution of evolutionary relationships in the rapid radiation of Protea L. American Journal of Botany, 104(1):102-115.

Mitchell, N., Moore, T. E., Mollmann, H. K., Carlson, J. E., Mocko, K., Martinez-Cabrera, H., Adams, C., Silander, J. A., Jones, C. S., Schlichting, C. D., and Holsinger, K. E. (2015). Functional traits in parallel evolutionary radiations and trait-environment associations in the Cape Floristic Region of South Africa. The American Naturalist, 185(4):525-537.

Myers, N., Mittermeier, R. A., Mittermeier, C. G., Da Fonseca, G. A., and Kent, J. (2000). Biodiversity hotspots for conservation priorities. Nature, 403(6772):853-858.

O'Connor, M. I., Gonzalez, A., Byrnes, J. E. K., Cardinale, B. J., Duffy, J. E., Gamfeldt, L., Griffin, J. N., Hooper, D., Hungate, B. A., Paquette, A., Thompson, P. L., Dee, L. E., and Dolan, K. L. (2017). A general biodiversity-function relationship is mediated by trophic level. Oikos, 126(1):18-31.

Ourcival, J. M., Joffre, R., and Rambal, S. (1999). Exploring the relationships between reflectance and anatomical and biochemical properties in Quercus ilex leaves. New Phytologist, 143(2):351-364.

Pau, S., Nippert, J. B., Slapikas, R., Griffith, D., Bachle, S., Helliker, B. R., O'Connor, R. C., Riley, W. J., Still, C. J., and Zaricor, M. (2022). Poor relationships between NEON Airborne Observation Platform data and field-based vegetation traits at a mesic grassland. Ecology, 103(2).

Penuelas, J., Pinol, J., Ogaya, R., and Filella, I. (1997). Estimation of plant water concentration by the reflectance Water Index WI (R900/R970). International Journal of Remote Sensing, 18(13):2869-2875.

Peñuelas, J., Filella, I., Biel, C., Serrano, L., and Savé, R. (1993). The reflectance at the 950-970 nm region as an indicator of plant water status. International Journal of Remote Sensing, 14(10):1887-1905.

Pirie, M. D., Oliver, E. G. H., Mugrabi de Kuppler, A., Gehrke, B., Le Maitre, N. C., Kandziora, M., and Bellstedt, D. U. (2016). The biodiversity hotspot as evolutionary hot-bed: spectacular radiation of Erica in the Cape Floristic Region. BMC Evolutionary Biology, 16(1):190.

Pu, R., Ge, S., Kelly, N. M., and Gong, P. (2003). Spectral absorption features as indicators of water status in coast live oak (Quercus agrifolia) leaves. International Journal of Remote Sensing, 24(9):1799-1810.

Ramsay, J. (2005). Functional data analysis. Encyclopedia of Statistics in Behavioral Science.

Ramsay, J. O. and Silverman, B. W. (2007). Applied Functional Data Analysis: Methods and Case Studies. Springer.

Reich, P. B., Ellsworth, D. S., Walters, M. B., Vose, J. M., Gresham, C., Volin, J. C., and Bowman, W. D. (1999). Generality of leaf trait relationships: A test across six biomes. Ecology, 80(6):1955-1969.

Reich, P. B., Walters, M. B., and Ellsworth, D. S. (1997). From tropics to tundra: Global convergence in plant functioning. Proceedings of the National Academy of Sciences, 94(25):13730-13734.

Ripullone, F., Rivelli, A. R., Baraldi, R., Guarini, R., Guerrieri, R., Magnani, F., Peñuelas, J., Raddi, S., and Borghetti, M. (2011). Effectiveness of the photochemical reflectance index to track photosynthetic activity over a range of forest tree species and plant water statuses. Functional Plant Biology, 38(3):177.

Rodríguez-Pérez, J. R., Riaño, D., Carlisle, E., Ustin, S., and Smart, D. R. (2007). Evaluation of hyperspectral reflectance indexes to detect grapevine water status in vineyards. American Journal of Enology and Viticulture, 58(3).

Schimel, D., Pavlick, R., Fisher, J. B., Asner, G. P., Saatchi, S., Townsend, P., Miller, C., Frankenberg, C., Hibbard, K., and Cox, P. (2015). Observing terrestrial ecosystems and the carbon cycle from space. Global Change Biology, 21(5):1762-1776.

Schleuning, M., Neuschulz, E. L., Albrecht, J., Bender, I. M., Bowler, D. E., Dehling, D. M., Fritz, S. A., Hof, C., Mueller, T., Nowak, L., Sorensen, M. C., Böhning-Gaese, K., and Kissling, W. D. (2020). Trait-based assessments of climate-change impacts on interacting species. Trends in Ecology & Evolution, 35(4):319-328.

Schliep, E. M., Gelfand, A. E., Mitchell, R. M., Aiello-Lammens, M. E., and Silander Jr, J. A. (2018). Assessing the joint behaviour of species traits as filtered by environment. Methods in Ecology and Evolution, 9(3):716-727.

Schliep, E. M. and Hoeting, J. A. (2013). Multilevel latent Gaussian process model for mixed discrete and continuous multivariate response data. Journal of Agricultural, Biological, and Environmental Statistics, 18(4):492-513.

Schulze, R. E. (1997). South African atlas of agrohydrology and climatology: Contribution towards a final report to the water research commission on project 492. Technical Report TT82-96, Water Resource Commission, Pretoria, South Africa.

Schweiger, A. K., Cavender-Bares, J., Townsend, P. A., Hobbie, S. E., Madritch, M. D., Wang, R., Tilman, D., and Gamon, J. A. (2018). Plant spectral diversity integrates functional and phylogenetic components of biodiversity and predicts ecosystem function. Nature Ecology & Evolution, 2(6):976-982.
The assessment of leaf water content using leaf reflectance ratios in the visible, near-, and short-wave-infrared. H Seelig, A Hoehn, L S Stodieck, D M Klaus, Iii Adams, W W Emery, W J , International Journal of Remote Sensing. 2913Seelig, H., Hoehn, A., Stodieck, L. S., Klaus, D. M., Adams III, W. W., and Emery, W. J. (2008). The assessment of leaf water content using leaf reflectance ratios in the visible, near-, and short-wave-infrared. International Journal of Remote Sensing, 29(13):3701-3713.
From the Arctic to the tropics: multibiome prediction of leaf mass per area using leaf reflectance. S P Serbin, J Wu, K S Ely, E L Kruger, P A Townsend, R Meng, B T Wolfe, A Chlus, Z Wang, Rogers , A , New Phytologist. 2244Serbin, S. P., Wu, J., Ely, K. S., Kruger, E. L., Townsend, P. A., Meng, R., Wolfe, B. T., Chlus, A., Wang, Z., and Rogers, A. (2019). From the Arctic to the tropics: multibiome prediction of leaf mass per area using leaf reflectance. New Phytologist, 224(4):1557-1568.
Quantifying the influences of spectral resolution on uncertainty in leaf trait estimates through a Bayesian approach to RTM inversion. A N Shiklomanov, M C Dietze, T Viskari, P A Townsend, S P Serbin, Remote Sensing of Environment. 183Shiklomanov, A. N., Dietze, M. C., Viskari, T., Townsend, P. A., and Serbin, S. P. (2016). Quantifying the influences of spectral resolution on uncer- tainty in leaf trait estimates through a Bayesian approach to RTM inversion. Remote Sensing of Environment, 183:226-238.
Imaging spectroscopy algorithms for mapping canopy foliar chemical and morphological traits and their uncertainties. A Singh, S P Serbin, B E Mcneil, C C Kingdon, P A Townsend, Ecological Applications. 258Singh, A., Serbin, S. P., McNeil, B. E., Kingdon, C. C., and Townsend, P. A. (2015). Imaging spectroscopy algorithms for mapping canopy foliar chemical and morphological traits and their uncertainties. Ecological Applications, 25(8):2180-2197.
The Photochemical Reflectance Index (PRI) as a water-stress index. F Thenot, M Méthy, T Winkel, International Journal of Remote Sensing. 2323Thenot, F., Méthy, M., and Winkel, T. (2002). The Photochemical Reflectance Index (PRI) as a water-stress index. International Journal of Remote Sensing, 23(23):5135-5139.
Sensing biodiversity. W Turner, Science. 3466207Turner, W. (2014). Sensing biodiversity. Science, 346(6207):301-302.
Origin and diversification of the Greater Cape flora: Ancient species repository, hotbed of recent radiation, or both?. G A Verboom, J K Archibald, F T Bakker, D U Bellstedt, F Conrad, L L Dreyer, F Forest, C Galley, P Goldblatt, J F Henning, K Mummenhoff, H P Linder, A M Muasya, K C Oberlander, V Savolainen, D A Snijman, T V Niet, T L Nowell, Molecular Phylogenetics and Evolution. 511Verboom, G. A., Archibald, J. K., Bakker, F. T., Bellstedt, D. U., Conrad, F., Dreyer, L. L., Forest, F., Galley, C., Goldblatt, P., Henning, J. F., Mum- menhoff, K., Linder, H. P., Muasya, A. M., Oberlander, K. C., Savolainen, V., Snijman, D. A., Niet, T. v. d., and Nowell, T. L. (2009). Origin and diversification of the Greater Cape flora: Ancient species repository, hot- bed of recent radiation, or both? Molecular Phylogenetics and Evolution, 51(1):44-53.
Leaf spectroscopy reveals divergent inter-and intra-species foliar trait covariation and trait-environment relationships across NEON domains. Z Wang, P A Townsend, E L Kruger, New Phytologist. 2353Wang, Z., Townsend, P. A., and Kruger, E. L. (2022). Leaf spectroscopy reveals divergent inter-and intra-species foliar trait covariation and trait-environment relationships across NEON domains. New Phytologist, 235(3):923-938.
Resprouter fraction in Cape Restionaceae assemblages varies with climate and soil type. R O Wüest, G Litsios, F Forest, C Lexer, H P Linder, N Salamin, N E Zimmermann, P B Pearman, Functional Ecology. 309Wüest, R. O., Litsios, G., Forest, F., Lexer, C., Linder, H. P., Salamin, N., Zimmermann, N. E., and Pearman, P. B. (2016). Resprouter fraction in Cape Restionaceae assemblages varies with climate and soil type. Functional Ecology, 30(9):1583-1592.
Spatial functional data modeling of plant reflectances. P A White, H Frye, M F Christensen, A E Gelfand, J A SilanderJr, Annals of Applied Statistics. White, P. A., Frye, H., Christensen, M. F., Gelfand, A. E., and Silander Jr, J. A. (2022). Spatial functional data modeling of plant reflectances. Annals of Applied Statistics.
Hierarchical integrated spatial process modeling of monotone west antarctic snow density curves. P A White, D G Keeler, S Rupper, The Annals of Applied Statistics. 152White, P. A., Keeler, D. G., and Rupper, S. (2021). Hierarchical integrated spatial process modeling of monotone west antarctic snow density curves. The Annals of Applied Statistics, 15(2):556-571.
The worldwide leaf economics spectrum. I J Wright, P B Reich, M Westoby, D D Ackerly, Z Baruch, F Bongers, J Cavender-Bares, T Chapin, J H Cornelissen, M Diemer, Nature. 4286985Wright, I. J., Reich, P. B., Westoby, M., Ackerly, D. D., Baruch, Z., Bongers, F., Cavender-Bares, J., Chapin, T., Cornelissen, J. H., Diemer, M., and others (2004). The worldwide leaf economics spectrum. Nature, 428(6985):821-827.
Does the leaf economic spectrum hold within local species pools across varying environmental conditions?. J P Wright, A Sutton-Grier, Functional Ecology. 266Wright, J. P. and Sutton-Grier, A. (2012). Does the leaf economic spectrum hold within local species pools across varying environmental conditions? Functional Ecology, 26(6):1390-1398.
Seasonal variability of multiple leaf traits captured by leaf spectroscopy at two temperate deciduous forests. X Yang, J Tang, J F Mustard, J Wu, K Zhao, S Serbin, J.-E Lee, Remote Sensing of Environment. 179Yang, X., Tang, J., Mustard, J. F., Wu, J., Zhao, K., Serbin, S., and Lee, J.-E. (2016). Seasonal variability of multiple leaf traits captured by leaf spectroscopy at two temperate deciduous forests. Remote Sensing of Environment, 179:1-12.
Inconsistent estimation and asymptotically equal interpolations in model-based geostatistics. H Zhang, Journal of the American Statistical Association. 99465Zhang, H. (2004). Inconsistent estimation and asymptotically equal interpo- lations in model-based geostatistics. Journal of the American Statistical Association, 99(465):250-261.
| []
|
[
"Conceptual design of 20 T hybrid accelerator dipole magnets",
"Conceptual design of 20 T hybrid accelerator dipole magnets"
]
| [
"P Ferracin ",
"G Ambrosio ",
"M Anerella ",
"D Arbelaez ",
"L Brouwer ",
"E Barzi ",
"L Cooley ",
"J Cozzolino ",
"L Garcia ",
"Fa-Jardo ",
"R Gupta ",
"M Juchno ",
"V V Kashikhin ",
"F Kurian ",
"V Marinozzi ",
"I Novitski ",
"E Rochepault ",
"J Stern ",
"G Val-Lone ",
"B Yahia ",
"A V Zlobin "
]
| []
| []
| Hybrid magnets are currently under consideration as an economically viable option towards 20 T dipole magnets for the next generation of particle accelerators. In these magnets, High Temperature Superconducting (HTS) materials are used in the high field part of the coil with so-called "insert coils", and Low Temperature Superconductors (LTS) like Nb3Sn and Nb-Ti superconductors are used in the lower field region with so-called "outsert coils". The attractiveness of the hybrid option lies in the fact that, on the one hand, the 20 T field level is beyond the Nb3Sn practical limit of 15 T for accelerator magnets and can be achieved only via HTS materials; on the other hand, the high cost of HTS superconductors compared to LTS superconductors makes it advantageous to explore a hybrid approach, where the HTS portion of the coil is minimized. We present in this paper an overview of different design options aimed at generating a 20 T field in a 50 mm clear aperture. The coil layouts investigated include the Cos-theta design (CT), with its variations to reduce the conductor peak stress, namely the Canted Cos-theta design (CCT) and the Stress Management Cos-theta design (SMCT), and, in addition, the Block-type design (BL) including a form of stress management and the Common-Coil design (CC). Results from a magnetic and mechanical analysis are discussed, with particular focus on the comparison between the different options regarding quantity of superconducting material, field quality, conductor peak stress, and quench protection.
"https://export.arxiv.org/pdf/2302.04940v1.pdf"
]
| 256,808,315 | 2302.04940 | d79a2f6c665d76db793dc18ab8630883a09adff4 |
Conceptual design of 20 T hybrid accelerator dipole magnets
P Ferracin
G Ambrosio
M Anerella
D Arbelaez
L Brouwer
E Barzi
L Cooley
J Cozzolino
L Garcia Fajardo
R Gupta
M Juchno
V V Kashikhin
F Kurian
V Marinozzi
I Novitski
E Rochepault
J Stern
G Vallone
B Yahia
A V Zlobin
Conceptual design of 20 T hybrid accelerator dipole magnets
Template version 8.0d, 22 August 20171Index Terms-Superconducting magnetsdipole magnetsNb3Sn magnetsHTShybrid magnets
Hybrid magnets are currently under consideration as an economically viable option towards 20 T dipole magnets for the next generation of particle accelerators. In these magnets, High Temperature Superconducting (HTS) materials are used in the high field part of the coil with so-called "insert coils", and Low Temperature Superconductors (LTS) like Nb3Sn and Nb-Ti superconductors are used in the lower field region with so-called "outsert coils". The attractiveness of the hybrid option lies in the fact that, on the one hand, the 20 T field level is beyond the Nb3Sn practical limit of 15 T for accelerator magnets and can be achieved only via HTS materials; on the other hand, the high cost of HTS superconductors compared to LTS superconductors makes it advantageous to explore a hybrid approach, where the HTS portion of the coil is minimized. We present in this paper an overview of different design options aimed at generating a 20 T field in a 50 mm clear aperture. The coil layouts investigated include the Cos-theta design (CT), with its variations to reduce the conductor peak stress, namely the Canted Cos-theta design (CCT) and the Stress Management Cos-theta design (SMCT), and, in addition, the Block-type design (BL) including a form of stress management and the Common-Coil design (CC). Results from a magnetic and mechanical analysis are discussed, with particular focus on the comparison between the different options regarding quantity of superconducting material, field quality, conductor peak stress, and quench protection. Index Terms - Superconducting magnets, dipole magnets, Nb3Sn magnets, HTS, hybrid magnets.
I. INTRODUCTION
The superconducting magnet community, which is working on the next generation of magnets for future particle colliders, has been considering the option of a "20 T" dipole magnet for approximately 20 years. The first proposal was formulated by P. McIntyre et al. [1], who, considering the nominal field of 8.3 T of the LHC dipoles, explored in 2005 the possibility of a 24 T dipole magnet for an "LHC tripler". In 2011, the design studies carried out by E. Todesco, et al. [2]-[3] and by R. Gupta, et al. [4] focused on dipole magnets generating an operational field of 20 T, with the goal of "opening the way for a 16.5 TeV beam energy accelerator in the LHC tunnel", 7 TeV being the nominal beam energy of the LHC. A similar field level was then considered for the future Super proton-proton Collider (SppC) in China by G. Sabbi, et al. [5] and by Q. Xu et al. [6], and for the European Future Circular Collider (FCC) by J. van Nugteren, et al. [7].
A different viewpoint on the rationale behind the idea of a 20 T accelerator magnet lies in the continuous push towards high field magnets to achieve higher collision energy [8], and in particular in a sort of "4 T step" that has characterized the R&D on superconducting accelerator magnets in the last two decades. In fact, a 4 T jump marks the increase in field from the Nb-Ti dipole magnets installed in the LHC [9] to the Nb3Sn magnets (in this case quadrupoles) planned for the HL-LHC project and expected to operate with a conductor peak field approaching 12 T [10]. The FCC design study has then worked on arc dipoles with a bore field of 16 T, a level considered as the practical limit for the Nb3Sn technology [11]-[12]. In this landscape, the next natural milestone is represented by a 20 T magnet, where so-called High Temperature Superconductors (HTS), in particular Bi2212 [13] and REBCO [14], need to be adopted to push the field beyond the Low Temperature Superconductor (namely Nb3Sn) limits.
As a last consideration, one has to take into account the still significantly higher cost of HTS conductor compared to Nb3Sn. This difference in superconductor price justifies investigating the hybrid option, where Nb3Sn is included in the coil design to minimize the quantity of HTS material. This option was recently tested with the FRESCA2 large aperture dipole magnet as outsert and the HTS EuCARD2 coil as insert [15]-[16], and explored in a recent conceptual design study [17].
We describe in this paper three conceptual designs of a 20 T hybrid magnet. The work is a continuation of a preliminary and broader investigation carried out in [18] as part of the US Magnet Development Program (MDP) [19]. After summarizing the design criteria in Section II, in Section III we perform a parametric analysis using sector coils. In Section IV we then describe cos-theta, block and common-coil designs, focusing on magnetic parameters and coil stresses. Some considerations regarding fabrication options and challenges are also provided.
II. DESIGN CRITERIA AND CONDUCTOR PARAMETERS
The design criteria set as a goal of the conceptual design are given in Table I. The dipole has to generate a 20 T field of accelerator field quality with appropriate margin in a 50 mm clear bore. With respect to the criteria considered in [18], the target geometrical harmonics is reduced to <3 units. In addition, the maximum load-line fraction Iop/Iss, i.e. the ratio between the operational current and the magnet current limit based on conductor properties (short sample current) is set to 87%, the same value adopted for the LHC dipoles [9] and similar to the 86% considered in the FCC design study [11]. Again, similarly to the FCC criteria, the maximum Von Mises stress allowed in the Nb3Sn coils is 180 MPa at 1.9 K; for the HTS conductor, a more conservative limit of 120 MPa has been assumed. The two dashed lines in Fig. 1 depict the engineering current densities (je = Istrand/Astrand) used in the magnetic computations. For the Nb3Sn conductor, the curves correspond to a superconductor current density (virgin strand) of 3000 A/mm 2 at 12 T and 4.2 K (a level achieved within the US Conductor Development Program [20]), which, assuming a 1.1 Cu/Non-Cu ratio, results in a je of 870 A/mm 2 at 16 T, 1.9 K, including 5% of cabling degradation. For the HTS conductor, we assumed a je of 740 A/mm 2 at 1.9 K and 20 T. This current level was achieved in short samples of Bi2212 strands used in racetrack sub-scale coils [21].
III. SENSITIVITY ANALYSIS WITH SECTOR COILS
By simulating the superconducting coil as a 60° sector with a uniform overall current density (jo = Icable/Ains_cable), it is possible to carry out a sensitivity analysis in which the key magnet parameters are investigated, as shown in [22]-[23]. The magnetic numerical model (implemented in ANSYS 2D) assumes a 0.67 ratio between jo and je (obtained by considering the Nb3Sn insulated cable for the MQXF project [24]) and a 250 mm thick iron yoke placed at 25 mm from the outer radius of the coil. In order to investigate the stress induced on the coil mid-plane by the azimuthal and radial electro-magnetic (e.m.) forces, the numerical mechanical model (implemented in ANSYS 2D) imposes an infinitely rigid structure all around the coil. The coil itself is also simulated with an infinite rigidity (to avoid bending effects) and with minimum shear modulus, in such a way that only the accumulation of e.m. forces on the mid-plane and on the outer radius is estimated. As output of the computations we focus on coil size, stresses and stored energies.
As a result of the slow and almost linear decrease in critical current as a function of the applied field observed in the HTS (see Fig. 1), the bore field increases almost linearly with the coil width, without exhibiting the "saturation" towards 10 T and 16 T observed in the Nb-Ti and Nb3Sn dipole magnets [23]. At a load-line fraction of 87%, a 20 T sector coil has a width of about 70 mm, compared to about 45 mm at 16 T (see Fig. 2).
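This near-linear scaling can be cross-checked with the classic first-order estimate for a 60° sector coil, B1 = (2/π) μ0 j0 w sin(60°) (see, e.g., the sector-coil analysis of [23]). A minimal numerical check follows; the overall current density is an assumed value chosen to reproduce roughly 20 T at 70 mm, and the iron contribution is neglected.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def sector_dipole_field(j0, width_mm, alpha_deg=60.0):
    """First-order dipole field of a 60-degree sector coil, iron neglected:
    B1 = (2 / pi) * mu0 * j0 * w * sin(alpha), with j0 in A/mm^2 and w in mm."""
    j0_SI = j0 * 1e6          # A/mm^2 -> A/m^2
    w = width_mm * 1e-3       # mm -> m
    return (2.0 / math.pi) * MU0 * j0_SI * w * math.sin(math.radians(alpha_deg))

# Assumed overall current density of 410 A/mm^2 (not quoted in the text):
print(sector_dipole_field(410.0, 70.0))   # ~19.9 T, consistent with Fig. 2
print(sector_dipole_field(410.0, 45.0))   # ~12.8 T, before the iron contribution
```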
The peak azimuthal and radial compressive stresses on the mid-plane due to the accumulation of the azimuthal and radial e.m. forces (see Fig. 3) reach -150 MPa at a bore field of 16 T and exceed -200 MPa at 20 T. This level of stress implies that stress management components have to be inserted in the coil design to reduce not only the azimuthal stress, as traditionally assumed, but also the radial stress, which appears to be the largest at 20 T and the more strongly dependent on the bore field.
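As an order-of-magnitude cross-check of these levels, the equivalent magnetic pressure B²/(2μ0) can be computed; this is a textbook bound on the e.m. force accumulation, not a substitute for the ANSYS model above.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def magnetic_pressure_MPa(B):
    """Equivalent magnetic pressure B^2 / (2 * mu0), a standard
    order-of-magnitude proxy for the e.m. stress in a dipole coil."""
    return B ** 2 / (2.0 * MU0) / 1e6

print(magnetic_pressure_MPa(16.0))  # ~102 MPa
print(magnetic_pressure_MPa(20.0))  # ~159 MPa, same order as the stresses above
```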
With a value of 2.2 MJ/m, the 20 T sector coil more than doubles the stored energy estimated for the 16 T one (see Fig. 4). However, if the stored energy density over the total insulated cable area is considered, a value of 0.13 J/mm³ is obtained, still higher but more similar to the values computed for the FCC dipole magnets [25].
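The two quoted figures are mutually consistent with the sector-coil geometry used in this section; a quick arithmetic check follows, with the geometry assumptions stated in the comments.

```python
import math

# Sector-coil geometry as assumed in this section: 25 mm inner radius
# (half of the 50 mm clear aperture), 70 mm width, four 60-degree sectors.
r_in, w = 25.0, 70.0                                          # mm
area = 4 * (math.pi / 6.0) * ((r_in + w) ** 2 - r_in ** 2)    # insulated cable area, mm^2

E_per_m = 2.2e6                      # stored energy per aperture, J/m
volume_per_m = area * 1000.0         # cable volume per metre of magnet, mm^3
print(E_per_m / volume_per_m)        # ~0.125 J/mm^3, matching the quoted 0.13
```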
IV. CONCEPTUAL DESIGNS
In [18], 10 different designs were preliminarily investigated to provide a first feedback on the general coil size, load-line margin, and field quality. Starting from that analysis, we introduce in this paper the stress criteria provided in Table I. The results are described in the next sub-sections, where three designs are considered: a cos-theta (CT), a block (BL) and a common-coil (CC) design. The cable and magnet parameters of the three designs are summarized in Table II.
In terms of magnetic analysis, the strand diameters for both the Nb3Sn and HTS conductors range from 0.85 to 1.15 mm, and the cable width from 13.3 mm to 24.4 mm. A cable compaction similar to that of the MQXF cable [24] is assumed, again for both Nb3Sn and HTS cables. As for the sector coil analysis, a 250 mm thick iron yoke is considered in the computations. The load-lines are shown in Fig. 1, where the markers indicate the operational and short sample conditions. As expected, meeting the coil stress criteria turned out to be the biggest challenge during the optimization of the coil design, since the high e.m. forces impose the use of stress management elements within the coil turns. The optimization was carried out to maintain the Von Mises stress below 120 MPa in the HTS [26], [27], and below 180 MPa in the Nb3Sn, consistently with previous design studies [2], [11] and experimental studies [28], [29]. In addition, the following assumptions were set: 1) an elastic modulus of 25 GPa is associated to the coil turns and blocks; 2) the coil turns and blocks are surrounded by solid (i.e. "deformable") components made of stainless steel, bronze or Ti alloy (indicated in the following figure captions); 3) the coil turns and blocks are allowed to separate and slide with a 0.2 friction factor with respect to the stress management elements; 4) the surrounding iron yoke, not shown in the following cross-section figures, is assumed to be infinitely rigid; 5) no pre-stress nor cool-down is applied. The mechanical analysis, whose results are described in the following sub-sections, is aimed exclusively at providing a first investigation of the level of stress interception and of the type of intercepting elements required to reduce the coil stresses produced only by the accumulation of the e.m. forces. It does not address the design of the support structure, the pre-stress process, and the cool-down conditions, which will be covered in the next phase of the conceptual design.
A. Cos-theta (CT) Design
The cross-section of the cos-theta design, analyzed in detail in [30], is shown in Fig. 5, where the central red circle represents the 50 mm clear aperture and the dashed lines indicate the separation between the HTS insert and the LTS outsert. The layout is characterized by three double-layer coils wound with a continuous cable unit length. This option avoids the use of internal splices, as in most of the cos-theta Nb3Sn coils fabricated so far, with the exception of the CERN-ELIN and UT-CERN dipole magnets [12]. In the innermost two layers, HTS cable turns are wound into individual slots in the coil support structure, as in a canted cos-theta (CCT) design [31]-[33]. In the two central layers, groups of turns (turn blocks) are wound in the coil structure grooves, as is done in the Stress Management cos-theta (SMCT) design [34]-[36]. Finally, the two outermost layers can be defined as a traditional cos-theta coil with turn blocks separated by spacers [37], [38].
The cable width ranges from 17.7 mm in layers 5-6 to 24.4 mm in layers 3-4. The use of a wider cable in layers 3-4 compared to layers 1-2 is aimed at minimizing the size of the HTS coils by increasing the size of the LTS ones, a design choice inspired by the "anti-grading" sector coils shown in [18].
In operational conditions with a bore field of 20 T, the calculated geometrical harmonics are within 3 units, the conductor peak field is 20.5 T in the HTS and 16.0 T in the LTS, and the corresponding load-line ratio is 80% in all coils.
The use of three different cos-theta coil designs is exclusively related to the outputs of the mechanical analysis. In fact, the combined effect of deformation induced by the large e.m. forces and of the low stress limit of 120 MPa assumed for the HTS coils could be overcome only by implementing a high level of stress interception (see Fig. 6). This is the case in the CCT-like layer 1-2, where each turn is separated by ribs. The ribs have a minimum thickness of 0.4 mm and are connected to a 5 mm spar (or mandrel). In the layer 3-4, a lower level of stress interception, magnetically more efficient, was adopted to maintain the Nb3Sn coil stress level below 180 MPa, where coil blocks (not the individual turns) are separated by ribs, following the SMCT design. Finally, no stress management elements were used in layer 5-6. As can be seen in Fig. 7, both HTS and LTS coils have Von Mises stress under the limits established by the design criteria, except for small corner effects (gray areas in Fig. 7, left) in the HTS turns of layer 2.
B. Block (BL) Design
The block design, shown in Fig. 8 and analyzed in [39], also features three double-layer coils, all composed of narrow HTS inner blocks and wide LTS outer blocks. As for the CT option, no internal splices are assumed. The overall design follows the main characteristics of the HD2 [40] and FRESCA2 [41] designs and of other conceptual designs [42], [43], with blocks aligned on the outer edge. The cable width is 14.7 mm for both HTS and LTS coils, but, similarly to the CT design, with a higher thickness in the LTS. The design meets the field quality requirements, and with a bore field of 20 T it operates at a load-line ratio of 75% in the HTS and 84% in the LTS. Also, both the HTS and the LTS coil areas are similar to those of the CT design.
The mechanical design (see Fig. 9) is characterized by a 10 mm thick internal support (winding pole), which brings the coil aperture to 70 mm. A similar support was implemented in both HD2 and FRESCA2. In addition, the coils are vertically separated by horizontal plates, which provide vertical stress management, and by vertical ribs, which separate the HTS and LTS blocks and provide horizontal stress management. In particular, the ribs transfer the horizontal e.m. force to the horizontal plates, in a way that maintains the coil stress within the limits in both the LTS and HTS. Horizontal plates aimed at intercepting the vertical forces were included in the design of the Test Facility Dipole [44]. The most challenging aspect of the optimization consisted in minimizing the bending of the ribs, which could generate extremely high stress in the corners of the coil blocks. A solution was found by including gaps (or clearances) of 0.200 to 0.300 mm between the ribs and the plates. Under these conditions, only an initial small fraction of the e.m. force is transferred from the HTS blocks to the LTS blocks; once the ribs come in contact with the plates, the force is transmitted to the latter, and the rib bending is minimized. The results of the mechanical analysis are shown in Fig. 10, with all the stresses within the design criteria.
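The role of these clearances can be illustrated with a one-dimensional piecewise-linear contact model: the rib alone deflects until the gap closes, after which the stiffer plate path carries the additional force. The toy sketch below is only a conceptual illustration (any stiffness values used with it would be hypothetical), not the ANSYS contact model used for the analysis.

```python
def rib_force_transfer(F_em, k_rib, k_plate, gap):
    """Toy 1-D model of the rib-plate clearance: the rib alone deflects
    under the e.m. force until its displacement reaches the gap; any
    additional force is then carried by the (stiffer) plate path.
    Returns the displacement and the force taken by the plate."""
    u_free = F_em / k_rib
    if u_free <= gap:                      # gap still open: ribs carry everything
        return u_free, 0.0
    # After contact: F_em = k_rib * u + k_plate * (u - gap)
    u = (F_em + k_plate * gap) / (k_rib + k_plate)
    return u, k_plate * (u - gap)
```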
As a last general consideration regarding the block design, it is important to point out that, at the moment, no block coil has been fabricated with different cable sizes (grading) or different superconducting materials (hybrid). Therefore, inserting an HTS block coil inside an LTS block coil appears to be the biggest design and fabrication challenge for this option. Possible fabrication and assembly solutions for this issue are provided in [39].
C. Common-Coil (CC) Design
The common-coil design (CC) is characterized by large racetrack coils that cover both apertures [45]-[47]. In Fig. 11, the coil cross-section of one aperture is shown. Unlike the CT and BL designs, the coil aperture and the clear aperture are identical, so no internal support is considered, similarly to [46]. The HTS part is composed of two blocks (per quadrant) close to the aperture, each with two turns, and of a single-layer large racetrack coil. All HTS blocks are wound with an 18.4 mm wide cable. The two-turn blocks close to the aperture, often referred to as "pole coils", have the main function of correcting the field quality, and they were included also in previous design studies [6], [45]-[47]. Since pole coils require some sort of hard-way bend of the cable to clear the path of the bore tube, they represent a departure from the typical common-coil advantage of using simple racetrack coils; however, the bending radius remains significantly larger compared to the CT design. Moreover, never having been implemented in previous CC magnets, they constitute a design and fabrication challenge.
The Nb3Sn part of the coil is composed of three layers, all using the same 13.3 mm wide cable. Unlike in the CT and BL designs, a single-layer coil can be easily connected to another single-layer coil, thanks to the wide central winding pole, which provides enough real estate for the splicing operation. Therefore, double-layer coils were not imposed on the CC design, as was done for the previous two designs. Another important characteristic of the CC layout is that the vertical dimensions of the layers can be easily fine-tuned by simply stacking or removing turns. This possibility is not available, for example, in the BL design, where the vertical dimensions are defined by layers with a given cable width. These two advantages of the CC design (single-layer coils and vertical tunability of the blocks' size) provide additional flexibility in the optimization of the coil shape compared to the CT and BL designs.
The CC design has all geometric harmonics below 3 units, and the load-line ratio is within 1% of the limits set in the criteria, i.e. 88% in the HTS and 86% in the LTS.
Stress management in the CC design is obtained again by vertical plates and horizontal ribs (see Fig. 12). The vertical plates are allowed to slide with respect to the external collars. Similarly, the ribs are allowed to slide with respect to the plates. As a result, no vertical stress management is provided, and only the horizontal forces are intercepted, in this case by the vertical plates supported by the horizontal ribs. With this mechanical design, the stress in the HTS blocks is maintained within 120 MPa. However, stresses higher than 180 MPa can be seen in the top part of the LTS coils (see Fig. 13).
The total area of the HTS blocks is similar to that of the CT and BL designs, but a significantly lower area for the LTS is observed in the CC. However, it is important to point out that the CC has a smaller coil aperture, a lower load-line margin, and still a higher conductor peak stress in the LTS compared to the CT and BL designs.
V. CONCLUSIONS
We presented in this paper the conceptual design of a dipole magnet with an operational field of 20 T, generated by a hybrid coil made with both HTS and LTS (Nb3Sn) superconducting materials. The analysis included both a magnetic study, focused on bore field, load-line ratio and field quality, and a mechanical study, aimed at keeping the Von Mises stress below 180 (120) MPa in the LTS (HTS) conductor. An initial analytical/numerical study using sector coils indicated that in a 20 T dipole magnet, 1) the coil has to be about 70 mm wide, 2) both the radial and azimuthal stresses in the coil induced by the accumulation of the e.m. forces are above 200 MPa, and 3) the stored energy density in the insulated cables is of about 0.13 J/mm³. Three design options were analyzed, all with stress management elements: 1) a cos-theta design, including CCT-like, SMCT, and traditional cos-theta two-layer coils, 2) a block-type design, and 3) a common-coil design. All layouts meet the bore field, margin, and field quality requirements. In terms of conductor quantity, the designs have similar HTS conductor areas, while a lower LTS area is obtained in the common-coil. The mechanical analysis showed that the cos-theta option requires individual turn support in the HTS layers and coil block support in the inner LTS layers to reduce the coil peak stress. Also, in both the block and common-coil designs, a series of plates and ribs is necessary to intercept the e.m. forces and to keep the accumulated stress within the limits.
This work was supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics, through the US Magnet Development Program (Corresponding author: Paolo Ferracin). P. Ferracin, D. Arbelaez, L. Brouwer, L. Garcia Fajardo, M. Juchno and G. Vallone are with Lawrence Berkeley National Lab, Berkeley, CA 94720, USA (email: [email protected]). G. Ambrosio, E. Barzi, V.V. Kashikhin, V. Marinozzi, I. Novitski, A.V. Zlobin are with Fermi National Accelerator Laboratory, Batavia, IL 80510 USA.
Fig. 1. Engineering current density (je = Istrand/Astrand) assumed in the computations for Nb3Sn and Bi2212 strands (dashed lines). Solid lines represent the load-lines defined by the operational and short sample currents (markers) for the cos-theta (CT), block (BL) and common-coil (CC) designs in the HTS and LTS coils.
Fig. 2. Bore field vs coil width computed with a sector coil numerical model for an 87% and 100% load-line fraction Iop/Iss.
Fig. 3. Maximum azimuthal and radial stress on the mid-plane vs bore field computed with a sector coil numerical model for an 87% load-line fraction Iop/Iss.
Fig. 4. Stored energy per aperture and stored energy density (considering the total insulated cable area) vs. bore field computed with a sector coil numerical model for an 87% load-line fraction Iop/Iss.
Fig. 5. Cross-section of the cos-theta (CT) design. The circle at the center of the coil aperture indicates the 50 mm clear aperture. The dashed line separates the HTS insert from the LTS outsert.
Fig. 6. Mechanical design of the cos-theta (CT) design. The structural elements are assumed to be in stainless steel (purple), Ti alloy (orange) and Al-Br (red).
Fig. 7. Von Mises stress (Pa) in the conductor under the action of e.m. forces: HTS inserts (left) and LTS outsert (right).
Fig. 8. Cross-section of the block (BL) design. The circle at the center of the coil aperture indicates the 50 mm clear aperture. The dashed line separates the HTS insert from the LTS outsert.
Fig. 9. Mechanical design of the block (BL) design. All the structural elements are assumed to be in stainless steel (purple) and Ti alloy (orange).
Fig. 10. Von Mises stress (Pa) in the conductor under the action of e.m. forces: HTS inserts (left) and LTS outsert (right).
Fig. 11. Cross-section (one aperture) of the common-coil (CC) design. The circle at the center of the coil aperture indicates the 50 mm clear aperture. The dashed line separates the HTS insert from the LTS outsert.
Fig. 12. Mechanical design of the common-coil (CC) design. All the structural elements are assumed to be in stainless steel.
Fig. 13. Von Mises stress (Pa) in the conductor under the action of e.m. forces: HTS inserts (left) and LTS outsert (right).
M. Anerella, J. Cozzolino, R. Gupta, F. Kurian, and B. Yahia are with BNL, Upton, NY 11973-5000, USA. L. D. Cooley is with the Applied Superconductivity Center, National High Magnetic Field Laboratory, Tallahassee, FL 32310, USA.
E. Rochepault is with IRFU, CEA, Université Paris-Saclay, Paris F-91191, France.
J. Stern is with TUFTS University, 419 Boston Ave, Medford, MA 02155, USA.
TABLE I
DESIGN CRITERIA ON MAGNET PARAMETERS

Parameter | Unit | Value
Clear aperture | mm | 50
Operational temperature | K | 1.9
Operational bore field Bbore_op | T | 20
Load-line fraction (Iop/Iss) | % | 87
Geometrical harmonics (20 T, Rref = 17 mm) | unit | <3
Maximum Nb3Sn coil eq. stress (293 K) | MPa | 150
Maximum Nb3Sn coil eq. stress (1.9 K) | MPa | 180
Maximum HTS coil eq. stress (293 K, 1.9 K) | MPa | 120
Maximum hot spot temperature | K | 350
TABLE II
20 T HYBRID MAGNET PARAMETERS

Parameter | Unit | CT HTS | CT LTS I | CT LTS II | BL HTS | BL LTS | CC HTS | CC LTS
Strand diameter | mm | 0.95 | 1.15 | 0.85 | 1.00 | 1.13 | 0.85 | 0.90
N strands | - | 36 | 40 | 40 | 28 | 24 | 40 | 28
Cable width | mm | 18.590 | 24.380 | 17.730 | 14.700 | 14.700 | 18.350 | 13.300
Cable mid-thickness | mm | 1.705 | 2.085 | 1.515 | 1.800 | 2.030 | 1.520 | 1.600
Insulation thickness | mm | 0.150 | 0.150 | 0.150 | 0.150 | 0.150 | 0.150 | 0.150
Clear aperture (per design) | mm | 50 (CT) | | | 50 (BL) | | 50 (CC) |
Coil aperture* (per design) | mm | 60 (CT) | | | 70 (BL) | | 50 (CC) |
N turns per quadrant | - | 37 | 64 | 95 | 56 | 210 | 42 | 105
Area ins. cable per quadrant | mm² | 1401 | 3767 | 3109 | 1764 | 7340 | 1426 | 2713
Current_op (per design) | kA | 13.5 (CT) | | | 10.3 (BL) | | 13.6 (CC) |
B_bore_op (per design) | T | 20.0 (CT) | | | 20.0 (BL) | | 19.9 (CC) |
B_peak_op | T | 20.5 | 16.0 | 13.6 | 20.84 | 15.85 | 20.7 | 13.8
Je_op | A/mm² | 529 | 325 | 595 | 470 | 429 | 599 | 763
Magnet current_ss (per design) | kA | 16.8 (CT) | | | 12.3 (BL) | | 15.4 (CC) |
B_bore_ss (per design) | T | 24.6 (CT) | | | 23.5 (BL) | | 22.4 (CC) |
Load-line ratio | % | 80 | 80 | 80 | 75 | 84 | 88 | 86

*Given by the inner radius of the innermost cable on the mid-plane.
REFERENCES
[1] P. McIntyre and A. Sattarov, "On the Feasibility of a Tripler Upgrade for the LHC", PAC (2005) 634.
[2] L. Rossi and E. Todesco, "Conceptual design of 20 T dipoles for high-energy LHC," CERN, Geneva, Switzerland, CERN Yellow Rep. 2011-3, pp. 13-19, 2011.
[3] E. Todesco, et al., "Dipoles for High-Energy LHC", IEEE Trans. Appl. Supercond., vol. 24, no. 3, June 2014, Art. no. 4004306.
[4] R. Gupta, et al., "Hybrid High-Field Cosine-Theta Accelerator Magnet R&D With Second-Generation HTS", IEEE Trans. Appl. Supercond., vol. 25, no. 3, June 2015, Art. no. 4003704.
[5] G. Sabbi, et al., https://indico.ihep.ac.cn/event/4900.
[6] Qingjin Xu, et al., "20-T Dipole Magnet with Common-Coil Configuration: Main Characteristics and Challenges", IEEE Trans. Appl. Supercond., vol. 26, no. 4, June 2016, Art. no. 4000404.
[7] J. van Nugteren, et al., "Toward REBCO 20 T+ Dipoles for Accelerators", IEEE Trans. Appl. Supercond., vol. 28, no. 4, June 2018, Art. no. 4008509.
[8] L. Rossi, "Superconducting magnets for the LHC main lattice", IEEE Trans. Appl. Supercond., vol. 14, no. 2, June 2004, p. 153.
[9] "The High Luminosity Large Hadron Collider", edited by O. Brüning and L. Rossi, World Scientific, October 2015.
[11] D. Schoerling, et al., "The 16 T Dipole Development Program for FCC and HE-LHC", IEEE Trans. Appl. Supercond., vol. 29, no. 5, August 2019, Art. no. 4003109.
[12] "Nb3Sn Accelerator Magnets", edited by D. Schoerling and A. Zlobin, Springer Open, 2019.
[13] D. C. Larbalestier, et al., "Isotropic Round-Wire Multifilament Cuprate Superconductor for Generation of Magnetic Fields above 30 T", Nature Materials, 13(4): 375-381.
[14] V. Selvamanickam, "High temperature superconductor (HTS) wires and tapes", in High Temperature Superconductors (HTS) for Energy Applications, Melhem, Z., Ed.; Woodhead Publishing Series in Energy; Woodhead Publishing: Cambridge, UK, 2012; pp. 34-68.
[15] L. Rossi, et al., "The EuCARD2 Future Magnets Program for Particle Accelerator High-Field Dipoles: Review of Results and Next Steps", IEEE Trans. Appl. Supercond., vol. 28, no. 3, April 2018, Art. no. 4001810.
[16] D. Martins Araujo, et al., "Preliminary Integration for Testing HTS Feather-M2 in the FRESCA2 Dipole Magnet", IEEE Trans. Appl. Supercond., vol. 30, no. 4, June 2020, Art. no. 4003605.
[17] J. S. Rogers, et al., "Strategies for conformal REBCO windings", IOP Conf. Series: Materials Science and Engineering, 1241 (2022) 012029.
[18] P. Ferracin, et al., "Towards 20 T Hybrid Accelerator Dipole Magnets", IEEE Trans. Appl. Supercond., vol. 32, no. 6, September 2022, Art. no. 4000906.
[20] R. M. Scanlan, D. R. Dietderich, and S. A. Gourlay, "A New Generation Nb3Sn Wire, and the Prospects for Its Use in Particle Accelerators", AIP Conference Proceedings, 711, 349 (2004).
[21] T. Shen and L. Garcia Fajardo, "Superconducting accelerator magnets based on high temperature superconducting Bi-2212 round wires", Instruments, 2020, 4(2), 17.
[22] L. Rossi and E. Todesco, "Electromagnetic design of superconducting quadrupoles", Phys. Rev. ST Accel. Beams, 10 (2007) 112401.
[23] L. Rossi and E. Todesco, "Electromagnetic design of superconducting dipoles based on sector coils", Phys. Rev. ST Accel. Beams, 9 (2006) 102401.
[24] P. Ferracin, et al., "The HL-LHC low-b quadrupole magnet MQXF: from short model to long prototype", IEEE Trans. Appl. Supercond., vol. 29, no. 5, August 2019, Art. no. 4001309.
[25] Salmi, et al., "Quench protection analysis integrated in the design of dipoles for the Future Circular Collider", Physical Review Accelerators and Beams, 20 (2017) 032401.
[26] D. R. Dietderich, et al., "Critical current variation of Rutherford cable of Bi-2212 in high magnetic fields with transverse stress", Physica C, vol. 341-348, Part 4, November 2000, pp. 2599-2600.
[27] D. C. van der Laan, et al., "Effect of Transverse Compressive Monotonic and Cyclic Loading on the Performance of Superconducting CORC® Cables and Wires", Supercond. Sci. Technol., 32(1): 015002, 2018.
[28] H. Felice, et al., "Performance of a Nb3Sn quadrupole under high stress", IEEE Trans. Appl. Supercond., vol. 21, no. 3, June 2011, p. 1849.
[29] P. Ebermann, et al., "Irreversible degradation of Nb3Sn Rutherford cables due to transverse compressive stress at room temperature", Supercond. Sci. Technol., 31 (2018) 065009.
[30] V. Marinozzi, et al., "Conceptual design of a 20 T hybrid cos-theta dipole superconducting magnet for future High-Energy particle accelerators", IEEE Trans. Appl. Supercond., submitted for publication.
[31] S. Caspi, et al., "Design of a Canted-Cosine-Theta Superconducting Dipole Magnet for Future Colliders", IEEE Trans. Appl. Supercond., vol. 27, no. 4, June 2017, Art. no. 4001505.
[32] B. Auchmann, et al., "Electromechanical Design of a 16-T CCT Twin-Aperture Dipole for FCC", IEEE Trans. Appl. Supercond., vol. 28, no. 3, April 2018, Art. no. 4000705.
[33] L. Brouwer, et al., "Design of CCT6: a large-aperture, 12 T, Nb3Sn Dipole Magnet", IEEE Trans. Appl. Supercond., submitted for publication.
[34] A. Zlobin, et al., "Large-Aperture High-Field Nb3Sn Dipole magnets", Proceedings of IPAC2018, Vancouver, BC, Canada, WEPML026.
[35] I. Novitski, et al., "Development of a 120-mm aperture Nb3Sn dipole coil with stress management", IEEE Trans. Appl. Supercond., submitted for publication.
[36] A. Patoux, J. Perot, and J. M. Rifflet, "Test of New Accelerator Superconducting Dipoles Suitable for High Precision Field", IEEE Trans. on Nuclear Science, vol. 30, no. 4, Aug. 1983.
[37] M. Karppinen, et al., "Design of 11 T twin-aperture Nb3Sn dipole demonstrator magnet for LHC upgrades", IEEE Trans. Appl. Supercond., vol. 22, no. 3, p. 4901504, June 2012.
[38] R. Valente, et al., "Baseline Design of a 16 T cos θ Bending Dipole for the Future Circular Collider", IEEE Trans. Appl. Supercond., vol. 29, no. 5, August 2019, Art. no. 4003005.
[39] E. Rochepault, et al., "20 T Hybrid Nb3Sn-HTS Block-coil Accelerator Dipole with Stress-Management", IEEE Trans. Appl. Supercond., submitted for publication.
[40] P. Ferracin, et al., "Development of the 15 T Nb3Sn Dipole HD2", IEEE Trans. Appl. Supercond., vol. 18, no. 2, June 2008, p. 277.
[41] P. Ferracin, et al., "Development of the EuCARD Nb3Sn Dipole Magnet FRESCA2", IEEE Trans. Appl. Supercond., vol. 23, no. 3, June 2013, Art. no. 4002005.
[42] G. Sabbi, et al., "Design Study of a 16-T Block Dipole for FCC", IEEE Trans. Appl. Supercond., vol. 26, no. 3, April 2016, Art. no. 4004705.
[43] M. Segreti, et al., "2-D and 3-D design of the block-coil dipole option for the future circular collider," IEEE Trans. Appl. Supercond., vol. 29, no. 5, Aug. 2019, Art. no. 4000404.
[44] G. Vallone, et al., "Magnetic and Mechanical Analysis of a Large Aperture 15 T Cable Test Facility Dipole Magnet", IEEE Trans. Appl. Supercond., vol. 31, no. 5, August 2021, Art. no. 9500406.
[45] R. Gupta, et al., "Common Coil Dipoles for Future High Energy Colliders", IEEE Trans. Appl. Supercond., vol. 27, no. 4, June 2017, Art. no. 4000605.
[46] F. Toral, et al., "Magnetic and Mechanical Design of a 16 T Common Coil Dipole for an FCC", IEEE Trans. Appl. Supercond., vol. 28, no. 3, April 2018, Art. no. 4004305.
[47] E. Ravaioli and G. Sabbi, "Design of a Compact 16 T Common-Coil Dipole for Future Colliders", IEEE Trans. Appl. Supercond., vol. 28, no. 4, June 2018, Art. no. 4008005.
| []
|
[
"Domain Generalization via Shuffled Style Assembly for Face Anti-Spoofing",
"Domain Generalization via Shuffled Style Assembly for Face Anti-Spoofing"
]
| [
"Zhuo Wang \nBeijing University of Posts and Telecommunications\n\n",
"Zezheng Wang \nKuaishou Technology\n\n",
"Zitong Yu \nCMVS\nUniversity of Oulu\n\n",
"Weihong Deng [email protected]@oulu.fiwangzezheng \nBeijing University of Posts and Telecommunications\n\n",
"Jiahong Li \nKuaishou Technology\n\n",
"Tingting Gao \nKuaishou Technology\n\n",
"Zhongyuan Wang [email protected] \nKuaishou Technology\n\n"
]
| [
"Beijing University of Posts and Telecommunications\n",
"Kuaishou Technology\n",
"CMVS\nUniversity of Oulu\n",
"Beijing University of Posts and Telecommunications\n",
"Kuaishou Technology\n",
"Kuaishou Technology\n",
"Kuaishou Technology\n"
]
| []
| With diverse presentation attacks emerging continually, generalizable face anti-spoofing (FAS) has drawn growing attention. Most existing methods implement domain generalization (DG) on the complete representations. However, different image statistics may have unique properties for the FAS tasks. In this work, we separate the complete representation into content and style ones. A novel Shuffled Style Assembly Network (SSAN) is proposed to extract and reassemble different content and style features for a stylized feature space. Then, to obtain a generalized representation, a contrastive learning strategy is developed to emphasize liveness-related style information while suppress the domain-specific one. Finally, the representations of the correct assemblies are used to distinguish between living and spoofing during the inferring. On the other hand, despite the decent performance, there still exists a gap between academia and industry, due to the difference in data quantity and distribution. Thus, a new large-scale benchmark for FAS is built up to further evaluate the performance of algorithms in reality. Both qualitative and quantitative results on existing and proposed benchmarks demonstrate the effectiveness of our methods. The codes will be available at | 10.1109/cvpr52688.2022.00409 | [
"https://arxiv.org/pdf/2203.05340v4.pdf"
]
| 247,362,876 | 2203.05340 | 6e260c7dfa51449d364bda5c77a6675f42459c1f |
Domain Generalization via Shuffled Style Assembly for Face Anti-Spoofing
Zhuo Wang
Beijing University of Posts and Telecommunications
Zezheng Wang
Kuaishou Technology
Zitong Yu
CMVS
University of Oulu
Weihong Deng [email protected]
Beijing University of Posts and Telecommunications
Jiahong Li
Kuaishou Technology
Tingting Gao
Kuaishou Technology
Zhongyuan Wang [email protected]
Kuaishou Technology
Domain Generalization via Shuffled Style Assembly for Face Anti-Spoofing
With diverse presentation attacks emerging continually, generalizable face anti-spoofing (FAS) has drawn growing attention. Most existing methods implement domain generalization (DG) on the complete representations. However, different image statistics may have unique properties for the FAS tasks. In this work, we separate the complete representation into content and style ones. A novel Shuffled Style Assembly Network (SSAN) is proposed to extract and reassemble different content and style features for a stylized feature space. Then, to obtain a generalized representation, a contrastive learning strategy is developed to emphasize liveness-related style information while suppress the domain-specific one. Finally, the representations of the correct assemblies are used to distinguish between living and spoofing during the inferring. On the other hand, despite the decent performance, there still exists a gap between academia and industry, due to the difference in data quantity and distribution. Thus, a new large-scale benchmark for FAS is built up to further evaluate the performance of algorithms in reality. Both qualitative and quantitative results on existing and proposed benchmarks demonstrate the effectiveness of our methods. The codes will be available at
Introduction
As the most successful computer vision technology, face recognition (FR) [10,48] has been widely employed in different application scenarios, such as mobile access control and electronic payments. Despite great success, FR systems may still suffer from presentation attacks (PAs), including print attacks, video replay, and 3D masks. To tackle these issues, a series of face anti-spoofing (FAS) methods have been proposed, ranging from hand-crafted descriptor based methods [8,36] to deep representation based ones [49,52,54,56,58]. * denotes the corresponding author.
Figure 1. Illustration of style transfer using the method of [17], with a live face as content input and a spoof face as style input.
The previous FAS methods have achieved promising performance in intra-domain scenarios, but may encounter dramatic degradation under cross-domain settings. The major reason behind this lies in the conflict between the limitations of the training data and the capability of the networks [16,31,55], which leaves the models trapped in dataset bias [41] and leads to poor generalization toward new domains. To address this problem, domain adaptation (DA) techniques [23,45] are used to alleviate the discrepancy between source and target domains by using unlabeled target data. However, in most real-world FAS scenarios, it is inefficient to collect sufficient unlabeled target data for training.
Thus, domain generalization (DG) methods are proposed to generalize well on the unseen target domain, which can be coarsely classified into three categories: learning a common feature space [20,39], learning disentangled representations [46], and learning to learn [38,40]. These methods mostly implement DG on the complete representations from common modules (i.e., CNN-BN-ReLU), but fail to fully exploit the subtle properties of global and local image statistics in FAS. Specifically, different normalization approaches emphasize different statistics in FAS. For example, Batch Normalization (BN) [18] based structures are usually used to summarize global image statistics, such as semantic features and physical attributes. Instance Normalization (IN) [43] based structures focus on the specific sample for distinctive characteristics, such as liveness-related texture and domain-specific external factors. Thus, to mine different statistics in FAS, [29] adopts an adaptive approach to adjust the ratio of IN and BN in feature extraction. Differently, we adopt BN and IN based structures to separate the complete representation into global and local image statistics, denoted as content and style features respectively, and then apply specific measures on them for generalizable FAS.
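To make the BN/IN distinction concrete, the snippet below shows one way to split a feature map into per-instance statistics (the "style" that IN normalizes away) and an instance-normalized residual (the "content"); this is a simplified PyTorch sketch, not the exact SSAN modules.

```python
import torch

def split_content_style(feat, eps=1e-5):
    """Sketch of separating a (N, C, H, W) feature map into per-instance
    style statistics and an instance-normalized content residual."""
    mu = feat.mean(dim=(2, 3), keepdim=True)           # per-sample, per-channel mean
    sigma = feat.std(dim=(2, 3), keepdim=True) + eps   # per-sample, per-channel std
    content = (feat - mu) / sigma                      # what Instance Norm passes on
    style = torch.cat([mu, sigma], dim=1).flatten(1)   # what Instance Norm discards
    return content, style
```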
Besides, style transfer [17] can be used to reassemble pairs of content features (global statistics) and style features (local statistics) to form stylized features for specific supervision. As shown in Fig. 1, spoofing cues as the style input can be applied to live faces to generate the corresponding spoof manipulations. Thus, [22,51] directly utilize this approach for data augmentation before training in FAS. However, these two-stage methods are inefficient in large-scale training. Thus, an end-to-end approach based on style transfer at the feature level is adopted in this work.
Combining the above viewpoints, we propose a novel framework, called shuffled style assembly network (SSAN), based on style transfer at the feature level. Specifically, a two-stream structure is utilized to extract content and style features, respectively. Content information mainly records global semantic features and physical attributes, so a shared feature distribution is easily acquired by using adversarial learning. Style information preserves discriminative cues that are beneficial to enhance the distinction between living and spoofing. Different from the image-to-image style transfer proposed in [17], we stack successive shuffled style assembly layers to reassemble various content and style features into a stylized feature space. Then, a contrastive learning strategy is adopted to enhance liveness-related style information and suppress domain-specific information. Lastly, our end-to-end architecture and training approach are more suitable for large-scale training in reality.
Due to the difference in data distribution between academic and industrial scenarios, previous evaluation protocols are of limited value in reflecting the genuine performance of algorithms in reality. Thus, to simulate the data quantity and distribution in reality, we combine twelve datasets to build a large-scale evaluation benchmark and further verify the effectiveness of algorithms. Specifically, TPR@FPR at specific values is used as the metric to evaluate the performance of different models on each dataset, where all live samples serve as negative cases and partial spoof samples as positive cases.
The main contributions of this work are four-fold:
• To utilize the global and local statistics separately for their unique properties, we propose a novel architecture called shuffled style assembly network (SSAN) for generalizable face anti-spoofing.
• To enhance liveness-related style information and suppress domain-specific information, we adopt a contrastive learning approach that pulls the stylized features close to or pushes them far from the anchor feature. The corresponding loss function is utilized to supervise our network.
• Based on the real-world data distribution, we combine twelve public datasets into a large-scale benchmark for face anti-spoofing in reality. The metric of single-side TPR@FPR is proposed for a comprehensive assessment.
• Our proposed methods achieve the state-of-the-art performance on existing and proposed benchmarks.
Related Work
Face Anti-Spoofing. Traditional methods usually extract hand-crafted features such as LBP [8] and SIFT [36] to distinguish living from spoofing. In the era of deep learning, [52] trains CNNs to learn a binary classifier. Auxiliary information such as depth maps [2], reflection maps [53], and rPPG [25] is utilized to explore additional details for FAS.
To make algorithms generalize well to unseen scenarios, domain adaptation (DA) and domain generalization (DG) techniques have been developed. [23] minimizes MMD [14] to pull different distributions close. [45] leverages adversarial domain adaptation to learn a shared embedding space. [39] utilizes multiple domain discriminators to learn a generalized feature space. [20] forms single-side adversarial learning to further improve the performance. [46,61] utilize disentangled representation learning to isolate the liveness-related features for classification. To achieve generalizable learning, meta-learning based methods [5,37,38,40,47] are introduced and developed for regular optimization.
Different from previous DG methods, we split the complete representation into content and style ones with different supervision. Then, a generalized feature space is obtained by reassembling features under a contrastive learning strategy.
Normalization and Style Transfer. Normalization layers are essential in deep networks to eliminate covariate shift and accelerate training. Batch Normalization (BN) [18] utilizes the statistics of the mini-batch to induce universal characteristics. Differently, Instance Normalization (IN) [43] is proposed to exploit stylized characteristics of specific samples. Thus, the former lays stress on global statistics while the latter emphasizes sample-specific ones. [17] proposes Adaptive Instance Normalization (AdaIN) for style transfer by utilizing target samples to control the scaling and shifting of source-image normalized features. This style manipulation is widely used in generative tasks for texture synthesis [35] and style transfer [21]. Observing its effect on texture patterns, our method adapts this module for FAS.
Different from the previous methods [22,29,51] operating on normalization and image-level transformation, our method adopts AdaIN-based layers to assemble different content and style features into a generalized feature space.

Figure 2. The overall architecture of our shuffled style assembly network (SSAN). Firstly, RGB images from different domains are fed into the feature generator to obtain feature embeddings. Then, the feature extractor with GRL is trained to make the content features indistinguishable across different domains by using adversarial learning. Meanwhile, another feature extractor collects multi-scale generated features to capture coarse-to-fine style information. Furthermore, to refine the style information related to FAS, a cascade of style assembly layers (SAL) is utilized to reassemble different content and style features, with the corresponding contrastive learning strategy designed for supervision.
Protocols for Face Anti-Spoofing. To evaluate the effectiveness of FAS methods, various protocols have been established, including the intra-dataset intra-type protocol [3,31], cross-dataset intra-type protocol [23], intra-dataset cross-type protocol [13,32], and cross-dataset cross-type protocol [1,57]. Notably, most protocols consist of only one or two datasets, which may limit their ability to evaluate performance across multiple data distributions. Thus, the OCIM protocol [39,40] is used to evaluate domain-generalization performance across multiple domains.
Moreover, due to the limited amount of data, [7] proposes an open-source framework to aggregate heterogeneous datasets for specific evaluation. Differently, we focus on the real-world data distribution, and more complex domain fields with different data distributions are obtained by fusing twelve different datasets including image and video formats. Thus, the merged dataset contains more sophisticated attack types, such as print, replay, mask, makeup, waxworks, etc. Besides, evaluations under intra- and cross-domain scenarios among multiple datasets are investigated using the metric of single-side TPR@FPR, which is more suitable for realistic scenarios.
Proposed Approach
In this section, we introduce our shuffled style assembly network (SSAN), shown in Fig. 2. Firstly, we present the two-stream part of our network for content and style information aggregation. Secondly, a shuffled style assembly approach is proposed to recombine various content and style features into a stylized feature space. Then, to suppress domain-specific style information and enhance liveness-related information, contrastive learning is used in the stylized feature space. Lastly, the overall loss is integrated to optimize the network for stable and reliable training.
Content and Style Information Aggregation
Content information is usually represented by common factors in FAS, mainly including semantic features and physical attributes. Differently, style information describes discriminative cues that can be divided into two parts in FAS tasks: domain-specific and liveness-related style information. Thus, content and style features are captured in two separate streams of our network. Specifically, the feature generator, a shallow embedding network, captures multi-scale low-level information. Then, content and style feature extractors collect different image statistics by using specific normalization layers (i.e., BN and IN).
For content information aggregation, we conjecture that only small distribution discrepancies exist across different domains, based on the following facts: 1) samples from various domains all contain facial areas and thus share a common semantic feature space; 2) whether bona fide or attack presentations, their physical attributes such as shape and size are often similar. Therefore, we adopt adversarial learning to make the generated content features indistinguishable across different domains. Specifically, the parameters of the content feature generator are optimized by maximizing the adversarial loss function, while the parameters of the domain discriminator are optimized in the opposite direction. This process can be formulated as follows:
$$\min_{D}\max_{G}\ \mathcal{L}_{adv}(G, D) = -\,\mathbb{E}_{(x,y)\sim(X, Y_D)} \sum_{i=1}^{M} \mathbb{1}\left[i = y\right] \log D\big(G(x)\big), \tag{1}$$
where $Y_D$ is the set of domain labels and $M$ is the number of data domains. $G$ and $D$ represent the content feature generator and the domain discriminator, respectively. To optimize $G$ and $D$ simultaneously, the gradient reversal layer (GRL) [12] is used to reverse the gradient by multiplying it by a negative scalar during backward propagation.
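To make this concrete, the following is a minimal PyTorch sketch of a gradient reversal layer; the class and helper names are our own illustration, not the authors' released code.

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass; multiplies the gradient by -lambda backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the feature generator.
        return grad_output.neg() * ctx.lambd, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Usage: domain logits are computed on reversed content features, so the
# discriminator minimizes L_adv while the generator effectively maximizes it:
# domain_logits = discriminator(grad_reverse(f_c, lambd=1.0))
```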
For style information aggregation, we collect multi-layer features along the hierarchical structure in a pyramid-like [26] manner, due to the different scales of style characteristics. For example, the brightness of a scene is mainly implicated in broad-scale features, while the texture of presentation materials usually concentrates in local-scale regions.
Shuffled Style Assembly
Adaptive Instance Normalization (AdaIN) [17] is an adaptive style transfer method that can assemble a content input $x$ and a style input $y$ as follows:
$$\mathrm{AdaIN}(x, \gamma, \beta) = \gamma\, \frac{x - \mu(x)}{\sigma(x)} + \beta, \tag{2}$$
where $\mu(\cdot)$ and $\sigma(\cdot)$ represent the channel-wise mean and standard deviation respectively, and $\gamma$ and $\beta$ are affine parameters generated from the style input $y$.
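As a concrete reading of Eqn. (2), here is a minimal PyTorch sketch of AdaIN over feature maps, with channel-wise statistics taken over the spatial dimensions; the function name and shapes are illustrative assumptions.

```python
import torch

def adain(x, gamma, beta, eps=1e-5):
    """AdaIN(x, gamma, beta) = gamma * (x - mu(x)) / sigma(x) + beta.

    x:           content feature maps of shape (N, C, H, W)
    gamma, beta: affine parameters of shape (N, C), produced from the style input
    """
    mu = x.mean(dim=(2, 3), keepdim=True)           # channel-wise mean
    sigma = x.std(dim=(2, 3), keepdim=True) + eps   # channel-wise std (eps avoids /0)
    x_norm = (x - mu) / sigma
    gamma = gamma.unsqueeze(-1).unsqueeze(-1)       # (N, C) -> (N, C, 1, 1)
    beta = beta.unsqueeze(-1).unsqueeze(-1)
    return gamma * x_norm + beta
```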
In this work, to combine a content feature $f_c$ and a style feature $f_s$, style assembly layers (SAL) are built from AdaIN layers and convolution operators with a residual mapping, described as follows:
$$\begin{aligned}
\gamma, \beta &= \mathrm{MLP}\big[\mathrm{GAP}(f_s)\big], \\
z &= \mathrm{ReLU}\big[\mathrm{AdaIN}(K_1 \otimes f_c,\ \gamma, \beta)\big], \\
\mathrm{SAL}(f_c, f_s) &= \mathrm{AdaIN}(K_2 \otimes z,\ \gamma, \beta) + f_c,
\end{aligned} \tag{3}$$
where $K_1$ and $K_2$ are $3 \times 3$ convolution kernels, $\otimes$ is the convolution operation, and $z$ is an intermediate variable.
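Under the same assumptions, a style assembly layer following Eqn. (3) might be sketched as below. It reuses the adain helper from the previous sketch; the uniform channel width is an assumption for illustration.

```python
import torch.nn as nn

class SAL(nn.Module):
    """Style assembly layer: two AdaIN-modulated 3x3 convolutions with a residual path."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)  # K_1
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)  # K_2
        self.gap = nn.AdaptiveAvgPool2d(1)           # GAP over the style features
        self.mlp = nn.Linear(channels, 2 * channels)  # maps pooled style to (gamma, beta)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, f_c, f_s):
        gamma, beta = self.mlp(self.gap(f_s).flatten(1)).chunk(2, dim=1)
        z = self.relu(adain(self.conv1(f_c), gamma, beta))
        return adain(self.conv2(z), gamma, beta) + f_c   # residual mapping
```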
However, $f_s$ contains not only liveness-related information but also domain-specific information that may cause domain bias during network optimization. To alleviate this problem, the shuffled style assembly method is proposed to form auxiliary stylized features for domain generalization.
Given an input sequence of length $N$ in a mini-batch, $x_i$ denotes the input sample, where $i \in \{1, 2, \ldots, N\}$. Its content feature can be expressed as $f_c(x_i)$ and its style feature as $f_s(x_i)$. Thus, the corresponding assembled feature $S(x_i, x_i)$ can be formulated as follows:
$$S(x_i, x_i) = \mathrm{SAL}\big(f_c(x_i), f_s(x_i)\big), \tag{4}$$
which represents the process of style assembly using the paired content and style features of input sample $x_i$. Thus, $S(x_i, x_i)$ is denoted as a self-assembly feature. Furthermore, to exploit liveness-related style features, we synthesize an auxiliary feature space by randomly shuffling the original pairs of $f_c(x_i)$ and $f_s(x_i)$, as follows:
$$S(x_i, x_{i^*}) = \mathrm{SAL}\big(f_c(x_i), f_s(x_{i^*})\big), \quad i^* \in \mathrm{random}\{1, 2, \ldots, N\}, \tag{5}$$
where random denotes a uniformly chosen permutation. $S(x_i, x_{i^*})$ is denoted as a shuffle-assembly feature.
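In a mini-batch, the shuffling of Eqn. (5) reduces to a random permutation of the style features; a small illustrative sketch, assuming the SAL module above:

```python
import torch

def assemble(sal, f_c, f_s):
    """Return self-assembly and shuffle-assembly features for one mini-batch."""
    perm = torch.randperm(f_s.size(0), device=f_s.device)  # i* = random{1, ..., N}
    s_self = sal(f_c, f_s)                                  # S(x_i, x_i)
    s_shuffle = sal(f_c, f_s[perm])                         # S(x_i, x_{i*})
    return s_self, s_shuffle, perm
```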
Contrastive Learning for Stylized Features
From the view of style features, a major obstacle is that domain-specific style features may conceal liveness-related ones in cross-domain scenarios, which may cause mistakes in judgment. To solve this problem, we propose a contrastive learning approach to emphasize liveness-related style features as well as suppress domain-specific ones.
After combining content and style features, we obtain self-assembly features $S(x_i, x_i)$ and shuffle-assembly features $S(x_i, x_{i^*})$. For $S(x_i, x_i)$, we input them to the classifier and supervise them with binary ground-truth signals via the loss function $\mathcal{L}_{cls}$. For $S(x_i, x_{i^*})$, we measure their difference from $S(x_i, x_i)$ using cosine similarity:
$$\mathrm{Sim}(a, b) = -\,\frac{a}{\|a\|_2} \cdot \frac{b}{\|b\|_2}, \tag{6}$$
where $\|\cdot\|_2$ is the $l_2$-norm, and $a$ and $b$ are the two compared features. This is equivalent to the mean squared error of $l_2$-normalized vectors [15]. As shown in Fig. 3, the self-assembly features $S(x_i, x_i)$ are set as anchors in the stylized feature space. Inspired by [4], a stop-gradient (stopgrad) operation is applied to $S(x_i, x_i)$ to fix their positions in the feature space. Then, the shuffle-assembly features $S(x_i, x_{i^*})$ are guided to move closer to or farther from their corresponding anchors $S(x_i, x_i)$ according to the liveness information. During this process, backpropagation is applied through the shuffle-assembly features but not through the self-assembly ones, and the liveness-intensive style information is further aggregated. Thus, the contrastive loss $\mathcal{L}_{contra}$ can be formulated as follows:
$$\mathcal{L}_{contra} = \sum_{i=1}^{N} Eq(x_i, x_{i^*}) \cdot \mathrm{Sim}\big(\mathrm{stopgrad}(a), b\big), \tag{7}$$
where $a = S(x_i, x_i)$ and $b = S(x_i, x_{i^*})$. $Eq(x_i, x_{i^*})$ measures the consistency of the liveness labels between $x_i$ and $x_{i^*}$, which can be formulated as follows:
$$Eq(x_i, x_{i^*}) = \begin{cases} +1, & \mathrm{label}(x_i) = \mathrm{label}(x_{i^*}), \\ -1, & \text{otherwise}. \end{cases} \tag{8}$$
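Putting Eqns. (6)-(8) together, a hedged sketch of the contrastive term, with the stop-gradient realized by detach(), could read:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(s_self, s_shuffle, labels, perm):
    """L_contra = sum_i Eq(x_i, x_{i*}) * Sim(stopgrad(S(x_i, x_i)), S(x_i, x_{i*})).

    labels: binary liveness labels of shape (N,); perm: the shuffle permutation.
    """
    anchor = s_self.detach().flatten(1)                  # stop-gradient on the anchor
    moving = s_shuffle.flatten(1)                        # gradient flows through here
    sim = -F.cosine_similarity(anchor, moving, dim=1)    # Eqn. (6): negative cosine
    eq = (labels == labels[perm]).float() * 2.0 - 1.0    # Eqn. (8): +1 same, -1 different
    return (eq * sim).sum()                              # Eqn. (7)
```

Same-label pairs receive a negative cosine term, so minimizing the loss pulls them toward their anchors; different-label pairs are pushed away.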
Finally, the whole process of our framework is described in detail in Algorithm 1.
Algorithm 1 The optimization strategy of SSAN.
Input: Mixture domain dataset $D_s = \{x_i^s, y_i^s\}_{i=1}^{n_s}$, initialized CNN model $\Phi_0(\cdot)$.
Output: Final CNN model parameters $\Phi_T(\cdot)$.
1: while not end of iteration do
2:   Shuffle the input sequence to obtain the permuted sequence $\{x_{i^*} \mid i^* = \mathrm{random}[1, 2, \ldots, N]\}$.
3:   Input $x_i$ for the content feature $f_c(x_i)$ and style feature $f_s(x_i)$. Input $x_{i^*}$ for the style feature $f_s(x_{i^*})$.
4:   Input $f_c(x_i)$ to the discriminator and compute the adversarial loss $\mathcal{L}_{adv}$ based on Eqn. (1).
5:   Assemble $f_c(x_i)$ and $f_s(x_i)$ to get the self-assembly features $S(x_i, x_i)$. Assemble $f_c(x_i)$ and $f_s(x_{i^*})$ to get the shuffle-assembly features $S(x_i, x_{i^*})$.
6:   Input $S(x_i, x_i)$ to the classifier and compute the classification loss $\mathcal{L}_{cls}$.
7:   Utilize $S(x_i, x_i)$ and $S(x_i, x_{i^*})$ to compute the contrastive loss $\mathcal{L}_{contra}$ based on Eqn. (7).
8:   Compute $\mathcal{L}_{overall} = \mathcal{L}_{cls} + \lambda_1 \cdot \mathcal{L}_{adv} + \lambda_2 \cdot \mathcal{L}_{contra}$. Perform gradient backpropagation and update the model parameters $\Phi(\cdot)$.
9: end while
10: Evaluate $\Phi_T(\cdot)$ on the testing data $D_t$.
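As a compact reading of Algorithm 1, one training iteration might look as follows. All module and attribute names (content_features, style_features, discriminator, sal, classifier) are illustrative placeholders, the cross-entropy for L_cls matches the SSAN-R binary-label setting, and the sketch reuses the grad_reverse, assemble, and contrastive_loss helpers above.

```python
import torch.nn.functional as F

def train_step(batch, model, optimizer, lambda1=1.0, lambda2=1.0):
    x, live_labels, domain_labels = batch
    f_c = model.content_features(x)   # content stream (BN-based)
    f_s = model.style_features(x)     # style stream (IN-based)

    # Adversarial loss on content features through the gradient reversal layer.
    domain_logits = model.discriminator(grad_reverse(f_c))
    l_adv = F.cross_entropy(domain_logits, domain_labels)

    # Self- and shuffle-assembly in the stylized feature space.
    s_self, s_shuffle, perm = assemble(model.sal, f_c, f_s)
    l_cls = F.cross_entropy(model.classifier(s_self), live_labels)
    l_contra = contrastive_loss(s_self, s_shuffle, live_labels, perm)

    loss = l_cls + lambda1 * l_adv + lambda2 * l_contra   # Eqn. (9)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```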
Loss Function
Having described the operation of our network, we collect the overall loss function $\mathcal{L}_{overall}$ for stable and reliable training, which can be formulated as follows:
$$\mathcal{L}_{overall} = \mathcal{L}_{cls} + \lambda_1 \cdot \mathcal{L}_{adv} + \lambda_2 \cdot \mathcal{L}_{contra}, \tag{9}$$
where $\lambda_1$ and $\lambda_2$ are two hyper-parameters that balance the proportions of the different loss functions.
Large-Scale FAS Benchmarks
There exists a gap between academia and industry, which can be summarized in the following two aspects.
Data Quantity. Compared with authentic scenarios, the amount of data in academia is still too small, which may cause model overfitting and limit the development of algorithms. To overcome this problem, we merge twelve datasets and then design corresponding intra- and inter-dataset testing protocols to further evaluate our method.
Data Distribution and Evaluation Metrics. In the real-world data distribution, live faces usually account for the majority. However, most existing evaluation protocols collect almost equal numbers of live and spoof faces as the testing set and calculate the average error rate for evaluation, which disagrees with reality. Besides, data in reality usually consist of multiple fields with different distributions, whereas academic datasets usually contain fewer data domains. To reduce the above inconsistencies, multiple datasets are used as training and testing sets simultaneously in our protocols. Specifically, in the training stage, all of the training data are used to optimize our models. In the inference stage, due to the similar distribution of live faces [20], we gather all live data from each testing dataset as the negative cases, and partial spoof data in the current testing dataset are arranged as the positive cases. Lastly, the mean and variance of the true positive rate (TPR) at a given false positive rate (FPR) are computed over the testing datasets for an overall evaluation.
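For reference, TPR at a fixed FPR can be read off an ROC curve built from spoofness scores (spoof = positive, live = negative); the following numpy sketch is ours, not the authors' evaluation code:

```python
import numpy as np

def tpr_at_fpr(scores, labels, target_fpr=0.01):
    """TPR at the given FPR; labels: 1 = spoof (positive), 0 = live (negative)."""
    order = np.argsort(-scores)          # descending score = most spoof-like first
    labels = labels[order]
    tps = np.cumsum(labels == 1)         # true positives at each threshold
    fps = np.cumsum(labels == 0)         # false positives at each threshold
    tpr = tps / max(tps[-1], 1)
    fpr = fps / max(fps[-1], 1)
    # Last operating point whose FPR does not exceed the target.
    idx = np.searchsorted(fpr, target_fpr, side="right") - 1
    return float(tpr[idx]) if idx >= 0 else 0.0
```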
Twelve datasets are used in the large-scale FAS benchmarks, numbered as shown in Table 1. The evaluation protocols are designed as follows:
• Protocol 1. This protocol is implemented in an intra-dataset evaluation scenario. Specifically, all datasets are used as training and testing sets simultaneously.
• Protocol 2. This protocol is implemented in a cross-domain evaluation scenario by dividing the datasets into two piles, P1: {D3, D4, D5, D10, D11, D12} and P2: {D1, D2, D6, D7, D8, D9}, as sketched in code after this list. This yields two sub-protocols: Protocol 2_1, training on P1 and testing on P2; and Protocol 2_2, training on P2 and testing on P1. Note that the cross-domain protocols are more challenging, as the testing set covers more unseen datasets and more complex unknown attacks, which correlates with real-world scenarios.
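The dataset split from the list above can be written as a small configuration, using the numbering of Table 1 (an illustrative sketch):

```python
# Dataset numbering follows Table 1; the split below mirrors Protocol 2.
P1 = ["D3", "D4", "D5", "D10", "D11", "D12"]
P2 = ["D1", "D2", "D6", "D7", "D8", "D9"]

PROTOCOLS = {
    "protocol_1":   {"train": P1 + P2, "test": P1 + P2},  # intra-dataset
    "protocol_2_1": {"train": P1, "test": P2},            # cross-domain
    "protocol_2_2": {"train": P2, "test": P1},            # cross-domain
}
```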
More details are provided in the supplementary materials.

Experiments

Implementation Details

Data Preparation. The datasets shown in Table 1 contain image and video data. For image data, we utilize all of their images. For video data, we extract frames at specific intervals. After obtaining data in image format, we adopt MTCNN [60] for face detection, then crop and resize faces to 256 × 256 as RGB input. Moreover, a dense face alignment approach (i.e., PRNet [11]) is used to generate the ground-truth depth maps of size 32 × 32 for genuine faces, while spoof depth maps are set to zeros.
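As an illustration of this preparation pipeline, a hedged sketch of frame extraction and face cropping with OpenCV follows; detect_face stands in for an MTCNN wrapper and is hypothetical, and the sampling interval is an assumed value since the paper does not state it:

```python
import cv2

FRAME_INTERVAL = 10  # assumed sampling interval; the paper does not specify it

def extract_frames(video_path, interval=FRAME_INTERVAL):
    """Yield frames from a video at a fixed interval."""
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % interval == 0:
            yield frame
        idx += 1
    cap.release()

def preprocess(frame):
    """Crop the detected face and resize to the 256x256 RGB input."""
    x, y, w, h = detect_face(frame)        # hypothetical MTCNN wrapper
    face = frame[y:y + h, x:x + w]
    face = cv2.resize(face, (256, 256))
    return cv2.cvtColor(face, cv2.COLOR_BGR2RGB)
```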
Networks Setting. Similar to [20], two structures are established, denoted as SSAN-M and SSAN-R. Specifically, SSAN-M adopts the embedding part of DepthNet [31] while SSAN-R adopts that of ResNet-18 [16] for feature generation. More details are in the supplementary materials.
Training Setting. Due to the limit of GPU memory, the batch size is set to 16 for SSAN-M and 256 for SSAN-R. Different ground-truths are used as supervision signals: depth maps for SSAN-M and binary labels for SSAN-R. Therefore, their corresponding $\mathcal{L}_{cls}$ are the mean-squared and cross-entropy losses, respectively. $\lambda_1$ and $\lambda_2$ are set to 1 in training. The Adam optimizer with a learning rate (lr) of 1e-4 and weight decay of 5e-5 is used in the experiments on OCIM. The SGD optimizer with a momentum of 0.9 and weight decay of 5e-4 is used in the experiments on the proposed protocols. Its initial lr is 0.01 and is multiplied by 0.2 every two epochs until the 30th epoch.
Testing Setting. During testing, we calculate a final classification score to separate bona fide and attack presentations. Specifically, the mean value of the predicted depth map is the final score for SSAN-M, while the value of the sigmoid function on the living class is the final score for SSAN-R.
Experiment on OCIM.
Four datasets are used to evaluate the performance of SSAN in different cross-domain scenarios, following the implementation of [39]: OULU-NPU [3] (O), CASIA-MFSD [64] (C), Replay-Attack [6] (I), and MSU-MFSD [50] (M).
Experiment in the Leave-One-Out (LOO) Setting. For an overall evaluation, we conduct cross-dataset testing using the LOO strategy: three datasets are selected for training and the remaining one for testing. We compare our models with recent SOTA methods, as shown in Table 2. It can be observed that our SSAN-M shows the best performance on the protocols O&C&I to M, O&M&I to C, and O&C&M to I, and competitive performance on the protocol I&C&M to O. These results demonstrate the domain generalization capacity of our method. Moreover, when we adopt the ResNet18-based network, denoted SSAN-R, its performance improves substantially and exceeds the model SSDG-R proposed in [20] under similar settings. This indicates that SSAN-R is more effective in the cross-dataset scenario, and it will therefore be further measured on the large-scale protocols we propose.

Experiment on Limited Source Domains. We also evaluate our method when extremely limited source domains are available. Specifically, MSU-MFSD and Replay-Attack are selected as the source domains for training, and the remaining two (i.e., CASIA-MFSD and OULU-NPU) are used as the target domains for testing, respectively. As shown in Table 3, our method achieves the lowest HTER and the highest AUC despite the limited source data, which proves the modeling efficiency and generalization capability of our network on a challenging task.
Experiment on Proposed Benchmarks
To further evaluate the performance of our method in reality, we conduct experiments on the large-scale FAS benchmark we propose, as shown in Table 4. Different network structures (i.e., CNN [16] and Transformer [42]) and some recent SOTA methods (i.e., CDCN [58] and SSDG [20]) are also run in their default settings for comparison. From the evaluation results, we observe that our method achieves the best performance, exceeding that of the other compared methods, which proves the effectiveness of SSAN on real-world data distributions. It is worth noting that some methods achieve excellent performance on existing protocols but suffer acute degradation on the large-scale benchmarks. This phenomenon further reveals the mismatch between academia and industry in FAS. More detailed analyses are in the supplementary materials.
Ablation Study
To verify the superiority of our SSAN and the contribution of each component, multiple incomplete models are built by controlling different variables. All results are measured in the same manner, as shown in Table 5.
Effectiveness of Different Components. To verify the effectiveness of the generalized content feature space, we conduct experiments with SSAN w/o $\mathcal{L}_{adv}$. Content features usually record common patterns in FAS, so it is easier to reduce their domain difference than to operate directly on the complete features. Besides, to assemble arbitrary combinations of content and style features for domain generalization, stripping domain distinctions from the content information is indispensable.
On the other hand, to prove the importance of contrastive learning for shuffled stylized features, experiments with SSAN w/o $\mathcal{L}_{contra}$ are implemented for comparison. The quantitative results indicate that style assembly guided by liveness-intensive cues is beneficial for improving the performance on cross-domain FAS tasks.
Impact of the Stop-Gradient Operation. In contrastive learning for stylized features, the self-assembly features adopt the stop-gradient approach to fix their positions in the feature space as anchors. Their corresponding shuffle-assembly features then move closer to or farther from them according to the liveness information. The ablation experiment SSAN w/o stop-grad shows the effectiveness of this feature aggregation in contrastive learning for emphasizing liveness-related style information and suppressing domain-specific information. Besides, from the continuous evaluation curves shown in Fig. 4, it can be concluded that the stop-gradient operation contributes to stable training.

Comparison Between Hard and Soft Supervision. The relative movement approach we adopt in contrastive learning can be regarded as soft supervision in the stylized feature space, compared to direct supervision using the ground-truth. To investigate their relative effectiveness, we conduct the experiment w/ hard-sup for an ablation study between them, as shown in Table 5. The declining performance shows that the soft supervision method is more suitable for our networks under cross-domain testing scenarios.
Analysis of Contrastive Learning. Existing works [28,34] implement classical supervised contrastive learning (SCL) on the complete representation in FAS. Differently, our method conducts contrastive learning between self-assembly and shuffle-assembly features. To compare the two, the experiment w/ SCL is conducted by implementing contrastive learning on the self-assembly features directly. The final results demonstrate the efficiency of the auxiliary features in contrastive learning, which are built in a shuffle-then-assemble manner.
Visualization and Analysis
Feature Visualization. To analyze the feature space learned by our SSAN method, we visualize the distributions of different features using t-SNE [44], as shown in Fig. 5. For content features, their distribution is compact and mixed, even though the samples may belong to multiple databases or carry various liveness attributes. For style features, there exists a coarse boundary between living and spoofing along with a narrow distribution, despite there being no direct supervision on them. This phenomenon indicates that our contrastive learning for stylized features is effective at emphasizing liveness-related style features and suppressing other irrelevant ones, such as domain-specific information.
For stylized features, we combine the content and style information for the classification between living and spoofing. The visualization results show that even when encountering an unknown distribution, our method can still generalize well to the target domain.

Attention Visualization. To find the regions that drive content feature extraction and liveness detection, we adopt Grad-CAM [65] to overlay their activation maps on the original images, as shown in Fig. 6. It can be observed that for both living and spoofing, the content features mainly focus on the landmark areas of faces, which contain abundant semantic features and physical attributes. After combining with the style information, the stylized features for classification show different activation properties: (1) for live faces, our model lays stress on the face regions to seek cues for judgment; (2) for spoofing faces, spoofing cues are concentrated on by our method, such as the moiré phenomenon in replay attacks and the photo cut position in print attacks.
Conclusion
In this paper, we have proposed a novel shuffled style assembly network (SSAN) for generalizable face anti-spoofing (FAS). Different from previous methods that operate on the complete features, we operate on content and style features separately due to their distinct properties. For content features, adversarial learning is adopted to make them domain-indistinguishable. For style features, a contrastive learning strategy is used to emphasize liveness-related style information while suppressing the domain-specific one. Then, the correct pairs of content and style features are reassembled for classification. Moreover, to bridge the gap between academia and industry, a large-scale benchmark for FAS is built by aggregating existing datasets. Experimental results on existing and proposed benchmarks have demonstrated the superiority of our methods.
Figure 3. The illustration of the contrastive learning between self-assembly and shuffle-assembly features. Different shapes represent data from different domains: round = domain 1, square = domain 2. Different colors represent different liveness information: green = living, red = spoofing. The dotted line represents style information while the interior solid represents content information.
Figure 4. The comparison curves between SSAN-M and SSAN-M w/o stop-grad under protocol O&C&I to M (curves: SSAN-M HTER, SSAN-M AUC, SSAN-M w/o stop-grad HTER, SSAN-M w/o stop-grad AUC). The x-axis represents the number of epochs while the y-axis records the values of AUC (%) and HTER (%), as shown in the legend.
Figure 5. The t-SNE [44] visualization of different features under protocol O&C&I to M. Panels (a), (b), and (c) describe the feature distributions of content features, style features, and stylized features, respectively. Different colors indicate features from different domains: green = O, blue = C, yellow = I, red = M. Different shapes represent different liveness information: point = living, cross = spoofing.

Figure 6. Grad-CAM [65] visualizations of activation areas under protocol O&M&I to C. (a): Original images. (b): Visualizations for content feature generation. (c): Visualizations for assembled feature (content + style) generation.
Table 1. The datasets and their corresponding numbers used in the large-scale benchmark.

Dataset               Number    Dataset             Number
CASIA-MFSD [64]       D1        Rose-Youtu [23]     D7
REPLAY-ATTACK [6]     D2        WFFD [19]           D8
MSU-MFSD [50]         D3        CelebA-Spoof [63]   D9
HKBU-MARs V2 [59]     D4        CASIA-SURF [62]     D10
OULU-NPU [3]          D5        WMCA [13]           D11
SiW [31]              D6        CeFA [27]           D12
Table 2. The results of cross-dataset testing on OULU-NPU, CASIA-MFSD, Replay-Attack, and MSU-MFSD. Each protocol reports HTER(%) / AUC(%).

Method            O&C&I to M       O&M&I to C       O&C&M to I       I&C&M to O
MMD-AAE [24]      27.08 / 83.19    44.59 / 58.29    31.58 / 75.18    40.98 / 63.08
MADDG [39]        17.69 / 88.06    24.50 / 84.51    22.19 / 84.99    27.98 / 80.02
SSDG-M [20]       16.67 / 90.47    23.11 / 85.45    18.21 / 94.61    25.17 / 81.83
DR-MD-Net [46]    17.02 / 90.10    19.68 / 87.43    20.87 / 86.72    25.02 / 81.47
RFMeta [40]       13.89 / 93.98    20.27 / 88.16    17.30 / 90.48    16.45 / 91.16
NAS-FAS [57]      19.53 / 88.63    16.54 / 90.18    14.51 / 93.84    13.80 / 93.43
D2AM [5]          12.70 / 95.66    20.98 / 85.58    15.43 / 91.22    15.27 / 90.87
SDA [47]          15.40 / 91.80    24.50 / 84.40    15.60 / 90.10    23.10 / 84.30
DRDG [30]         12.43 / 95.81    19.05 / 88.79    15.56 / 91.79    15.63 / 91.75
ANRL [29]         10.83 / 96.75    17.83 / 89.26    16.03 / 91.04    15.67 / 91.90
SSAN-M (Ours)     10.42 / 94.76    16.47 / 90.81    14.00 / 94.58    19.51 / 88.17
SSDG-R [20]        7.38 / 97.17    10.44 / 95.94    11.71 / 96.59    15.61 / 91.54
SSAN-R (Ours)      6.67 / 98.75    10.00 / 96.67     8.88 / 96.79    13.72 / 93.63
Table 3. Comparison results on limited source domains. Each protocol reports HTER(%) / AUC(%).

Method            M&I to C         M&I to O
MS-LBP [33]       51.16 / 52.09    43.63 / 58.07
IDA [50]          45.16 / 58.80    54.52 / 42.17
LBP-TOP [9]       45.27 / 54.88    47.26 / 50.21
MADDG [39]        41.02 / 64.33    39.35 / 65.10
SSDG-M [20]       31.89 / 71.29    36.01 / 66.88
DR-MD-Net [46]    31.67 / 75.23    34.02 / 72.65
ANRL [29]         31.06 / 72.12    30.73 / 74.10
SSAN-M (Ours)     30.00 / 76.20    29.44 / 76.62
Table 4. The results on the large-scale FAS benchmarks. Metrics are TPR@FPR (%) at FPR = 10%, 1%, and 0.1%.

Protocol 1
Method           TPR@FPR=10%   TPR@FPR=1%    TPR@FPR=0.1%
ResNet18 [16]    96.04±11.96   89.32±26.08   69.10±34.34
Deit-T [42]      97.75±5.70    90.38±16.08   73.42±30.00
CDCN [58]        92.59±15.99   84.40±31.93   71.54±32.05
SSDG-R [20]      96.48±10.37   89.13±25.59   68.12±39.12
SSAN-R (Ours)    98.31±4.19    90.51±22.31   78.45±31.98

Protocol 2_1
ResNet18 [16]    55.64±22.05   17.53±13.44   3.64±3.93
Deit-T [42]      44.03±17.77   10.15±6.08    1.25±1.04
CDCN [58]        55.92±21.45   11.07±8.21    0.69±0.74
SSDG-R [20]      53.44±19.23   3.27±3.09     0.06±0.06
SSAN-R (Ours)    63.61±21.69   25.56±18.07   6.58±5.56

Protocol 2_2
ResNet18 [16]    63.38±27.54   41.53±30.41   19.00±14.79
Deit-T [42]      63.29±13.39   30.46±19.15   11.30±9.45
CDCN [58]        20.97±25.23   3.58±4.83     0.58±0.88
SSDG-R [20]      41.13±28.45   7.19±8.73     1.94±2.35
SSAN-R (Ours)    64.54±28.36   47.07±33.71   31.61±23.33
Table 5. Evaluations of different components of the proposed method with different architectures. Each protocol reports HTER(%) / AUC(%).

Method                  O&C&I to M       O&M&I to C       O&C&M to I       I&C&M to O
SSAN-M w/o L_adv        10.42 / 94.83    24.44 / 81.60    24.75 / 83.01    27.11 / 80.41
SSAN-M w/o L_contra     12.50 / 93.59    17.59 / 89.33    14.75 / 92.67    22.47 / 85.79
SSAN-M w/o stop-grad    12.50 / 93.33    20.93 / 85.02    16.38 / 89.78    23.65 / 83.14
SSAN-M w/ hard-sup      12.08 / 93.42    28.89 / 77.70    20.61 / 86.46    24.83 / 82.39
SSAN-M w/ SCL           12.92 / 92.50    23.70 / 84.67    18.75 / 87.28    25.45 / 82.03
SSAN-M (Ours)           10.42 / 94.76    16.67 / 90.81    14.00 / 94.58    19.51 / 88.17
SSAN-R w/o L_adv        10.83 / 94.08    14.26 / 94.48    12.25 / 94.93    14.27 / 92.83
SSAN-R w/o L_contra     12.08 / 95.62    12.59 / 94.97    10.75 / 95.01    15.31 / 92.31
SSAN-R w/o stop-grad    11.25 / 93.46    11.30 / 95.11     9.00 / 96.03    14.06 / 93.14
SSAN-R w/ hard-sup      11.67 / 96.04    14.63 / 94.65    11.38 / 94.61    15.21 / 92.97
SSAN-R w/ SCL           11.25 / 94.00    12.04 / 94.91    12.50 / 95.34    15.80 / 92.95
SSAN-R (Ours)            6.67 / 98.75    10.00 / 96.67     8.88 / 96.79    13.72 / 93.63
References

[1] Shervin Rahimzadeh Arashloo, Josef Kittler, and William Christmas. An anomaly detection approach to face spoofing detection: A new formulation and evaluation protocol. IEEE Access, 5:13868-13882, 2017.
[2] Yousef Atoum, Yaojie Liu, Amin Jourabloo, and Xiaoming Liu. Face anti-spoofing using patch and depth-based CNNs. In 2017 IEEE International Joint Conference on Biometrics (IJCB), pages 319-328. IEEE, 2017.
[3] Zinelabinde Boulkenafet, Jukka Komulainen, Lei Li, Xiaoyi Feng, and Abdenour Hadid. OULU-NPU: A mobile face presentation attack database with real-world variations. In 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017), pages 612-618. IEEE, 2017.
[4] Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15750-15758, 2021.
[5] Zhihong Chen, Taiping Yao, Kekai Sheng, Shouhong Ding, Ying Tai, Jilin Li, Feiyue Huang, and Xinyu Jin. Generalizable representation learning for mixture domain face anti-spoofing. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 1132-1139, 2021.
[6] Ivana Chingovska, André Anjos, and Sébastien Marcel. On the effectiveness of local binary patterns in face anti-spoofing. In 2012 BIOSIG - Proceedings of the International Conference of the Biometrics Special Interest Group (BIOSIG), pages 1-7. IEEE, 2012.
[7] Artur Costa-Pazo, David Jiménez-Cabello, Esteban Vázquez-Fernández, José Luis Alba-Castro, and Roberto J. López-Sastre. Generalized presentation attack detection: a face anti-spoofing evaluation proposal. In 2019 International Conference on Biometrics (ICB), pages 1-8. IEEE, 2019.
[8] Tiago de Freitas Pereira, André Anjos, José Mario De Martino, and Sébastien Marcel. LBP-TOP based countermeasure against face spoofing attacks. In Asian Conference on Computer Vision, pages 121-132. Springer, 2012.
[9] Tiago de Freitas Pereira, Jukka Komulainen, André Anjos, José Mario De Martino, Abdenour Hadid, Matti Pietikäinen, and Sébastien Marcel. Face liveness detection using dynamic texture. EURASIP Journal on Image and Video Processing, 2014(1):1-15, 2014.
[10] Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. ArcFace: Additive angular margin loss for deep face recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4690-4699, 2019.
[11] Yao Feng, Fan Wu, Xiaohu Shao, Yanfeng Wang, and Xi Zhou. Joint 3D face reconstruction and dense alignment with position map regression network. In Proceedings of the European Conference on Computer Vision (ECCV), pages 534-551, 2018.
[12] Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In International Conference on Machine Learning, pages 1180-1189. PMLR, 2015.
[13] Anjith George, Zohreh Mostaani, David Geissenbuhler, Olegs Nikisins, André Anjos, and Sébastien Marcel. Biometric face presentation attack detection with multi-channel convolutional neural network. IEEE Transactions on Information Forensics and Security, 15:42-55, 2019.
[14] Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. The Journal of Machine Learning Research, 13(1):723-773, 2012.
[15] Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H. Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent: A new approach to self-supervised learning. arXiv preprint arXiv:2006.07733, 2020.
[16] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.
[17] Xun Huang and Serge Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In Proceedings of the IEEE International Conference on Computer Vision, pages 1501-1510, 2017.
[18] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pages 448-456. PMLR, 2015.
[19] Shan Jia, Xin Li, Chuanbo Hu, Guodong Guo, and Zhengquan Xu. 3D face anti-spoofing with factorized bilinear coding. IEEE Transactions on Circuits and Systems for Video Technology, 2020.
[20] Yunpei Jia, Jie Zhang, Shiguang Shan, and Xilin Chen. Single-side domain generalization for face anti-spoofing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8484-8493, 2020.
[21] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4401-4410, 2019.
[22] R. Laurensi, A. Israel, Luciana T. Menon, N. Penna, O. Manoel Camillo, Alessandro L. Koerich, and Alceu S. Britto Jr. Style transfer applied to face liveness detection with user-centered models. arXiv e-prints, 2019.
[23] Haoliang Li, Wen Li, Hong Cao, Shiqi Wang, Feiyue Huang, and Alex C. Kot. Unsupervised domain adaptation for face anti-spoofing. IEEE Transactions on Information Forensics and Security, 13(7):1794-1809, 2018.
[24] Haoliang Li, Sinno Jialin Pan, Shiqi Wang, and Alex C. Kot. Domain generalization with adversarial feature learning. In CVPR, pages 5400-5409, 2018.
[25] Bofan Lin, Xiaobai Li, Zitong Yu, and Guoying Zhao. Face liveness detection by rPPG features and contextual patch-based CNN. In Proceedings of the 2019 3rd International Conference on Biometric Engineering and Applications, pages 61-68, 2019.
[26] Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2117-2125, 2017.
[27] Ajian Liu, Zichang Tan, Jun Wan, Sergio Escalera, Guodong Guo, and Stan Z. Li. CASIA-SURF CeFA: A benchmark for multi-modal cross-ethnicity face anti-spoofing. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1179-1187, 2021.
[28] Ajian Liu, Chenxu Zhao, Zitong Yu, Jun Wan, Anyang Su, Xing Liu, Zichang Tan, Sergio Escalera, Junliang Xing, Yanyan Liang, et al. Contrastive context-aware learning for 3D high-fidelity mask face presentation attack detection. arXiv preprint arXiv:2104.06148, 2021.
[29] Shubao Liu, Ke-Yue Zhang, Taiping Yao, Mingwei Bi, Shouhong Ding, Jilin Li, Feiyue Huang, and Lizhuang Ma. Adaptive normalized representation learning for generalizable face anti-spoofing. In Proceedings of the 29th ACM International Conference on Multimedia, pages 1469-1477, 2021.
[30] Shubao Liu, Ke-Yue Zhang, Taiping Yao, Kekai Sheng, Shouhong Ding, Ying Tai, Jilin Li, Yuan Xie, and Lizhuang Ma. Dual reweighting domain generalization for face presentation attack detection. In IJCAI, 2021.
[31] Yaojie Liu, Amin Jourabloo, and Xiaoming Liu. Learning deep models for face anti-spoofing: Binary or auxiliary supervision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 389-398, 2018.
[32] Yaojie Liu, Joel Stehouwer, Amin Jourabloo, and Xiaoming Liu. Deep tree learning for zero-shot face anti-spoofing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4680-4689, 2019.
[33] Jukka Määttä, Abdenour Hadid, and Matti Pietikäinen. Face spoofing detection from single images using micro-texture analysis. In 2011 International Joint Conference on Biometrics (IJCB), pages 1-7. IEEE, 2011.
[34] Shlok Kumar Mishra, Kuntal Sengupta, Max Horowitz-Gelb, Wen-Sheng Chu, Sofien Bouaziz, and David Jacobs. Improved detection of face presentation attacks using image decomposition. arXiv preprint arXiv:2103.12201, 2021.
[35] Oren Nuriel, Sagie Benaim, and Lior Wolf. Permuted AdaIN: Reducing the bias towards global statistics in image classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9482-9491, 2021.
[36] Keyurkumar Patel, Hu Han, and Anil K. Jain. Secure face unlock: Spoof detection on smartphones. IEEE Transactions on Information Forensics and Security, 11(10):2268-2283, 2016.
[37] Yunxiao Qin, Zitong Yu, Longbin Yan, Zezheng Wang, Chenxu Zhao, and Zhen Lei. Meta-teacher for face anti-spoofing. IEEE TPAMI, 2021.
[38] Yunxiao Qin, Chenxu Zhao, Xiangyu Zhu, Zezheng Wang, Zitong Yu, Tianyu Fu, Feng Zhou, Jingping Shi, and Zhen Lei. Learning meta model for zero- and few-shot face anti-spoofing. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 11916-11923, 2020.
[39] Rui Shao, Xiangyuan Lan, Jiawei Li, and Pong C. Yuen. Multi-adversarial discriminative deep domain generalization for face presentation attack detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10023-10031, 2019.
[40] Rui Shao, Xiangyuan Lan, and Pong C. Yuen. Regularized fine-grained meta face anti-spoofing. In AAAI, volume 34, pages 11974-11981, 2020.
[41] Antonio Torralba and Alexei A. Efros. Unbiased look at dataset bias. In CVPR 2011, pages 1521-1528. IEEE, 2011.
[42] Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. In International Conference on Machine Learning, pages 10347-10357. PMLR, 2021.
[43] Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022, 2016.
[44] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(11), 2008.
[45] Guoqing Wang, Hu Han, Shiguang Shan, and Xilin Chen. Improving cross-database face presentation attack detection via adversarial domain adaptation. In 2019 International Conference on Biometrics (ICB), pages 1-8. IEEE, 2019.
[46] Guoqing Wang, Hu Han, Shiguang Shan, and Xilin Chen. Cross-domain face presentation attack detection via multi-domain disentangled representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6678-6687, 2020.
[47] Jingjing Wang, Jingyi Zhang, Ying Bian, Youyi Cai, Chunmao Wang, and Shiliang Pu. Self-domain adaptation for face anti-spoofing. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 2746-2754, 2021.
[48] Mei Wang and Weihong Deng. Deep face recognition: A survey. arXiv preprint arXiv:1804.06655, 2018.
[49] Zhuo Wang, Qiangchang Wang, Weihong Deng, and Guodong Guo. Learning multi-granularity temporal characteristics for face anti-spoofing. IEEE Transactions on Information Forensics and Security, 2022.
[50] Di Wen, Hu Han, and Anil K. Jain. Face spoof detection with image distortion analysis. IEEE Transactions on Information Forensics and Security, 10(4):746-761, 2015.
[51] Bowen Yang, Jing Zhang, Zhenfei Yin, and Jing Shao. Few-shot domain expansion for face anti-spoofing. arXiv preprint arXiv:2106.14162, 2021.
[52] Jianwei Yang, Zhen Lei, and Stan Z. Li. Learn convolutional neural network for face anti-spoofing. arXiv preprint arXiv:1408.5601, 2014.
[53] Zitong Yu, Xiaobai Li, Xuesong Niu, Jingang Shi, and Guoying Zhao. Face anti-spoofing with human material perception. In European Conference on Computer Vision, pages 557-575. Springer, 2020.
[54] Zitong Yu, Yunxiao Qin, Xiaobai Li, Chenxu Zhao, Zhen Lei, and Guoying Zhao. Deep learning for face anti-spoofing: A survey. arXiv preprint arXiv:2106.14948, 2021.
[55] Zitong Yu, Yunxiao Qin, Xiaqing Xu, Chenxu Zhao, Zezheng Wang, Zhen Lei, and Guoying Zhao. Auto-FAS: Searching lightweight networks for face anti-spoofing. In ICASSP, pages 996-1000. IEEE, 2020.
[56] Zitong Yu, Yunxiao Qin, Hengshuang Zhao, Xiaobai Li, and Guoying Zhao. Dual-cross central difference network for face anti-spoofing. In IJCAI, 2021.
[57] Zitong Yu, Jun Wan, Yunxiao Qin, Xiaobai Li, Stan Z. Li, and Guoying Zhao. NAS-FAS: Static-dynamic central difference network search for face anti-spoofing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.
[58] Zitong Yu, Chenxu Zhao, Zezheng Wang, Yunxiao Qin, Zhuo Su, Xiaobai Li, Feng Zhou, and Guoying Zhao. Searching central difference convolutional networks for face anti-spoofing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5295-5305, 2020.
[59] Pong Chi Yuen, Siqi Liu, Shengping Zhang, and Guoying Zhao. 3D mask face anti-spoofing with remote photoplethysmography. US Patent 10,380,444, Aug. 13, 2019.
[60] Kaipeng Zhang, Zhanpeng Zhang, Zhifeng Li, and Yu Qiao. Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Processing Letters, 23(10):1499-1503, 2016.
[61] Ke-Yue Zhang, Taiping Yao, Jian Zhang, Ying Tai, Shouhong Ding, Jilin Li, Feiyue Huang, Haichuan Song, and Lizhuang Ma. Face anti-spoofing via disentangled representation learning. In European Conference on Computer Vision, pages 641-657. Springer, 2020.
[62] Shifeng Zhang, Xiaobo Wang, Ajian Liu, Chenxu Zhao, Jun Wan, Sergio Escalera, Hailin Shi, Zezheng Wang, and Stan Z. Li. A dataset and benchmark for large-scale multi-modal face anti-spoofing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 919-928, 2019.
[63] Yuanhan Zhang, Zhenfei Yin, Yidong Li, Guojun Yin, Junjie Yan, Jing Shao, and Ziwei Liu. CelebA-Spoof: Large-scale face anti-spoofing dataset with rich annotations. In European Conference on Computer Vision, pages 70-85. Springer, 2020.
[64] Zhiwei Zhang, Junjie Yan, Sifei Liu, Zhen Lei, Dong Yi, and Stan Z. Li. A face antispoofing database with diverse attacks. In 2012 5th IAPR International Conference on Biometrics (ICB), pages 26-31. IEEE, 2012.
[65] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In Computer Vision and Pattern Recognition, 2016.
| []
|
[
"A Reinforcement Learning Approach to the View Planning Problem",
"A Reinforcement Learning Approach to the View Planning Problem"
]
| [
"Mustafa Devrim Kaba [email protected] \nGeneral Electric Global Research Center\n1 Research Circle12309NiskayunaNY\n",
"Mustafa Gokhan Uzunbas \nGeneral Electric Global Research Center\n1 Research Circle12309NiskayunaNY\n",
"Nam Ser \nGeneral Electric Global Research Center\n1 Research Circle12309NiskayunaNY\n",
"Lim \nGeneral Electric Global Research Center\n1 Research Circle12309NiskayunaNY\n"
]
| [
"General Electric Global Research Center\n1 Research Circle12309NiskayunaNY",
"General Electric Global Research Center\n1 Research Circle12309NiskayunaNY",
"General Electric Global Research Center\n1 Research Circle12309NiskayunaNY",
"General Electric Global Research Center\n1 Research Circle12309NiskayunaNY"
]
| []
| We present a Reinforcement Learning (RL) solution to the view planning problem (VPP), which generates a sequence of view points that are capable of sensing all accessible area of a given object represented as a 3D model. In doing so, the goal is to minimize the number of view points, making the VPP a class of set covering optimization problem (SCOP). The SCOP is NP-hard, and the inapproximability results tell us that the greedy algorithm provides the best approximation that runs in polynomial time. In order to find a solution that is better than the greedy algorithm, (i) we introduce a novel score function by exploiting the geometry of the 3D model, (ii) we model an intuitive human approach to VPP using this score function, and (iii) we cast VPP as a Markovian Decision Process (MDP), and solve the MDP in RL framework using well-known RL algorithms. In particular, we use SARSA, Watkins-Q and TD with function approximation to solve the MDP. We compare the results of our method with the baseline greedy algorithm in an extensive set of test objects, and show that we can outperform the baseline in almost all cases. | 10.1109/cvpr.2017.541 | [
"https://arxiv.org/pdf/1610.06204v2.pdf"
]
| 1,770,350 | 1610.06204 | f016cbdc4e36e7958e44ffae7cfda9a318c2164f |
A Reinforcement Learning Approach to the View Planning Problem
Mustafa Devrim Kaba [email protected]
General Electric Global Research Center
1 Research Circle12309NiskayunaNY
Mustafa Gokhan Uzunbas
General Electric Global Research Center
1 Research Circle12309NiskayunaNY
Ser Nam Lim
General Electric Global Research Center
1 Research Circle12309NiskayunaNY
A Reinforcement Learning Approach to the View Planning Problem
We present a Reinforcement Learning (RL) solution to the view planning problem (VPP), which generates a sequence of view points that are capable of sensing all accessible area of a given object represented as a 3D model. In doing so, the goal is to minimize the number of view points, making the VPP a class of set covering optimization problem (SCOP). The SCOP is N P -hard, and the inapproximability results tell us that the greedy algorithm provides the best approximation that runs in polynomial time. In order to find a solution that is better than the greedy algorithm, (i) we introduce a novel score function by exploiting the geometry of the 3D model, (ii) we model an intuitive human approach to VPP using this score function, and (iii) we cast VPP as a Markovian Decision Process (MDP), and solve the MDP in RL framework using well-known RL algorithms. In particular, we use SARSA, Watkins-Q and TD with function approximation to solve the MDP. We compare the results of our method with the baseline greedy algorithm in an extensive set of test objects, and show that we can outperform the baseline in almost all cases.
Introduction
In this work, we present a solution to the view planning problem (VPP), which aims to automatically determine a minimum number of camera perspectives for viewing a given object in order to achieve a coverage requirement. View planning is becoming increasingly important as the advent of autonomous platforms is placing demand on developing algorithms that can provide such a solution, particularly for robots and UAVs mounted with cameras whose missions are to collect imageries that fully cover the object of interest (See Figure 1). In this paper, we will focus on model-based view planning where an object's 3D model is available. In model-based view planning, one can take a more global view of the optimization problem involved. This can be seen in Sheinin et al.'s recent work [25] that tries to take a rough 3D underwater sonar model and exploit an optimization criterion that is based on information gain, optimizing viewpoints so that the descattered albedo is least noisy. In contrast, non-model-based view planning [30,29] often relies on stochastic state analysis, utilizing uncertainty estimation to plan the next best view (NBV). One of the earliest applications of view planning was indoor and outdoor surveillance, which is also known as the art gallery problem [14]. More recently, there has been an increased interest in the use of drones in surveillance, inspection and 3D reconstruction, all of which require view planning [17,11,31,18,3,20,21,22]. In many of these applications, prior 3D models are available. For example, in rescue missions, it is critical that survivors be found quickly, and often such search and rescue missions are conducted from the air. 3D models of search regions are often readily available (such as from Google Earth), and can be exploited in view planning to plan the search paths.
* These authors contributed to this paper equally.
Model based VPP can be regarded as a set covering optimization problem (SCOP) and is constrained by the limitations of SCOP [28]. Under reasonable complexity assumptions, the naïve greedy algorithm is essentially the best polynomial-time approximation algorithm to the NP-hard SCOP [7]. Even though one can often find a better solution specific to the problem at hand, to the best of our knowledge, there is no generic method which is guaranteed to outperform the solution provided by the greedy algorithm. Therefore, we will use the greedy algorithm as a benchmark for the VPP. On the other hand, the greedy algorithm can easily fail and cannot guarantee the optimal solution to a generic coverage problem. Figure 2 illustrates an example of bad performance of the greedy algorithm. In this example, a 3D knot model is covered with a virtual camera from multiple view points. In the figure, the color code represents areas covered by single (green) and multiple (red) cameras. Even though the first few cameras effectively increase the coverage, the last few of them are needed only to cover very small areas that remained uncovered (magnified in the figure). In this work we propose an intelligent planning scheme which is capable of reducing this redundancy.
Figure 2. Illustration of the bad performance of the greedy algorithm in the VPP. Green color represents regions covered by a single camera only, while red color represents regions covered by multiple cameras at different times. The greedy algorithm returns a solution with 13 cameras. However, the contribution of the last few cameras is, in fact, minor.
Figure 3. Illustration of the purely greedy vs. the intuitive human approach to the set coverage optimization problem. Non-greedy intermediate steps chosen by a human lead to a more efficient solution.
In particular, we show that even though the VPP is a set covering optimization problem, the geometric structure of 3D models opens a path to a more flexible treatment of VPP. To this end we propose a new set cover score function which allows us to switch between the greedy and non-greedy steps. The score function achieves this by penalizing long circumferences if needed. We show that this new scoring scheme can be used to model the human approach to VPP. We claim that if a human was asked to solve this problem, s/he would avoid proceeding greedily at certain steps along the way (See Figure 3), and this would eliminate the use of excess view points.
Choosing between greedy and non-greedy actions at each step intelligently requires a sequential decision making process which takes future actions into account. This essentially converts VPP to a Markov Decision Process (MDP). The standard ways of solving such MDPs are dynamic programming and reinforcement learning (RL). Therefore, we employ an RL framework where an agent learns which actions to take by considering their future consequences. More specifically, our RL agent learns how to set the parameter of our new score function at each stage of the coverage task. We implement three RL algorithms which are mainly built around learning a value function. More precisely, we use the SARSA and Watkins-Q algorithms, which learn the action value function, and the TD algorithm, which learns the state value function [26].
A typical VPP has a large number of initial view points, which induces an MDP with a very large number of states, which, in turn, necessitates the use of function approximation in the RL framework. Hence, we couple the above mentioned algorithms with a nonlinear function approximation scheme.
Our contributions:
• By exploiting the geometry, we propose a novel, fully automated RL method to solve VPP.
• We define a new set coverage score function that can be used to model the human approach to VPP.
• With sufficient exploration and learning time, our RL based method provides a solution which is guaranteed to perform at least as good as the greedy algorithm.
Related Work
Existing methods that propose solutions to VPP are mainly divided into two groups: model-based and non-model-based. Non-model-based view planning differs from the former as the target environment is not fully observable, and it is out of the scope of this paper. In this work we constrain ourselves to model-based view planning, and assume that a 3D CAD model of the environment is already available. In the literature, model-based view planning is divided into two parts. The first part is the process of finding the best view locations to cover the object, and the second part is the planning of the optimal path which includes visiting these selected locations. The second part is essentially a Traveling Salesman Problem (TSP), and it also remains out of the scope of this work. We exclusively refer to the first part when we mention VPP. However, it is worth noting here that an efficient solution to the first part is very crucial, as it effectively decreases the size of the TSP which has to be tackled in the second part.
Detailed surveys of proposed solutions to VPP can be found in [27,19,23]. In particular, [27] summarizes the efforts in VPP for inspection, recognition and reconstruction, [19] covers the work on VPP for inspection, and finally [23] addresses the VPP problem as it appears in reconstruction and inspection problems. Among all notable work studying VPP, our work is in the spirit of the seminal work of Tarbox and Gottschlich [28]. Tarbox and Gottschlich also identify the phenomena portrayed in Figure 2 as the cause of the non-optimality of the greedy algorithm. However, our proposed solution to handle this problem differs substantially from what they suggested, namely randomized search with simulated annealing. In the VPP literature, the greedy algorithm is still the most commonly used algorithm for view point selection [24,1]. There are a couple of recent works that use more sophisticated methods such as linear programming relaxation and genetic algorithms for full 3D model coverage [5,12]. However, they don't necessarily suggest performance gains over the greedy algorithm. Lastly, to the best of our knowledge, there is no method in the literature which uses an RL based approach to solve VPP. However, recently there has been a rapidly growing interest in combining RL techniques with computer vision. Although they are not related to VPP, for completeness, we would like to mention [15,8,13,16] as the most recent notable works, which mainly combine deep networks and RL for digit classification, object detection, person identification and playing arcade games, respectively.
Problem Formulation
In this paper, we study the view planning problem for 3D models. Without loss of generality, the 3D models we consider are triangular meshes, although other types of meshes could be used as well. We process models of various objects like geographical terrains, big structures, interesting geometrical objects or even machine parts. We formally define the view planning problem for 3D meshes as follows.

Problem 1. Given a 3D mesh model Ω of an object and a finite set of view points (ℓ_i) together with associated directions (d_i), S := {(ℓ_i, d_i)}, find a subset T ⊂ S of minimum size such that if identical cameras are placed in the locations and directions provided by T, then Ω can sufficiently be covered by these cameras.
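To make the ingredients of Problem 1 concrete, the following Python sketch models a VPP instance; the names (ViewPoint, VPPInstance) and the visibility oracle are illustrative assumptions rather than part of the paper.

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet, List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass(frozen=True)
class ViewPoint:
    location: Vec3   # the camera position l_i
    direction: Vec3  # the associated viewing direction d_i

@dataclass
class VPPInstance:
    faces: List[int]                                # triangle ids of the mesh Omega
    views: List[ViewPoint]                          # the finite candidate set S
    visible: Callable[[ViewPoint], FrozenSet[int]]  # faces seen from a view point

    def covered(self, chosen: List[ViewPoint]) -> FrozenSet[int]:
        """Union of the submeshes observed by the views in T."""
        out: FrozenSet[int] = frozenset()
        for v in chosen:
            out |= self.visible(v)
        return out
```

The task is then to find the smallest list `chosen` whose `covered` set meets the coverage requirement.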
We unify the two cases where multiple cameras or a single moving camera is employed and we treat them simultaneously.
Notation and Background
First, we summarize the mathematical notation that is used. For a given set Y, we will denote the power set of Y, i.e. the collection of all subsets of Y, by 2^Y. The set of non-negative real numbers will be denoted by R_{≥0}. We denote the triangular mesh of interest by Ω. Then, each element of 2^Ω will be a submesh, and we denote a submesh (possibly arising from coverage of a single or non-singleton set of views) by X.
Set covering optimization problem
Given a set S with a finite number of elements, and a collection {S_i}_{i∈I} ⊆ 2^S of subsets of S indexed by I, the set covering optimization problem is the problem of finding a subset J of I with the smallest number of elements satisfying S = ∪_{j∈J} S_j. This problem is known to be NP-hard, and approximate solutions such as greedy that run in polynomial time are well known [10]. However, in many instances, it has also been shown that the greedy algorithm cannot provide the optimal solution [6]. Nevertheless, under reasonable assumptions, the inapproximability results of [7] and [4] show that the greedy algorithm is the best polynomial-time approximation algorithm one can hope for.
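For reference, the baseline greedy algorithm for set cover can be sketched in a few lines; the helper name greedy_cover and the frozenset-based representation are our assumptions.

```python
from typing import Dict, FrozenSet, Hashable, List

def greedy_cover(universe: FrozenSet[Hashable],
                 subsets: Dict[Hashable, FrozenSet[Hashable]]) -> List[Hashable]:
    """At each step, pick the subset covering the most still-uncovered elements."""
    uncovered = set(universe)
    chosen: List[Hashable] = []
    while uncovered:
        best = max(subsets, key=lambda j: len(subsets[j] & uncovered))
        if not subsets[best] & uncovered:  # no subset adds anything new
            break
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen
```

This is the benchmark that the rest of the paper tries to beat by exploiting geometry.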
The view planning problem we posed in Problem 1 can be regarded as a special case of the set covering optimization problem. Hence, in its naive form, it is also constrained by the facts above. In this work, we aim to answer the following question: can one do better than the greedy algorithm by utilizing the geometric structure of the objects and combining it with a learning paradigm?
Reinforcement Learning
The learning paradigm we use in this paper is the standard reinforcement learning setting where an agent learns to accomplish a certain task by interacting with an environment over a number of discrete time steps. We restrict our attention to the approaches which are mainly built around estimating a so-called value function.
View planning can be cast as a finite Markov Decision Process (MDP). Hence, in principle, we will be using RL techniques to solve a finite MDP. Formally, a finite MDP is a quintuple (S, A, T, R, γ), where S denotes a finite set of Markovian states and A = ∪_{s∈S} A_s denotes the finite collection of all admissible actions. In particular, A_s denotes the finite set of all admissible actions at state s ∈ S. T = {T_a}_{a∈A} is the collection of all transition probability functions. For any (s, s′) ∈ S × S, and a ∈ A_s, T_a(s, s′) = Pr{s_{t+1} = s′ | s_t = s and a_t = a} is the probability that the system reaches state s′ at time t + 1, after taking action a at state s. The reward signal r_t : S × S → R returns the (expected) immediate reward received after transitioning from state s to s′ at time t. Lastly, γ ∈ [0, 1] is the discount factor, which simply allows us to emphasize the importance of present rewards over future ones.
In most RL systems, the state is basically the agent's observation of the environment. It can be a complete or rough estimate of the current status of the environment. At any given state the agent chooses its action according to a policy. Hence, a policy is a road map for the agent, which determines the action to take at each state. Once the agent takes an action, the environment returns the new state and the immediate reward. Then, the agent uses this information, together with the discount factor, to update its internal understanding of the environment, which, in our case, is accomplished by updating a value function.
One can use different RL algorithms to solve an MDP. In this paper we specifically use the well-known SARSA, Watkins-Q and Temporal Difference (TD) algorithms with function approximation. For a given policy π, the SARSA and Watkins-Q algorithms learn q_π(s, a), namely the action value function, which is defined as the expected discounted total reward (i.e. return) after taking the action a at state s and following the policy π:

q_π(s, a) = E_π { Σ_{k=0}^∞ γ^k r_{t+k+1} | s_t = s and a_t = a }    (1)

The TD algorithm, on the other hand, learns the so-called state value function, v_π(s), for a state s. In a similar fashion, it is defined as the expected discounted total reward starting from the state s and following the policy π:

v_π(s) = E_π { Σ_{k=0}^∞ γ^k r_{t+k+1} | s_t = s }    (2)
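As a toy illustration of the return inside the expectations of Eqs. (1)-(2): with the episodic, undiscounted setting used later in this paper (reward −1 per transition, γ = 1), the return of a state is simply minus the number of remaining steps. The helper below is our own sketch, not the authors' code.

```python
def discounted_return(rewards, gamma=1.0):
    """Compute sum_k gamma**k * r_{t+k+1} for one sampled episode."""
    g, discount = 0.0, 1.0
    for r in rewards:
        g += discount * r
        discount *= gamma
    return g

# Three remaining transitions with reward -1 each and gamma = 1:
assert discounted_return([-1.0, -1.0, -1.0]) == -3.0
```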
Reinforcement Learning for View Planning
The simplest approach to solve VPP in the RL framework would be defining each available view point at a given state as an admissible action. However, in practice, this approach would not be feasible. In our setting, the size of the state space increases exponentially with the increasing number of predefined view points. If there is no rule restricting the admissible actions, the problem would quickly become intractable. In order to be able to place the problem in the RL framework and solve it efficiently, one desperately needs a strategy to reduce the number of admissible actions at each state, while keeping the problem sufficiently general.
Our inspiration in reducing the admissible actions comes from the human approach to the problem. As we argued in Section 1, we claim that a human would choose non-greedy steps in between greedy ones to solve the VPP. We model the intuitive behavior of the human agent by using the family of functions f_λ : 2^Ω → R, defined as

f_λ(X) := A(X) / L(X)^λ.    (3)
Here A(X) denotes the total surface area covered by the submesh X, L(X) denotes the total boundary length of the area covered by X, and λ ∈ R_{≥0}. Now, we claim that using the functions f_λ, the behavior of the human agent can be modeled as follows:
At each step, pick a λ and choose the set which maximizes the function f_λ.
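Eq. (3) transcribes directly into code; the scalar helper below is a minimal sketch, with the area and boundary-length values assumed to come from whatever mesh library is in use.

```python
def f_lambda(area: float, boundary_length: float, lam: float) -> float:
    """f_lambda(X) = A(X) / L(X)**lam; lam = 0 reduces to the pure-area greedy score."""
    if boundary_length <= 0.0:  # closed coverage: nothing to penalize
        return area
    return area / boundary_length ** lam
```

For example, f_lambda(10.0, 4.0, 0) returns the raw area 10.0, while f_lambda(10.0, 4.0, 1) = 2.5 penalizes the longer boundary.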
As one can immediately notice, in this setting, choosing λ = 0 corresponds to proceeding greedily, whereas nonzero λ's allow non-greedy steps. In other words, if λ ≠ 0, given two view points introducing two different coverages X_1 and X_2 with the same surface area, A(X_1) = A(X_2), maximizing f_λ implies that the algorithm prefers the view point that introduces a covered area with a shorter perimeter (see Algorithm 1).
For a fixed λ ≥ 0, we call the approach of maximizing f_λ at each step the λ-greedy algorithm. For high λ values, the λ-greedy algorithm proceeds quite conservatively, preferring shorter boundaries over larger coverage, in turn causing an increased number of views. Therefore, fixing λ from the very beginning results in poor solutions. As we argued above, like a human agent does, we need to employ different values of λ at each step. Therefore, the VPP boils down to the following decision problem:
Which λ ≥ 0 to choose at each step?
As we will see in the experiments section, achieving a performance better than the purely greedy approach requires a subtle choice of λ at every step. In our experiments, we see that an ad hoc approach like alternating the λ value between zero and a non-zero value would rarely lead to the best results. A more sophisticated strategy is needed to generate a sequence of λ's that would lead to a smaller number of views.
Algorithm 1 NBV(λ)
5:  for c ∈ view point list do
6:    f ← submesh observed by c
7:    if (f ∩ F = ∅) || (F == ∅) then
8:      s ← COMPUTE SCORE(f ∪ F, λ)   ▷ Eq. 3
9:      if s > S then
10:       S ← s
11:      C ← c
return C

Remark 1. A crucial component of our implementation is to calculate the boundary of a union of two submeshes. For two submeshes X_1, X_2 ⊆ Ω, we calculate the boundary bd(X_1 ∪ X_2) according to

bd(X_1 ∪ X_2) = [bd(X_1) \ ed(X_2)] ∪ [bd(X_2) \ ed(X_1)] ∪ [bd(X_1) ∩ bd(X_2)]    (4)

where ed(·) denotes the set of all edges of the submesh.
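One possible reading of Eq. (4) over explicit edge sets is sketched below, where a submesh is a set of vertex-index triangles, ed(X) is the set of all edges of X, and bd(X) is taken to be the edges incident to exactly one triangle of X; this is our interpretation, not the authors' implementation.

```python
from collections import Counter
from typing import FrozenSet, Iterable, Tuple

Tri = Tuple[int, int, int]
Edge = Tuple[int, int]

def ed(X: Iterable[Tri]) -> FrozenSet[Edge]:
    """All (undirected) edges of the submesh."""
    out = set()
    for a, b, c in X:
        for e in ((a, b), (b, c), (a, c)):
            out.add(tuple(sorted(e)))
    return frozenset(out)

def bd(X: Iterable[Tri]) -> FrozenSet[Edge]:
    """Edges belonging to exactly one triangle of the submesh."""
    counts = Counter()
    for a, b, c in X:
        for e in ((a, b), (b, c), (a, c)):
            counts[tuple(sorted(e))] += 1
    return frozenset(e for e, n in counts.items() if n == 1)

def bd_union(X1: Iterable[Tri], X2: Iterable[Tri]) -> FrozenSet[Edge]:
    """Transcription of Eq. (4)."""
    X1, X2 = list(X1), list(X2)
    return (bd(X1) - ed(X2)) | (bd(X2) - ed(X1)) | (bd(X1) & bd(X2))
```

Combining cached per-view boundaries this way avoids re-scanning the merged submesh for every candidate evaluation.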
Beating Greedy by Learning λ
Even though λ is a continuous variable, we expect that the function assigning λ's to the associated view is piecewise continuous. Therefore, we can consider a small, finite set of λ's to choose from at each step of our algorithm. In order to find a sequence of λ's that leads to a solution better than the one offered by the greedy algorithm, we devise an RL scheme. In this setup, our state is a vector of length equal to the number of initial view points, which is denoted by N. The set of chosen view points uniquely defines the state: if at a given state the view point i is chosen, then the i-th entry of the state vector is set; otherwise it remains zero. This way, we introduce a state space with 2^N states. Obviously, this definition of the state satisfies the Markov property. In this setting, at each state, taking an action corresponds to choosing a λ value. However, the learning agent is allowed to choose a λ value only from a finite set of admissible λ's, which is denoted by Λ. We assume that Λ remains unchanged at each state. We further assume that the agent follows a deterministic policy, hence all transition probabilities are trivial. Since we would like to accomplish the coverage in as few steps as possible, we introduce a reward of −1 for each state transition. We don't use any discount factor, and the coverage task is naturally episodic. In this setting, the VPP becomes a finite Markov Decision Process.
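The binary state encoding can be sketched as below (a tuple of 0/1 flags is used so the state is hashable; the helper names are ours).

```python
def initial_state(n_views: int):
    """All-zeros vector: no view point chosen yet (one of 2**n_views states)."""
    return (0,) * n_views

def choose_view(state, i: int):
    """Set the i-th entry: view point i has now been chosen."""
    return state[:i] + (1,) + state[i + 1:]

s = choose_view(initial_state(5), 2)
assert s == (0, 0, 1, 0, 0)
```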
Learning stage
In order to solve this MDP, we use three different RL algorithms: the on-policy control algorithm SARSA, the off-policy control algorithm Watkins-Q, and the on-policy learning algorithm TD. The former two algorithms learn q_π(s, a), the action value function, whereas the last algorithm learns v_π(s), the state value function. For the convenience of the reader, we include our learning procedures implementing the Watkins-Q and TD algorithms in the algorithm boxes 2 and 3 (we refer to the supplementary material for the implementation of SARSA). In these algorithms, we call a state terminal if a certain coverage criterion is met. Our coverage criterion is relative in the sense that the coverage task is assumed to be completed once we cover a certain percentage of the area that can be covered by the union of all initial view points. We call this number the relative coverage criteria, or RCC.

Algorithm 2 Watkins-Q Agent
1:  procedure LEARNING
2:    θ ← random network weights
3:    α ← learning rate, μ_e ← eligibility factor
4:    ε ← exploration probability
5:    repeat
6:      c ← random view point
7:      s ← {c}, e ← 0, r ← −1, δ ← 0
8:      if random number > ε then
9:        λ* ← arg max_λ q̂_π(θ, s, λ)
10:     else
11:       λ* ← random λ from Λ
12:     while true do
13:       e ← e + ∇_θ q̂_π(θ, s, λ*)
14:       δ ← r − q̂_π(θ, s, λ*)
15:       if s is Terminal then
16:         θ ← θ + α · δ · e
17:         break
18:       c ← NBV(λ*)   ▷ see Alg. 1
19:       s ← s ∪ {c}
20:       δ ← δ + max_λ q̂_π(θ, s, λ)
21:       θ ← θ + α · δ · e
22:       if random number > ε then
23:         λ* ← arg max_λ q̂_π(θ, s, λ)
24:       …

Algorithm 3 TD Agent
1:  procedure LEARNING
2:    θ ← random network weights
3:    α ← learning rate, μ_e ← eligibility factor
4:    repeat
5:      c ← random view point
6:      s ← {c}, e ← 0, r ← −1, δ ← 0, S ← ∅
7:      while true do
8:        e ← e + ∇_θ v̂_π(θ, s)
9:        δ ← r − v̂_π(θ, s)
10:       if s is Terminal then
11:         θ ← θ + α · δ · e
12:         break
…
15:       δ ← δ + v̂_π(θ, s)
16:       θ ← θ + α · δ · e
17:       e ← μ_e · e
18:   until Max nr of episodes is reached
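A minimal, standard TD(λ)-style reading of the update loop in Algorithm 3, with an accumulating eligibility trace, γ = 1, and the paper's per-transition reward of −1, is sketched below; the LinearValue class is only an illustrative stand-in for the neural network described next.

```python
import numpy as np

class LinearValue:
    """Toy linear value function v(s) = theta . s (stand-in for the real network)."""
    def __init__(self, dim: int):
        self.theta = np.zeros(dim)
    def predict(self, s) -> float:
        return float(self.theta @ np.asarray(s, dtype=float))
    def gradient(self, s):
        return np.asarray(s, dtype=float)

def td_episode(v, states, alpha=0.01, mu_e=0.5, r=-1.0):
    """One episode: delta = r + v(s') - v(s); theta += alpha * delta * e."""
    e = np.zeros_like(v.theta)
    for s, s_next in zip(states[:-1], states[1:]):
        e = mu_e * e + v.gradient(s)  # decay and accumulate the trace
        delta = r + v.predict(s_next) - v.predict(s)
        v.theta += alpha * delta * e
    return v
```

For the terminal transition, v(s') would be taken as zero; that special case is omitted here for brevity.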
In order to boost the learning performance, we use eligibility traces. Moreover, since the number of states quickly becomes huge, we need to deploy a function approximation scheme. In all cases the value function is approximated by a neural network with one hidden layer which has sigmoid neurons. The output layer of the neural network is an affine function, i.e. a linear function with weights and a bias term. In the case of SARSA and Watkins-Q, the input to the network is the concatenation of the state vector and a one-hot action vector encoding the chosen λ. In order to achieve this, we basically enumerate the admissible λ's in Λ and the i-th entry of the action vector is set if the corresponding λ is chosen. As for the implementation of TD with function approximation, the input to the network is the state vector only. We refer to Figure 4 for an illustration of the value function network. If we let σ_i denote the output of the i-th hidden sigmoid neuron, and w_i and b denote the weights and the bias of the output layer, then the output of the neural network, Φ, is given by
Φ = b + Σ_i w_i σ_i    (5)
Furthermore, if we let w_ij denote the weights and b_i denote the bias of the i-th hidden sigmoid neuron, then the gradient of the network, which is required for the implementation of the algorithms above, can be calculated by

∂Φ/∂b = 1,   ∂Φ/∂b_i = w_i σ_i (1 − σ_i),    (6a)
∂Φ/∂w_i = σ_i,   ∂Φ/∂w_ij = w_i σ_i (1 − σ_i) x_j    (6b)
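Eqs. (5)-(6) can be written out directly; in the NumPy sketch below, W and b_h are the hidden-layer weights and biases while w and b form the affine output layer (the variable names are ours).

```python
import numpy as np

def forward(x, W, b_h, w, b):
    """Phi = b + sum_i w_i * sigma_i, with sigma_i = sigmoid(W_i . x + b_h_i)."""
    sigma = 1.0 / (1.0 + np.exp(-(W @ x + b_h)))
    return b + w @ sigma, sigma

def gradient(x, W, b_h, w, b):
    """Partials of Phi from Eqs. (6a)-(6b)."""
    _, sigma = forward(x, W, b_h, w, b)
    d_b = 1.0                         # dPhi/db
    d_bh = w * sigma * (1.0 - sigma)  # dPhi/db_i
    d_w = sigma                       # dPhi/dw_i
    d_W = d_bh[:, None] * x[None, :]  # dPhi/dw_ij = w_i sigma_i (1 - sigma_i) x_j
    return d_b, d_bh, d_w, d_W
```

Planning using the policy π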
Once we build a system that estimates the action or state values, a policy which will suggest a solution to the VPP can be derived quite easily. The derived policy acts greedily with respect to the estimated values. To be more precise, in case of SARSA and Watkins-Q, at each state, we go through all admissible actions, find the action which has the highest value, and take that action to move to the next state, until the coverage task is completed. On the other hand, in case we are using the TD algorithm, at each state we calculate all possible next states by going through all admissible actions and finally pick the action that leads to the state with the highest estimated value. This process eventually produces a sequence of λ's that solves the VPP.
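In code, the planning stage for the action-value case reduces to the following loop; q.predict, env.step and the terminal check are assumed interfaces around the trained network and the coverage environment.

```python
def plan(env, q, admissible_lambdas):
    """Act greedily w.r.t. the learned action values until the RCC is met."""
    s = env.initial_state()
    schedule = []
    while not env.is_terminal(s):
        lam = max(admissible_lambdas, key=lambda l: q.predict(s, l))
        s = env.step(s, lam)  # place the view returned by NBV(lam)
        schedule.append(lam)
    return schedule           # the sequence of lambdas that solves the VPP
```

For the TD variant, one would instead evaluate v on every candidate next state and pick the action leading to the highest-valued state.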
Experimental Setup
In order to test the performance of the solution method we proposed, we experimented on 3D meshes of 20 different objects. The first 8 objects consisted mostly of those which could be of potential interest in the application areas we mentioned in the introduction. Particularly, we tested the method on the 3D model of a mountainous region, Yosemite Valley, a wind turbine, a skull, the Statue of Liberty, an engine block, and finally, as toy examples, a knot and a plane. The second group of test objects is obtained from the data set that appeared in [9]. For each model, we tested our method against two different methods: Purely Greedy and Alternating-λ. As the name suggests, in the Purely Greedy approach, we basically complete coverage by proceeding greedily at each step, whereas in the Alternating-λ case, after starting with a greedy step, we let λ alternate between 0 and 1 sequentially, and choose the view which maximizes the score function (3) at each step. We used a virtual RGB camera as a sensor, and for a fair comparison of the methods, for each object we kept the initial set of cameras and their settings fixed while changing the solution method. Figure 5 shows a few of the models (with coverage map) from the first group and their sample views. The images of the rest of the models can be found in the supplementary material.
As we mentioned previously, we used three different reinforcement learning algorithms to implement our method. During these implementations we allowed only two actions: λ = 0 and λ = 1. The hidden layer of the value network included 200 neurons. We used eligibility traces with an eligibility factor equal to 0.5. The learning rate was set to 0.01 and the maximum number of episodes was set to 100K. Once this maximum number is reached, we terminated the learning phase and ran the trained network to accomplish the coverage task. For the first group of objects we mentioned above, the RCC was set to 0.99; for the second set this number was increased to 1.0. During the learning and planning phases the same RCC was targeted. In the learning phase of the Watkins-Q agent, we allowed exploration within the first 50K episodes, whereas for the SARSA and TD agents no exploration was allowed.
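The hyperparameters quoted above collect into the following configuration; the dictionary layout is ours, but every value comes from the text.

```python
EXPERIMENT_CONFIG = {
    "admissible_lambdas": [0, 1],    # only two actions allowed
    "hidden_neurons": 200,           # one hidden layer of sigmoid units
    "eligibility_factor": 0.5,
    "learning_rate": 0.01,
    "max_episodes": 100_000,
    "exploration_episodes": 50_000,  # Watkins-Q only; none for SARSA and TD
    "rcc_first_group": 0.99,         # 8 application-oriented objects
    "rcc_second_group": 1.0,         # 12 objects from the dataset of [9]
}
```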
Results and Discussions
Given sufficient time for learning and exploration, our method is expected to perform at least as well as the Purely Greedy or Alternating-λ approaches. As expected, we see from Table 1 that in almost all test cases, our method provides a solution which is better than the solution provided by either of the baseline methods. An exception to this is the duck data, where the Alternating-λ approach performed surprisingly better than any other method. However, in general, even without introducing explicit exploration, our reinforcement learning based method successfully reduces the number of cameras required to ensure the coverage of the object. In this set of experiments, we limited the learning phase to 100K episodes and, as shown in Figure 6, we observed that the average performance of agents did not change significantly after 65K episodes.
Figure 6. Average convergence performance of RL algorithms using twelve models from [9].
Another interesting result reflected by this experiment is that, when the RCC is not 1.0, the adverse effects of the purely greedy approach are less visible. This is simply because when the RCC is 0.99, a solution leaving small uncovered areas behind is considered a success, even though there are cameras in the initial view point set seeing those uncovered areas. This relaxation works very much in favor of the purely greedy algorithm, which already tends to leave plenty of those small uncovered areas while maximizing the overall coverage. Note that, as mentioned before, the RCC was 0.99 for the first set of 8 objects and 1.0 for the remaining 12 in the table, and we see that the average performance gain of RL based methods over the purely greedy approach shrinks from 4 view points in the second set to 2 view points in the first set of objects.
For a thorough analysis of the effect of the RCC on the performance of our method, in an auxiliary experiment we retrained all three RL-based systems for different RCC values. The results of this auxiliary experiment are summarized in Figure 7. Each plot in this figure compares the average performance of the baseline methods against the average performance of the RL-based methods. The performance average is obtained after running each of these algorithms for each of the 12 objects appearing in the second data set. After learning, during test time we recorded the average number of cameras selected by each method when the RCC varies from 0.9 to 1.0. We observed appealing results: i) RL based methods beat greedy algorithms with larger margins when the coverage task is completed, i.e. the RCC is met; ii) RL agents trained with a certain RCC value can perform worse than greedy methods for a lower RCC at test; iii) when the RCC is set lower in both the learning and planning stages, the performance gain of our RL based methods is reduced, but they still perform better than greedy methods.
Finally, in order to verify the precision of the value function approximation, we compared the actual return (i.e. the sum of actual rewards observed following the policy) and the estimated return (i.e. the estimated state value in case of TD, and the maximum estimated action value in case of SARSA and Watkins-Q) of a number of states. In order to do that, we collected data by starting from all possible initial states, i.e. states corresponding to a single camera only, and following the policy suggested by the network, as explained in Section 5.1. For each state visited, we calculated the estimated and actual returns. As a small sample of this analysis, in Figure 8 we include the results from the experiments of the duck and cat objects. The analysis for other objects can be found in the supplementary material. Considering the results shown in Table 1, we chose two object-method pairs. Accordingly, the cat-SARSA experiment shows an example of good approximation, and the duck-TD experiment illustrates a bad approximation.
Figure 7. Comparison of the average performance of different algorithms with varying relative coverage criteria (RCC). The average is taken over the second dataset, which consists of 12 models.
Figure 8. Error plots for two cases (panels: SARSA and TD; axes: estimated returns vs. actual returns). The solid black line indicates the mean; the error bars indicate the standard deviation, data min and data max. Red dots represent the initial states. We mark the actual best initial state and the initial state selected by the policy.
In the plots of Figure 8, the absolute value of the actual return tells us how many more cameras we need to place in order to accomplish the coverage task. In the ideal case the estimated return and the actual return should be equal, and we should see a distribution on the y = x line only. We see that, in both cases, the expected value of the estimated returns satisfies this property. Moreover, even though outliers do exist, the standard deviation of the approximation error is rather small. As expected, for the states that are visited more frequently, the networks provided quite good approximations, whereas the values of the states that are visited less often, e.g. the initial states (represented by red dots in Figure 8), are often approximated rather poorly. However, the overall approximation quality of the networks is quite high.
This analysis helps us understand why the TD method for the duck object failed to perform as well as Alternating-λ. As shown in the corresponding plot, the initial state selected by the policy and the initial state which leads to the best result differ. This is due to a bad estimate of the state value of the true best initial state.
Conclusion
In this paper, we proposed a fully automated reinforcement learning (RL) based method to solve the view planning problem (VPP) for coverage of 3D object models. The solution given in this paper is limited neither to the structure of the 3D models nor to the type of the sensors that are used. Given sufficient exploration and learning time, the proposed method is guaranteed to perform at least as well as the greedy algorithm. In an extensive set of test cases, we showed that our proposed method outperforms the greedy algorithm, and we further showed that similar performance metrics cannot be attained by ad hoc approaches like Alternating-λ. A natural extension of our work is to add path planning to the proposed approach and provide an extensive treatment of model-based VPP.
Figure 1. (a) View planning for UAV terrain modeling, (b) Given a set of initial view points, (c) The goal is to find the minimum number of views that provide sufficient coverage. Here, the color code represents correspondence between selected views and the coverage.
Figure 4. Function approximation using a neural network in the SARSA and Watkins-Q settings. Note that, in the TD method, actions are omitted from the input.
Figure 5. Visual results of coverage and sample views on various models. In the top row, lines represent the location and direction of the selected cameras. Colors represent coverage by different cameras. Best seen in color and electronic format.
Table 1. Comparison of the performance of different algorithms on different 3D models. Each entry is the number of cameras proposed by the corresponding method for that model; the last two rows report the duration of a single episode.

Model     | # of init. cams | Greedy | Altern. λ | Ours: SARSA | Ours: Watkins-Q | Ours: TD
Mountains | 376 | 36 | 38 | 34 | 34 | 34
Valley    | 270 | 42 | 43 | 39 | 39 | 39
Turbine   | 264 | 34 | 33 | 32 | 32 | 32
Skull     | 270 | 39 | 42 | 37 | 37 | 37
Statue    | 265 | 31 | 32 | 29 | 29 | 29
Engine    | 289 | 50 | 52 | 48 | 48 | 48
Knot      | 286 | 13 | 12 | 11 | 11 | 11
Plane     | 189 | 16 | 17 | 13 | 13 | 13
Ape       | 312 | 22 | 19 | 17 | 17 | 17
Cat       | 332 | 15 | 12 | 11 | 11 | 11
Iron      | 333 | 16 | 17 | 15 | 14 | 14
Can       | 412 | 16 | 16 | 13 | 13 | 13
Lamp      | 245 | 26 | 28 | 23 | 23 | 23
Phone     | 302 | 13 | 14 | 11 | 11 | 11
Glue      | 344 | 13 | 12 | 10 | 10 | 10
Driller   | 342 | 16 | 19 | 15 | 15 | 15
Eggbox    | 319 | 11 | 13 | 11 | 11 | 10
Cam       | 342 | 11 | 12 | 9 | 9 | 9
Duck      | 343 | 9 | 7 | 8 | 8 | 8
Bench     | 321 | 30 | 29 | 26 | 26 | 26
Avg. Learning Time (Ep/sec) | n/a | n/a | n/a | 0.52 | 0.51 | 1.12
Avg. Planning Time (Ep/sec) | n/a | 0.17 | 0.17 | 0.51 | 0.50 | 1.01
[Figure 7 plot: Avg. # of Cams vs. Relative Coverage, one curve per RCC ∈ {0.92, 0.94, 0.96, 0.98, 1.0}]
[1] P. S. Blaer and P. K. Allen. Data acquisition and view planning for 3-d modeling tasks. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 417-422, 2007.
[2] V. Blåsjö. The isoperimetric problem. The American Mathematical Monthly, 112(6):526-566, 2005.
[3] F.-M. De Rainville, J.-P. Mercier, C. Gagné, P. Giguere, and D. Laurendeau. Multisensor placement in 3d environments via visibility estimation and derivative-free optimization. In 2015 IEEE International Conference on Robotics and Automation (ICRA), pages 3327-3334, 2015.
[4] I. Dinur and D. Steurer. Analytical approach to parallel repetition. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing, pages 624-633, 2014.
[5] B. Englot and F. Hover. Planning complex inspection tasks using redundant roadmaps. In Proc. Int. Symp. Robotics Research, 2011.
[6] U. M. Erdem and S. Sclaroff. Automated camera layout to satisfy task-specific and floor plan-specific coverage requirements. Computer Vision and Image Understanding, 103(3):156-169, 2006.
[7] U. Feige. A threshold of ln n for approximating set cover. J. ACM, 45(4):634-652, July 1998.
[8] A. Haque, A. Alahi, and L. Fei-Fei. Recurrent attention models for depth-based person identification. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
[9] S. Hinterstoisser, V. Lepetit, S. Ilic, S. Holzer, G. Bradski, K. Konolige, and N. Navab. Model based training, detection and pose estimation of texture-less 3d objects in heavily cluttered scenes. In 11th Asian Conference on Computer Vision. Springer Berlin Heidelberg, 2013.
[10] F. Y. Lin and P.-L. Chiu. A near-optimal sensor placement algorithm to achieve complete coverage-discrimination in sensor networks. IEEE Communications Letters, 9(1):43-45, 2005.
[11] E. Marchand and F. Chaumette. Active vision for complete scene reconstruction and exploration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(1):65-72, 1999.
[12] R. A. Martin, I. Rojas, K. Franke, and J. D. Hedengren. Evolutionary view planning for optimized uav terrain modeling in a simulated environment. Remote Sensing, 8(1):26, 2015.
[13] S. Mathe, A. Pirinen, and C. Sminchisescu. Reinforcement learning for visual object detection. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
[14] A. Mittal and L. S. Davis. A general method for sensor planning in multi-sensor systems: Extension to random occlusion. International Journal of Computer Vision, 76(1):31-52, 2008.
[15] V. Mnih, N. Heess, A. Graves, et al. Recurrent models of visual attention. In Advances in Neural Information Processing Systems, pages 2204-2212, 2014.
[16] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
[17] C. Mostegel, M. Rumpler, F. Fraundorfer, and H. Bischof. Uav-based autonomous image acquisition with multi-view stereo quality assurance by confidence prediction. arXiv preprint arXiv:1605.01923, 2016.
[18] C. Mostegel, A. Wendel, and H. Bischof. Active monocular localization: towards autonomous monocular exploration for multirotor mavs. In 2014 IEEE International Conference on Robotics and Automation (ICRA), pages 3848-3855, 2014.
[19] T. S. Newman and A. K. Jain. A survey of automated visual inspection. Computer Vision and Image Understanding, 61(2):231-262, 1995.
[20] S. A. Sadat, J. Wawerla, and R. Vaughan. Fractal trajectories for online non-uniform aerial coverage. In 2015 IEEE International Conference on Robotics and Automation (ICRA), pages 2971-2976, 2015.
[21] K. Schmid, H. Hirschmüller, A. Dömel, I. Grixa, M. Suppa, and G. Hirzinger. View planning for multi-view stereo 3d reconstruction using an autonomous multicopter. Journal of Intelligent & Robotic Systems, 65(1-4):309-323, 2012.
[22] W. Scott, G. Roth, and J.-F. Rivest. View planning for automated 3d object reconstruction inspection. ACM Computing Surveys, 35(1), 2003.
[23] W. Scott, G. Roth, and J.-F. Rivest. View planning for automated 3d object reconstruction inspection. ACM Computing Surveys, 35(1), 2003.
[24] W. R. Scott. Model-based view planning. Machine Vision and Applications, 20(1):47-69, 2009.
[25] M. Sheinin and Y. Y. Schechner. The next best underwater view. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
[26] R. S. Sutton and A. G. Barto. Introduction to Reinforcement Learning. MIT Press, Cambridge, MA, USA, 1st edition, 1998.
[27] K. A. Tarabanis, P. K. Allen, and R. Y. Tsai. A survey of sensor planning in computer vision. IEEE Transactions on Robotics and Automation, 11(1):86-104, 1995.
[28] G. H. Tarbox and S. N. Gottschlich. Planning for complete sensor coverage in inspection. Computer Vision and Image Understanding, 61(1):84-111, 1995.
[29] M. Trummer, C. Munkelt, and J. Denzler. Online next-best-view planning for accuracy optimization using an extended e-criterion. In Pattern Recognition (ICPR), 2010 20th International Conference on, pages 1642-1645, 2010.
[30] S. Wenhardt, B. Deutsch, E. Angelopoulou, and H. Niemann. Active visual object reconstruction using d-, e-, and t-optimal next best views. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2007.
[31] P. Whaite and F. P. Ferrie. Autonomous exploration: Driven by uncertainty. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(3):193-205, 1997.
| []
|
[
"A Mini-Neptune from TESS and CHEOPS Around the 120 Myr Old AB Dor member HIP 94235",
"A Mini-Neptune from TESS and CHEOPS Around the 120 Myr Old AB Dor member HIP 94235"
]
| [
"George Zhou [email protected] \nCentre for Astrophysics\nUniversity of Southern Queensland\nWest Street4350ToowoombaQLDAustralia\n",
"Christopher P Wirth \nHarvard University\n02138CambridgeMAUSA\n\nCenter for Astrophysics |\nHarvard & Smithsonian\n60 Garden St02138CambridgeMAUSA\n",
"Chelsea X Huang \nCentre for Astrophysics\nUniversity of Southern Queensland\nWest Street4350ToowoombaQLDAustralia\n",
"Alexander Venner \nAberdeenUK\n",
"Kyle Franson \nDepartment of Astronomy\nThe University of Texas at Austin\n78712TXUSA\n",
"Samuel N Quinn \nCenter for Astrophysics |\nHarvard & Smithsonian\n60 Garden St02138CambridgeMAUSA\n",
"L G Bouma \nCahill Center for Astrophysics\nCalifornia Institute of Technology\n91125PasadenaCAUSA\n",
"Adam L Kraus \nDepartment of Astronomy\nThe University of Texas at Austin\n78712AustinTXUSA\n",
"Andrew W Mann \nDepartment of Physics and Astronomy\nThe University of North Carolina at Chapel Hill\nChapel Hill27599NCUSA\n",
"Elisabeth R Newton \nDepartment of Physics and Astronomy\nDartmouth College\n03755HanoverNHUSA\n",
"Diana Dragomir \nDepartment of Physics and Astronomy\nUniversity of New Mexico\n210 Yale Blvd NE87106AlbuquerqueNMUSA\n",
"Alexis Heitzmann \nCentre for Astrophysics\nUniversity of Southern Queensland\nWest Street4350ToowoombaQLDAustralia\n",
"Nataliea Lowson \nCentre for Astrophysics\nUniversity of Southern Queensland\nWest Street4350ToowoombaQLDAustralia\n",
"Stephanie T Douglas \nDepartment of Physics\nLafayette College\n730 High St18042EastonPAUSA\n",
"Matthew Battley \nDepartment of Physics\nUniversity of Warwick\nGibbet Hill RoadCV4 7ALCoventryUK\n\nCentre for Exoplanets and Habitability\nUniversity of Warwick\nGibbet Hill RoadCV4 7ALCoventryUK\n",
"Edward Gillen \nAstronomy Unit\nMary University of London\nMile End RoadE1 4NSLondonQueenUK\n\nAstrophysics Group\nCavendish Laboratory\nJ.J. Thomson AvenueCB3 0HECambridgeUK\n",
"Amaury Triaud \nSchool of Physics & Astronomy\nUniversity of Birmingham\nEdgbastonB15 2TTBirminghamUK\n",
"David W Latham \nCenter for Astrophysics |\nHarvard & Smithsonian\n60 Garden St02138CambridgeMAUSA\n",
"Steve B Howell \nNASA Ames Research Center\n94035Moffett FieldCAUSA\n",
"J D Hartman \nDepartment of Astrophysical Sciences\nPrinceton University\n4 Ivy Lane08540PrincetonNJUSA\n",
"Benjamin M Tofflemire \nDepartment of Astronomy\nThe University of Texas at Austin\n78712AustinTXUSA\n",
"Robert A Wittenmyer \nCentre for Astrophysics\nUniversity of Southern Queensland\nWest Street4350ToowoombaQLDAustralia\n",
"Brendan P Bowler \nDepartment of Astronomy\nThe University of Texas at Austin\n78712TXUSA\n",
"Jonathan Horner \nCentre for Astrophysics\nUniversity of Southern Queensland\nWest Street4350ToowoombaQLDAustralia\n",
"Stephen R Kane \nDepartment of Earth and Planetary Sciences\nUniversity of California\n92521RiversideCAUSA\n",
"John Kielkopf \nDepartment of Physics and Astronomy\nUniversity of Louisville\n40292LouisvilleKYUSA\n",
"Peter Plavchan \nGeorge Mason University\n4400 University Drive MS 3F322030FairfaxVAUSA\n",
"Duncan J Wright \nCentre for Astrophysics\nUniversity of Southern Queensland\nWest Street4350ToowoombaQLDAustralia\n",
"Brett C Addison \nCentre for Astrophysics\nUniversity of Southern Queensland\nWest Street4350ToowoombaQLDAustralia\n",
"Matthew W Mengel \nCentre for Astrophysics\nUniversity of Southern Queensland\nWest Street4350ToowoombaQLDAustralia\n",
"Jack Okumura \nCentre for Astrophysics\nUniversity of Southern Queensland\nWest Street4350ToowoombaQLDAustralia\n",
"George Ricker \nDepartment of Physics\nKavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n02139CambridgeMAUSA\n",
"Roland Vanderspek \nDepartment of Physics\nKavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n02139CambridgeMAUSA\n",
"Sara Seager \nDepartment of Physics\nKavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n02139CambridgeMAUSA\n\nDepartment of Earth, Atmospheric and Planetary Sciences\nMassachusetts Institute of Technology\n02139CambridgeMAUSA\n\nDepartment of Aeronautics and Astronautics\nMIT\n77 Massachusetts Avenue02139CambridgeMAUSA\n",
"Jon M Jenkins \nNASA Ames Research Center\n94035Moffett FieldCAUSA\n",
"Joshua N Winn \nDepartment of Astrophysical Sciences\nPrinceton University\n08544PrincetonNJUSA\n",
"Tansu Daylan \nDepartment of Physics\nKavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n02139CambridgeMAUSA\n\nDepartment of Astrophysical Sciences\nPrinceton University\nPeyton Hall08544PrincetonNJ\n",
"Michael Fausnaugh \nDepartment of Physics\nKavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n02139CambridgeMAUSA\n",
"Michelle Kunimoto \nKavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n02139CambridgeMA\n",
"George Zhou "
]
| [
"Centre for Astrophysics\nUniversity of Southern Queensland\nWest Street4350ToowoombaQLDAustralia",
"Harvard University\n02138CambridgeMAUSA",
"Center for Astrophysics |\nHarvard & Smithsonian\n60 Garden St02138CambridgeMAUSA",
"Centre for Astrophysics\nUniversity of Southern Queensland\nWest Street4350ToowoombaQLDAustralia",
"AberdeenUK",
"Department of Astronomy\nThe University of Texas at Austin\n78712TXUSA",
"Center for Astrophysics |\nHarvard & Smithsonian\n60 Garden St02138CambridgeMAUSA",
"Cahill Center for Astrophysics\nCalifornia Institute of Technology\n91125PasadenaCAUSA",
"Department of Astronomy\nThe University of Texas at Austin\n78712AustinTXUSA",
"Department of Physics and Astronomy\nThe University of North Carolina at Chapel Hill\nChapel Hill27599NCUSA",
"Department of Physics and Astronomy\nDartmouth College\n03755HanoverNHUSA",
"Department of Physics and Astronomy\nUniversity of New Mexico\n210 Yale Blvd NE87106AlbuquerqueNMUSA",
"Centre for Astrophysics\nUniversity of Southern Queensland\nWest Street4350ToowoombaQLDAustralia",
"Centre for Astrophysics\nUniversity of Southern Queensland\nWest Street4350ToowoombaQLDAustralia",
"Department of Physics\nLafayette College\n730 High St18042EastonPAUSA",
"Department of Physics\nUniversity of Warwick\nGibbet Hill RoadCV4 7ALCoventryUK",
"Centre for Exoplanets and Habitability\nUniversity of Warwick\nGibbet Hill RoadCV4 7ALCoventryUK",
"Astronomy Unit\nMary University of London\nMile End RoadE1 4NSLondonQueenUK",
"Astrophysics Group\nCavendish Laboratory\nJ.J. Thomson AvenueCB3 0HECambridgeUK",
"School of Physics & Astronomy\nUniversity of Birmingham\nEdgbastonB15 2TTBirminghamUK",
"Center for Astrophysics |\nHarvard & Smithsonian\n60 Garden St02138CambridgeMAUSA",
"NASA Ames Research Center\n94035Moffett FieldCAUSA",
"Department of Astrophysical Sciences\nPrinceton University\n4 Ivy Lane08540PrincetonNJUSA",
"Department of Astronomy\nThe University of Texas at Austin\n78712AustinTXUSA",
"Centre for Astrophysics\nUniversity of Southern Queensland\nWest Street4350ToowoombaQLDAustralia",
"Department of Astronomy\nThe University of Texas at Austin\n78712TXUSA",
"Centre for Astrophysics\nUniversity of Southern Queensland\nWest Street4350ToowoombaQLDAustralia",
"Department of Earth and Planetary Sciences\nUniversity of California\n92521RiversideCAUSA",
"Department of Physics and Astronomy\nUniversity of Louisville\n40292LouisvilleKYUSA",
"George Mason University\n4400 University Drive MS 3F322030FairfaxVAUSA",
"Centre for Astrophysics\nUniversity of Southern Queensland\nWest Street4350ToowoombaQLDAustralia",
"Centre for Astrophysics\nUniversity of Southern Queensland\nWest Street4350ToowoombaQLDAustralia",
"Centre for Astrophysics\nUniversity of Southern Queensland\nWest Street4350ToowoombaQLDAustralia",
"Centre for Astrophysics\nUniversity of Southern Queensland\nWest Street4350ToowoombaQLDAustralia",
"Department of Physics\nKavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n02139CambridgeMAUSA",
"Department of Physics\nKavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n02139CambridgeMAUSA",
"Department of Physics\nKavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n02139CambridgeMAUSA",
"Department of Earth, Atmospheric and Planetary Sciences\nMassachusetts Institute of Technology\n02139CambridgeMAUSA",
"Department of Aeronautics and Astronautics\nMIT\n77 Massachusetts Avenue02139CambridgeMAUSA",
"NASA Ames Research Center\n94035Moffett FieldCAUSA",
"Department of Astrophysical Sciences\nPrinceton University\n08544PrincetonNJUSA",
"Department of Physics\nKavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n02139CambridgeMAUSA",
"Department of Astrophysical Sciences\nPrinceton University\nPeyton Hall08544PrincetonNJ",
"Department of Physics\nKavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n02139CambridgeMAUSA",
"Kavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n02139CambridgeMA"
]
| []
| The TESS mission has enabled discoveries of the brightest transiting planet systems around young stars. These systems are the benchmarks for testing theories of planetary evolution. We report the discovery of a mini-Neptune transiting a bright star in the AB Doradus moving group. HIP 94235 (TOI-4399, TIC 464646604) is a V mag = 8.31 G-dwarf hosting a 3.00 +0.32 −0.28 R ⊕ mini-Neptune in a 7.7 day period orbit. HIP 94235 is part of the AB Doradus moving group, one of the youngest and closest associations. Due to its youth, the host star exhibits significant photometric spot modulation, lithium absorption, and X-ray emission. Three 0.06% transits were observed during Sector-27 of the TESS Extended Mission, though these transit signals are dwarfed by the 2% peak-to-peak photometric variability exhibited by the host star. Follow-up observations with CHEOPS confirmed the transit signal and prevented the erosion of the transit ephemeris. HIP 94235 is part of a 50 AU G-M binary system. We make use of diffraction limited observations spanning 11 years, and astrometric accelerations from Hipparcos and Gaia, to constrain the orbit of HIP 94235 B. HIP 94235 is one of the tightest stellar binaries to host an inner planet. As part of a growing sample of bright, young planet systems, HIP 94235 b is ideal for follow-up transit observations, such as those that investigate the evaporative processes driven by high-energy radiation that may sculpt the valleys and deserts in the Neptune population. | 10.3847/1538-3881/ac69e3 | [
"https://arxiv.org/pdf/2204.11975v2.pdf"
]
| 248,392,044 | 2204.11975 | 260a019a7b7961f098be44c0f3aab54c0e99466b |
A Mini-Neptune from TESS and CHEOPS Around the 120 Myr Old AB Dor member HIP 94235
April 28, 2022 27 Apr 2022
George Zhou [email protected]
Centre for Astrophysics
University of Southern Queensland
West Street4350ToowoombaQLDAustralia
Christopher P Wirth
Harvard University
02138CambridgeMAUSA
Center for Astrophysics |
Harvard & Smithsonian
60 Garden St02138CambridgeMAUSA
Chelsea X Huang
Centre for Astrophysics
University of Southern Queensland
West Street4350ToowoombaQLDAustralia
Alexander Venner
AberdeenUK
Kyle Franson
Department of Astronomy
The University of Texas at Austin
78712TXUSA
Samuel N Quinn
Center for Astrophysics |
Harvard & Smithsonian
60 Garden St02138CambridgeMAUSA
L G Bouma
Cahill Center for Astrophysics
California Institute of Technology
91125PasadenaCAUSA
Adam L Kraus
Department of Astronomy
The University of Texas at Austin
78712AustinTXUSA
Andrew W Mann
Department of Physics and Astronomy
The University of North Carolina at Chapel Hill
Chapel Hill27599NCUSA
Elisabeth R Newton
Department of Physics and Astronomy
Dartmouth College
03755HanoverNHUSA
Diana Dragomir
Department of Physics and Astronomy
University of New Mexico
210 Yale Blvd NE87106AlbuquerqueNMUSA
Alexis Heitzmann
Centre for Astrophysics
University of Southern Queensland
West Street4350ToowoombaQLDAustralia
Nataliea Lowson
Centre for Astrophysics
University of Southern Queensland
West Street4350ToowoombaQLDAustralia
Stephanie T Douglas
Department of Physics
Lafayette College
730 High St18042EastonPAUSA
Matthew Battley
Department of Physics
University of Warwick
Gibbet Hill RoadCV4 7ALCoventryUK
Centre for Exoplanets and Habitability
University of Warwick
Gibbet Hill RoadCV4 7ALCoventryUK
Edward Gillen
Astronomy Unit
Mary University of London
Mile End RoadE1 4NSLondonQueenUK
Astrophysics Group
Cavendish Laboratory
J.J. Thomson AvenueCB3 0HECambridgeUK
Amaury Triaud
School of Physics & Astronomy
University of Birmingham
EdgbastonB15 2TTBirminghamUK
David W Latham
Center for Astrophysics |
Harvard & Smithsonian
60 Garden St02138CambridgeMAUSA
Steve B Howell
NASA Ames Research Center
94035Moffett FieldCAUSA
J D Hartman
Department of Astrophysical Sciences
Princeton University
4 Ivy Lane08540PrincetonNJUSA
Benjamin M Tofflemire
Department of Astronomy
The University of Texas at Austin
78712AustinTXUSA
Robert A Wittenmyer
Centre for Astrophysics
University of Southern Queensland
West Street4350ToowoombaQLDAustralia
Brendan P Bowler
Department of Astronomy
The University of Texas at Austin
78712TXUSA
Jonathan Horner
Centre for Astrophysics
University of Southern Queensland
West Street4350ToowoombaQLDAustralia
Stephen R Kane
Department of Earth and Planetary Sciences
University of California
92521RiversideCAUSA
John Kielkopf
Department of Physics and Astronomy
University of Louisville
40292LouisvilleKYUSA
Peter Plavchan
George Mason University
4400 University Drive MS 3F322030FairfaxVAUSA
Duncan J Wright
Centre for Astrophysics
University of Southern Queensland
West Street4350ToowoombaQLDAustralia
Brett C Addison
Centre for Astrophysics
University of Southern Queensland
West Street4350ToowoombaQLDAustralia
Matthew W Mengel
Centre for Astrophysics
University of Southern Queensland
West Street4350ToowoombaQLDAustralia
Jack Okumura
Centre for Astrophysics
University of Southern Queensland
West Street4350ToowoombaQLDAustralia
George Ricker
Department of Physics
Kavli Institute for Astrophysics and Space Research
Massachusetts Institute of Technology
02139CambridgeMAUSA
Roland Vanderspek
Department of Physics
Kavli Institute for Astrophysics and Space Research
Massachusetts Institute of Technology
02139CambridgeMAUSA
Sara Seager
Department of Physics
Kavli Institute for Astrophysics and Space Research
Massachusetts Institute of Technology
02139CambridgeMAUSA
Department of Earth, Atmospheric and Planetary Sciences
Massachusetts Institute of Technology
02139CambridgeMAUSA
Department of Aeronautics and Astronautics
MIT
77 Massachusetts Avenue02139CambridgeMAUSA
Jon M Jenkins
NASA Ames Research Center
94035Moffett FieldCAUSA
Joshua N Winn
Department of Astrophysical Sciences
Princeton University
08544PrincetonNJUSA
Tansu Daylan
Department of Physics
Kavli Institute for Astrophysics and Space Research
Massachusetts Institute of Technology
02139CambridgeMAUSA
Department of Astrophysical Sciences
Princeton University
Peyton Hall08544PrincetonNJ
Michael Fausnaugh
Department of Physics
Kavli Institute for Astrophysics and Space Research
Massachusetts Institute of Technology
02139CambridgeMAUSA
Michelle Kunimoto
Kavli Institute for Astrophysics and Space Research
Massachusetts Institute of Technology
02139CambridgeMA
Submitted to AJ. Draft version April 28, 2022 (posted 27 Apr 2022), typeset using LaTeX twocolumn style in AASTeX62.
Corresponding author: George Zhou
Keywords: planetary systems - stars: individual (HIP 94235) - techniques: spectroscopic, photometric
The TESS mission has enabled discoveries of the brightest transiting planet systems around young stars. These systems are the benchmarks for testing theories of planetary evolution. We report the discovery of a mini-Neptune transiting a bright star in the AB Doradus moving group. HIP 94235 (TOI-4399, TIC 464646604) is a V mag = 8.31 G-dwarf hosting a 3.00 +0.32 −0.28 R ⊕ mini-Neptune in a 7.7 day period orbit. HIP 94235 is part of the AB Doradus moving group, one of the youngest and closest associations. Due to its youth, the host star exhibits significant photometric spot modulation, lithium absorption, and X-ray emission. Three 0.06% transits were observed during Sector-27 of the TESS Extended Mission, though these transit signals are dwarfed by the 2% peak-to-peak photometric variability exhibited by the host star. Follow-up observations with CHEOPS confirmed the transit signal and prevented the erosion of the transit ephemeris. HIP 94235 is part of a 50 AU G-M binary system. We make use of diffraction limited observations spanning 11 years, and astrometric accelerations from Hipparcos and Gaia, to constrain the orbit of HIP 94235 B. HIP 94235 is one of the tightest stellar binaries to host an inner planet. As part of a growing sample of bright, young planet systems, HIP 94235 b is ideal for follow-up transit observations, such as those that investigate the evaporative processes driven by high-energy radiation that may sculpt the valleys and deserts in the Neptune population.
INTRODUCTION
Young planets offer a time-lapse view of the construction of the exoplanet demographics. Planetary systems are thought to undergo rapid evolution within the first hundreds of millions of years after their formation. Follow-up characterization of small young planets helps to test our models for the contraction and mass loss processes that they undergo during this time frame.
Thousands of close-in Neptunes and super-Earths were discovered by the primary Kepler mission (e.g. Petigura et al. 2013; Burke et al. 2015; Zhu et al. 2018). The mechanisms that sculpted the period-radius distribution of these planets can shed light on the early formation and evolution of planetary systems. The evaporation of primordial hydrogen and helium envelopes, driven by UV and X-ray radiation from young stars, can reproduce the sub-Neptune desert and radius valley (e.g. Lopez et al. 2012; Owen & Wu 2013, 2017). These processes act on rapid timescales because the high-energy fluxes from young stars rapidly decline over time. Mass loss can also be driven by heat leaking out from a planet's deep interior, a process that can last hundreds of millions of years as the planets cool down after accretion (e.g. Lopez & Fortney 2013; Ginzburg et al. 2018; Gupta & Schlichting 2021). Giant impacts within compact super-Earth and Neptune systems may erode the envelopes of some planets, occurring after disk dissipation and before the systems dynamically cool within the first hundred million years (e.g. Marcus et al. 2009; Inamdar & Schlichting 2015). As their primordial gaseous envelopes are stripped away, it is possible that some planets lying in less energetic environments may be replenished by secondary atmospheres (e.g. Kite & Barnett 2020). Other mechanisms, such as in-situ formation in gas-poor disks, may naturally carve out the current period-radius distribution of small planets (Lee & Connors 2021; Lee et al. 2022), and the radii of small planets may evolve much slower than predicted from run-away mass loss models (David et al. 2021). Young planets can help establish the time-scales for the evolution of the period-radius distribution of the super-Earth and Neptune populations. For some planets, mass loss through photoevaporation occurs throughout their lifetime without significantly changing their radii. Extended atmospheres have been observed for Neptune sized planets about older field stars in X-ray (Ehrenreich et al. 2012), Lyman-α (Kulow et al. 2014; Ehrenreich et al. 2015; Lavie et al. 2017; Bourrier et al. 2018), and He I (Spake et al. 2018; Allart et al. 2019; Kirk et al. 2020). However, for planets between 2-4 R ⊕, run-away evaporation may occur. This process strips away the outer primordial envelope of these Neptunes and super-Earths, leaving behind rocky cores. Real-time measurements of photoevaporation for younger systems can help establish the timescale for this process and its influence on the radius evolution of planets. Zhang et al. (2022b) detected the Lyman-α transit of the outer planet HD 63433 c in a 400 Myr old planetary system. They also reported the lack of a detectable escaping hydrogen atmosphere for the inner planet, suggesting that it may have already undergone run-away evaporation and lost its primordial atmosphere. Rockcliffe et al. (2021) reported a non-detection of escaping hydrogen for the 650 Myr old K2-25b, with one possibility for the non-detection being factors related to the star's youth. Zhang et al. (2022a) reported escaping He I for the 500 Myr old mini-Neptune HD 73583b, showing an excess 0.68% absorption for the He I 10830Å line. Tentative detections of atmospheric escape have been reported for planets in the V1298 Tau system in Ca II, H-α (Feinstein et al. 2021), and He I (Gaidos et al. 2022). The K2 mission (Howell et al. 2014) yielded some of the first young transiting planets (e.g. Mann et al. 2016a,b; David et al.
2016; Vanderburg et al. 2018; Rizzuto et al. 2018; David et al. 2019b,a; Livingston et al. 2019; Barragán et al. 2019), and provided insight into the radius distribution of these planets compared to those about older stars (e.g. Rizzuto et al. 2017; David et al. 2021). The TESS mission (Ricker et al. 2015) is finding young planets that are more suitable for in-depth characterization: DS Tuc A (Newton et al. 2019), HD 63433, and AU Mic (Plavchan et al. 2020) are amongst the brightest planet-hosting stars known. These are the best targets for in-depth follow-up studies with the suite of new astronomical observatories coming online this decade.
In this paper, we present the discovery of a mini-Neptune sized planet transiting the V = 8.3 star HIP 94235, a member of the ∼ 120 Myr old AB Doradus moving group, one of the youngest and closest stellar associations. HIP 94235 was identified as a planet host through a search for planets around active stars. The star's youth is confirmed from its spectroscopic and photometric characteristics, such as its rapid rotation and significant photometric modulation, strength of lithium absorption, and X-ray emission. The kinematics and independent age estimation of HIP 94235 agree with that of members of the AB Doradus moving group. The shallow 600 ppm transits of HIP 94235 b were identified in the single sector of observations obtained by TESS during the first sector of its ongoing Extended Mission. Subsequent observations via the CHEOPS space telescope made it possible to confirm the existence of the transits and improved our ability to predict future transit times for follow-up studies.
OBSERVATIONS
Candidate identification with TESS
The Transiting Exoplanet Survey Satellite (TESS, Ricker et al. 2015) performed photometric measurements of HIP 94235 in its Sector 27 Camera 2 observations between 2020-07-04 and 2020-07-30. HIP 94235 was observed at two minute cadence via target pixel stamp observations, and was also included as a target of multiple TESS Guest Investigator Programs (G03251, D. Huber; G03272, J. Burt). We make use of Simple Aperture Photometry light curves (Twicken et al. 2010;Morris et al. 2020) from the Science Processing Observation Center (SPOC, Jenkins et al. 2016), extracted from the two minute target pixel files for subsequent analyses.
HIP 94235 was identified as a planet candidate by a dedicated search for planets around young stars. We first identified HIP 94235 as a possible young star via its rotationally modulated light curve. Three transits were identified using a Box-fitting Least Squares search (Kovács et al. 2002) on the light curve after detrending using a high order spline-fitting procedure (Vanderburg & Johnson 2014). HIP 94235 was also identified as a threshold crossing event in both the SPOC pipeline (Jenkins et al. 2016) and the MIT Quick-Look Pipeline (Huang et al. 2020a). The SPOC threshold crossing event diagnostic tests indicate the transit is on target to within 5.3 ± 2.5″. However, it did not survive the TOI vetting process initially because of the large amplitude of the residuals in the detrended light curves, which is due to the intrinsic variability of the star. The candidate was promoted to become a TESS Object of Interest (TOI) after the confirmation of the photometric transit signals using the CHEOPS observations (Section 2.2).
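For a concrete picture of this search step, the minimal sketch below flattens a TESS light curve and runs a Box-fitting Least Squares period search. It is illustrative only, not the paper's pipeline: the lightkurve query arguments, the flattening window, and the period grid are all assumptions.

```python
# Minimal detrend-then-search sketch for a spot-modulated TESS light curve.
import numpy as np
import lightkurve as lk
from astropy.timeseries import BoxLeastSquares

lc = lk.search_lightcurve("HIP 94235", mission="TESS", sector=27,
                          author="SPOC").download()
lc = lc.remove_nans().normalize()

# Smooth out the ~2 d rotational modulation (window length is an assumption)
flat = lc.flatten(window_length=401)

# Box-fitting Least Squares search (Kovacs et al. 2002); grid is assumed
bls = BoxLeastSquares(flat.time.value, flat.flux.value)
periods = np.linspace(1.0, 13.0, 20000)
result = bls.power(periods, 0.1)          # 0.1 d trial duration (assumed)
print(f"Best BLS period: {periods[np.argmax(result.power)]:.3f} d")  # ~7.7 d
```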
To account for the instrumental noise due to spacecraft motion and stellar variability in the light curves, we perform a simultaneous detrending using the spacecraft quaternions and basis splines following a similar process to that described in Vanderburg et al. (2019). We iteratively fit for linear coefficients of the mean, standard deviation, and skew terms of the three quaternions, together with a spline matrix created by the lightkurve package (Lightkurve Collaboration et al. 2018). The detrended light curve is adopted for our subsequent analyses in Section 4.
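The simultaneous detrending can be pictured as a single least-squares regression against a design matrix. The sketch below is illustrative only, assuming hypothetical input arrays; a piecewise-linear time basis stands in for the paper's basis splines, and a single pass stands in for the paper's iterative fit.

```python
# Illustrative simultaneous linear detrending in the spirit of Vanderburg et al. (2019).
import numpy as np

def detrend(time, flux, quat_stats, n_knots=50):
    """quat_stats: (N, 9) array of mean/std/skew of the three quaternions."""
    knots = np.linspace(time.min(), time.max(), n_knots)
    dk = knots[1] - knots[0]
    # "Tent" functions centered on each knot (stand-in for a spline matrix)
    basis = np.maximum(0.0, 1.0 - np.abs((time[:, None] - knots[None, :]) / dk))

    # Design matrix: quaternion terms + time basis + constant offset
    A = np.column_stack([quat_stats, basis, np.ones_like(time)])
    coeffs, *_ = np.linalg.lstsq(A, flux, rcond=None)
    return flux / (A @ coeffs)

# e.g. clean = detrend(t, f, np.column_stack([q_mean, q_std, q_skew]))
```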
The corrected TESS light curve from Sector-27 is shown in Figure 2. Figure 3 shows close-ups of individual transits of HIP 94235 b in the raw and detrended TESS light curves. Figure 4 shows the phase-folded TESS transit light curve and its best fit model from our global modeling (Section 4).
Follow-up photometry with CHEOPS
To confirm the existence and constrain the transit ephemeris of HIP 94235 b, we obtained space-based photometric observations using ESA's CHaracterising ExOPlanets Satellite (CHEOPS; Benz et al. 2021), through the CHEOPS Guest Observers Programme (AO2-005). CHEOPS is a 0.32 m Ritchey-Chrétien telescope in a nadir-locked 700 km low-Earth orbit, along the Earth's day/night terminator, with a period of 98.725 minutes. The telescope has a field of view of 17′ × 17′, and is instrumentally defocused with a point-spread function of 16″ at a plate scale of 1″ pixel −1 .
A transit of HIP 94235 b was observed by CHEOPS between 2021-08-19 23:59 and 2021-08-20 07:33 UTC (visit ID 1568350). The visit consists of five orbits, with an exposure time of 17 s coadded onboard to a cadence of 34 s. The visit had an observing efficiency of 61% (fraction of time on target), with interruptions primarily due to Earth stray light. A total of 498 exposures were obtained, of which nine exposures were affected by stray light and Earth occultation, and eight by South Atlantic Anomaly crossings. The observations were reduced by the CHEOPS Data Reduction Pipeline v13.1.0 (Hoyer et al. 2020), accounting for bias, dark current, flat fielding, bad pixel correction, smear contamination and linearization. We adopt the optimal aperture light curve, with a circular aperture of 31 pixels, for our analysis. Aperture contamination by background stars is estimated to be at the 6 × 10 −4 level, and is accounted for via the simulations from the data reduction pipeline. The CHEOPS light curve exhibits a smooth hours-long trend that can be attributed to the spot-modulated rotational variations of HIP 94235. Shorter timescale instrumental variations associated with the spacecraft roll angle are seen on the orbital timescales (Maxted et al. 2021). We model the spot modulation over the five orbits of observations with a 4th order polynomial after removal of the transit model. We also fit for a 5th order correlation between the spacecraft roll angle and the resulting light curve residuals. This modeling is performed simultaneously with the global fit of the TESS light curve and associated parameters, such that the uncertainties from this detrending process are fully accounted for in our results (Section 4). The raw and corrected CHEOPS light curves are shown in Figure 5, and the binned and corrected phase-folded data is compared with that from TESS in Figure 4.
The CHEOPS observation was most crucial in refining the ephemeris of HIP 94235 b. With a transit depth of 600 ppm, detecting the photometric transit event with ground-based facilities is difficult. With only three transits available during the single sector of TESS observations over the entire primary and first extended mission, the TESS ephemeris would have quickly become stale. The ephemeris uncertainty using TESS observations alone would have been 5.2 hours after five years, making any transit follow-up studies more difficult to schedule. The single CHEOPS transit reduced the five-year transit timing uncertainty to 8 minutes. Figure 6 illustrates the reduction in transit ephemeris uncertainty enabled by the CHEOPS observation.
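The improvement follows from simple linear-ephemeris error propagation, σ(T_n) = sqrt(σ_T0² + n² σ_P²), where n is the number of epochs elapsed. The per-epoch uncertainties in the sketch below are hypothetical, chosen only to reproduce the scale of the quoted numbers.

```python
# Linear-ephemeris error growth: sigma(T_n) = sqrt(sigma_T0**2 + (n*sigma_P)**2).
# sigma_T0 and sigma_P below are hypothetical, illustrating the ~5.2 h vs ~8 min scale.
import numpy as np

P = 7.713                          # days, approximate orbital period
n = round(5 * 365.25 / P)          # transit epochs elapsed after five years

for label, sig_T0, sig_P in [("TESS only", 2.0e-3, 9.0e-4),
                             ("TESS + CHEOPS", 1.0e-3, 2.2e-5)]:
    sig_min = np.hypot(sig_T0, n * sig_P) * 24 * 60
    print(f"{label}: 5-yr timing uncertainty ~ {sig_min:.0f} min")
```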
Clearing of nearby eclipsing binaries from Las Cumbres Observatory
The large TESS and CHEOPS point-spread functions allow for multiple stars within a photometric aperture to be the source of a photometric transit detection. We obtained a transit observation of HIP 94235 b with the Las Cumbres Observatory Global Network (LCOGT; Brown et al. 2013) 1 m telescope at Las Campanas Observatory on 2021-05-26 UTC. The observations were performed in the g band, and were defocused to a full width at half maximum of 7″ due to the brightness of the target star. The observations showed that no nearby stars visually separable with HIP 94235 within 1′ exhibited eclipsing or transiting events during the predicted transit of HIP 94235 b. Though the transit of HIP 94235 b was not detected due to its shallow depth, this LCOGT observation cleared nearby stars of being stellar eclipsing binaries.
Reconnaissance Spectroscopy
To characterize the target star and confirm its youth spectroscopically, we obtained three observations of HIP 94235 with the High Resolution Spectrograph (HRS; Crause et al. 2014) on the Southern African Large Telescope (SALT; Buckley et al. 2006) in October 2020. HRS is a fibre-fed echelle spectrograph with a resolving power of R ∼ 65, 000 over the wavelength range of 3700 − 8900Å. Spectral extraction was performed via the MIDAS pipeline (Kniazev et al. 2016, 2017). These observations confirmed the presence of the lithium 6708Å doublet and strong chromospheric emission in the Calcium H & K line cores. The lack of significant radial velocity variations or secondary sets of spectral lines indicated the transiting candidate was not an obvious blended eclipsing stellar binary system.
We obtained 16 observations of HIP 94235 using the CHIRON fiber fed cross-dispersed echelle spectrometer at the SMARTS 1.5-meter telescope located at Cerro Tololo Inter-American Observatory, Chile (Tokovinin et al. 2013). CHIRON has a spectral resolving power of R = 80, 000 over the wavelength range of 4100 to 8700Å. Spectra from CHIRON were reduced as per Paredes et al. (2021). Radial velocities were measured following the procedure from Zhou et al. (2021), via a least-squares deconvolution of each observation against a synthetic non-rotating template generated from the ATLAS9 model atmospheres (Castelli & Kurucz 2004). The radial velocity measurements are presented in Table 1.

Figure 5. Transit of HIP 94235 b as captured by CHEOPS on 2021-08-19 over five CHEOPS orbits. The first panel shows the pre-detrending light curve extracted at an optimal aperture of 31 pixels. The large scale smooth variation is induced by the spot modulation of the target star, and is accounted for in our model via a fourth order polynomial. Orbital-timescale variations are also present, and are accounted for via a 5th order polynomial fit between the spacecraft roll angle and the light curve residuals, after subtraction of the transit model and the spot modulation trend. The second panel shows the detrended CHEOPS light curve. The third panel shows the residuals after subtraction of the transit, spot modulation signal, and instrumental systematics models. The fourth panel shows the correlation between the light curve, after removal of the transit and spot modulation signals, and the spacecraft roll angle, with the best fit model overlaid.
In addition, we follow Zhou et al. (2021) and measure the spectroscopic atmospheric parameters of HIP 94235 from the CHIRON spectra. We match the spectrum of HIP 94235 against an interpolated library of ∼ 10, 000 observed spectra pre-classified by the Spectroscopic Classification Pipeline (Buchhave et al. 2012), finding that HIP 94235 has an effective temperature of 5991±50 K, surface gravity of 4.46±0.02 dex, and metallicity of [M/H] = −0.05 ± 0.10 dex. We adopt the spectroscopic effective temperature as a prior in the global modeling described in Section 4.
We also used the MINERVA-Australis telescope array for further reconnaissance of HIP 94235. MINERVA-Australis is an array of four identical 0.7 m CDK700 telescopes located at Mt Kent Observatory, Australia (Addison et al. 2019). The telescopes feed into a single KiwiSpec echelle spectrograph with a spectral resolving power of R ≈ 80, 000 over the wavelength region of 4800 − 6200Å. The instrument is environmentally controlled inside a vacuum chamber, and simultaneous wavelength calibration is provided by two calibration fibers that bracket the science fibers on the detector, each fed from a quartz lamp via an iodine cell. Radial velocities are measured from the extracted spectra as per our CHIRON analyses described above, via a least-squares deconvolution between the observations and the synthetic non-rotating template. The radial velocities are provided in Table 1.
AGE AND MEMBERSHIP
HIP 94235 shares space velocities with the AB Doradus moving group (Section 3.1.1). The age of the group has been estimated to be between 50-150 Myr. The group has also been linked to be co-evolving with the Pleiades. We adopt an AB Doradus age of 120 Myr for the remainder of this discussion. In addition, a neighborhood search for tangentially co-moving stars with HIP 94235 reveals a tentative population of stars, with an age of ∼ 120 Myr, determined by their rotation periods. The following sections examine the kinematic properties of HIP 94235 and the association, as well as the photometric and spectroscopic characteristics of HIP 94235 that identify its youth independent of the kinematics.
As accretion dwindles, young stars spin up as they conserve their angular momentum while they contract in radius. The spin up leads to increased magnetic activity, large photospheric spots, and increased chromospheric activity. Both rotation and activity then decay with time as angular momentum is lost through the stellar wind over the main sequence lifetime of the star. The TESS light curve of HIP 94235 exhibits significant rotational spot modulation, which first drew our attention to its potential youth. The presence of X-ray emission and lithium absorption for the Sun-like host star confirm its youth. A flare event was also observed during the second orbit of the TESS sector, consistent with behavior expected for a young star.
Kinematics
The AB Doradus moving group
Accurate estimation of the age of a single star is notoriously difficult (Soderblom 2010). Stars in clusters and associations have accurate age estimates since the population can be assessed as a whole: the stars can be seen to be co-evolving based on their color-magnitude, rotation, and lithium abundance distributions. Planets found in co-evolving populations offer much more stringent tests on the temporal evolution of planet properties. Zuckerman et al. (2004) identified a set of stars within ∼ 50 pc co-moving with AB Doradus. The star AB Doradus itself is amongst the closest (15 pc) and most well-studied young stars. Membership of the group has been revised based on chemo-kinematic analyses of the homogeneity of the stars (da Silva et al. 2009) and updated kinematics from new missions (e.g. Malo et al. 2013; Gagné et al. 2018). Today, dozens of bona-fide members define the extent and characteristics of the group, and its age has been estimated to range from 50 Myr (Zuckerman et al. 2004), ∼ 100 − 120 Myr (Luhman et al. 2005), > 110 Myr (Barenfeld et al. 2013), to ∼ 150 Myr (Bell et al. 2015). Figure 8 shows the color-magnitude and space-motion of HIP 94235 alongside the AB Doradus moving group and the Pleiades cluster. HIP 94235 shares kinematic properties with the moving group, and has been classified as a bona-fide member via Hipparcos (Malo et al. 2013) and Gaia (Gagné et al. 2018; Ujjwal et al. 2020) space-motions.
The group has long been linked to the Pleiades cluster due to their shared kinematic velocities (Luhman et al. 2005; Ortega et al. 2007). Recent mapping of new low density moving groups from Gaia (Kounkel & Covey 2019) suggests that the AB Doradus moving group, alongside the newly identified Theia 301 and Theia 369 associations, forms a long tidal tail streaming away from the Pleiades (Gagné et al. 2021).
Though more distant (130 pc), the Pleiades cluster contains ∼ 1000 members, and its age has been more thoroughly investigated than AB Doradus. Estimates converge to ∼ 120 Myr via non-rotating isochrones (e.g. Meynet et al. 1993), lithium depletion boundary (e.g. Stauffer et al. 1998;Barrado y Navascués et al. 2004), and 3D rotational isochrones (e.g. Brandt & Huang 2015). The spectroscopic and photometric characteristics of HIP 94235 agree with members of the Pleiades, reaffirming our adopted age of ∼ 120 Myr for the system.
An independent rotation sequence for co-moving and co-evolving stars
The sparsity and spread of the AB Doradus moving group makes it difficult to securely identify true members of the group. The canonical membership list has been evolving with each new chemo-kinematic dataset since it was originally defined in Zuckerman et al. (2004). Upcoming Gaia releases with refined radial velocities for fainter distributions of cool stars may help redefine groups like AB Doradus, and its links with other associations.
To ensure that our age estimation does not hinge on literature classifications of HIP 94235, we also independently search for a co-moving population that may confirm its youth.
We follow Tofflemire et al. (2021) and search for stars within 50 pc of HIP 94235 that share its tangential velocity to within 5 km s −1 using the comove package. In this search, we assume the radial velocity of HIP 94235 for all neighborhood stars. The search returns a set of ∼ 2000 stars between 3 < G < 20 in magnitude. We then query for TESS MIT QLP light curves, returning 300 matches up to T mag = 13.5. We apply a Lomb-Scargle period search for each light curve, with an upper limit on the rotation period of P rot < 13 days. After manual examination, we find 140 stars that exhibit rotational modulation in their light curves with secure periods. Figure 9 shows the rotation distribution of these 140 stars relative to the Pleiades. The vast majority of these stars are relatively bright, and have well measured radial velocities from Gaia. We find 21 stars with space motion velocities within 5 km s −1 of HIP 94235; these are coloured in blue for clarity. This population forms a rotation sequence that agrees with HIP 94235. This set of co-evolving stars is presented in Table 2.
To estimate the age of the distribution, we fit its color and rotation period with a rotation-age relationship. We select eight F, G, K stars (0.1 < B − V < 0.8) within our sample that lie on the slow branch of the rotation sequence, and model their rotation periods with the age-color-rotation relationship from Mamajek & Hillenbrand (2008). The posterior age distribution for these eight stars, and their joint posterior, is shown in Figure 10. We find that the color-rotation distribution of this subsample can be described by the slow sequence with an age of 118 +18 −16 Myr. We obtained a small number of spectra of this subsample using the Las Cumbres Observatory (Brown et al. 2013) Network of Robotic Echelle Spectrographs (NRES) facilities. The stars that were observed exhibit lithium absorption as expected for their youth. The lithium 6708Å equivalent width has been noted in Table 2 where available. Future works examining the lithium absorption of this subsample may yield a lithium depletion boundary to confirm this age estimate.
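For intuition, the slow-sequence relation of Mamajek & Hillenbrand (2008) has the closed form P = a [(B−V) − c]^b t^n with t in Myr, which can be inverted for age. In the sketch below the coefficients are the published slow-sequence values as we recall them, and the B−V color adopted for HIP 94235 is an assumption; both should be checked against the original paper before quantitative use.

```python
# Gyrochronology sketch: invert P = a * ((B-V) - c)**b * t**n for age t (Myr).
# Coefficients are the Mamajek & Hillenbrand (2008) slow-sequence values as
# recalled here; verify before quantitative use.
a_coef, b_coef, c_coef, n_coef = 0.407, 0.325, 0.495, 0.566

def gyro_age_myr(period_days, bv_color):
    f = a_coef * (bv_color - c_coef) ** b_coef
    return (period_days / f) ** (1.0 / n_coef)

# Illustrative single-star estimate (B-V ~ 0.65 assumed for a 5991 K dwarf):
print(f"{gyro_age_myr(2.24, 0.65):.0f} Myr")
# ~59 Myr: HIP 94235 itself spins faster than the slow sequence, so a single-star
# estimate undershoots; the ensemble fit to slow-sequence stars gives 118 Myr.
```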
Of these 21 co-moving stars, only one is a canonical member of AB Doradus (Gagné et al. 2018). It is likely that the AB Doradus moving group itself is dispersed and ill defined, and our subset of co-moving and potentially co-evolving stars form part of the extended AB Doradus group.
X-ray
Figure 10. The age posterior of a subsample of F, G, K stars that are co-moving with HIP 94235. We fit the age relationship from Mamajek & Hillenbrand (2008) to this population. The age posterior of each individual star is shown in grey, and the joint posterior in blue. We find a best-fit age of 118 +18 −16 Myr to the distribution.

The rapid rotation in young stars leads to increased chromospheric activity. As a result, young stars often exhibit higher X-ray and UV emissions than their slowly rotating older counterparts. HIP 94235 is cataloged in the Second ROSAT All-sky Bright Source Catalog (Boller et al. 2016). HIP 94235 matches with 2RXS J191057.9-601611, with a count rate of 0.1766 ± 0.0425 counts/second and a hardness ratio of −0.124 ± 0.176. Using the calibration from Fleming et al. (1995), we find an X-ray luminosity for HIP 94235 of log(L X /L bol ) = −3.93 ± 0.13. The strength of the X-ray activity can help yield a qualitative age estimate for a single star. We adopt Equation A3 from Mamajek & Hillenbrand (2008) to find an approximate age of 50 − 130 Myr (1σ) for HIP 94235 from its X-ray luminosity, consistent with the expected age of the AB Doradus group.
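The activity estimate can be roughly reconstructed from the catalog values. In the sketch below, the count-rate-to-flux conversion follows the widely used Fleming et al. (1995) form as we recall it, and the bolometric luminosity is an assumed value for a 5991 K dwarf; both are assumptions rather than the paper's exact procedure.

```python
# Rough reconstruction of log(Lx/Lbol). Conversion coefficients and Lbol are assumed.
import numpy as np

cr, hr = 0.1766, -0.124                 # 2RXS count rate (ct/s) and hardness ratio
fx = cr * (8.31 + 5.30 * hr) * 1e-12    # erg / cm^2 / s (assumed Fleming-style form)

d_cm = (1.0 / 0.017061) * 3.086e18      # distance from the 17.061 mas parallax
Lx = 4 * np.pi * d_cm**2 * fx           # erg / s

Lbol = 1.15 * 3.828e33                  # ~1.15 L_sun, assumed for a 5991 K dwarf
print(f"log(Lx/Lbol) = {np.log10(Lx / Lbol):.2f}")   # ~ -3.90, cf. -3.93 +/- 0.13
```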
Lithium
Stars undergo lithium depletion during their main-sequence evolution. Lithium is depleted during proton collisions in the cores of stars. Convective mixing between the envelope and the core leads to a gradual depletion of lithium absorption in the observed spectra of Sun-like stars. As such, the lithium abundance of a star can be a tracer for its youth, though direct age estimates from the lithium absorption strength are only qualitative, as is the case for any age indicator when interpreting single stars. Our CHIRON spectra of HIP 94235 reveal significant absorption about the 6708Å Li doublet. We model the doublet and the nearby Fe I line simultaneously, measuring a Li equivalent width of 0.1413 ± 0.0092Å for HIP 94235. Figure 11 compares the lithium 6708Å equivalent width against distributions from known AB Doradus members (da Silva et al. 2009) and the Pleiades and Praesepe clusters (Quinn et al. 2012, 2014; Zhou et al. 2021); its bottom panel shows the distribution of rotation periods for the same set of stars. HIP 94235 is consistent with that of other known members of AB Doradus and Pleiades, and exhibits convincingly stronger lithium absorption than members of the 600 Myr Praesepe cluster, in agreement with a 120 Myr age estimate.
Rotation
The TESS observations show that HIP 94235 exhibits significant photometric variability at the 2% level, consistent with the semi-periodic signature of spot modulation. Figure 12 shows the periodicity of HIP 94235 via a Lomb-Scargle periodogram, with a peak rotation period of 2.24 ± 0.11 days. We adopt the width of the rotational peak of the periodogram as the uncertainty on the measured period. The rotational modulation is clear in the phase-folded light curve in the right panel of Figure 12, and the evolution of starspots over successive rotations can also be seen. Additionally, we modeled the light curve via a stochastically-driven damped harmonic oscillator through a Gaussian process using the celerite package (Foreman-Mackey et al. 2017). Modeling the posterior via a Markov chain through emcee (Foreman-Mackey et al. 2013), we found a posterior distribution for the frequency term log ω 0 to be −0.791 +0.047 −0.045 , corresponding to a period of 2.20 ± 0.10 days. Figure 11 shows the rotation periods of AB Doradus, as well as the Pleiades and Praesepe cluster members. We adopt the membership list for AB Doradus from Gagné et al. (2018), and derived rotation periods from the MIT FFI QLP library (Huang et al. 2020b) for stars with available TESS light curves. Rotation periods are measured via a Lomb-Scargle period search, and down-selected by hand to remove stars that do not show unambiguous rotational signatures. By-hand corrections of period aliases were also applied. Of the 66 members listed in Gagné et al. (2018), 55 were bright enough and had accessible TESS QLP light curves, and 37 yielded convincing detections of a rotation signal. Rotation periods for Pleiades members were adopted from Rebull et al. (2016), and for Praesepe from Rebull et al. (2017). The rotation of HIP 94235 agrees with the Pleiades and AB Doradus distribution, in agreement with its kinematic age estimates.
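A minimal version of the period measurement, assuming a detrended light curve and using astropy's Lomb-Scargle implementation (the frequency grid and period bounds are assumptions):

```python
# Rotation period from a Lomb-Scargle periodogram on a detrended light curve.
import numpy as np
from astropy.timeseries import LombScargle

def rotation_period(time, flux, min_period=0.5, max_period=13.0):
    freq = np.linspace(1.0 / max_period, 1.0 / min_period, 10000)
    power = LombScargle(time, flux).power(freq)
    return 1.0 / freq[np.argmax(power)]

# e.g. rotation_period(t, f) -> ~2.24 d for the Sector-27 light curve
```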
GLOBAL MODEL
To best estimate the stellar and planetary properties of the HIP 94235 system, we perform a global modeling incorporating the TESS and CHEOPS transits, radial velocities, and photometric and spectroscopic properties of HIP 94235.
The transit models were computed as per Mandel & Agol (2002), implemented via batman (Kreidberg 2015). The free parameters for this model are the stellar mass and radius, the time of transit center T 0 , the orbital period P , the planetary radius ratio R p /R , and the eccentricity parameters √ e cos ω and √ e sin ω, where e is the eccentricity and ω is the longitude of periastron.
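A sketch of this transit component using the batman interface is below; all parameter values are approximate or assumed for illustration (the 600 ppm depth implies Rp/R⋆ ≈ 0.024, and a/R⋆ ≈ 17 follows from Kepler's third law for a 7.7 d orbit around a roughly solar-mass, solar-radius star).

```python
# Illustrative transit model with batman (Kreidberg 2015); values are assumed.
import numpy as np
import batman

params = batman.TransitParams()
params.t0 = 0.0                  # transit mid-time (arbitrary zero point)
params.per = 7.713               # orbital period [d], approximate
params.rp = np.sqrt(600e-6)      # Rp/R* implied by the 600 ppm depth
params.a = 17.0                  # a/R* (assumed)
params.inc = 89.0                # inclination [deg] (assumed)
params.ecc = 0.0
params.w = 90.0
params.u = [0.35, 0.23]          # quadratic limb darkening (assumed)
params.limb_dark = "quadratic"

t = np.linspace(-0.2, 0.2, 1000)
flux = batman.TransitModel(params, t).light_curve(params)
```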
We model the CHEOPS transit and its associated stellar variability and instrumental characteristics simultaneously with the global modeling. The transit model is computed as per Mandel & Agol (2002). The hours-long rotational modulation signal is modeled via a 4th degree polynomial with respect to time. The correlation between the light curve and the spacecraft motion is modeled via a 5th degree polynomial against the roll angle. Figure 5 shows the CHEOPS light curve before and after the removal of the best fit stellar variability and instrumental model. The out-of-transit Keplerian radial velocity was modeled with additional parameters describing the systemic velocity γ, planetary mass M p , and a jitter term for each instrument.
The stellar mass and radius were modeled using the MIST isochrones (Dotter 2016), and constrained by their photometric magnitudes and parallax priors from Gaia G, Bp, Rp (Gaia Collaboration et al. 2018), Hipparcos TYCHO B and V bands (Perryman et al. 1997), 2MASS J, H, and Ks bands (Skrutskie et al. 2006). Additionally, the age was restricted to be 120 ± 50 Myr as per the age of the AB Dor moving group, and the limb darkening coefficients were fixed to theoretically interpolated values (Claret & Bloemen 2011;Claret 2017). All other parameters were assigned uniform priors with physically motivated boundaries.
The modeling was performed simultaneously for all parameters using Markov chain Monte Carlo (MCMC) analysis, making use of the emcee package (Foreman-Mackey et al. 2013). Results are listed in Tables 3 and 4, the best fit light curve model is shown in Figure 2, and the best fit radial velocity model is shown in Figure 7. The spectral energy distribution of HIP 94235 is shown in Figure 13, along with the template ATLAS9 model spectrum (Castelli & Kurucz 2004) computed at the best fit stellar parameters of HIP 94235. We note that the mid-infrared WISE magnitudes show no excess that might be indicative of a remnant debris disk around the young star.
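The sampling step has the shape sketched below; the log-probability here is a stand-in toy function, since the full posterior combines the transit, radial velocity, and photometric/isochrone terms with the stated priors.

```python
# Skeleton of the MCMC exploration with emcee (Foreman-Mackey et al. 2013).
import numpy as np
import emcee

def log_probability(theta):
    # Placeholder: in practice, return log(prior) + log(likelihood) for theta
    return -0.5 * np.sum(theta**2)   # hypothetical toy posterior

ndim, nwalkers = 5, 64
p0 = 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_probability)
sampler.run_mcmc(p0, 5000, progress=True)
samples = sampler.get_chain(discard=1000, flat=True)   # posterior samples
```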
HIP 94235 B: A 60 AU M-DWARF COMPANION
Diffraction limited observations revealed a bound M-dwarf companion to HIP 94235. The companion was identified in speckle imaging observations of HIP 94235 during the candidate vetting process, and also identified from archival adaptive optics observations searching for wide Jovian companions to young stars 11 years prior. We describe the observations below, and compute astrometric orbital solutions for HIP 94235 B.
We obtained high-contrast imaging at 562 nm and 832 nm with the Zorro speckle imager on the 8 m Gemini South Observatory on 2021-07-23 and 2021-10-22, which revealed a faint stellar companion to HIP 94235. Zorro is a dual-channel speckle imager with a pixel scale of 0.01″ pixel −1 and an approximate full width at half maximum of 0.02″. Data reduction and analysis were performed as per Howell et al. (2011) and Howell et al. (2016). Figure 14 shows the Zorro image at 562 nm and 832 nm of HIP 94235 on 2021-10-22. The 832 nm observation achieved a contrast ratio of ∆m = 6.68 at 0.5″ separation. A companion of ∆m = 5.3 was identified at a separation of 0.6″ in the red band. No companions with contrasts brighter than ∆m = 4.7 were detected in the blue arm. Observations from 2021-07-23 also identified the same companion in the 832 nm observations, though the blue arm was not functional at the time.
Adaptive optics imaging of HIP 94235 was also carried out as part of a large program to characterize the occurrence rates of giant planets at large separations by Chauvin et al. (2015). These observations, using the NaCo high contrast Adaptive Optics (AO) imager on VLT-UT4, were obtained on 2010-07-30. They revealed the stellar companion at a separation of 0.5″ with a contrast of ∆H = 3.8 ± 0.3. The positional information from the diffraction limited imaging observations is listed in Table 5.
Based on the Gaia and Hipparcos proper motion measurements, the accumulated motion of HIP 94235 over the 11-year interval is 1.1″. In contrast, the relative motion between the imaged companion and the target star is 0.1″, which strongly suggests the pair is a bound stellar binary with a projected separation of 31 AU. We henceforth refer to this companion as HIP 94235 B.
To determine the approximate properties of HIP 94235 B, we adopt the 100 Myr MIST isochrones (Choi et al. 2016) to model its 832 nm and H-band magnitudes, approximating the 832 nm band with the I band. To check if this companion is capable of being the source of the transit signal, we deblend and refit for the transit about the M-dwarf companion. We find a best fit radius ratio of R 2 /R 1 = 0.46 ± 0.01 for such a system, with a V-shaped transit that is incompatible with the observed light curve. A model comparison yields a Bayesian information criterion difference of ∆BIC = 65.9 between the best fit model of a transit about the companion M-dwarf and that of our nominal planetary transit scenario. We therefore rule out the companion being the source of the observed transit signal. Although the available imaging detections cover only a small part of the orbital arc of HIP 94235 B, it is possible in principle that the binary orbit could be constrained with additional astrometric information. Proper motions from Gaia can be combined with those from Hipparcos to produce long-timescale astrometric data that can be used to detect the reflex orbital motion of orbiting companions. Brandt et al. (2019) extended this technique further by jointly fitting Hipparcos-Gaia astrometry with radial velocities and relative astrometry from direct imaging, and found that the orbits of massive companions can be precisely constrained even when the orbital periods are much longer than the observational duration. This method has been profitably applied to many further systems; Bowler et al. (2021), for example, were able to extract precise orbital parameters and masses for two white dwarfs with orbital periods in excess of 200 yrs despite possessing no more than ∼ 30 yrs of observational data in both cases.
Inspection of the Hipparcos-Gaia Catalog of Accelerations (Brandt 2021) shows that HIP 94235 displays a statistically significant astrometric acceleration of 0.24 mas yr −1 (χ 2 = 58.5) between the Gaia and Hipparcos-Gaia proper motions, equivalent to a drift in the stellar tangential velocity of ≈ 70 m s −1 , which can plausibly be attributed to HIP 94235 B. This motivates us to attempt an orbital fit for HIP 94235 B based on the available data.
To initialize our model we first used orbitize! (Blunt et al. 2020) to fit the relative astrometry of HIP 94235 B assuming a total system mass of 1.09 + 0.26 = 1.35 M . orbitize! makes use of the Orbits For The Impatient (OFTI) Bayesian sampling method (Blunt et al. 2017) that is well-suited to fitting the orbits of directly detected companions with orbital periods much longer than the observational span such as HIP 94235 B. Next we extracted the posteriors from the orbitize! fit and calculated the corresponding χ 2 value for a fit to the Hipparcos-Gaia astrometry for each set of orbital parameters assuming stellar masses of M A = 1.09 M , M B = 0.26 M . This allowed us to identify a tightly constrained initial parameter space for most orbital parameters.
To model the orbit of HIP 94235 B, we run a joint fit to the astrometry based on that of Brandt et al. (2019). For modelling the Hipparcos-Gaia astrometry we use the equations described in Venner et al. (2021b), while the corresponding expressions for the relative astrometry can be found in Pourbaix (1998). As in Venner et al. (2021a,b), we resample the Hipparcos and Gaia proper motions using the observational epochs recorded in the Hipparcos Intermediate Astrometric Data for the former and the Gaia Observation Forecast Tool for the latter. To explore the model parameter space we use the differential evolution MCMC sampler edmcmc (Vanderburg 2021).
A total of 11 parameters are used for the model: the system parallax ϖ, the primary mass M A , the secondary mass M B , the semi-major axis a, the eccentricity e and argument of periastron ω parameterized as √ e sin ω and √ e cos ω, the mean anomaly at an arbitrary reference epoch BJD = 2457000, the orbital inclination i, the longitude of node Ω, and finally two terms for the proper motion of the system barycenter.
The radial velocity trend generated by HIP 94235 B is too small to be detected in the available data so we do not make use of RV data in the joint fit. The lack of radial velocity information in the fit results in the classical 180-degree degeneracy in the longitude of node Ω and argument of periastron ω; following convention we report the solution with Ω in the range [0, 180] deg. The argument of periastron used in our model is that of the primary's orbit rather than that of the companion.
Of the 11 parameters used in the joint model, the parallax was assigned a Gaussian prior of 17.061 ± 0.037 mas based on the Gaia EDR3 astrometric solution, while the primary mass was assigned a prior based on the stellar parameters in Table 3 to avoid unduly biasing the model. Initial trials of the orbital fit were run without informed priors on the secondary mass M B , however these runs tended to produce results skewed towards implausibly small values for this parameter (M B < 0.1 M ), which in turn resulted in excessively broad distributions in the orbital parameters. It was therefore deemed prudent to adopt an informed prior of M B = 0.26 ± 0.04 M for the final model. The results of our joint model for the orbit of HIP 94235 B are presented in Table 6. Despite the span of observations being much shorter than the orbital period of the binary, we are able to robustly constrain most orbital parameters. We measure a semi-major axis of a = 56 +9 −7 AU for HIP 94235 B, approximately ≈ 80% larger than the projected separation, corresponding to an orbital period of 365 +92 −69 yr. Our model shows a preference for relatively low orbital eccentricities (0.25 +0.22 −0.14 , e < 0.61 at 95% confidence), resulting in a periastron separation of 43 +11 −15 AU for the binary. We obtain a tightly constrained orbital inclination of 67.8 +2.7 −2.9 degrees and a precise longitude of node of 20 +11 −7 degrees, while our measurements of ω and the time of periastron for HIP 94235 B are comparatively imprecise (300 +30 −80 deg, 2184 +107
−74 CE). Our fit to the astrometric data is shown in Figures 15 and 16. In the fit to the absolute astrometry it can be observed that the Hipparcos proper motion measurement is too imprecise to significantly detect the astrometric signal. Conversely, the Gaia and averaged Hipparcos-Gaia proper motions are so precise that their uncertainties are not visible in the figure, and it is these measurements that drive the detection of the astrometric reflex signal of HIP 94235 B. The importance of the > 10-year timespan of relative astrometry is evident from Figures 15 and 16, as were it not for the detection of HIP 94235 B in an archival NaCo observation in 2010 it would not be possible to derive such strong constraints on the binary orbit.
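As a quick consistency check of these numbers, Kepler's third law in solar units, P[yr] = sqrt(a[AU]³ / M_tot[M☉]), recovers the quoted period from the fitted semi-major axis and the adopted component masses:

```python
# Kepler's third law in solar units: P[yr] = sqrt(a[AU]**3 / M_tot[Msun]).
a_au, m_tot = 56.0, 1.09 + 0.26
P_yr = (a_au**3 / m_tot) ** 0.5
print(f"P ~ {P_yr:.0f} yr")   # ~361 yr, consistent with the quoted 365 +92/-69 yr
```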
We note that the astrometric orbit solution is dependent on the priors we adopt for the system. For example, when the mass prior for HIP 94235 B is removed, the orbital semi-major axis becomes degenerate with eccentricity. As such, future diffraction limited observations remain important in validating and refining the orbital parameters of the system.
Based on our results, we predict the magnitude of the radial velocity trend on HIP 94235 generated by HIP 94235 B to be 9.6 ± 1.3 m s −1 yr −1 . This trend is not detectable in our RV data, but it is possible that future high-precision radial velocity measurements will be able to detect this acceleration. If so, the radial velocity information could then be used to improve the constraints on the orbit of HIP 94235 B.

DISCUSSION

We report the discovery and statistical validation of a mini-Neptune around a bright young Sun-like star. HIP 94235 b is a 3.00 +0.32 −0.28 R ⊕ planet in a 7.7 day period orbit around its V mag = 8.31 host star. Based on its kinematics, HIP 94235 can be placed in the AB Doradus moving group, with an age of ∼ 120 Myr. The kinematics age is in agreement with that expected from the stellar rotation rate, lithium abundance, and X-ray emission intensity.
The 600 ppm transits of HIP 94235 b were identified at a signal-to-noise of 9.7 from TESS Extended Mission Sector 27 observations. However, the 2% stellar variability with its 2-day periodicity led to the planetary signal slipping through the TESS official planet selection processes. The detection of HIP 94235 b demonstrated the necessity of a dedicated search for planets around noisy, active young stars. Confirmation of transits came from the CHEOPS mission. We obtained five orbits of observations with CHEOPS to recover the transit of HIP 94235 b. Such follow-up would have been difficult to schedule with ground-based facilities due to the shallow transit. Figure 6 demonstrates the importance of CHEOPS follow-up in preserving the transit timing ephemeris. Without such observations, the timing derived from the three transits observed in the single-sector TESS observation would have eroded by ∼ 1 hour per year. If a follow-up confirmation was not obtained within the first year after discovery, targeted transit observations would have been difficult to schedule and the recovery of such a small planet challenging (see Dragomir et al. 2020).
Figure 15. The dotted line marks the intersection of the planes of the orbit and of the sky, and the black arrowhead indicates the direction of motion. Despite the short arc of observations, our joint fit allows us to robustly constrain several orbital parameters, including strong constraints on the orbital inclination (i = 67.8 +2.7 −2.9 deg) and semi-major axis (a = 56 +9 −7 AU) of the binary.

HIP 94235 b is one of the smallest planets that has been found transiting a young star. Figure 17 shows the position of HIP 94235 b in the distribution of small planets around young stars. Most TESS planets that transit young stars, including HIP 94235 b, have radii that place them above the radius valley delineated by Fulton et al. (2017) and Owen & Wu (2017). HIP 94235 b is at the edge of detectability for the TESS light curve of HIP 94235, and selection biases likely shape the current TESS distribution of young planets. However, Rizzuto et al. (2017) found young clusters and associations surveyed by K2 hosted planets larger than the equivalent distribution about field stars. A similar holistic study of young stars surveyed by TESS may elucidate the radius evolution timescale for young planets. Interestingly, despite continuous monitoring from the K2 mission of ∼ 1000 Pleiades members, no confirmed planets have yet been found in the 125 Myr old cluster (e.g. Hartman et al. 2010; Rizzuto et al. 2017). Systems such as TOI-451 in the Pisces-Eridanus stream (Meingast et al. 2019; Curtis et al. 2019), and HIP 94235 in AB Doradus, may help constrain the occurrence rates and radius properties of planets at the ∼ 100 Myr age range. These can help infer if the absence of planets in the Pleiades is due to detection biases, or if other astrophysical mechanisms may be at play.
Planets like HIP 94235 b, lying near the edge of the sub-Neptune valley, can provide key observational tests for the mechanisms of mass loss in young planets. HIP 94235 b and other recently discovered planets around young stars are subjected to significant high-energy radiation, which can be a dominant driver for rapid mass loss within the first hundreds of millions of years post formation (Owen & Jackson 2012). To estimate the mass evolution of HIP 94235 b, we adopt the analytical approach from Owen & Wu (2017). We find that we can replicate the current radius of HIP 94235 b via a planet model that has a high initial envelope mass fraction, and undergoes rapid mass loss with a time scale of ∼ 100-200 Myr (Figure 18). HIP 94235 b has a current energy-limited mass-loss rate of ∼ 5 M ⊕ Gyr −1 . At the end of this process, we expect the envelope mass fraction to reduce from 10% to ∼ 1% of the total planet mass. Such mass and radius evolution is expected for many close-in Neptunes and super-Earths around Sunlike stars. Figure 19 shows the X-ray irradiation experienced by HIP 94235 b compared to other systems about X-ray sources identified in the Second ROSAT Point Source Catalog (Boller et al. 2016). To convert ROSAT count rates to fluxes, we adopt the calibration in Fleming et al. (1995) and parallaxes made available from Gaia Collaboration et al. (2018). The majority of planetary systems around ROSAT X-ray sources are young, and the recent TESS discoveries of nearby planet-hosting young stars are the most suitable targets for follow-up X-ray and UV observations. AU Mic (Plavchan et al. 2020) has the highest ROSAT count rate for any planet hosting star. Similarly, other nearby systems such as the planets around V1298 Tau (David et al. 2019b,a), the ultra-short period super-Earth TOI-1807b (Hedges et al. 2021), DS Tuc Ab (Newton et al. 2019), and HIP 94235 b mark the inner boundary of X-ray irradiation for small planets. The only planets around stars not identified as young in literature residing in more energetic environments are the hot Jupiter NGTS-16b (Tilbrook et al. 2021) and the Earth-sized inner planet in the Kepler-1514 system (Dalba et al. 2021). Neither have radii susceptible to significant modification by photoevaporation. Recent observations by Zhang et al. (2022b) detected the Lyman-alpha transit of HD63433c. Similar observations in the X-ray for young active stars have the potential of anchoring the photoevaporation models.
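For orientation, the textbook energy-limited rate is Ṁ = η π R_p³ F_XUV / (G M_p). In the sketch below, the efficiency η and the XUV flux are assumptions; Owen & Wu (2017) use their own efficiency prescription and flux histories, so this only illustrates the scaling with η and F_XUV, not the quoted ∼5 M⊕ Gyr⁻¹ rate.

```python
# Energy-limited mass loss, Mdot = eta * pi * Rp**3 * F_xuv / (G * Mp), in cgs.
# eta and F_xuv are assumed; the result scales linearly with both.
import numpy as np

G = 6.674e-8                       # cgs gravitational constant
Rp = 3.0 * 6.371e8                 # planet radius [cm]
Mp = 11.2 * 5.972e27               # predicted planet mass [g] (see below)
F_xuv = 1e5                        # assumed XUV flux at the orbit [erg/cm^2/s]
eta = 0.1                          # assumed heating efficiency

mdot = eta * np.pi * Rp**3 * F_xuv / (G * Mp)         # g/s
print(f"{mdot * 3.156e16 / 5.972e27:.2f} Mearth/Gyr") # order-of-magnitude only
```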
Other avenues of mass loss are also possible. Atmospheric erosion from giant impact events can occur even more quickly, before compact super-Earth systems settle dynamically, acting on the tens of millions of years time scale post disk dispersal (e.g. Izidoro et al. 2017). Core-powered mass loss may also reproduce the observed radius distribution independent of the host star irradiation (Lopez & Fortney 2013;Ginzburg et al. 2018). By inferring the ages of Kepler systems through numerous indicators, David et al. (2021) found the radius gap may form at longer timescales. As such, it is unclear which mechanism dominates in shaping our current planet distribution.
Figure 18. At 100 Myr, HIP 94235 b is undergoing run-away mass-loss evolution. The top panel shows the mass evolution of the primordial envelope for the planet as per Owen & Wu (2017). Within the first ∼ 100 Myr, HIP 94235 b is expected to lose most of its primordial envelope. The bottom panel shows the corresponding evolution of the planet radius (R⊕) expected for HIP 94235 b over the next few hundred million years.

Figure 19. A large fraction of host stars that exhibit significant X-ray emission are known young stars. The X-ray irradiation received by all known transiting planet systems that have X-ray counterparts in the second ROSAT point source catalog (Boller et al. 2016) is shown. We see a paucity of planets with radii between 2−10 R⊕ in energetic environments. Older systems are marked by open light blue points; planets around known young stars or association and cluster members are marked by closed points, with older systems in light blue and younger systems in red. For brevity, names with 'TOI' have been truncated to their TOI numbers only, and the V1298 Tau system is denoted by 'Vb', 'Vc', 'Vd', 'Ve'.

At 50 AU, HIP 94235 is one of the tightest stellar binaries (Su et al. 2021). Like the DS Tuc AB system (Newton et al. 2019), the orbit of the binary is aligned with that of the planetary orbit. This follows the trend from Christian et al. (2022) that wide transiting planet-hosting binaries with separations between 100-700 AU are preferentially found in edge-on orbits. Christian et al. (2022) suggest that such trends may be due to the companions being formed from disk fragmentation, or the realignment of the inner disk by the perturbing outer companion. The efforts by TESS follow-up teams to provide diffraction-limited imaging of a majority of planet candidates, such as HIP 94235, will help probe the continuation of this trend to < 100 AU separations.

The brightness of HIP 94235 makes the system suitable for follow-up atmospheric characterization with the next generation of space- and ground-based facilities. Adopting the mass-radius relationship from Wolfgang et al. (2016), HIP 94235 b has a predicted mass of 11.2 ± 1.4 M⊕, yielding a transmission spectroscopy metric of 96 ± 25. As such, HIP 94235 b ranks amongst the top dozen known planets between 1.6−4 R⊕ in its suitability for follow-up transmission observations. In the era of JWST, we can compare the atmosphere of HIP 94235 b against planets of similar radii around older stars. We may find that young planets host primarily primordial atmospheres dominated by hydrogen and helium, with older planets hosting heavier water-rich atmospheres, or that some highly energetic environments may never allow secondary atmospheres to form.

Orbital obliquities of young planets can help constrain the timescales of migration for planets that may have formed further out in their planetary systems. We expect a ∼ 10 m s−1 Rossiter-McLaughlin signal if HIP 94235 b is in a well-aligned projected orbit. Stellar activity will be a limiting factor in achieving a secure detection of the spectroscopic transit, though past works have shown that this can be mitigated on transit timescales due to the smoothly varying nature of the rotationally modulated velocity noise (e.g. Palle et al. 2020; Benatti et al. 2021).
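The transmission metric quoted above (96 ± 25) can be approximately reproduced under the assumption that it follows the Kempton et al. (2018) transmission spectroscopy metric definition, which the text does not spell out; a minimal sketch, with the stellar radius inferred from Rp/R⋆ in Table 4:

```python
def tsm(rp_earth, mp_earth, teq_k, rs_sun, j_mag, scale=1.15):
    """Transmission spectroscopy metric; scale = 1.15 is the Kempton et al.
    (2018) factor for 2.75 < Rp < 4 R_Earth (an assumption here)."""
    return scale * rp_earth**3 * teq_k / (mp_earth * rs_sun**2) * 10 ** (-j_mag / 5)

# Rp and Teq from Table 4; Mp from Wolfgang et al. (2016) as quoted in the
# text; Rstar ~ 1.09 Rsun inferred from Rp/Rstar = 0.0253; J = 7.201 (Table 3).
print(tsm(3.00, 11.2, 1060.0, 1.09, 7.201))   # ~90, consistent with 96 +/- 25
```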
We respectfully acknowledge the traditional custodians of all lands throughout Australia, and recognise their continued cultural and spiritual connection to the land, waterways, cosmos, and community. We pay our deepest respects to all Elders, ancestors and descendants of the Giabal, Jarowair, and Kambuwal nations, upon whose lands the MINERVA-Australis facility at Mt Kent is situated. GZ thanks the support of the ARC DECRA program DE210101893. CW and GZ thank the support of the TESS Guest Investigator Program G03007. CH thanks the support of the ARC DECRA program DE200101840. EG gratefully acknowledges support from the David and Claudia Harding Foundation in the form of a Winton Exoplanet Fellowship. CHEOPS is an ESA mission in partnership with Switzerland with important contributions to the payload and the ground segment from Austria, Belgium, France, Germany, Hungary, Italy, Portugal, Spain, Sweden and the United Kingdom. We thank the CHEOPS GO Programme and Science Operations Centre for support and for help in the preparation and analysis of the CHEOPS observations. This research has used data from the CTIO/SMARTS 1.5m telescope, which is operated as part of the SMARTS Consortium by RECONS. This study was based in part on observations made using the Las Cumbres Observatory global telescope network, using time allocated by the National Science Foundation's NOIRLab (NOIRLab Prop. ID NOAO2021A-009; principal investigator: J. Hartman). Some of the observations reported in this paper were obtained with the Southern African Large Telescope (SALT). Some of the observations in the paper made use of the High-Resolution Imaging instrument Zorro obtained under Gemini LLP Proposal Number: GN/S-2021A-LP-105. Zorro was funded by the NASA Exoplanet Exploration Program and built at the NASA Ames Research Center by Steve B. Howell, Nic Scott, Elliott P. Horch, and Emmett Quigley. Zorro was mounted on the Gemini North (and/or South) telescope of the international Gemini Observatory.
Figure 1. The field around HIP 94235 from (Left) Deep Sky Survey R' band, (Middle) TESS, and (Right) CHEOPS. The images have approximately the same field sizes (5′) and orientations.

Figure 3. Individual transits of HIP 94235 b from TESS Sector 27. The top row shows the transits pre-detrending; the bottom row shows the same transits post-detrending. The best fit transit model is overlaid in red.

Figure 4. Phase-folded transit light curves from TESS (Top) and CHEOPS (Bottom). The black points show the dataset binned at 30 minute intervals. The red lines show the respective models from our global analysis.

Figure 6. Transit timing uncertainty for HIP 94235 b with and without follow-up CHEOPS observations. With only the single sector of TESS observations, the timing uncertainty erodes by ∼ 1 hr yr−1, quickly making targeted transit follow-up difficult. The CHEOPS transit allowed the transit ephemeris to be preserved.

Figure 7. Radial velocities for HIP 94235 from CHIRON (red) and MINERVA-Australis (black), with errorbars representing the quadrature addition of the observational uncertainties and the best fit jitter. No orbital variations were detected, as expected for a small planet about an active, rapidly rotating star. The velocity orbit upper limits at 1σ = 0.35 Mjup and 3σ = 1.27 Mjup are shown in the solid and dashed red curves, respectively.

Figure 8. HIP 94235 can be placed kinematically in the ∼ 120 Myr old AB Doradus moving group. The figure shows the photometric and space motion properties of stars in AB Doradus and the Pleiades from Gagné et al. (2018). Both groups have ages of ∼ 120 Myr, and are thought to be co-moving (Luhman et al. 2005; Ortega et al. 2007; Kounkel & Covey 2019). The top left panel shows the Gaia color magnitude diagram for the two clusters. The remaining panels show the galactic positions X, Y, Z and motions U, V, W of the selection. HIP 94235 is marked by the red star in all panels. The local standard of rest has not been corrected for in the velocities.

Figure 9. The distribution of stars sharing the tangential velocity of HIP 94235 identified in our neighborhood search. Top left: we find 140 stars that exhibit clear rotational signatures in their TESS light curves. The blue points are located within 5 km s−1 of HIP 94235 in U, V, and W. These blue points form a rotation sequence that agrees with the expected age of HIP 94235 at 120 Myr. The Pleiades sequence from Rebull et al. (2016) is shown in yellow. The remaining panels show the U, V, and W velocity distributions of the tangentially co-moving stars. A clear group can be found around HIP 94235, and is marked out by the 5 km s−1 boundary in grey.
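The space-velocity selection described in this caption can be sketched with astropy (a minimal illustration of ours, not the authors' pipeline, which per a footnote later in the paper uses the Comove package). The systemic radial velocity below is an assumed round value near the CHIRON measurements of Table 1, and astropy's sign convention for U (positive toward the Galactic center) may differ from other conventions:

```python
import astropy.units as u
from astropy.coordinates import SkyCoord

# HIP 94235 astrometry from Table 3; RV is an assumed round number.
hip94235 = SkyCoord(ra="19h10m57.87s", dec="-60d16m21.49s",
                    distance=(1000.0 / 17.061) * u.pc,
                    pm_ra_cosdec=11.632 * u.mas / u.yr,
                    pm_dec=-100.836 * u.mas / u.yr,
                    radial_velocity=8.3 * u.km / u.s)

# Galactic-frame Cartesian velocity components (U, V, W).
U, V, W = hip94235.galactic.velocity.d_xyz
print(U, V, W)   # compare with Table 3: (-5.61, -27.03, -11.82) km/s
```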
Figure 11. Age indicators of HIP 94235 are consistent with those of AB Doradus and Pleiades members. AB Doradus members are shown in open orange circles, Pleiades members in closed orange circles, and Praesepe members in grey. The top panel shows the distribution of Li 6708 Å doublet equivalent widths for AB Doradus (da Silva et al. 2009), Pleiades, and Praesepe members …
Figure 12. Left: Lomb-Scargle periodogram of HIP 94235 from its TESS light curve. The red dashed line denotes the adopted rotation period, while the dotted purple lines denote the aliases at P/2 and 2P. The orbital period of the planet is marked in orange. Right: phase-folded TESS light curve, with a color gradient applied such that later periods are a lighter blue.
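A rotation-period measurement of this kind can be sketched with astropy's Lomb-Scargle implementation; the input file name is hypothetical, and this is not the authors' code:

```python
import numpy as np
from astropy.timeseries import LombScargle

# time [days] and normalized flux from a detrended light curve (hypothetical file).
time, flux = np.loadtxt("hip94235_detrended_lc.txt", unpack=True)

frequency, power = LombScargle(time, flux).autopower(
    minimum_frequency=1.0 / 20.0,   # periods up to 20 d
    maximum_frequency=1.0 / 0.5)    # periods down to 0.5 d
prot = 1.0 / frequency[np.argmax(power)]
print(f"P_rot ~ {prot:.2f} d")      # Table 3 lists 2.24 +/- 0.11 d
```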
Figure 13. Spectral energy distribution of HIP 94235.

…strained. Several authors (e.g. Calissendorff & Janson 2018; Snellen & Brown 2018; Brandt 2018; Kervella et al. 2019) have demonstrated that it is possible to combine proper motion measurements from Gaia and Hipparcos …

Figure 14. Gemini South Zorro speckle observations of HIP 94235 on 2021-10-22. The speckle auto cross correlation functions are shown in the top row, with the image from the Blue camera at 562 nm on the left, and the Red camera at 832 nm on the right. A companion is detected in the red arm with a contrast of ∆m = 5.3 at a separation of 0.6″. No companions were detected in the blue arm, with a limit of ∆m > 4.7. The companion is marked in the red arm image.

Figure 15. Keplerian orbital model to the proper motion of HIP 94235 (top left in right ascension, top right in declination) and relative astrometry of HIP 94235 B (bottom left in separation, bottom right in position angle). The best-fit model is shown in black, while the orbits in gray are drawn randomly from the posteriors. In the fit to the proper motions, all values are normalised to the proper motion of the system barycenter.

Figure 16. The projected sky orbit of HIP 94235 B. The data format is as in …

Figure 17. HIP 94235 b amongst the distribution of small planets. The left panel shows the radius and equilibrium temperature distribution of planets from Fulton et al. (2017). The sizes of the points mark the V band magnitude of each host star, with the scale defined by the left panel. Planets around known young stars are marked by the individual points. HIP 94235 b lies near the edge of the mini-Neptune desert. The right panel compares the planetary radius and host star magnitude of HIP 94235 b to other systems. Young planets are highlighted. The cyan points mark other transiting systems from the NASA Exoplanet Archive (January 2021). Young systems included in the figure are TOI-1227 (Mann et al. 2021), V1298 Tau (David et al. 2019a,b), K2-33 (Mann et al. 2016b; David et al. 2016), DS Tuc A (Newton et al. 2019; Benatti et al. 2019), Kepler-63 (Sanchis-Ojeda et al. 2013), HIP 67522 (Rizzuto et al. 2020), HD63433 (Mann et al. 2020), TOI-451 (Newton et al. 2021), AU Mic (Plavchan et al. 2020), TOI-251 (Zhou et al. 2021), TOI-942 (Zhou et al. 2021; Carleo et al. 2021), K2-284 (David et al. 2018a), TOI-837 (Bouma et al. 2020), TOI-1098 (Tofflemire et al. 2021), K2-233 (David et al. 2018b), K2-77 (Gaidos et al. 2017), K2-95, K2-100, K2-101, K2-102, K2-104, EPIC-211822797 (Mann et al. 2017), TOI-1807, TOI-2076 (Hedges et al. 2021), Kepler 1627A (Bouma et al. 2021), TOI-1268 (Dong et al. 2022).
Figure 2. Sector 27 2 min cadence TESS photometry of the HIP 94235 system. Top: Simple Aperture TESS light curve of HIP 94235. The three detected transits of HIP 94235 b are indicated by the arrow marks. Bottom: the custom detrended TESS light curve of HIP 94235 is shown, with the best fit model overlaid in red.

(Light-curve panels not reproduced; axis labels recovered from the plots: Rel. Flux and Detrended Flux against BJD−2457000 for the TESS panels, and Optimal Aperture flux against Roll angle (deg), Raw − transit model, and Corrected Flux against Orbital Phase for the CHEOPS panels.)

… Radial velocities were measured 1

1 http://www.saao.ac.za/ akniazev/pub/HRS MIDAS/HRS pipeline.pdf
Table 1. Radial Velocity Measurements of HIP 94235

BJD | RV (km s−1) | RV Error (km s−1) | Instrument
2459305.91348 | 8.131 | 0.100 | CHIRON
2459309.87630 | 8.211 | 0.134 | CHIRON
2459311.89601 | 8.220 | 0.073 | CHIRON
2459320.89052 | 8.152 | 0.082 | CHIRON
2459327.86151 | 8.246 | 0.079 | CHIRON
2459335.86210 | 8.340 | 0.120 | CHIRON
2459337.89892 | 8.123 | 0.099 | CHIRON
2459339.86391 | 8.144 | 0.087 | CHIRON
2459459.55982 | 8.069 | 0.087 | CHIRON
2459461.57939 | 8.255 | 0.083 | CHIRON
2459463.56573 | 8.285 | 0.072 | CHIRON
2459464.59690 | 8.273 | 0.068 | CHIRON
2459465.56939 | 8.271 | 0.282 | CHIRON
2459467.53083 | 8.394 | 0.092 | CHIRON
2459478.48380 | 8.459 | 0.098 | CHIRON
2459479.48429 | 8.123 | 0.119 | CHIRON
2459480.51450 | 8.355 | 0.093 | CHIRON
2459481.51666 | 8.466 | 0.096 | CHIRON
2459482.55682 | 8.350 | 0.068 | CHIRON
2459485.52901 | 8.485 | 0.062 | CHIRON
2459506.52501 | 8.237 | 0.071 | CHIRON
2459508.52808 | 8.107 | 0.102 | CHIRON
2459319.14408 | 8.841 | 0.104 | MINERVA-Australis Tel 3
2459324.21159 | 8.964 | 0.175 | MINERVA-Australis Tel 3
2459332.10157 | 8.914 | 0.178 | MINERVA-Australis Tel 3
2459348.10572 | 9.096 | 0.165 | MINERVA-Australis Tel 3
2459452.00282 | 9.152 | 0.143 | MINERVA-Australis Tel 3
2459453.04237 | 8.976 | 0.150 | MINERVA-Australis Tel 3
2459453.99779 | 9.120 | 0.140 | MINERVA-Australis Tel 3
2459456.02452 | 9.171 | 0.139 | MINERVA-Australis Tel 3
2459476.98480 | 8.961 | 0.109 | MINERVA-Australis Tel 3
2459319.14408 | 9.208 | 0.161 | MINERVA-Australis Tel 4
2459324.21159 | 8.766 | 0.101 | MINERVA-Australis Tel 4
2459348.10572 | 8.995 | 0.167 | MINERVA-Australis Tel 4
2459452.00282 | 9.177 | 0.194 | MINERVA-Australis Tel 4
2459453.04237 | 9.017 | 0.146 | MINERVA-Australis Tel 4
2459453.99779 | 9.265 | 0.148 | MINERVA-Australis Tel 4
2459476.98480 | 9.121 | 0.179 | MINERVA-Australis Tel 4
2459319.14408 | 8.955 | 0.176 | MINERVA-Australis Tel 5
2459324.21159 | 9.085 | 0.137 | MINERVA-Australis Tel 5
2459332.10157 | 8.741 | 0.123 | MINERVA-Australis Tel 5
2459348.10572 | 9.077 | 0.181 | MINERVA-Australis Tel 5
Table 2. Candidate co-moving and co-evolving stars with HIP 94235

TIC | RA (deg) | DEC (deg) | U (km s−1) | V (km s−1) | W (km s−1) | B (mag) | V (mag) | T (mag) | K (mag) | Prot (d) | Li EW (Å)
71314712 | 330.63261 | −47.67748 | −4.0 | −26.0 | −4.0 | 9.3 | 8.8 | 8.2 | 7.4 | 2.4 |
91231096 | 310.66707 | −46.67197 | −1.9 | −26.3 | −1.9 | 11.0 | 10.0 | 9.1 | 7.6 | 4.6 |
93839949 | 176.15935 | −49.41756 | −7.3 | −24.7 | −7.3 | 9.9 | 8.9 | 8.0 | 6.5 | 1.9 |
101403239 | 300.79541 | −52.96796 | −1.7 | −26.0 | −1.7 | 12.8 | 11.9 | 10.9 | 9.4 | 12.7 |
108272865 | 285.27521 | −28.71442 | −5.2 | −27.4 | −5.2 | 9.0 | 8.5 | 8.0 | 7.2 | 2.4 |
142144969 | 99.73088 | −74.42526 | −5.2 | −26.1 | −5.2 | 10.5 | 9.7 | 9.0 | 7.9 | 7.0 |
197597944 | 327.20235 | −39.48626 | −8.5 | −28.1 | −8.5 | 10.3 | 9.7 | 9.0 | 8.0 | 4.1 |
206603521 | 21.80129 | −57.29366 | −7.9 | −26.8 | −7.9 | 14.5 | 13.0 | 11.1 | 8.8 | 3.8 |
234299476 | 358.66836 | −60.85971 | −8.3 | −27.2 | −8.3 | 11.1 | 10.0 | 9.0 | 7.5 | 9.0 |
270200832 | 331.81221 | −74.08641 | −3.0 | −26.4 | −3.0 | 9.5 | 9.0 | 8.5 | 7.8 | 2.0 |
270259954 | 359.04562 | −39.05314 | −7.5 | −27.7 | −7.5 | 9.2 | 8.2 | 7.3 | 5.9 | 7.8 |
270377865 | 345.08089 | −26.15444 | −3.1 | −26.3 | −3.1 | 8.1 | 7.5 | 6.9 | 5.9 | 3.6 |
278271178 | 112.29658 | −82.20387 | −4.4 | −24.9 | −4.4 | 14.3 | 13.1 | 11.1 | 8.9 | 10.9 |
280683734 | 342.73660 | −79.16594 | −7.5 | −27.0 | −7.5 | 9.6 | 9.0 | 8.4 | 7.6 | 2.6 | 0.136
290081380 | 319.52174 | −29.52160 | −7.0 | −29.4 | −7.0 | 15.1 | 13.9 | 11.5 | 9.3 | 10.3 |
341498715 | 318.52194 | −63.70066 | −6.9 | −27.1 | −6.9 | 10.9 | 10.1 | 9.4 | 8.2 | 5.9 |
357709300 | 189.06702 | −79.52628 | −7.3 | −26.8 | −7.3 | 12.2 | 11.0 | 10.2 | 8.7 | 7.0 |
369897885 | 171.32165 | −84.95449 | −8.4 | −25.4 | −9.6 | 8.1 | 7.6 | 7.2 | 6.5 | 1.5 | 0.030
389660501 | 321.33728 | −43.51207 | −3.5 | −25.8 | −3.5 | 13.4 | 12.1 | 11.0 | 9.1 | 7.1 |
405077613 | 112.74683 | −84.32411 | −8.6 | −27.2 | −8.6 | 10.8 | 10.0 | 9.2 | 7.9 | 4.9 | 0.200
409141582 | 312.08079 | −28.02455 | −5.7 | −27.3 | −5.7 | 13.7 | 13.6 | 11.4 | 9.1 | 2.7 |
409141582 | 312.08194 | −28.02453 | −9.2 | −25.0 | −9.2 | 13.7 | 13.6 | 11.4 | 9.1 | 2.7 |
439417806 | 301.15247 | −35.21447 | −3.7 | −26.2 | −3.7 | 9.4 | 8.9 | 12.2 | 7.7 | 1.4 |
464646604 | 287.74114 | −60.27264 | −5.7 | −27.0 | −5.7 | 8.9 | 8.3 | 7.8 | 6.9 | 2.2 | 0.141
466277708 | 305.70238 | −65.25823 | −6.9 | −27.0 | −6.9 | 10.3 | 9.6 | 9.0 | 8.1 | 3.8 |
Table 3. Properties of HIP 94235

Parameter | Value | Source
Astrometry
Right Ascension | 19:10:57.87 | Gaia Collaboration et al. (2021)
Declination | −60:16:21.49 | Gaia Collaboration et al. (2021)
Parallax (mas) | 17.061 ± 0.037 | Gaia Collaboration et al. (2021)
Gaia (2016.1) RA Proper Motion (mas yr−1) | 11.632 ± 0.025 | Gaia Collaboration et al. (2021)
Gaia (2016.1) Dec Proper Motion (mas yr−1) | −100.836 ± 0.025 | Gaia Collaboration et al. (2021)
Hipparcos (1991.1) RA Proper Motion (mas yr−1) | 12.755 ± 0.890 | (1997)
Hipparcos (1991) Dec Proper Motion (mas yr−1) | … | (1997)
Photometry
T (mag) | 7.758 ± 0.006 | Stassun et al. (2018)
B (mag) | 8.943 ± 0.027 | Henden et al. (2016)
V (mag) | 8.31 ± 0.03 | Henden et al. (2016)
J (mag) | 7.201 ± 0.023 | Skrutskie et al. (2006)
H (mag) | 6.966 ± 0.023 | Skrutskie et al. (2006)
K (mag) | 6.881 ± 0.027 | Skrutskie et al. (2006)
Gaia (mag) | 8.173207 ± 0.00031 | Gaia Collaboration et al. (2021)
GaiaBP (mag) | 8.4634 ± 0.0013 | Gaia Collaboration et al. (2021)
GaiaRP (mag) | 7.70637 ± 0.00087 | Gaia Collaboration et al. (2021)
WISE W1 (mag) | 6.841 ± 0.066 | Cutri & et al. (2012)
WISE W2 (mag) | 6.823 ± 0.02 | Cutri & et al. (2012)
WISE W3 (mag) | 6.836 ± 0.016 | Cutri & et al. (2012)
WISE W4 (mag) | 6.728 ± 0.067 | Cutri & et al. (2012)
Kinematics and Position
U (km s−1) | −5.61 ± 0.34 | Derived
V (km s−1) | −27.03 ± 0.17 | Derived
W (km s−1) | −11.82 ± 0.18 | Derived
Distance (pc) | … | Derived
Mass (M☉) | … | …
Radius (R☉) | … | …
Teff (K) | 5991 ± 50 | This paper
Surface Gravity log g (cgs) | 4.460 ± 0.05 | This paper
[m/H] | 0.0 | Barenfeld et al. (2013)
v sin I (km s−1) | 24.4 ± 1.0 | This paper
I (°) | > 70 (3σ) | Calculated as per Masuda & Winn (2020)
Age (Myr) | 50-150 | Zuckerman et al. (2004); Luhman et al. (2005); Bell et al. (2015)
Limb darkening coefficients (TESS) | (0.17, 0.41) | Claret (2017)
Limb darkening coefficients (CHEOPS) | … | …
Prot (days) | 2.24 ± 0.11 | This paper
Lithium 6708 Å Equivalent Width (Å) | 0.1413 ± 0.0092 | This paper
X-Ray luminosity log(LX/Lbol) | … | …
Table 4. Derived parameters for HIP 94235 b

Parameter | Joint model | Priors
Fitted Parameters
T0 (BJD) | 2459037.8704 +0.0011 −0.0022 | Uniform
P (days) | 7.713057 +0.000021 −0.000021 | Uniform
Rp/R⋆ | 0.0253 +0.00075 −0.00059 | Uniform
i (deg) | 87.14 +0.16 −0.17 | Uniform
√e cos ω | 0.07 +0.50 −0.54 | Uniform
√e sin ω | 0.26 +0.20 −0.27 | Uniform
Mp (M⊕) | < 379 (3σ) | Uniform
γ MINERVA-Australis 3 | 9013 +58 −70 | Uniform
γ MINERVA-Australis 5 | 8960 +123 −126 | Uniform
γ MINERVA-Australis 6 | 9050 +88 −81 | Uniform
γ CHIRON | 8264 +31 −30 | Uniform
RV Jitter MINERVA-Australis 3 | 80 +73 −56 | Uniform
RV Jitter MINERVA-Australis 5 | 154 +210 −98 | Uniform
RV Jitter MINERVA-Australis 6 | 140 +113 −79 | Uniform
RV Jitter CHIRON | 100 +35 −28 | Uniform
Inferred parameters
e | 0.32 +0.20 −0.20 | Derived
ω (deg) | 17 +67 −92 | Derived
Rp (R⊕) | 3.00 +0.32 −0.28 | Derived
a/R⋆ | 15.7 +1.6 −1.5 | Derived
a (AU) | 0.07870 +0.00056 −0.00017 | Derived
Krv (m s−1) | < 150 (3σ) | Derived
Teq (K) | 1060 ± 50 | Derived
Transit duration (days) | 0.103 +0.009 −0.018 | Derived
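The Teq entry is consistent with the other tabulated quantities under the standard zero-albedo, full-heat-redistribution formula Teq = Teff / √(2 a/R⋆); the table does not state its exact assumptions, so this is a cross-check of ours rather than a reproduction of the authors' calculation:

```python
import numpy as np

# Teff from Table 3, a/Rstar from Table 4.
teff, a_over_rstar = 5991.0, 15.7
print(teff / np.sqrt(2.0 * a_over_rstar))   # ~1069 K vs. the quoted 1060 +/- 50 K
```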
Table 5. Diffraction limited measurements of HIP 94235 B

Instrument | Epoch | Separation (mas) | Position Angle (°) | ∆m (mag) | Reference
VLT-NaCo | 2010-07-30 | 506 ± 7 | 150.6 ± 0.8 | 3.8 ± 0.3 (H band) | Chauvin et al. (2015), Desidera et al. (2015)
Gemini-Zorro | 2021-07-23 | 596 ± 5 | 162.87 ± 0.48 | 5.84 (832 nm) | This Work
Gemini-Zorro | 2021-10-22 | 600 ± 8 | 161.73 ± 0.75 | 5.31 (832 nm) | This Work

…band magnitudes from MIST, and adopt magnitudes for the companion of I = 10.76 ± 0.3 and H = 13.05 ± 0.4. We adopt a fixed metallicity of [M/H] = 0 as per that for the AB Doradus moving group (da Silva et al. 2009), and adopt a strong Gaussian prior for the distance to HIP 94235 B as per its Gaia parallax. We find the companion is an M-dwarf of mass 0.26 ± 0.04 M☉ and radius 0.31 ± 0.03 R☉. The companion is incapable of hosting the transiting companion responsible for the planetary transit signal detected in TESS.
Table 6. Orbital parameters of HIP 94235 B

Parameter | Median ± 1σ | Mode | Priors
Informed Priors
Parallax (mas) | 17.061 ± 0.037 | - | Gaussian
MA (M☉) | 1.094 ± 0.05 | - | Gaussian
MB (M☉) | 0.26 ± 0.04 | - | Gaussian
Fitted Parameters
a (AU) | 56 +9 −7 | 54 | Log-uniform
√e sin ω | 0.35 +0.14 −0.19 | 0.40 | Uniform
√e cos ω | 0.20 +0.48 −0.34 | 0.48 | Uniform
Mean anomaly at BJD = 2457000 (deg) | 170 +110 −80 | 130 | Uniform
i (deg) | 67.8 +2.7 −2.9 | 67.4 | sin i
Ω (deg) | 20 +11 −7 | 18 | Uniform
Barycentric RA proper motion (mas yr−1) | 10.50 ± 0.24 | 10.46 | Uniform
Barycentric declination proper motion (mas yr−1) | −102.99 ± 0.35 | −102.91 | Uniform
Derived Parameters
P (years) | 365 +92 −69 | 362 | Derived
e | 0.25 +0.22 −0.14 | 0.19 | Derived
ω (deg) | 300 +30 −80 | 320 | Derived
Periastron distance (AU) | 43 +11 −15 | 47 | Derived
Time of periastron (years CE) | 2184 +107 −74 | 2164 | Derived

…tion, while the primary mass MA was given a Gaussian prior of 1.094 ± 0.05 M☉, with an inflated uncertainty as compared to the value in …

https://github.com/adamkraus/Comove
3 https://gaia.esac.esa.int/gost/
4 https://github.com/avanderburg/edmcmc
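The fitted orbital period and semi-major axis of Table 6 are mutually consistent with Kepler's third law; a one-line check of ours:

```python
# Kepler's third law in units of yr, AU, and solar masses: P = sqrt(a^3 / M_tot).
a_au = 56.0                 # semi-major axis (Table 6, median)
m_tot = 1.094 + 0.26        # M_A + M_B (Table 6 priors)
print((a_au**3 / m_tot) ** 0.5)   # ~360 yr vs. the fitted 365 +92/-69 yr
```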
ESA 1997, The HIPPARCOS and TYCHO catalogues. Astrometric and photometric star catalogues derived from the ESA HIPPARCOS Space Astrometry Mission, ESA Special Publication, Vol. 1200
Addison, B., Wright, D. J., Wittenmyer, R. A., et al. 2019, PASP, 131, 115003
Allart, R., Bourrier, V., Lovis, C., et al. 2019, A&A, 623, A58
Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, AJ, 156, 123
Barenfeld, S. A., Bubar, E. J., Mamajek, E. E., & Young, P. A. 2013, ApJ, 766, 6
Barrado y Navascués, D., Stauffer, J. R., & Jayawardhana, R. 2004, ApJ, 614, 386
Barragán, O., Aigrain, S., Kubyshkina, D., et al. 2019, MNRAS, 490, 698
Bell, C. P. M., Mamajek, E. E., & Naylor, T. 2015, MNRAS, 454, 593
Benatti, S., Nardiello, D., Malavolta, L., et al. 2019, A&A, 630, A81
Benatti, S., Damasso, M., Borsa, F., et al. 2021, A&A, 650, A66
Benz, W., Broeg, C., Fortier, A., et al. 2021, Experimental Astronomy, 51, 109
Blunt, S., Nielsen, E. L., De Rosa, R. J., et al. 2017, AJ, 153, 229
Blunt, S., Wang, J. J., Angelo, I., et al. 2020, AJ, 159, 89
Boller, T., Freyberg, M. J., Trümper, J., et al. 2016, A&A, 588, A103
Bouma, L. G., Hartman, J. D., Brahm, R., et al. 2020, AJ, 160, 239
Bouma, L. G., Curtis, J. L., Masuda, K., et al. 2021, arXiv e-prints, arXiv:2112.14776
Bourrier, V., Lecavelier des Etangs, A., Ehrenreich, D., et al. 2018, A&A, 620, A147
Bowler, B. P., Cochran, W. D., Endl, M., et al. 2021, AJ, 161, 106
Brandt, T. D. 2018, ApJS, 239, 31
-. 2021, ApJS, 254, 42
Brandt, T. D., Dupuy, T. J., & Bowler, B. P. 2019, AJ, 158, 140
Brandt, T. D., & Huang, C. X. 2015, ApJ, 807, 58
Brown, T. M., Baliber, N., Bianco, F. B., et al. 2013, PASP, 125, 1031
Buchhave, L. A., Latham, D. W., Johansen, A., et al. 2012, Nature, 486, 375
Buckley, D. A. H., Swart, G. P., & Meiring, J. G. 2006, in Proc. SPIE, Vol. 6267, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, 62670Z
Burke, C. J., Christiansen, J. L., Mullally, F., et al. 2015, ApJ, 809, 8
Calissendorff, P., & Janson, M. 2018, A&A, 615, A149
Carleo, I., Desidera, S., Nardiello, D., et al. 2021, A&A, 645, A71
Castelli, F., & Kurucz, R. L. 2004, ArXiv Astrophysics e-prints, astro-ph/0405087
Chauvin, G., Vigan, A., Bonnefoy, M., et al. 2015, A&A, 573, A127
Choi, J., Dotter, A., Conroy, C., et al. 2016, ApJ, 823, 102
Christian, S., Vanderburg, A., Becker, J., et al. 2022, arXiv e-prints, arXiv:2202.00042
Claret, A. 2017, A&A, 600, A30
Claret, A., & Bloemen, S. 2011, A&A, 529, A75
Crause, L. A., Sharples, R. M., Bramall, D. G., et al. 2014, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9147, Ground-based and Airborne Instrumentation for Astronomy V, 91476T
Curtis, J. L., Agüeros, M. A., Mamajek, E. E., Wright, J. T., & Cummings, J. D. 2019, AJ, 158, 77
Cutri, R. M., & et al. 2012, VizieR Online Data Catalog, II/311
Czesla, S., Schröter, S., Schneider, C. P., et al. 2019, PyA: Python astronomy-related packages, ascl:1906.010
da Silva, L., Torres, C. A. O., de La Reza, R., et al. 2009, A&A, 508, 833
Dalba, P. A., Kane, S. R., Isaacson, H., et al. 2021, AJ, 161, 103
David, T. J., Petigura, E. A., Luger, R., et al. 2019a, ApJL, 885, L12
David, T. J., Hillenbrand, L. A., Petigura, E. A., et al. 2016, Nature, 534, 658
David, T. J., Mamajek, E. E., Vanderburg, A., et al. 2018a, AJ, 156, 302
David, T. J., Crossfield, I. J. M., Benneke, B., et al. 2018b, AJ, 155, 222
David, T. J., Cody, A. M., Hedges, C. L., et al. 2019b, AJ, 158, 79
David, T. J., Contardo, G., Sandoval, A., et al. 2021, AJ, 161, 265
Desidera, S., Covino, E., Messina, S., et al. 2015, A&A, 573, A126
Dong, J., Huang, C. X., Zhou, G., et al. 2022, arXiv e-prints, arXiv:2201.12836
Dotter, A. 2016, ApJS, 222, 8
Dragomir, D., Harris, M., Pepper, J., et al. 2020, AJ, 159, 219
Ehrenreich, D., Bourrier, V., Bonfils, X., et al. 2012, A&A, 547, A18
Ehrenreich, D., Bourrier, V., Wheatley, P. J., et al. 2015, Nature, 522, 459
Feinstein, A. D., Montet, B. T., Johnson, M. C., et al. 2021, AJ, 162, 213
Fleming, T. A., Schmitt, J. H. M. M., & Giampapa, M. S. 1995, ApJ, 450, 401
Foreman-Mackey, D., Agol, E., Angus, R., & Ambikasaran, S. 2017, ArXiv
Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, PASP, 125, 306
Fulton, B. J., Petigura, E. A., Howard, A. W., et al. 2017, AJ, 154, 109
Gagné, J., Faherty, J. K., Moranta, L., & Popinchalk, M. 2021, ApJL, 915, L29
Gagné, J., Mamajek, E. E., Malo, L., et al. 2018, ApJ, 856, 23
Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2018, A&A, 616, A1
-. 2021, A&A, 649, A1
Gaidos, E., Mann, A. W., Rizzuto, A., et al. 2017, MNRAS, 464, 850
Gaidos, E., Hirano, T., Beichman, C., et al. 2022, MNRAS, 509, 2969
Ginzburg, S., Schlichting, H. E., & Sari, R. 2018, MNRAS, 476, 759
Gupta, A., & Schlichting, H. E. 2021, MNRAS, 504, 4634
Hartman, J. D., Bakos, G. Á., Kovács, G., & Noyes, R. W. 2010, MNRAS, 408, 475
Hedges, C., Hughes, A., Zhou, G., et al. 2021, AJ, 162, 54
Henden, A. A., Templeton, M., Terrell, D., et al. 2016, VizieR Online Data Catalog, 2336
Howell, S. B., Everett, M. E., Horch, E. P., et al. 2016, ApJL, 829, L2
Howell, S. B., Everett, M. E., Sherry, W., Horch, E., & Ciardi, D. R. 2011, AJ, 142, 19
Howell, S. B., Sobeck, C., Haas, M., et al. 2014, PASP, 126, 398
Hoyer, S., Guterman, P., Demangeon, O., et al. 2020, A&A, 635, A24
Huang, C. X., Vanderburg, A., Pál, A., et al. 2020a, Research Notes of the American Astronomical Society, 4, 204
-. 2020b, Research Notes of the American Astronomical Society, 4, 206
Inamdar, N. K., & Schlichting, H. E. 2015, MNRAS, 448, 1751
Izidoro, A., Ogihara, M., Raymond, S. N., et al. 2017, MNRAS, 470, 1750
Jenkins, J. M., Twicken, J. D., McCauliff, S., et al. 2016, in Proc. SPIE, Vol. 9913, Software and Cyberinfrastructure for Astronomy IV, 99133E
Kervella, P., Arenou, F., Mignard, F., & Thévenin, F. 2019, A&A, 623, A72
Kirk, J., Alam, M. K., López-Morales, M., & Zeng, L. 2020, AJ, 159, 115
Kite, E. S., & Barnett, M. N. 2020, Proceedings of the National Academy of Science, 117, 18264
Kniazev, A. Y., Gvaramadze, V. V., & Berdnikov, L. N. 2016, MNRAS, 459, 3068
Kniazev, A. Y., Gvaramadze, V. V., & Berdnikov, L. N. 2017, in Stars: From Collapse to Collapse, Vol. 510, 480 (arXiv:1612.00292)
Kounkel, M., & Covey, K. 2019, AJ, 158, 122
Kovács, G., Zucker, S., & Mazeh, T. 2002, A&A, 391, 369
Kreidberg, L. 2015, PASP, 127, 1161
Kulow, J. R., France, K., Linsky, J., & Loyd, R. O. P. 2014, ApJ, 786, 132
Lavie, B., Ehrenreich, D., Bourrier, V., et al. 2017, A&A, 605, L7
Lee, E. J., & Connors, N. J. 2021, ApJ, 908, 32
Lee, E. J., Karalis, A., & Thorngren, D. P. 2022, arXiv e-prints, arXiv:2201.09898
Lightkurve Collaboration, Cardoso, J. V. d. M., Hedges, C., et al. 2018, Lightkurve: Kepler and TESS time series analysis in Python, Astrophysics Source Code Library, ascl:1812.013
Livingston, J. H., Dai, F., Hirano, T., et al. 2019, MNRAS, 484, 8
Lopez, E. D., & Fortney, J. J. 2013, ApJ, 776, 2
Lopez, E. D., Fortney, J. J., & Miller, N. 2012, ApJ, 761, 59
Luhman, K. L., Stauffer, J. R., & Mamajek, E. E. 2005, ApJL, 628, L69
Malo, L., Doyon, R., Lafrenière, D., et al. 2013, ApJ, 762, 88
Mamajek, E. E., & Hillenbrand, L. A. 2008, ApJ, 687, 1264
Mandel, K., & Agol, E. 2002, ApJL, 580, L171
Mann, A. W., Gaidos, E., Mace, G. N., et al. 2016a, ApJ, 818, 46
Mann, A. W., Newton, E. R., Rizzuto, A. C., et al. 2016b, AJ, 152, 61
Mann, A. W., Gaidos, E., Vanderburg, A., et al. 2017, AJ, 153, 64
Mann, A. W., Johnson, M. C., Vanderburg, A., et al. 2020, AJ, 160, 179
Mann, A. W., Wood, M. L., Schmidt, S. P., et al. 2021, arXiv e-prints, arXiv:2110.09531
Marcus, R. A., Stewart, S. T., Sasselov, D., & Hernquist, L. 2009, ApJL, 700, L118
Masuda, K., & Winn, J. N. 2020, AJ, 159, 81
Maxted, P. F. L., Ehrenreich, D., Wilson, T. G., et al. 2021, arXiv e-prints, arXiv:2111.08828
Meingast, S., Alves, J., & Fürnkranz, V. 2019, A&A, 622, L13
Meynet, G., Mermilliod, J. C., & Maeder, A. 1993, A&AS, 98, 477
Morris, R. L., Twicken, J. D., Smith, J. C., et al. 2020, Kepler Data Processing Handbook: Photometric Analysis, Kepler Data Processing Handbook (KSCI-19081-003)
Newton, E. R., Mann, A. W., Tofflemire, B. M., et al. 2019, ApJL, 880, L17
Newton, E. R., Mann, A. W., Kraus, A. L., et al. 2021, AJ, 161, 65
Ortega, V. G., Jilinski, E., de La Reza, R., & Bazzanella, B. 2007, MNRAS, 377, 441
Owen, J. E., & Jackson, A. P. 2012, MNRAS, 425, 2931
Owen, J. E., & Wu, Y. 2013, ApJ, 775, 105
-. 2017, ApJ, 847, 29
Palle, E., Oshagh, M., Casasayas-Barris, N., et al. 2020, A&A, 643, A25
Paredes, L. A., Henry, T. J., Quinn, S. N., et al. 2021, AJ, 162, 176
Perryman, M. A. C., Lindegren, L., Kovalevsky, J., et al. 1997, A&A, 500, 501
Petigura, E. A., Howard, A. W., & Marcy, G. W. 2013, Proceedings of the National Academy of Science, 110, 19273
Plavchan, P., Barclay, T., Gagné, J., et al. 2020, Nature, 582, 497
Pourbaix, D. 1998, A&AS, 131, 377
Quinn, S. N., White, R. J., Latham, D. W., et al. 2012, ApJL, 756, L33
-. 2014, ApJ, 787, 27
Rebull, L. M., Stauffer, J. R., Hillenbrand, L. A., et al. 2017, ApJ, 839, 92
Rebull, L. M., Stauffer, J. R., Bouvier, J., et al. 2016, AJ, 152, 113
Ricker, G. R., Winn, J. N., Vanderspek, R., et al. 2015, Journal of Astronomical Telescopes, Instruments, and Systems, 1, 014003
Rizzuto, A. C., Mann, A. W., Vanderburg, A., Kraus, A. L., & Covey, K. R. 2017, AJ, 154, 224
Rizzuto, A. C., Vanderburg, A., Mann, A. W., et al. 2018, AJ, 156, 195
Rizzuto, A. C., Newton, E. R., Mann, A. W., et al. 2020, AJ, 160, 33
Rockcliffe, K. E., Newton, E. R., Youngblood, A., et al. 2021, AJ, 162, 116
Sanchis-Ojeda, R., Winn, J. N., Marcy, G. W., et al. 2013, ApJ, 775, 54
Skrutskie, M. F., Cutri, R. M., Stiening, R., et al. 2006, AJ, 131, 1163
Snellen, I. A. G., & Brown, A. G. A. 2018, Nature Astronomy, 2, 883
Soderblom, D. R. 2010, ARA&A, 48, 581
Spake, J. J., Sing, D. K., Evans, T. M., et al. 2018, Nature, 557, 68
Stassun, K. G., Oelkers, R. J., Pepper, J., et al. 2018, AJ, 156, 102
Stauffer, J. R., Schultz, G., & Kirkpatrick, J. D. 1998, ApJL, 499, L199
Su, X.-N., Xie, J.-W., Zhou, J.-L., & Thebault, P. 2021, AJ, 162, 272
Tilbrook, R. H., Burleigh, M. R., Costes, J. C., et al. 2021, MNRAS, 504, 6018
Tofflemire, B. M., Rizzuto, A. C., Newton, E. R., et al. 2021, AJ, 161, 171
Tokovinin, A., Fischer, D. A., Bonati, M., et al. 2013, PASP, 125, 1336
Twicken, J. D., Clarke, B. D., Bryson, S. T., et al. 2010, in Proc. SPIE, Vol. 7740, Software and Cyberinfrastructure for Astronomy, 774023
Ujjwal, K., Kartha, S. S., Mathew, B., Manoj, P., & Narang, M. 2020, AJ, 159, 166
Vanderburg, A. 2021, avanderburg/edmcmc: v1.0.0, doi:10.5281/zenodo.5599854
Vanderburg, A., & Johnson, J. A. 2014, PASP, 126, 948
Vanderburg, A., Mann, A. W., Rizzuto, A., et al. 2018, AJ, 156, 46
Vanderburg, A., Huang, C. X., Rodriguez, J. E., et al. 2019, ApJL, 881, L19
Venner, A., Pearce, L. A., & Vanderburg, A. 2021a, arXiv e-prints, arXiv:2111.03676
Venner, A., Vanderburg, A., & Pearce, L. A. 2021b, AJ, 162, 12
Wolfgang, A., Rogers, L. A., & Ford, E. B. 2016, ApJ, 825, 19
Zhang, M., Knutson, H. A., Wang, L., Dai, F., & Barragán, O. 2022a, AJ, 163, 67
Zhang, M., Knutson, H. A., Wang, L., et al. 2022b, AJ, 163, 68
Zhou, G., Quinn, S. N., Irwin, J., et al. 2021, AJ, 161, 2
Zhu, W., Petrovich, C., Wu, Y., Dong, S., & Xie, J. 2018, ApJ, 860, 101
Zuckerman, B., Song, I., & Bessell, M. S. 2004, ApJL, 613, L65
Operator mixing in massless QCD-like theories and Poincarè-Dulac theorem

23 Sep 2021

Matteo Becchetti [email protected]
Physics Department, Torino University and INFN Torino, Via Pietro Giuria 1, I-10125 Torino, Italy

Marco Bochicchio [email protected]
Physics Department, INFN Roma1, Piazzale A. Moro 2, I-00185 Roma, Italy

Recently, a differential-geometric approach to operator mixing in massless QCD-like theories - that involves canonical forms, obtained by means of gauge transformations, based on the Poincarè-Dulac theorem for the linear system that defines the renormalized mixing matrix in the coordinate representation Z(x, µ) - has been proposed in [1]. Specifically, it has been determined under which conditions a renormalization scheme exists where the linear system - and correspondingly Z(x, µ) - may be set in a diagonal canonical form that is one-loop exact to all perturbative orders - the nonresonant diagonalizable γ0/β0 case (I) - according to the Poincarè-Dulac theorem. Moreover, the remaining cases, (II), (III) and (IV), of operator mixing, where such diagonalization is not possible, have also been classified in [1]. Accordingly, if the matrix γ0/β0, with γ(g) = γ0 g^2 + · · · the matrix of the anomalous dimensions and β(g) = −β0 g^3 + · · · the beta function, either is diagonalizable but a resonant condition for its eigenvalues and the system holds (II), or is nondiagonalizable and nonresonant (III), or is both nondiagonalizable and resonant (IV), then Z(x, µ) is nondiagonalizable. Yet, we argue that in the gauge-invariant Hermitian sector of a massless QCD-like theory, which should be unitary in its free conformal limit at g(µ) = 0, γ0/β0 should be diagonalizable, because otherwise, to the order of g^2(µ), a logCFT would arise that is nonunitary at g(µ) = 0. Nevertheless, even if γ0/β0 is diagonalizable, the associated linear system may be resonant, thus realizing the alternative (II) above. In the cases (II), (III) and (IV), where Z(x, µ) is nondiagonalizable, we demonstrate that its canonical form may be factorized into the exponential of a linear combination of upper triangular nilpotent constant matrices, with coefficients that asymptotically in the UV are powers of logs of the running coupling, i.e., powers of loglogs of the coordinates, and a diagonal matrix as in the nonresonant diagonalizable case (I). Hence, its ultraviolet asymptotics differs intrinsically from the case (I) and, for asymptotically free theories, this is the closest analog of logCFTs. We also work out in detail physical realizations of the cases (I) and (II).
1 Introduction and physics motivations

The aim of the present paper is to reconsider the operator mixing and the associated ultraviolet (UV) asymptotics of the renormalized mixing matrix Z(x, µ) in the coordinate representation in asymptotically free Yang-Mills (YM) theories 1 massless to all perturbative orders (massless QCD-like theories for short), in order to analyze further implications of the differential-geometric approach to operator mixing initiated in [1], where an essential role is played by the Poincarè-Dulac theorem [2] in the framework of canonical forms [3] for linear systems of differential equations. In fact, Z(x, µ) is a pivotal ingredient to work out the UV asymptotics of gauge-invariant correlators and OPE coefficients that will be considered in a forthcoming paper [4].

One problem addressed in [1], which is hardly discussed in the literature, has been to determine under which conditions the operator mixing may be essentially reduced to the multiplicatively renormalizable case. This is the case (I) - worked out extensively in [1] - of the classification based on the Poincarè-Dulac theorem introduced in [1].

The remaining cases, (II), (III) and (IV), of the aforementioned classification, where such a reduction is not actually possible, are studied in greater detail in the present paper.

There are several physics motivations for doing so, since the UV asymptotics of operator mixing enters a number of applications of the renormalization group (RG), which range from deep inelastic scattering [5] in QCD to the evaluation of the ratio ǫ′/ǫ [6][7][8] - for the possible implications of new physics, if any - and to the constraints [9][10][11][12][13][14] on the eventual nonperturbative solution of the large-N limit [15][16][17][18] of massless QCD-like theories.

A further motivation for working out such asymptotics, for the general case of operator mixing as well, occurs as a part of the program, christened the asymptotically free bootstrap in [11], of verifying whether a candidate [11] nonperturbative S matrix arises from the nonperturbative 't Hooft large-N expansion [15] of a massless QCD-like theory, where the multiplicative renormalization of gauge-invariant operators has already played a key role [10].

In view of the physics applications of operator mixing mentioned above, the key step in [1] to obtain the aforementioned UV asymptotics of Z(x, µ) has been the choice of a suitable renormalization scheme.

In this respect, it has been known for some time that exploiting the freedom of changing renormalization scheme may lead to significant advantages.

Perhaps, the most famous example is the 't Hooft scheme [19], where all the coefficients of the beta function, β(g) = −β0 g^3 − β1 g^5 + · · ·, but the first two, β0, β1, may be set to 0 by a suitable (formal 2) holomorphic reparametrization of the gauge coupling.

In fact, the aforementioned coefficients, β2, β3, · · ·, may be set to an arbitrary value by a reparametrization of the coupling, and this freedom has been exploited in various contexts [20][21][22][23][24][25], including the supersymmetric one in relation to the exact NSVZ beta function [26].

Another example is the possibility to set to 0 all the coefficients but the first one, γ0, of the anomalous dimension, γ(g) = γ0 g^2 + · · ·, of a multiplicatively renormalizable operator by a similar [27] - but in general different - reparametrization of the coupling.

These examples are well known, but are not relevant in the present paper: They exploit the freedom of making (formal) holomorphic diffeomorphisms in the space of the coupling, while the change of scheme that we refer to is actually the (formal) holomorphic nonabelian 3 gauge freedom [1] in the choice of the basis of operators that mix under renormalization, in a way that we summarize as follows.

2 A formal series is not assumed to be convergent. In QCD-like theories, this is appropriate in perturbation theory, which is believed to be only asymptotic in the UV thanks to asymptotic freedom.
3 The aforementioned gauge freedom is nonabelian for the mixing of 2 or more operators, as opposed to the Abelian gauge symmetry in the case of a multiplicatively renormalizable operator. Its nonabelian and (formal) holomorphic character implies the peculiar features described in the present paper.
2 A summary of [1]

The key idea in [1] has been to employ the time-honored theory of canonical forms [3] - obtained by (formal) holomorphic gauge transformations - for linear systems of differential equations - specifically, the Poincaré-Dulac theorem [2] - in order to find a sufficient condition by which a renormalization scheme exists where the matrix -γ(g)/β(g) in eq. (5.1) can be set in the canonical form:

-\frac{\gamma(g)}{\beta(g)} = \frac{\gamma_0}{\beta_0}\,\frac{1}{g}   (2.1)

2 A formal series is not assumed to be convergent. In QCD-like theories, this is appropriate in perturbation theory, which is believed to be only asymptotic in the UV thanks to the asymptotic freedom.
3 The aforementioned gauge freedom is nonabelian for the mixing of 2 or more operators, as opposed to the Abelian gauge symmetry in the case of a multiplicatively renormalizable operator. Its nonabelian and (formal) holomorphic character implies the peculiar features described in the present paper.

that is one-loop exact to all orders of perturbation theory, with:

\gamma(g) = -\frac{\partial Z}{\partial \log\mu}\, Z^{-1} = \gamma_0 g^2 + \gamma_1 g^4 + \gamma_2 g^6 + \cdots   (2.2)
the matrix of the anomalous dimensions, and:
\frac{\partial g}{\partial \log\mu} = \beta(g) = -\beta_0 g^3 - \beta_1 g^5 - \beta_2 g^7 + \cdots   (2.3)

the beta function, with g = g(µ) the renormalized coupling. A sufficient condition [1] for a renormalization scheme to exist where -γ(g)/β(g) admits the canonical form in eq. (2.1) is that the eigenvalues λ_1, λ_2, ... of the matrix γ_0/β_0, in nonincreasing order λ_1 ≥ λ_2 ≥ ..., do not differ by a positive even integer:

\lambda_i - \lambda_j - 2k \neq 0   (2.4)

for i ≤ j and k a positive integer. If such a renormalization scheme exists, the mixing has been dubbed nonresonant in [1]. Otherwise, it has been dubbed resonant. This terminology in [1] derives directly from the application of the Poincaré-Dulac theorem to the operator mixing, as we recall in the present paper.
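Since the condition in eq. (2.4) only involves the eigenvalues of the one-loop matrix γ_0/β_0, it can be checked mechanically. The following minimal Python sketch - an illustration added here, not part of [1], with the function name resonant_pairs our own choice - enumerates the resonant triples of a given spectrum; an empty list certifies nonresonance in the sense of eq. (2.4).

    def resonant_pairs(lams, tol=1e-12):
        """Return the triples (i, j, k) with lams[i] - lams[j] = 2k, k a positive
        integer, for the eigenvalues lams sorted in nonincreasing order (eq. (2.4))."""
        pairs = []
        for i in range(len(lams)):
            for j in range(i + 1, len(lams)):
                diff = lams[i] - lams[j]
                k = round(diff / 2)
                if k >= 1 and abs(diff - 2 * k) < tol:
                    pairs.append((i, j, k))
        return pairs

    # Example: the spectrum (2/N^2, 2/N^2 - 2) of section 7.1 is resonant with k = 1.
    print(resonant_pairs([2 / 16, 2 / 16 - 2]))  # [(0, 1, 1)] for N = 4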
Moreover, if in addition γ_0/β_0 is diagonalizable by a further change of the operator basis, the renormalized mixing matrix in the coordinate representation:
Z(x, \mu) = P\exp\left( -\int_{g(x)}^{g(\mu)} \frac{\gamma(g)}{\beta(g)}\, dg \right)   (2.5)
that enters the solution:
G(x) = Z(x, µ)G(x, g(µ), µ)Z T (x, µ) (2.6)
of the Callan-Symanzik equation [1,4,[28][29][30][31]:
\left( x\cdot\frac{\partial}{\partial x} + \beta(g)\frac{\partial}{\partial g} + 2D \right) G + \gamma(g)\, G + G\, \gamma^T(g) = 0   (2.7)
for 2-point correlators 4 in Euclidean space-time:
G_{ik}(x) = \langle O_i(x)\, O_k(0) \rangle   (2.8)

of renormalized local gauge-invariant operators O_i(x)^{5}:

O_i = Z_{ik}\, O_{Bk}   (2.9)
with O Bk the bare operators that mix 6 under renormalization and Z the bare mixing matrix, is diagonalizable as well, and its UV asymptotics reduces in the diagonal basis to the multiplicatively renormalizable case:
Z_i(x, \mu) = \exp\left( \int_{g(x)}^{g(\mu)} \frac{\gamma_{0i}}{\beta_0}\,\frac{1}{g}\, dg \right) = \left( \frac{g(\mu)}{g(x)} \right)^{\frac{\gamma_{0i}}{\beta_0}}   (2.10)
with Z i (x, µ) and γ 0i the eigenvalues of the corresponding matrices.
The key step in [1] to obtain the above result has been the differential-geometric interpretation [1] of a change of basis of renormalized operators, i.e., of a (finite) change of renormalization scheme:
O ′ i (x) = S ik (g)O k (x) (2.11)
as a (formal) holomorphic invertible gauge transformation S(g), of A(g):
A(g) = -\frac{\gamma(g)}{\beta(g)} = \frac{\gamma_0}{\beta_0}\,\frac{1}{g} + \cdots   (2.12)
as a (formal) meromorphic connection, with a Fuchsian singularity 7 -i.e., a simple pole -at g = 0, that transforms by the gauge transformation S(g) as:
A ′ (g) = S(g)A(g)S −1 (g) + ∂S(g) ∂g S −1 (g) (2.13)
of D as the corresponding covariant derivative:
D = \frac{\partial}{\partial g} - A(g)   (2.14)

that defines the linear system:

DX = \left( \frac{\partial}{\partial g} - A(g) \right) X = 0   (2.15)
whose solution X(g), with a suitable initial condition, is Z(x, µ), and finally of Z(x, µ):
Z(x, \mu) = P\exp\left( \int_{g(x)}^{g(\mu)} A(g)\, dg \right) = P\exp\left( -\int_{g(x)}^{g(\mu)} \frac{\gamma(g)}{\beta(g)}\, dg \right)   (2.16)

6 In fact, gauge-invariant operators also mix with BRST-exact operators and with operators that vanish by the equations of motion (EQM) [32-34]. But correlators of gauge-invariant operators with BRST-exact operators vanish, while correlators with EQM operators reduce to contact terms. Therefore, for our purposes it suffices to take into account the mixing of gauge-invariant operators only.
7 If a meromorphic connection, A(g), has a Fuchsian singularity at a point, a solution of the corresponding linear system in eq. (2.15) is regular singular [2], i.e., it satisfies a moderate-growth condition in a neighborhood of the point.

as a Wilson line that transforms as:
Z ′ (x, µ) = S(g(µ))Z(x, µ)S −1 (g(x)) (2.17)
for the gauge transformation S(g). Following the interpretation above, the easiest way to compute the UV asymptotics of Z(x, µ) consists in setting the meromorphic connection A(g) in a canonical form by a suitable holomorphic gauge transformation according to the Poincaré-Dulac theorem.
Consequently, the classification in [1] of operator mixing is as follows: If a renormalization scheme exists where -γ(g)/β(g) can be set in the canonical form of eq. (2.1), we refer to the mixing as nonresonant, which by eq. (2.4) is the generic case. Otherwise, we refer to the mixing as resonant.
Besides, γ_0/β_0 may be either diagonalizable or nondiagonalizable. Therefore, there are four cases of operator mixing:

(I) Nonresonant diagonalizable γ_0/β_0.
(II) Resonant diagonalizable γ_0/β_0.
(III) Nonresonant nondiagonalizable γ_0/β_0.
(IV) Resonant nondiagonalizable γ_0/β_0.

In the case (I), Z(x, µ) is diagonalizable [1] to all orders of perturbation theory, since the mixing is nonresonant and γ_0/β_0 is diagonalizable. The remaining cases, where Z(x, µ) is not actually diagonalizable, are analyzed in greater detail in the present paper.
We believe, as already remarked in [1], that the geometric interpretation above and the employment of the Poincaré-Dulac theorem make the subject of operator mixing in the physics literature more transparent than in previous treatments [5, 6, 35, 36].

3 Plan of the paper
In section 4, just as a preamble, we work out by elementary methods three examples of operator mixing that, in the special case of two operators, are paradigmatic of the general case.
In section 5, which contains our main arguments and results, we analyze the four cases, (I), (II), (III) and (IV), in the classification above based on the Poincaré-Dulac theorem. Specifically, we work out the corresponding canonical forms for -γ(g)/β(g) and Z(x, µ), and the UV asymptotics of Z(x, µ). Moreover, we argue that the cases (III) and (IV) - where γ_0/β_0 is nondiagonalizable - are ruled out by unitarity of the free conformal limit at g = 0 in the gauge-invariant Hermitian sector of a massless QCD-like theory.
In section 6, we revisit our elementary computation in section 4 in the light of the Poincaré-Dulac theorem, obviously finding perfect agreement. In section 7, as an application of the general theory in the present paper, we provide physical realizations of the cases (I) and (II). Specifically, we demonstrate that the case (II) is actually realized for the mixing of four-quark operators in SU(N) massless QCD with N_f = N flavors of quarks for every N ≥ 4. Besides, we show that unitarity is implemented at g = 0 for the mixing of dimension-8 operators in large-N SU(N) YM theory, despite the fact that this mixing would be potentially nonunitary, since γ_0/β_0 is potentially nondiagonalizable in this case, due to the degeneracy of some of its eigenvalues.
In appendix A we discuss asymptotic versus exact correlators.
4 A preamble: three examples for the mixing of two operators by elementary methods

In case A(g) = -γ(g)/β(g) is upper triangular, we may compute Z(x, µ) from:

\frac{\partial Z}{\partial g} = A(g)\, Z   (4.1)
directly, avoiding the intricacies of the Poincaré-Dulac theorem. Indeed, in this case A(g) may be decomposed into the sum of the diagonal, A_Λ(g), and nilpotent, A_N(g), contributions:
A(g) = A Λ (g) + A N (g) (4.2)
Then, Z(x, µ) may be computed in the form:
Z(x, µ) = Z Λ (x, µ)Z N (x, µ) (4.3)
provided that:
\frac{\partial Z_N}{\partial g} = Z_\Lambda^{-1} A_N(g) Z_\Lambda\, Z_N   (4.4)

and:

\frac{\partial Z_\Lambda}{\partial g} = A_\Lambda(g)\, Z_\Lambda   (4.5)
Therefore:
Z_\Lambda(x, \mu) = \exp\left( \int_{g(x)}^{g(\mu)} A_\Lambda(g)\, dg \right)   (4.6)

and:

Z_N(x, \mu) = P\exp\left( \int_{g(x)}^{g(\mu)} Z_\Lambda^{-1} A_N(g) Z_\Lambda\, dg \right)   (4.7)

Hence, since Z_\Lambda^{-1} A_N(g) Z_\Lambda is nilpotent as well, the expansion of the path-ordered exponential for Z_N(x, µ) terminates at a finite order, and Z(x, µ) is computable in a closed form.
4.1 Nonresonant versus resonant mixing
In a massless QCD-like theory, according to eq. (5.1):
A(g) = \frac{1}{g}\left( A_0 + \sum_{n=1}^{\infty} A_{2n}\, g^{2n} \right)   (4.8)
with:
A_0 = \frac{\gamma_0}{\beta_0}   (4.9)
We work out for two operators three examples 8 of mixing, which are paradigmatic of the general case. Firstly, we set:
A_0 = \Lambda = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}   (4.10)
diagonal, with λ 1 ≥ λ 2 , i.e., we display the eigenvalues of A 0 in nonincreasing order. Clearly, by eq. (4.9), this is the case that γ 0 is diagonalizable. Besides, we set:
A_{2k} = N_{2k} = \begin{pmatrix} 0 & \nu_{12} \\ 0 & 0 \end{pmatrix}   (4.11)

upper triangular, with ν_{12} a real nonvanishing number, and A_{2n} = 0 for n ≠ k. Hence, in our example:

\frac{\partial Z}{\partial g} = \left( \Lambda g^{-1} + N_{2k}\, g^{2k-1} \right) Z   (4.12)

For A_0 diagonal, we consider the following two cases that we dub respectively, according to the terminology in section 2, nonresonant diagonalizable γ_0/β_0:

\lambda_1 - \lambda_2 \neq 2k   (4.13)

and resonant diagonalizable γ_0/β_0:

\lambda_1 - \lambda_2 = 2k   (4.14)
with k a positive integer. Secondly, we consider the case:
λ 1 = λ 2 = λ (4.15)
and:
A_0 = \Lambda + N_0 = \begin{pmatrix} \lambda & \nu_{12} \\ 0 & \lambda \end{pmatrix}   (4.16)

which we dub, according to the terminology in section 2, nonresonant nondiagonalizable γ_0/β_0, as now A_0 is not diagonalizable, since its eigenvalues coincide and ν_{12} is assumed to be nonzero.
We compute the corresponding Z(x, µ) by the formulas above.
4.1.1 Nonresonant diagonalizable γ_0/β_0

Z_Λ is diagonal and computed by exploiting eq. (4.6):

Z_\Lambda(x, \mu) = \exp\left( \int_{g(x)}^{g(\mu)} \Lambda\, \frac{dg}{g} \right) = \begin{pmatrix} \left(\frac{g(\mu)}{g(x)}\right)^{\lambda_1} & 0 \\ 0 & \left(\frac{g(\mu)}{g(x)}\right)^{\lambda_2} \end{pmatrix}   (4.17)

Moreover, by direct evaluation:

Z_\Lambda^{-1} A_N(g) Z_\Lambda = \begin{pmatrix} 0 & \nu_{12}\, g^{\,2k-1-\lambda_1+\lambda_2}(\mu)\, g^{\,\lambda_1-\lambda_2}(x) \\ 0 & 0 \end{pmatrix}   (4.18)

As a consequence:

Z_N(x, \mu) = P\exp\left( \int_{g(x)}^{g(\mu)} Z_\Lambda^{-1} A_N(g) Z_\Lambda\, dg \right) = \begin{pmatrix} 1 & \frac{\nu_{12}\, g^{2k}(x)}{\lambda_1 - \lambda_2 - 2k}\left( 1 - \left(\frac{g(\mu)}{g(x)}\right)^{2k+\lambda_2-\lambda_1} \right) \\ 0 & 1 \end{pmatrix}   (4.19)

Finally, combining eq. (4.17) for Z_Λ and the above result for Z_N, we get:

\begin{aligned}
Z(x, \mu) &= \begin{pmatrix} \left(\frac{g(\mu)}{g(x)}\right)^{\lambda_1} & 0 \\ 0 & \left(\frac{g(\mu)}{g(x)}\right)^{\lambda_2} \end{pmatrix} \left[ I + \begin{pmatrix} 0 & \frac{\nu_{12}\, g^{2k}(x)}{\lambda_1 - \lambda_2 - 2k}\left( 1 - \left(\frac{g(\mu)}{g(x)}\right)^{2k+\lambda_2-\lambda_1} \right) \\ 0 & 0 \end{pmatrix} \right] \\
&= \begin{pmatrix} \left(\frac{g(\mu)}{g(x)}\right)^{\lambda_1} & \frac{\nu_{12}\, g^{2k}(x)}{\lambda_1 - \lambda_2 - 2k}\left( \left(\frac{g(\mu)}{g(x)}\right)^{\lambda_1} - \left(\frac{g(\mu)}{g(x)}\right)^{\lambda_2+2k} \right) \\ 0 & \left(\frac{g(\mu)}{g(x)}\right)^{\lambda_2} \end{pmatrix} \\
&= \begin{pmatrix} 1 & \frac{\nu_{12}\, g^{2k}(\mu)}{2k - \lambda_1 + \lambda_2} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} \left(\frac{g(\mu)}{g(x)}\right)^{\lambda_1} & 0 \\ 0 & \left(\frac{g(\mu)}{g(x)}\right)^{\lambda_2} \end{pmatrix} \begin{pmatrix} 1 & -\frac{\nu_{12}\, g^{2k}(x)}{2k - \lambda_1 + \lambda_2} \\ 0 & 1 \end{pmatrix} \\
&= S(g(\mu))\, Z_\Lambda(x, \mu)\, S^{-1}(g(x))
\end{aligned}   (4.20)
with the holomorphic gauge transformation:

S(g) = \begin{pmatrix} 1 & \frac{\nu_{12}\, g^{2k}}{2k - \lambda_1 + \lambda_2} \\ 0 & 1 \end{pmatrix}   (4.21)

Hence, Z(x, µ) is gauge equivalent to the diagonal Z_Λ(x, µ).
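The closed form in eq. (4.20) may be verified symbolically. The following Python/sympy sketch - a check added here for illustration, with k = 1 chosen for concreteness and the nonresonance condition λ_1 - λ_2 ≠ 2 implicitly assumed - confirms that Z(x, µ) solves eq. (4.12) with the initial condition Z = I at g(µ) = g(x).

    import sympy as sp

    gm, gx, nu, l1, l2 = sp.symbols('g_mu g_x nu lambda_1 lambda_2', positive=True)
    k = 1  # concrete index; nonresonance means l1 - l2 != 2*k
    # connection of eq. (4.12): A(g) = Lambda/g + N_2k g^(2k-1)
    A = sp.Matrix([[l1/gm, nu*gm**(2*k - 1)], [0, l2/gm]])
    # Z(x, mu) of eq. (4.20), as a function of g(mu) at fixed g(x)
    Z = sp.Matrix([
        [(gm/gx)**l1,
         nu*gx**(2*k)/(l1 - l2 - 2*k)*((gm/gx)**l1 - (gm/gx)**(l2 + 2*k))],
        [0, (gm/gx)**l2],
    ])
    assert sp.simplify(Z.diff(gm) - A*Z) == sp.zeros(2, 2)  # solves eq. (4.12)
    assert sp.simplify(Z.subs(gm, gx)) == sp.eye(2)         # Z = I at g(mu) = g(x)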
4.1.2 Resonant diagonalizable γ_0/β_0

Z_Λ is given again by eq. (4.17), but now:

Z_\Lambda^{-1} A_N(g) Z_\Lambda = \begin{pmatrix} 0 & \frac{\nu_{12}\, g^{2k}(x)}{g(\mu)} \\ 0 & 0 \end{pmatrix}   (4.22)
It follows from eq. (4.7) that:
Z_N(x, \mu) = P\exp\left( \int_{g(x)}^{g(\mu)} Z_\Lambda^{-1} A_N(g) Z_\Lambda\, dg \right) = \begin{pmatrix} 1 & \nu_{12}\, g^{2k}(x) \log\frac{g(\mu)}{g(x)} \\ 0 & 1 \end{pmatrix}   (4.23)
Combining the result for Z Λ (x, µ) and Z N (x, µ), we obtain:
Z(x, µ) = g(µ) g(x) λ 1 0 0 g(µ) g(x) λ 2 I + 0 ν 12 g 2k (x) log g(µ) g(x) 0 0 = g(µ) g(x) λ 1 ν 12 g 2k (x) g(µ) g(x) λ 1 log g(µ) g(x) 0 g(µ) g(x) λ 2 = I + 0 ν 12 g 2k (µ) log g(µ) g(x) 0 0 g(µ) g(x) λ 1 0 0 g(µ) g(x) λ 2 (4.24)
Because of the occurrence of the term containing g 2k (µ) log
g(µ) g(x) , Z(x, µ) is not
diagonalizable by a holomorphic gauge transformation.
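Analogously, the resonant solution in eq. (4.24) can be checked symbolically; in the sketch below - again our illustration, with k = 1 - the resonance condition λ_1 = λ_2 + 2k of eq. (4.14) is imposed from the start.

    import sympy as sp

    gm, gx, nu, l2 = sp.symbols('g_mu g_x nu lambda_2', positive=True)
    k = 1
    l1 = l2 + 2*k  # resonance condition of eq. (4.14)
    A = sp.Matrix([[l1/gm, nu*gm**(2*k - 1)], [0, l2/gm]])
    # Z(x, mu) of eq. (4.24)
    Z = sp.Matrix([
        [(gm/gx)**l1, nu*gx**(2*k)*(gm/gx)**l1*sp.log(gm/gx)],
        [0, (gm/gx)**l2],
    ])
    assert sp.simplify(Z.diff(gm) - A*Z) == sp.zeros(2, 2)
    assert sp.simplify(Z.subs(gm, gx)) == sp.eye(2)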
4.1.3 Nonresonant nondiagonalizable γ_0/β_0

Finally, we specialize the case above to k = 0. We get for λ_1 = λ_2 = λ:

\begin{aligned}
Z(x, \mu) &= \begin{pmatrix} \left(\frac{g(\mu)}{g(x)}\right)^{\lambda} & 0 \\ 0 & \left(\frac{g(\mu)}{g(x)}\right)^{\lambda} \end{pmatrix} \left[ I + \begin{pmatrix} 0 & \nu_{12} \log\frac{g(\mu)}{g(x)} \\ 0 & 0 \end{pmatrix} \right] \\
&= \begin{pmatrix} \left(\frac{g(\mu)}{g(x)}\right)^{\lambda} & \nu_{12} \left(\frac{g(\mu)}{g(x)}\right)^{\lambda} \log\frac{g(\mu)}{g(x)} \\ 0 & \left(\frac{g(\mu)}{g(x)}\right)^{\lambda} \end{pmatrix} \\
&= \left[ I + \begin{pmatrix} 0 & \nu_{12} \log\frac{g(\mu)}{g(x)} \\ 0 & 0 \end{pmatrix} \right] \begin{pmatrix} \left(\frac{g(\mu)}{g(x)}\right)^{\lambda} & 0 \\ 0 & \left(\frac{g(\mu)}{g(x)}\right)^{\lambda} \end{pmatrix}
\end{aligned}   (4.25)

Again, because of the occurrence of the term containing log(g(µ)/g(x)), Z(x, µ) is not diagonalizable by a holomorphic gauge transformation.
5 Main arguments and results: operator mixing by the Poincaré-Dulac theorem

5.1 Resonant canonical form of -γ(g)/β(g) by the Poincaré-Dulac theorem

In a massless QCD-like theory the meromorphic connection [1] A(g) (section 2) admits the (formal) expansion:

A(g) = -\frac{\gamma(g)}{\beta(g)} = \frac{1}{g}\left( A_0 + \sum_{k=1}^{\infty} A_{2k}\, g^{2k} \right)   (5.1)

in odd powers of g, with the first few coefficients given by:

A_0 = \frac{\gamma_0}{\beta_0}   (5.2)

A_2 = \frac{\beta_0\gamma_1 - \beta_1\gamma_0}{\beta_0^2}   (5.3)

A_4 = \frac{\beta_1^2\gamma_0 - \beta_0\beta_2\gamma_0 - \beta_0\beta_1\gamma_1 + \beta_0^2\gamma_2}{\beta_0^3}   (5.4)

In general, by the Poincaré-Dulac theorem [2] (section 5.5), A(g) can be set by a (formal) holomorphic invertible gauge transformation in the canonical resonant form:

A'(g) = \frac{1}{g}\left( \Lambda + N_0 + \sum_{k=1} N_{2k}\, g^{2k} \right)   (5.5)
where:
A 0 = Λ + N 0 (5.6)
is upper triangular, with eigenvalues diag(λ 1 , λ 2 , · · · ) = Λ in nonincreasing order λ 1 ≥ λ 2 ≥ · · · , and nilpotent part, N 0 , in normal Jordan form. The upper triangular nilpotent matrices N 2k satisfy:
g Λ N 2k g −Λ = g 2k N 2k (5.7)
i.e., their only nonzero entries, (N 2k ) ij , are such that:
λ i − λ j = 2k (5.8)
for i < j and k a positive integer. Besides, also:
g^{\Lambda} N_0\, g^{-\Lambda} = N_0   (5.9)

since [\Lambda, N_0] = 0 according to the Jordan normal form of A_0.
Moreover, the sum in eq. (5.5) contains only a finite number of terms, contrary to eq. (5.1). Indeed, the number of differences of the eigenvalues is finite, and therefore, because of eq. (5.7), so it is the number of terms in eq. (5.5).
Eq. (5.8) is the resonance condition for the eigenvalues of the linear system associated to A(g):
DX = \left( \frac{\partial}{\partial g} - A(g) \right) X = 0   (5.10)
whose solution with a suitable initial condition (sections 5.6 and 5.7) is Z(x, µ).
In fact, from the proof (section 5.5) of the Poincarè-Dulac theorem it follows that, once A 0 has been set in Jordan normal form by a global 9 gauge transformation, precisely only the resonant terms in eq. (5.5) may survive after the gauge transformation that sets eq. (5.1) in the aforementioned canonical form.
In this case, the linear system is resonant and consequently the associated operator mixing has been dubbed resonant in [1].
5.2 Nonresonant canonical form of -γ(g)/β(g) by the Poincaré-Dulac theorem

Hence, a sufficient condition [1] for all the resonant terms to be absent in eq. (5.5) is that the eigenvalues of A_0 in nonincreasing order, λ_i ≥ λ_j if i ≤ j, satisfy:

\lambda_i - \lambda_j \neq 2k   (5.11)

with k a positive integer. The advantage [1] of this sufficient condition is that it is easily verified a priori from the only knowledge of the eigenvalues of the ratio γ_0/β_0 = A_0 - a one-loop quantity.
With more effort, we can refine the sufficient condition above into a necessary and sufficient condition: If we set A_0 in the Jordan normal form of eq. (5.6), the necessary and sufficient condition for the linear system in eq. (5.10), with A(g) defined by eq. (5.1), to admit the nonresonant canonical form by a holomorphic gauge transformation:

A'(g) = \frac{\Lambda + N_0}{g}   (5.12)

is that all of the matrix elements (N_{2k})_{ij} in eq. (5.5), with λ_i - λ_j = 2k, vanish. Of course, if λ_i - λ_j = 2k, this may only be verified by constructing iteratively (section 5.5) the canonical form above. Otherwise, if λ_i - λ_j ≠ 2k, no nonvanishing N_k in eq. (5.5) may occur.
In both the cases above, the linear system is nonresonant, and consequently the associated operator mixing has been dubbed nonresonant in [1].
5.3 Classification of operator mixing by the Poincaré-Dulac theorem

Therefore, the Poincaré-Dulac theorem reduces the classification [1] of operator mixing to the four cases [1] that we summarize below.

5.3.1 (I) Nonresonant diagonalizable γ_0/β_0

The linear system is nonresonant and γ_0/β_0 is diagonalizable. For the system to be nonresonant, it is sufficient [1] that the eigenvalues of γ_0/β_0 in nonincreasing order satisfy:

\lambda_i - \lambda_j \neq 2k   (5.13)

with i ≤ j and k a positive integer. For γ_0/β_0 to be diagonalizable, it is sufficient that its eigenvalues are all different. As we have mentioned above, for the system to be nonresonant, the necessary and sufficient condition is that in the canonical form of eq. (5.5) all the resonant terms vanish.
5.3.2 (II) Resonant diagonalizable γ_0/β_0

The linear system is resonant and γ_0/β_0 is diagonalizable. For the system to be resonant, a necessary condition is that, for at least two eigenvalues in nonincreasing order, it holds:

\lambda_i - \lambda_j = 2k   (5.14)
with i < j and k a positive integer. In this case, a necessary and sufficient condition is that, correspondingly, at least one N_{2k} in the canonical resonant form does not vanish.
The sufficient condition for γ_0/β_0 to be diagonalizable is as in the case (I).
5.3.3 (III) Nonresonant nondiagonalizable γ_0/β_0

The linear system is nonresonant and γ_0/β_0 is nondiagonalizable. The nonresonant condition is as in the case (I). The necessary condition for γ_0/β_0 to be nondiagonalizable is that at least two of its eigenvalues coincide.
5.3.4 (IV) Resonant nondiagonalizable γ_0/β_0

The linear system is resonant and γ_0/β_0 is nondiagonalizable. The resonant condition is as in the case (II). The necessary condition for γ_0/β_0 to be nondiagonalizable is as in the case (III).
5.4 A unitarity constraint

A massless QCD-like theory is conformal invariant [37] to the leading^{10}, O(g^0), and next-to-leading^{11}, O(g^2), perturbative order, since the beta function only affects the solution of the Callan-Symanzik equation starting from the order of g^4. We argue that, if we assume that it also is unitary in its free conformal limit at g = 0 in the sector defined by gauge-invariant Hermitian operators, then the corresponding γ_0/β_0 should be diagonalizable. Thus, the aforementioned unitarity rules out the cases (III) and (IV).
The unitarity assumption above is satisfied in a massless QCD-like theory with a compact gauge group and matter satisfying the spin statistics theorem, both in Minkowskian and Euclidean space-time in the Hermitian gauge-invariant sector, as unitary gauges exist where the free limit is certainly unitary for the gluon and matter fields, and the gauge-fixing ghosts decouple in the correlators of gauge-invariant operators.
We demonstrate momentarily the aforementioned link between unitarity and diagonalizability of γ_0/β_0 for scalar operators in the conformal free limit. The analog argument for higher spins will appear in a forthcoming paper [4].
Firstly, if γ_0/β_0 is nondiagonalizable, to the order of g^2 a logarithmic conformal field theory (logCFT) arises, which is known to be nonunitary [38, 39].
Specifically, either in a CFT or logCFT, the 2-point correlators of Euclidean Hermitian scalar primary conformal operators, G conf (x), satisfy the Callan-Symanzik equation:
x · ∂ ∂x G conf (x) + ∆G conf (x) + G conf (x)∆ T = 0 (5.15)
with ∆ the matrix of the conformal dimensions, whose general solution is:
G_{conf}(x) = \langle O(x)\, O(0) \rangle = e^{-\Delta \log\sqrt{x^2\mu^2}}\, G\, e^{-\Delta^T \log\sqrt{x^2\mu^2}}   (5.16)
in matrix notation, where G is a real symmetric matrix independent of space-time and the tensor product between repeated O is understood. If ∆ is diagonalizable, a CFT occurs. Otherwise, if ∆ is nondiagonalizable, a logCFT [38,39] arises.
Moreover, both in a CFT and logCFT, for primary conformal operators, the operators/states correspondence holds [39, 40]:

O(0)|0\rangle = |O_{in}\rangle \qquad \langle O_{out}| = \lim_{x\to\infty} \langle 0|\, e^{2\Delta \log\sqrt{x^2\mu^2}}\, O(x)   (5.17)

As a consequence, the scalar product in matrix notation reads:

\begin{aligned}
\langle O_{out}|O_{in}\rangle &= \lim_{x\to\infty} \langle 0|\, e^{2\Delta \log\sqrt{x^2\mu^2}}\, O(x) O(0)\, |0\rangle \\
&= \lim_{x\to\infty} e^{2\Delta \log\sqrt{x^2\mu^2}}\, e^{-\Delta \log\sqrt{x^2\mu^2}}\, G\, e^{-\Delta^T \log\sqrt{x^2\mu^2}} \\
&= \lim_{x\to\infty} e^{\Delta \log\sqrt{x^2\mu^2}}\, G\, e^{-\Delta^T \log\sqrt{x^2\mu^2}}
\end{aligned}   (5.18)
In order to be well defined, the scalar product in eq. (5.18) must be independent of the variable x^2\mu^2. Expanding the last line of eq. (5.18) in powers of \log\sqrt{x^2\mu^2} we get:

\begin{aligned}
\langle O_{out}|O_{in}\rangle &= \left( I + \Delta \log\sqrt{x^2\mu^2} + \cdots \right) G \left( I - \Delta^T \log\sqrt{x^2\mu^2} + \cdots \right) \\
&= G + \left( \Delta G - G \Delta^T \right) \log\sqrt{x^2\mu^2} + \cdots
\end{aligned}   (5.19)
Then, the independence of the coordinates implies:
∆G − G∆ T = 0 (5.20)
and:
O out |O in = G (5.21)
Besides, in a massless QCD-like theory, because of the existence of the perturbative expansion, it holds to the order of g 2 :
\Delta(g) = D\, I + g^2 \gamma_0 + \cdots \qquad G(g) = G_0 + g^2 G_1 + \cdots   (5.22)

in the conformal renormalization scheme [37], with D the canonical dimension of the operators O. Hence, expanding eq. (5.20) to the order of g^2, we obtain:
γ 0 G 0 − G 0 γ T 0 = 0 (5.23)
Most interestingly, eq. (5.23) constrains G 0 , which arises to the order of g 0 , by means of γ 0 , which arises to the order of g 2 , the reason being the existence of the conformal structure to the order of g 2 . The consequences of eq. (5.23) follow: If γ 0 is diagonalizable, by eq. (5.23) G 0 commutes with γ 0 in the diagonal basis and thus in any basis. Besides, G 0 being a real symmetric matrix, it is diagonalizable as well, and therefore G 0 and γ 0 are simultaneously diagonalizable. This is the CFT case, where unitarity in the conformal free limit at g = 0 requires that G 0 has positive eigenvalues according to eqs. (5.21) and (5.22) specialized to g = 0.
If instead γ 0 is nondiagonalizable, i.e., in the logCFT case, G 0 has necessarily negative eigenvalues, i.e., the theory is nonunitary in its free conformal limit at g = 0 in the gauge-invariant Hermitian sector.
Indeed, if γ 0 is nondiagonalizable, eq. (5.20) nontrivially constrains the structure of G 0 . Firstly, in this case:
G_0 = \begin{pmatrix} g_1 & g_2 & g_3 & \cdots & g_n \\ g_2 & g_3 & \cdots & g_n & 0 \\ g_3 & \cdots & g_n & 0 & 0 \\ \vdots & & & & \vdots \\ g_n & 0 & \cdots & \cdots & 0 \end{pmatrix}   (5.24)
for some real 12 g i in the basis where γ 0 has the canonical Jordan form:
γ 0 = γ 0D I + N 0 (5.25)
with γ 0D the eigenvalue of the Jordan block and N 0 nilpotent and upper diagonal with all the nonvanishing entries equal to 1. Then, eq. (5.23) reads:
N ia G 0aj = G 0ia N T aj (5.26)
with:
N_{0ij} = \begin{cases} \delta_{i\,j-1} & i = 1, \cdots, n;\; j = 2, \cdots, n \\ 0 & i = 1, \cdots, n;\; j = 1 \end{cases}   (5.27)

and:

N^T_{0ij} = \begin{cases} \delta_{i-1\,j} & i = 2, \cdots, n;\; j = 1, \cdots, n \\ 0 & i = 1;\; j = 1, \cdots, n \end{cases}   (5.28)

Therefore, eq. (5.26) implies:

\begin{aligned} G_{0\,i+1\,j} &= G_{0\,i\,j+1} \qquad\quad\;\; i, j = 1, \cdots, n-1 \\ G_{0\,i+1\,n} &= G_{0\,n\,j+1} = 0 \qquad i, j = 1, \cdots, n-1 \end{aligned}   (5.29)

that fixes the form of G_0 in eq. (5.24).
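The constraint above is easily verified symbolically. The following sketch - our illustration, for n = 3 - checks that the Hankel-type G_0 of eq. (5.24) satisfies eq. (5.23) for a nondiagonalizable γ_0 in the Jordan form of eq. (5.25).

    import sympy as sp

    g1, g2, g3, gamma0D = sp.symbols('g1 g2 g3 gamma_0D')
    # G_0 of eq. (5.24) for n = 3: constant antidiagonals, lower-right zeros
    G0 = sp.Matrix([[g1, g2, g3], [g2, g3, 0], [g3, 0, 0]])
    # gamma_0 of eq. (5.25): a single Jordan block
    N0 = sp.Matrix([[0, 1, 0], [0, 0, 1], [0, 0, 0]])
    gamma0 = gamma0D*sp.eye(3) + N0
    # eq. (5.23): gamma_0 G_0 - G_0 gamma_0^T = 0
    assert sp.simplify(gamma0*G0 - G0*gamma0.T) == sp.zeros(3, 3)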
that fixes the form of G 0 in eq. (5.32). Moreover, by a constant gauge transformation S of the form:
S = \begin{pmatrix} s_0 & s_1 & s_2 & \cdots & s_{n-1} \\ 0 & s_0 & s_1 & \cdots & s_{n-2} \\ 0 & 0 & s_0 & \cdots & s_{n-3} \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & s_0 \end{pmatrix}   (5.30)
12 Not all g i may vanish, otherwise the correlator would vanish in the free conformal limit.
that commutes with N 0 , G 0 transforms as [4]:
G ′ 0 = SG 0 S T (5.31)
and may be set in the canonical form [39]:

G'_0 = \begin{pmatrix} 0 & 0 & \cdots & 0 & 1 \\ 0 & 0 & \cdots & 1 & 0 \\ \vdots & & & & \vdots \\ 0 & 1 & \cdots & 0 & 0 \\ 1 & 0 & \cdots & 0 & 0 \end{pmatrix}   (5.32)
It turns out [39] that G ′ 0 has [r/2] positive eigenvalues and [r/2] negative eigenvalues, with r the rank of G ′ 0 . To prove the preceding statement, we observe that:
G ′2 0 = I (5.33)
Indeed, G'_{0ij} = \delta_{i,\,n-j+1}, and as a consequence:

G'^{\,2}_{0ij} = G'_{0ia} G'_{0aj} = \delta_{i,\,n-a+1}\,\delta_{a,\,n-j+1} = \delta_{ij}   (5.34)
Hence, the eigenvalues of G ′ 0 are ±1. Moreover, the trace of G ′ 0 is either 0 or 1, depending on whether r is even or odd respectively.
As a consequence, since the trace of a matrix is the sum of its eigenvalues, if r is even, G'_0 has r/2 positive eigenvalues and r/2 negative eigenvalues; otherwise, if r is odd, G'_0 has [r/2] + 1 positive eigenvalues and [r/2] negative eigenvalues. Summarizing, the key point of the argument above is that the nondiagonalizability of γ_0 and the existence of the conformal structure to the order of g^2 determine the structure of G_0 that controls the scalar product in the free conformal limit, in such a way that the free conformal limit is nonunitary if γ_0 is nondiagonalizable.
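The eigenvalue count above can be illustrated numerically. The sketch below - added here as an illustration - builds the exchange matrix of eq. (5.32) and confirms that its eigenvalues are ±1, with one extra +1 when n is odd.

    import numpy as np

    for n in (4, 5):
        Gp = np.fliplr(np.eye(n))       # (G'_0)_{ij} = delta_{i, n-j+1}, eq. (5.32)
        evals = np.linalg.eigvalsh(Gp)  # G'_0 is real symmetric
        assert np.allclose(np.abs(evals), 1.0)   # eigenvalues are +-1, by eq. (5.33)
        assert round(evals.sum()) == n % 2       # trace: 0 for n even, 1 for n odd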
Finally, we may extend the perturbative argument above - about the existence of the scalar product to the order of g^2 - to all orders of perturbation theory, by considering a massless QCD-like theory at its conformal Wilson-Fisher fixed point g_*, with β(g_*, ε) = -g_* ε + β(g_*) = 0, introduced in [41, 42] to perform higher-loop computations in dimensional regularization - in d = 4 - 2ε dimensions - of the anomalous-dimension matrices in massless QCD.
Indeed, the anomalous-dimension matrix γ(g * ) at the fixed point has the same coefficients [41,42] -as a series in g * -as the anomalous-dimension matrix γ(g) -as a series in g -and specifically the same γ 0 . Moreover, since the theory is conformal to all perturbative orders at the fixed point, the associated scalar product exists to all orders in g * .
Either way, the perturbative conformal symmetry to the order of g 2 or the conformal symmetry to all orders at the aforementioned Wilson-Fisher fixed point, and the lowest-order unitarity, rule out the cases (III) and (IV) of operator mixing in the gauge-invariant Hermitian sector of a massless QCD-like theory. The statement above does not necessarily apply to operators outside the gauge-invariant sector, whose correlators may be affected by the mixing with the ghost sector, which need not to be unitary.
5.5 A condensed proof of the Poincaré-Dulac theorem

We provide a condensed proof of (the linear version of) the Poincaré-Dulac theorem following [2].

Poincaré-Dulac theorem:
The most general linear system with a Fuchsian singularity at g = 0, where the meromorphic connection A(g) admits the (formal) expansion:
A(g) = \frac{1}{g}\left( A_0 + \sum_{n=1}^{\infty} A_n\, g^n \right)   (5.35)

may be set, by a (formal) holomorphic invertible gauge transformation, in the Poincaré-Dulac-Levelt normal form^{13}:

A'(g) = \frac{1}{g}\left( \Lambda + N_0 + \sum_{k=1} N_k\, g^k \right)   (5.36)
where Λ + N 0 is the Jordan normal form of A 0 , its eigenvalues diag(λ 1 , λ 2 · · · ) = Λ are in nonincreasing order λ 1 ≥ λ 2 ≥ · · · , N 0 is nilpotent and upper triangular, and the nilpotent upper triangular matrices N k satisfy:
g Λ N k g −Λ = g k N k (5.37)
for k = 1, 2, · · · , i.e., the only possibly nonvanishing matrix elements, (N k ) ij , of the N k are associated to the resonant eigenvalues:
λ i − λ j = k (5.38)
with i < j and k a positive integer. Incidentally, also g Λ N 0 g −Λ = N 0 , since [N 0 , Λ] = 0 by the Jordan normal form of A 0 .
Of course, if either the eigenvalues are nonresonant or the resonant matrix coefficients N_k - associated to the resonant eigenvalues - vanish, the linear system collapses [2] into the Euler form^{14}:

A'(g) = \frac{1}{g}\left( \Lambda + N_0 \right)   (5.39)

13 In the present paper, we refer to it as the resonant canonical form.
14 In the present paper, we refer to it as the nonresonant canonical form.
We only report the key aspects of the proof, leaving more details to [2]. Proof :
The demonstration proceeds by induction on k = 1, 2, · · · by proving that, once A 0 and the first k −1 matrix coefficients, A 1 , · · · , A k−1 , have been set in the Poincarè-Dulac-Levelt normal form above, a holomorphic gauge transformation exists that leaves them invariant and also puts the k-th coefficient, A k , in normal form.
The step 0 of the induction consists just in putting A_0 in Jordan normal form - with eigenvalues in nonincreasing order and N_0 upper triangular, as in the statement of the theorem - by a global (i.e., constant) gauge transformation.
At the k-th step, we choose the holomorphic gauge transformation of the form:
S k (g) = 1 + g k H k (5.40)
with H k a matrix to be found momentarily. Its inverse is:
S −1 k (g) = (1 + g k H k ) −1 = 1 − g k H k + · · · (5.41)
where the dots represent terms of order higher than g k . The gauge action of S k (g) on the connection A(g) furnishes:
\begin{aligned}
A'(g) &= k g^{k-1} H_k (1 + g^k H_k)^{-1} + (1 + g^k H_k)\, A(g)\, (1 + g^k H_k)^{-1} \\
&= k g^{k-1} H_k (1 + g^k H_k)^{-1} + (1 + g^k H_k) \frac{1}{g}\left( A_0 + \sum_{n=1}^{\infty} A_n g^n \right) (1 + g^k H_k)^{-1} \\
&= k g^{k-1} H_k (1 - \cdots) + (1 + g^k H_k) \frac{1}{g}\left( A_0 + \sum_{n=1}^{\infty} A_n g^n \right) (1 - g^k H_k + \cdots) \\
&= k g^{k-1} H_k + \frac{1}{g}\left( A_0 + \sum_{n=1}^{k} A_n g^n \right) + g^{k-1}\left( H_k A_0 - A_0 H_k \right) + \cdots \\
&= g^{k-1}\left( k H_k + H_k A_0 - A_0 H_k \right) + A^{(k-1)}(g) + g^{k-1} A_k + \cdots
\end{aligned}   (5.42)

where we have skipped in the dots all the terms that contribute to an order higher than g^{k-1}, and we have put:

A^{(k-1)}(g) = \frac{1}{g}\left( A_0 + \sum_{n=1}^{k-1} A_n g^n \right)   (5.43)
that is the part of A(g) that is not affected by the gauge transformation S k (g), and thus it verifies the hypotheses of the induction. Therefore, by eq. (5.42) the k-th matrix coefficient, A k , may be eliminated from the expansion of A ′ (g) to the order of g k−1 provided that an H k exists such that:
A k + (kH k + H k A 0 − A 0 H k ) = A k + (k − ad A 0 )H k = 0 (5.44) with ad A 0 Y = [A 0 , Y ].
If the inverse of ad A 0 − k exists, the unique solution for H k is:
H k = (ad A 0 − k) −1 A k (5.45)
Therefore, the only matrix coefficients that may not be removed from the expansion of A ′ (g) at the k-th step of the induction belong to the subspace where ad A 0 − k is not invertible. Hence, we should demonstrate that, for k positive, the elements Y k of the aforementioned subspace satisfy the condition in eq. (5.37) for N k , according to the statement of the theorem.
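For a diagonalizable A_0 the inversion in eq. (5.45) is entrywise, as the argument below makes explicit. The following sketch - our illustration, for a generic 2×2 nonresonant case - constructs H_k and checks eq. (5.44).

    import sympy as sp

    l1, l2, k = sp.symbols('lambda_1 lambda_2 k')
    a11, a12, a21, a22 = sp.symbols('a11 a12 a21 a22')
    A0 = sp.diag(l1, l2)
    Ak = sp.Matrix([[a11, a12], [a21, a22]])
    # ad_{A0} acts diagonally on the basis E_ij with eigenvalue lambda_i - lambda_j,
    # so (ad_{A0} - k)^{-1} divides the (i, j) entry by lambda_i - lambda_j - k
    # (well defined in the nonresonant case lambda_i - lambda_j != k).
    Hk = sp.Matrix(2, 2, lambda i, j: Ak[i, j] / (A0[i, i] - A0[j, j] - k))
    # eq. (5.44): A_k + (k - ad_{A0}) H_k = 0, with ad_{A0} H = A0 H - H A0
    assert sp.simplify(Ak + k*Hk + Hk*A0 - A0*Hk) == sp.zeros(2, 2)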
To understand what is going on, it is convenient to suppose initially [1] that N 0 = 0, i.e., that A 0 is diagonalizable.
In this case, ad A 0 − k = ad Λ − k, as a linear operator that acts on matrices, is diagonal with eigenvalues λ i − λ j − k and the matrices E ij , whose only nonvanishing entries are (E ij ) ij , as eigenvectors. Moreover, ad Λ − k is invertible if and only if its kernel only contains the zero matrix.
The eigenvectors E ij , normalized in such a way that (E ij ) ij = 1, form an orthonormal basis for the matrices:
\langle E_{ij}|E_{i'j'}\rangle = \delta_{ii'}\delta_{jj'}, with \langle A|B\rangle = Tr(\bar{A}B) and \bar{A} the adjoint of the matrix A.
Thus, E ij belongs to the kernel of ad Λ − k if and only if λ i − λ j − k = 0 and i < j, as k is a positive integer.
As a consequence, the E ij in the kernel satisfy eq. (5.37), according to the statement of the theorem:
g Λ E ij g −Λ = g λ i −λ j E ij = g k E ij (5.46)
Now we suppose that N 0 does not vanish, i.e., A 0 is nondiagonalizable. Hence, A 0 admits a canonical Jordan form as in the statement of the theorem.
The key point is that now ad A 0 − k, as a linear operator that acts on matrices, is lower triangular for the following ordering of the matrix basis.
We may choose an increasing sequence, diag(q_1, q_2, \cdots) = Q, of rationally independent weights, q_i, [2] in such a way that the corresponding weight for E_{ij} is q_j - q_i - computed via g^{-Q} E_{ij}\, g^{Q} = g^{\,q_j - q_i} E_{ij}.
Thus, we may order our basis in such a way that the sequence of basis vectors E l with l = 1, 2, · · · coincides with the following sequence of the E ij ordered with nondecreasing weights: The E ij for i = j with strictly increasing weights, and the E ii -that have weight 0 -with i increasing.
The action of ad Λ on the basis leaves the weights of the E ij with i = j invariant, and sends to zero the E ii , in such a way that the action of ad Λ is diagonal on the entire basis.
Instead, the action of ad N 0 on the entire basis produces a linear combination of terms with strictly increased weights, since N 0 is upper triangular and, therefore, it is the sum of terms with positive weights, and for each of these terms the commutator with any E ij strictly increases the weights.
Moreover:
(ad_{\Lambda+N_0} - k)\, E_l = E_h \langle E_h | (ad_{\Lambda+N_0} - k) E_l \rangle = E_h\, (ad_{\Lambda+N_0} - k)_{hl}   (5.47)
where the sum on the index h is understood. Hence, with this ordering of the basis, the matrix:
(ad_{\Lambda+N_0} - k)_{hl} = \langle E_h | (ad_{\Lambda+N_0} - k) E_l \rangle   (5.48)
is lower triangular [2] and its eigenvalues coincide with the eigenvalues of ad Λ − k. Now ad Λ+N 0 − k is not invertible if and only if at least one of its eigenvalues vanishes. But its eigenvalues coincide with the eigenvalues of ad Λ − k. Therefore, ad Λ+N 0 − k is invertible on the orthogonal complement of the kernel of ad Λ − k, as it is for ad Λ − k.
Hence, every matrix coefficient A k orthogonal to the kernel of ad Λ − k may be removed from A ′ (g), as in the diagonalizable case with N 0 = 0 above.
Obviously, the resonant matrix coefficients N k are finite in number, because there are only a finite number of differences of the eigenvalues.
As a consequence, from a certain point on, all the remaining terms in the expansion of A'(g) may be removed, because they belong to the orthogonal complement of the kernel of ad_{A_0} - k, and the proof is complete.
5.6 Fundamental solution of the linear system
A fundamental solution [2] of the linear system in eq. (5.10) in the canonical resonant form of eq. (5.36) is:
X(g) = g Λ g N (5.49)
with N = N_0 + \sum_{k=1} N_k, as we verify by direct computation [2]:

\begin{aligned}
\frac{\partial X(g)}{\partial g}\, X^{-1}(g) &= g^{\Lambda}\, \frac{\Lambda + N}{g}\, g^{N} g^{-N} g^{-\Lambda} = g^{\Lambda}\, \frac{\Lambda + N}{g}\, g^{-\Lambda} = \frac{\Lambda + g^{\Lambda} N g^{-\Lambda}}{g} \\
&= \frac{\Lambda + N_0 + \sum_{k=1} N_k\, g^k}{g} = A'(g)
\end{aligned}   (5.50)
Moreover, X(g) may be computed in a closed form, since the expansion of g N in powers of log g terminates because N is nilpotent.
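As an explicit illustration - ours, for a single resonant 2×2 block with k = 1 and N_0 = 0 - the sketch below verifies eq. (5.50), i.e., that X(g) = g^Λ g^N solves the linear system in the canonical resonant form.

    import sympy as sp

    g, lam, nu = sp.symbols('g lambda nu', positive=True)
    Lam = sp.diag(lam + 2, lam)         # resonant eigenvalues: lambda_1 - lambda_2 = 2
    N = sp.Matrix([[0, nu], [0, 0]])    # N**2 = 0, hence g**N = I + N*log(g)
    X = sp.diag(g**(lam + 2), g**lam) * (sp.eye(2) + N*sp.log(g))
    Aprime = (Lam + N*g**2) / g         # canonical form of eq. (5.36) with N_2 = N
    assert sp.simplify(X.diff(g) - Aprime*X) == sp.zeros(2, 2)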
Correspondingly, the solution X(g)X −1 (g 0 ) of eq. (5.10) in the canonical form of eq. (5.36) that reduces to the identity at g = g 0 may be computed in a closed form as well:
\begin{aligned}
X(g) X^{-1}(g_0) &= g^{\Lambda} g^{N} g_0^{-N} g_0^{-\Lambda} = g^{\Lambda} \left( \frac{g}{g_0} \right)^{N} g_0^{-\Lambda} = \left( \frac{g}{g_0} \right)^{\Lambda} g_0^{\Lambda} \left( \frac{g}{g_0} \right)^{N} g_0^{-\Lambda} \\
&= \left( \frac{g}{g_0} \right)^{\Lambda} g_0^{\Lambda}\, e^{N \log\frac{g}{g_0}}\, g_0^{-\Lambda} = \left( \frac{g}{g_0} \right)^{\Lambda} e^{\,g_0^{\Lambda} N g_0^{-\Lambda} \log\frac{g}{g_0}} = \left( \frac{g}{g_0} \right)^{\Lambda} e^{\,\sum_{k=0} g_0^{k} N_k \log\frac{g}{g_0}}
\end{aligned}   (5.51)
5.7 Solution for Z(x, µ)
Therefore, the solution of eq. (5.10) in the canonical resonant form of eq. (5.5) that reduces to the identity for g(x) = g(µ) in a massless QCD-like theory is:
\begin{aligned}
Z(x, \mu) &= g^{\Lambda}(\mu)\, g^{N}(\mu)\, g^{-N}(x)\, g^{-\Lambda}(x) = g^{\Lambda}(\mu) \left( \frac{g(\mu)}{g(x)} \right)^{N} g^{-\Lambda}(x) \\
&= \left( \frac{g(\mu)}{g(x)} \right)^{\Lambda} g^{\Lambda}(x) \left( \frac{g(\mu)}{g(x)} \right)^{N} g^{-\Lambda}(x) = \left( \frac{g(\mu)}{g(x)} \right)^{\Lambda} g^{\Lambda}(x)\, e^{N \log\frac{g(\mu)}{g(x)}}\, g^{-\Lambda}(x) \\
&= \left( \frac{g(\mu)}{g(x)} \right)^{\Lambda} e^{\,g^{\Lambda}(x) N g^{-\Lambda}(x) \log\frac{g(\mu)}{g(x)}} = \left( \frac{g(\mu)}{g(x)} \right)^{\Lambda} e^{\,\sum_{k=0} g^{2k}(x) N_{2k} \log\frac{g(\mu)}{g(x)}}
\end{aligned}   (5.52)
where [1] g(µ) and g(x) are short notations for the running coupling at the corresponding scales, g( µ Λ RGI ) and g(xΛ RGI ), and:
g^2(x\Lambda_{RGI}) \sim \frac{1}{\beta_0 \log\left(\frac{1}{x^2\Lambda_{RGI}^2}\right)} \left( 1 - \frac{\beta_1}{\beta_0^2}\, \frac{\log\log\left(\frac{1}{x^2\Lambda_{RGI}^2}\right)}{\log\left(\frac{1}{x^2\Lambda_{RGI}^2}\right)} \right)   (5.53)
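Eq. (5.53) translates directly into code. The following sketch - our illustration; the function name and its arguments are our own - evaluates the two-loop UV asymptotics of the running coupling.

    import math

    def g_squared(x, Lambda_RGI, beta0, beta1):
        """Two-loop UV asymptotics of g^2(x Lambda_RGI), eq. (5.53); valid
        for x^2 Lambda_RGI^2 << 1, where the log is large and positive."""
        t = math.log(1.0 / (x**2 * Lambda_RGI**2))
        return (1.0 - (beta1 / beta0**2) * math.log(t) / t) / (beta0 * t)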
5.8 UV asymptotics of Z(x, µ)

In the cases (II), (III) and (IV), as the canonical form of Z(x, µ) is nondiagonal, its UV asymptotics is intrinsically different from the diagonal case (I).
Indeed, by eq. (5.52) Z(x, µ) may be factorized into the exponential of a linear combination of upper triangular nilpotent matrices with coefficients that asymptotically in the UV are powers of logs of the running coupling, i.e., powers of loglogs of the coordinates, and a diagonal matrix as in the nonresonant diagonal case (I):
\begin{aligned}
Z(x, \mu) &= \left( \frac{g(\mu)}{g(x)} \right)^{\Lambda} e^{\,\sum_{k=0} g^{2k}(x) N_{2k} \log\frac{g(\mu)}{g(x)}} \\
&= \left( \frac{g(\mu)}{g(x)} \right)^{\Lambda} e^{\,\sum_{k=0} g^{2k}(x) N_{2k} \log\frac{g(\mu)}{g(x)}} \left( \frac{g(\mu)}{g(x)} \right)^{-\Lambda} \left( \frac{g(\mu)}{g(x)} \right)^{\Lambda} \\
&= e^{\,\sum_{k=0} g^{2k}(\mu) N_{2k} \log\frac{g(\mu)}{g(x)}} \left( \frac{g(\mu)}{g(x)} \right)^{\Lambda}
\end{aligned}   (5.54)
This is the closest analog of logCFTs that may occur in asymptotically free theories. Moreover, some subtleties arise in computing the UV asymptotics of Z(x, µ), since it follows from eq. (5.54) that the factorization of Z(x, µ) actually depends on the order of the factors, in such a way that:
\left( \frac{g(\mu)}{g(x)} \right)^{-\Lambda} Z(x, \mu) = e^{\,\sum_{k=0} g^{2k}(x) N_{2k} \log\frac{g(\mu)}{g(x)}}   (5.55)

but:

Z(x, \mu) \left( \frac{g(\mu)}{g(x)} \right)^{-\Lambda} = e^{\,\sum_{k=0} g^{2k}(\mu) N_{2k} \log\frac{g(\mu)}{g(x)}}   (5.56)

Therefore, the limit for x → 0 of eqs. (5.55) and (5.56) in general does not coincide. Specifically, in the case (II), i.e., for N_0 = 0, the limit is the identity I for eq. (5.55), but it is not finite for eq. (5.56).

6 Three examples for the mixing of two operators revisited by the Poincaré-Dulac theorem

For completeness, we verify that the elementary computation in section 4 coincides with the solution of the linear system in canonical form according to the Poincaré-Dulac theorem.

6.1 Nonresonant diagonalizable γ_0/β_0

N_0 = 0, because γ_0/β_0 is diagonal. Moreover:

\Lambda = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}   (6.1)

and:

N_{2k} = 0   (6.2)

for k = 1, 2, \cdots, because the system is nonresonant. Therefore, by eq. (5.52) we obtain:
Z(x, \mu) = \left( \frac{g(\mu)}{g(x)} \right)^{\Lambda} = \begin{pmatrix} \left(\frac{g(\mu)}{g(x)}\right)^{\lambda_1} & 0 \\ 0 & \left(\frac{g(\mu)}{g(x)}\right)^{\lambda_2} \end{pmatrix}   (6.3)
that matches eq. (4.20) up to a holomorphic gauge transformation.
6.2 Resonant diagonalizable γ_0/β_0

N_0 = 0, because γ_0/β_0 is diagonal. Moreover:

\Lambda = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}   (6.4)
and:
N = N_{2k} = \begin{pmatrix} 0 & \nu_{12} \\ 0 & 0 \end{pmatrix}   (6.5)
with λ 1 − λ 2 = 2k, since the system is resonant. Therefore, by eq. (5.52) we obtain:
\begin{aligned}
Z(x, \mu) &= \begin{pmatrix} \left(\frac{g(\mu)}{g(x)}\right)^{\lambda_1} & 0 \\ 0 & \left(\frac{g(\mu)}{g(x)}\right)^{\lambda_2} \end{pmatrix} \left[ I + \begin{pmatrix} 0 & \nu_{12}\, g^{2k}(x) \log\frac{g(\mu)}{g(x)} \\ 0 & 0 \end{pmatrix} \right] \\
&= \begin{pmatrix} \left(\frac{g(\mu)}{g(x)}\right)^{\lambda_1} & \nu_{12}\, g^{2k}(x) \left(\frac{g(\mu)}{g(x)}\right)^{\lambda_1} \log\frac{g(\mu)}{g(x)} \\ 0 & \left(\frac{g(\mu)}{g(x)}\right)^{\lambda_2} \end{pmatrix}
\end{aligned}   (6.6)
that matches eq. (4.24).
6.3 Nonresonant nondiagonalizable γ_0/β_0

γ_0/β_0 is not diagonalizable. Hence:

\Lambda = \begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}   (6.7)

and:

N = N_0 = \begin{pmatrix} 0 & \nu_{12} \\ 0 & 0 \end{pmatrix}   (6.8)
Therefore, by eq. (5.52) we obtain:
\begin{aligned}
Z(x, \mu) &= \begin{pmatrix} \left(\frac{g(\mu)}{g(x)}\right)^{\lambda} & 0 \\ 0 & \left(\frac{g(\mu)}{g(x)}\right)^{\lambda} \end{pmatrix} \left[ I + \begin{pmatrix} 0 & \nu_{12} \log\frac{g(\mu)}{g(x)} \\ 0 & 0 \end{pmatrix} \right] \\
&= \begin{pmatrix} \left(\frac{g(\mu)}{g(x)}\right)^{\lambda} & \nu_{12} \left(\frac{g(\mu)}{g(x)}\right)^{\lambda} \log\frac{g(\mu)}{g(x)} \\ 0 & \left(\frac{g(\mu)}{g(x)}\right)^{\lambda} \end{pmatrix}
\end{aligned}   (6.9)
that matches eq. (4.25).
7 A physical realization of the cases (II) and (I)

7.1 Flavor-changing four-quark operators in SU(N) QCD with N_f = N flavors of quarks

We work out a physical realization of the case (II): The operator mixing of flavor-changing four-quark operators computed in [7]. Specifically, we consider the two sets of operators [7]:
Q_1^{VLR} = \left( \bar{s}_\alpha \gamma_\mu P_L d_\beta \right) \left( \bar{u}_\beta \gamma^\mu P_R c_\alpha \right) \qquad Q_2^{VLR} = \left( \bar{s}_\alpha \gamma_\mu P_L d_\alpha \right) \left( \bar{u}_\beta \gamma^\mu P_R c_\beta \right)   (7.1)
and:
Q_1^{SLR} = \left( \bar{s}_\alpha P_L d_\beta \right) \left( \bar{u}_\beta P_R c_\alpha \right) \qquad Q_2^{SLR} = \left( \bar{s}_\alpha P_L d_\alpha \right) \left( \bar{u}_\beta P_R c_\beta \right)   (7.2)

where \bar{s}, d, \bar{u} and c are quark operators, and P_{L,R} = \frac{1}{2}(1 \mp \gamma_5). In [7] the anomalous-dimension matrices have been computed to the order of g^4:

\gamma(g) = g^2 \gamma_0 + g^4 \gamma_1 + \cdots   (7.3)
where γ 0 and γ 1 are:
\gamma_0^{VLR} = \frac{1}{(4\pi)^2} \begin{pmatrix} -6 + \frac{6}{N^2} & 0 \\ -\frac{6}{N} & \frac{6}{N^2} \end{pmatrix}

\gamma_1^{VLR} = \frac{1}{(4\pi)^4} \begin{pmatrix} -\frac{203}{6}N + \frac{479}{6N} + \frac{15}{2N^3} + \frac{10}{3}N_f - \frac{22}{3N^2}N_f & -\frac{71}{2} - \frac{18}{N^2} + \frac{4N_f}{N} \\ -\frac{100}{3} + \frac{3}{N^2} + \frac{22N_f}{3N} & \frac{137}{6N} + \frac{15}{2N^3} - \frac{22}{3N^2}N_f \end{pmatrix}   (7.4)
for the VLR operators, and:
\gamma_0^{SLR} = \frac{1}{(4\pi)^2} \begin{pmatrix} \frac{6}{N^2} & -\frac{6}{N} \\ 0 & -6 + \frac{6}{N^2} \end{pmatrix}

\gamma_1^{SLR} = \frac{1}{(4\pi)^4} \begin{pmatrix} \frac{137}{6N} + \frac{15}{2N^3} - \frac{22}{3N^2}N_f & -\frac{100}{3} + \frac{3}{N^2} + \frac{22N_f}{3N} \\ -\frac{71}{2} - \frac{18}{N^2} + \frac{4N_f}{N} & -\frac{203}{6}N + \frac{479}{6N} + \frac{15}{2N^3} + \frac{10}{3}N_f - \frac{22}{3N^2}N_f \end{pmatrix}   (7.5)

for the SLR operators, with N and N_f the number of colors and flavors respectively. The eigenvalues of \gamma_0^{VLR} in nonincreasing order are:

\lambda_1^{VLR} = \frac{1}{(4\pi)^2}\, \frac{6}{N^2} \qquad \lambda_2^{VLR} = \frac{1}{(4\pi)^2}\, 6\left( -1 + \frac{1}{N^2} \right)   (7.6)

that coincide with the eigenvalues of \gamma_0^{SLR}:

\lambda_1^{SLR} = \frac{1}{(4\pi)^2}\, \frac{6}{N^2} \qquad \lambda_2^{SLR} = \frac{1}{(4\pi)^2}\, 6\left( -1 + \frac{1}{N^2} \right)   (7.7)
We set N_f = N, in such a way that β_0 and β_1 read respectively:

\beta_0 = \frac{1}{(4\pi)^2}\left( \frac{11}{3} - \frac{2}{3}\frac{N_f}{N} \right) = \frac{3}{(4\pi)^2} \qquad \beta_1 = \frac{1}{(4\pi)^4}\left( \frac{34}{3} - \frac{13}{3}\frac{N_f}{N} + \frac{N_f}{N^3} \right) = \frac{1}{(4\pi)^4}\left( 7 + \frac{1}{N^2} \right)   (7.8)

Therefore, the differences of the eigenvalues satisfy the resonant condition in eq. (4.14) with k = 1:

\frac{\lambda_1^{VLR}}{\beta_0} - \frac{\lambda_2^{VLR}}{\beta_0} = 2   (7.9)

\frac{\lambda_1^{SLR}}{\beta_0} - \frac{\lambda_2^{SLR}}{\beta_0} = 2   (7.10)
thus realizing in a physical theory the case (II). We construct the holomorphic gauge transformations that set the corresponding connections:

-\frac{\gamma^{VLR}(g)}{\beta(g)} = \frac{1}{g}\, \frac{\gamma_{0D}^{VLR}}{\beta_0} + g^2\, \frac{\beta_0 \gamma_{1D}^{VLR} - \beta_1 \gamma_{0D}^{VLR}}{\beta_0^2} + \cdots   (7.11)

and:

-\frac{\gamma^{SLR}(g)}{\beta(g)} = \frac{1}{g}\, \frac{\gamma_{0D}^{SLR}}{\beta_0} + g^2\, \frac{\beta_0 \gamma_{1D}^{SLR} - \beta_1 \gamma_{0D}^{SLR}}{\beta_0^2} + \cdots   (7.12)

in the Poincaré-Dulac-Levelt normal form. Firstly, we set \gamma_0^{VLR} and \gamma_0^{SLR} in diagonal form by the global gauge transformations:

S_0^{VLR} = \begin{pmatrix} -\frac{1}{N} & 1 \\ \frac{1}{N} & 0 \end{pmatrix}   (7.13)

and:

S_0^{SLR} = \begin{pmatrix} 1 & -\frac{1}{N} \\ 0 & 1 \end{pmatrix}   (7.14)
respectively. Correspondingly:
\gamma_{0D}^{VLR} = \frac{1}{(4\pi)^2} \begin{pmatrix} \frac{6}{N^2} & 0 \\ 0 & \frac{6}{N^2} - 6 \end{pmatrix}

\gamma_{1D}^{VLR} = \frac{1}{(4\pi)^4} \begin{pmatrix} \frac{51}{2N^3} + \frac{47}{N} & \frac{18}{N^3} + \frac{9N}{2} - \frac{45}{2N} \\ -\frac{18}{N^3} - \frac{63}{2N} & \frac{41}{N} - \frac{21}{2N^3} - \frac{61N}{2} \end{pmatrix}   (7.15)
and:
\gamma_{0D}^{SLR} = \frac{1}{(4\pi)^2} \begin{pmatrix} \frac{6}{N^2} & 0 \\ 0 & \frac{6}{N^2} - 6 \end{pmatrix}

\gamma_{1D}^{SLR} = \frac{1}{(4\pi)^4} \begin{pmatrix} \frac{51}{2N^3} + \frac{47}{N} & \frac{18}{N^4} - \frac{45}{2N^2} + \frac{9}{2} \\ -\frac{18}{N^2} - \frac{63}{2} & \frac{41}{N} - \frac{21}{2N^3} - \frac{61N}{2} \end{pmatrix}   (7.16)
Secondly, we choose the gauge transformations in eq. (5.40) for k = 2:

S_2^{VLR}(g) = I + g^2 H_2^{VLR} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + g^2 \begin{pmatrix} a_{11}^{VLR} & 0 \\ a_{21}^{VLR} & a_{22}^{VLR} \end{pmatrix}   (7.17)

and:

S_2^{SLR}(g) = I + g^2 H_2^{SLR} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + g^2 \begin{pmatrix} a_{11}^{SLR} & 0 \\ a_{21}^{SLR} & a_{22}^{SLR} \end{pmatrix}   (7.18)

respectively, by requiring that the only terms that do not vanish in the gauge-transformed eqs. (7.11) and (7.12) are the resonant ones:

A_2^{VLR} + \left( 2I - ad_{A_0^{VLR}} \right) H_2^{VLR} = \frac{1}{(4\pi)^2} \begin{pmatrix} 0 & \frac{6}{N^3} + \frac{3N}{2} - \frac{15}{2N} \\ 0 & 0 \end{pmatrix}   (7.19)

and:

A_2^{SLR} + \left( 2I - ad_{A_0^{SLR}} \right) H_2^{SLR} = \frac{1}{(4\pi)^2} \begin{pmatrix} 0 & \frac{6}{N^4} - \frac{15}{2N^2} + \frac{3}{2} \\ 0 & 0 \end{pmatrix}   (7.20)
respectively, where:
A_0^{VLR} = \frac{\gamma_{0D}^{VLR}}{\beta_0} = \begin{pmatrix} \frac{2}{N^2} & 0 \\ 0 & \frac{2}{N^2} - 2 \end{pmatrix}

A_2^{VLR} = \frac{\beta_0 \gamma_{1D}^{VLR} - \beta_1 \gamma_{0D}^{VLR}}{\beta_0^2} = \frac{1}{(4\pi)^2} \begin{pmatrix} -\frac{2}{3N^4} + \frac{17}{2N^3} - \frac{14}{3N^2} + \frac{47}{3N} & \frac{6}{N^3} + \frac{3N}{2} - \frac{15}{2N} \\ -\frac{6}{N^3} - \frac{21}{2N} & -\frac{2}{3N^4} - \frac{7}{2N^3} - \frac{4}{N^2} - \frac{61N}{6} + \frac{41}{3N} + \frac{14}{3} \end{pmatrix}   (7.21)

and:

A_0^{SLR} = \frac{\gamma_{0D}^{SLR}}{\beta_0} = \begin{pmatrix} \frac{2}{N^2} & 0 \\ 0 & \frac{2}{N^2} - 2 \end{pmatrix}

A_2^{SLR} = \frac{\beta_0 \gamma_{1D}^{SLR} - \beta_1 \gamma_{0D}^{SLR}}{\beta_0^2} = \frac{1}{(4\pi)^2} \begin{pmatrix} -\frac{2}{3N^4} + \frac{17}{2N^3} - \frac{14}{3N^2} + \frac{47}{3N} & \frac{6}{N^4} - \frac{15}{2N^2} + \frac{3}{2} \\ -\frac{6}{N^2} - \frac{21}{2} & -\frac{2}{3N^4} - \frac{7}{2N^3} - \frac{4}{N^2} - \frac{61N}{6} + \frac{41}{3N} + \frac{14}{3} \end{pmatrix}   (7.22)
Therefore:

A_2^{VLR} + \left( 2I - ad_{A_0^{VLR}} \right) H_2^{VLR} = \begin{pmatrix} 2a_{11}^{VLR} + c_1^{VLR} & \frac{1}{(4\pi)^2}\left( \frac{6}{N^3} + \frac{3N}{2} - \frac{15}{2N} \right) \\ 4a_{21}^{VLR} + c_2^{VLR} & 2a_{22}^{VLR} + c_3^{VLR} \end{pmatrix} = \frac{1}{(4\pi)^2} \begin{pmatrix} 0 & \frac{6}{N^3} + \frac{3N}{2} - \frac{15}{2N} \\ 0 & 0 \end{pmatrix}   (7.23)

with:

c_1^{VLR} = \frac{1}{(4\pi)^2}\left( -\frac{2}{3N^4} + \frac{17}{2N^3} - \frac{14}{3N^2} + \frac{47}{3N} \right)
c_2^{VLR} = \frac{1}{(4\pi)^2}\left( -\frac{6}{N^3} - \frac{21}{2N} \right)
c_3^{VLR} = \frac{1}{(4\pi)^2}\left( \frac{14}{3} - \frac{2}{3N^4} - \frac{7}{2N^3} - \frac{4}{N^2} - \frac{61N}{6} + \frac{41}{3N} \right)   (7.24)
and:

A_2^{SLR} + \left( 2I - ad_{A_0^{SLR}} \right) H_2^{SLR} = \begin{pmatrix} 2a_{11}^{SLR} + c_1^{SLR} & \frac{1}{(4\pi)^2}\left( \frac{6}{N^4} - \frac{15}{2N^2} + \frac{3}{2} \right) \\ 4a_{21}^{SLR} + c_2^{SLR} & 2a_{22}^{SLR} + c_3^{SLR} \end{pmatrix} = \frac{1}{(4\pi)^2} \begin{pmatrix} 0 & \frac{6}{N^4} - \frac{15}{2N^2} + \frac{3}{2} \\ 0 & 0 \end{pmatrix}   (7.25)

with:

c_1^{SLR} = \frac{1}{(4\pi)^2}\left( -\frac{2}{3N^4} + \frac{17}{2N^3} - \frac{14}{3N^2} + \frac{47}{3N} \right)
c_2^{SLR} = \frac{1}{(4\pi)^2}\left( -\frac{21}{2} - \frac{6}{N^2} \right)
c_3^{SLR} = \frac{1}{(4\pi)^2}\left( \frac{14}{3} - \frac{2}{3N^4} - \frac{7}{2N^3} - \frac{4}{N^2} - \frac{61N}{6} + \frac{41}{3N} \right)   (7.26)
The solutions are:

H_2^{VLR} = \frac{1}{(4\pi)^2} \begin{pmatrix} \frac{1}{3N^4} - \frac{17}{4N^3} + \frac{7}{3N^2} - \frac{47}{6N} & 0 \\ \frac{3}{2N^3} + \frac{21}{8N} & \frac{1}{3N^4} + \frac{7}{4N^3} + \frac{2}{N^2} + \frac{61N}{12} - \frac{41}{6N} - \frac{7}{3} \end{pmatrix}   (7.27)

and:

H_2^{SLR} = \frac{1}{(4\pi)^2} \begin{pmatrix} \frac{1}{3N^4} - \frac{17}{4N^3} + \frac{7}{3N^2} - \frac{47}{6N} & 0 \\ \frac{3}{2N^2} + \frac{21}{8} & \frac{1}{3N^4} + \frac{7}{4N^3} + \frac{2}{N^2} + \frac{61N}{12} - \frac{41}{6N} - \frac{7}{3} \end{pmatrix}   (7.28)
The corresponding gauge-transformed connections A'(g) in the Poincaré-Dulac-Levelt normal form read:

\begin{aligned}
\left( -\frac{\gamma^{VLR}(g)}{\beta(g)} \right)' &= -S_2^{VLR}(g)\, \frac{\gamma^{VLR}(g)}{\beta(g)}\, S_2^{VLR}(g)^{-1} + \frac{\partial S_2^{VLR}(g)}{\partial g}\, S_2^{VLR}(g)^{-1} = \frac{1}{g}\, \frac{\gamma_{0D}^{VLR}}{\beta_0} + g^2 A'^{\,VLR}_2 \\
&= \frac{1}{g} \begin{pmatrix} \frac{2}{N^2} & 0 \\ 0 & \frac{2}{N^2} - 2 \end{pmatrix} + \frac{g^2}{(4\pi)^2} \begin{pmatrix} 0 & \frac{6}{N^3} + \frac{3N}{2} - \frac{15}{2N} \\ 0 & 0 \end{pmatrix}
\end{aligned}   (7.29)

and:

\begin{aligned}
\left( -\frac{\gamma^{SLR}(g)}{\beta(g)} \right)' &= -S_2^{SLR}(g)\, \frac{\gamma^{SLR}(g)}{\beta(g)}\, S_2^{SLR}(g)^{-1} + \frac{\partial S_2^{SLR}(g)}{\partial g}\, S_2^{SLR}(g)^{-1} = \frac{1}{g}\, \frac{\gamma_{0D}^{SLR}}{\beta_0} + g^2 A'^{\,SLR}_2 \\
&= \frac{1}{g} \begin{pmatrix} \frac{2}{N^2} & 0 \\ 0 & \frac{2}{N^2} - 2 \end{pmatrix} + \frac{g^2}{(4\pi)^2} \begin{pmatrix} 0 & \frac{6}{N^4} - \frac{15}{2N^2} + \frac{3}{2} \\ 0 & 0 \end{pmatrix}
\end{aligned}   (7.30)
As a consequence, the corresponding Z(x, µ) can be read from eq. (4.24) with k = 1.
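As a numerical cross-check of eqs. (7.6)-(7.10) - an illustration we add here - the eigenvalue difference in units of β_0 equals 2 for every N once N_f = N is set:

    import math

    pref = 1.0 / (4.0 * math.pi)**2
    for N in (4, 5, 10):
        lam1 = pref * 6.0 / N**2                 # eq. (7.6)
        lam2 = pref * 6.0 * (-1.0 + 1.0 / N**2)  # eq. (7.6)
        beta0 = 3.0 * pref                       # eq. (7.8) with N_f = N
        assert abs((lam1 - lam2) / beta0 - 2.0) < 1e-12  # eqs. (7.9)-(7.10)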
7.2 Dimension-8 operators in large-N YM theory
We demonstrate by explicit computation that both the case (I) and the resonant condition of the case (II) are realized in the large-N YM theory, and that the unitarity constraint (section 5.4) in the free conformal limit is satisfied as well. We consider the dimension-8 gauge-invariant Hermitian scalar operators in SU(N) YM theory [44,45]:
\begin{aligned}
O_{B841} &= \frac{1}{N^4}\, F^a_{\mu\sigma} F^a_{\mu\rho} F^b_{\sigma\nu} F^b_{\rho\nu} \qquad & O_{B842} &= \frac{1}{N^4}\, F^a_{\mu\sigma} F^b_{\mu\rho} F^b_{\sigma\nu} F^a_{\rho\nu} \\
O_{B843} &= \frac{1}{N^4}\, F^a_{\mu\sigma} F^a_{\nu\rho} F^b_{\sigma\mu} F^b_{\rho\nu} \qquad & O_{B844} &= \frac{1}{N^4}\, F^a_{\mu\sigma} F^b_{\nu\rho} F^a_{\sigma\mu} F^b_{\rho\nu} \\
O_{B845} &= \frac{1}{N^4}\, d_4^{abcd}\, F^a_{\mu\sigma} F^b_{\mu\sigma} F^c_{\nu\sigma} F^d_{\nu\rho} \qquad & O_{B846} &= \frac{1}{N^4}\, d_4^{abcd}\, F^a_{\mu\sigma} F^c_{\mu\rho} F^b_{\nu\sigma} F^d_{\nu\rho} \\
O_{B847} &= \frac{1}{N^4}\, d_4^{acbd}\, F^a_{\mu\sigma} F^b_{\mu\sigma} F^c_{\nu\rho} F^d_{\nu\rho} \qquad & O_{B848} &= \frac{1}{N^4}\, d_4^{abdc}\, F^a_{\mu\sigma} F^c_{\mu\rho} F^b_{\nu\sigma} F^d_{\nu\rho}
\end{aligned}   (7.31)
where F a µν is:
F^a_{\mu\nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu - g f^{abc} A^b_\mu A^c_\nu   (7.32)

with:

d_4^{abcd} = d^{abe} d^{dce}   (7.33)

where f^{abc}, d^{abc} are defined by:

\left[ T^a, T^b \right] = i f^{abc}\, T^c \qquad \left\{ T^a, T^b \right\} = \frac{1}{N}\, \delta^{ab}\, I + d^{abc}\, T^c   (7.34)
with the generators, T a , of the Lie algebra of SU(N) in the fundamental representation normalized as:
Tr(T a T b ) = 1 2 δ ab (7.35)
We refer to the operators O_{B841} ... O_{B844} and O_{B845} ... O_{B848} as double-trace and single-trace operators respectively. They mix among themselves under renormalization [44, 45]:
O = ZO B (7.36)
where O is the column vector of renormalized operators, whose transpose, O T , reads:
O^T = \left( O_{841}\; O_{842}\; O_{843}\; O_{844}\; O_{845}\; O_{846}\; O_{847}\; O_{848} \right)   (7.37)
with O B the vector of the bare operators. The corresponding γ 0 reads [44]:
\gamma_0 = \sum_{k=0}^{3} \frac{1}{N^k}\, \gamma_{0k}   (7.38)
For the matrix of 2-point correlators in the free conformal limit we get [43]:
G (2) (x) = 1 (x 2 ) 8 G 0 (7.39)
with:
G_0 = \sum_{k=0}^{6} \frac{1}{N^k}\, G_{0k}   (7.40)
We only report the leading-order terms, γ_{00} and G_{00}, in the large-N expansion [43]; their explicit matrices (eq. (7.41)) carry the overall normalizations 1/(4π)^2 and 1/π^8, respectively, and are block diagonal in the double-trace and single-trace sectors.
Hence, to the leading large-N order, single- and double-trace operators only mix separately among themselves [43]. According to eq. (5.23), γ_{00} and G_{00} are simultaneously diagonalizable by a global gauge transformation S [43], which yields the diagonal γ'_{00} and G'_{00} of eqs. (7.42)-(7.44). Interestingly, the system above satisfies [43] the resonant condition for some eigenvalues in the double-trace sector:

\frac{\gamma'_{004}}{\beta_0} - \frac{\gamma'_{001}}{\beta_0} = 2 \qquad \frac{\gamma'_{004}}{\beta_0} - \frac{\gamma'_{002}}{\beta_0} = 2   (7.45)

Moreover, although γ_{00} would potentially be nondiagonalizable because of two of its eigenvalues coinciding, it is actually diagonalizable - and G_{00} as well - according to the unitarity constraint (section 5.4) in the free conformal limit. Moreover, we verify that the eigenvalues of G_{00} [43] are all positive, according to the aforementioned unitarity.
Acknowledgements
The first named author also acknowledges the financial support from the European Union Horizon 2020 research and innovation programme: High precision multi-jet dynamics at the LHC (grant agreement no. 772009).
A Asymptotic versus exact correlators
We comment on the asymptotic versus exact form of the correlators in massless QCD-like theories. The closed form of Z(x, µ) in eq. (2.5) relies implicitly on the perturbative definition of γ(g) and β(g) that are believed to be formal series, at best asymptotic for g → 0 thanks to the asymptotic freedom.
Correspondingly, the asymptotic solution of the Callan-Symanzik equation in eq. (2.6), with G(x, g(µ), µ) ∼ G(x, g(x)), where G(x, g(x)) also relies on the RG-improvement of perturbation theory, is believed to be only asymptotic in the UV to the exact 2-point correlator thanks to the asymptotic freedom.
The above statement may be verified directly in the large-N limit of confining massless QCD-like theories following [10], where it has been shown how the aforementioned asymptotics works in the multiplicatively renormalizable case.
Indeed, as remarked in [1], nonperturbatively, according to the RG, massless QCD-like theories develop a nontrivial dimensionful scale that labels the RG trajectory -the RG invariant -Λ RGI :
\Lambda_{RGI} \sim \mu\, e^{-\frac{1}{2\beta_0 g^2}}\, g^{-\frac{\beta_1}{\beta_0^2}}\, c_0 \left( 1 + \sum_{n=1} c_n\, g^{2n} \right)   (A.1)
-the only free parameter [12,13] in the nonperturbative S matrix of confining massless QCD-like theories -which any physical mass scale must be proportional to. As a consequence, nonperturbatively in the large-N limit of confining massless QCD-like theories [15][16][17][18], the leading contribution to the exact Euclidean 2-point correlators of gauge-invariant operators must be an infinite sum of free-field propagators [17,18], with every mass in the propagators proportional to Λ RGI .
In the momentum representation, after the analytic continuation to Minkowski space-time, the sum of free propagators is a sum of pure poles, while the analytic continuation of the RG-improved [10] perturbative solution of the Callan-Symanzik equation has only cuts [10], involving logs and loglogs [10] of the momentum. Therefore, the exact and all-order RG-improved 2-point Euclidean correlators cannot coincide, otherwise their analytic continuations would coincide as well, though we have just shown that they do not.
Hence, RG-improved perturbation theory may only be UV asymptotic in large-N confining QCD-like theories, and in fact, as remarked above, it is believed to be such because of the asymptotic freedom.
Footnotes

We only consider YM theories with a single gauge coupling. Our methods may extend to theories with multiple couplings, at the price of increasing mathematical complication.

8 These examples do not necessarily arise from a massless QCD-like theory. Physical examples are worked out in section 7.

9 Namely, a constant gauge transformation, i.e., a g-independent transformation in our framework.

10 We assume that gauge-invariant operators are canonically normalized in such a way that the leading contribution to their 2-point correlators starts to the order of g^0.

11 Implementing conformal symmetry to the order of g^2 requires the choice of the conformal scheme [37] that differs by a finite renormalization [37] from other perturbative schemes.
References

[1] M. Bochicchio, On the geometry of operator mixing in massless QCD-like theories, Eur. Phys. J. C 81 (2021) 749, arXiv:2103.15527 [hep-th].
[2] Y. Ilyashenko, S. Yakovenko, Lectures on Analytic Differential Equations, Graduate Studies in Mathematics 86 (2008).
[3] D. G. Babbitt, V. S. Varadarajan, Formal Reduction Theory of Meromorphic Differential Equations: A Group Theoretic View, Pacific J. Math. 109 (1983) 1.
[4] M. Becchetti, M. Bochicchio, Canonical forms of operator mixing and UV asymptotics of OPE coefficients in massless QCD-like theories, to appear in arXiv.
[5] A. J. Buras, Asymptotic Freedom in Deep Inelastic Processes in the Leading Order and Beyond, Rev. Mod. Phys. 52 (1980) 199.
[6] A. J. Buras, Weak Hamiltonian, CP Violation and Rare Decays, Les Houches Summer School in Theoretical Physics (1997), arXiv:hep-ph/9806471 [hep-ph].
[7] A. J. Buras, M. Misiak, J. Urban, Two loop QCD anomalous dimensions of flavor changing four quark operators within and beyond the standard model, Nucl. Phys. B 586 (2000) 397, arXiv:hep-ph/0005183 [hep-ph].
[8] M. Ciuchini, E. Franco, G. Martinelli, L. Reina, The ∆S = 1 Effective Hamiltonian Including Next-to-Leading Order QCD and QED Corrections, Nucl. Phys. B 415 (1994) 403, arXiv:hep-ph/9304257 [hep-ph].
[9] M. Bochicchio, S. P. Muscinelli, Ultraviolet asymptotics of glueball propagators, JHEP 1308 (2013) 064, arXiv:1304.6409 [hep-th].
[10] M. Bochicchio, Glueball and meson propagators of any spin in large-N QCD, Nucl. Phys. B 875 (2013) 621, arXiv:1305.0273 [hep-th].
[11] M. Bochicchio, An Asymptotic Solution of Large-N QCD, for the Glueball and Meson Spectrum and the Collinear S-Matrix, in: Proceedings, 16th International Conference on Hadron Spectroscopy (Hadron 2015), AIP Conf. Proc. 1735 (2016) 030004.
[12] M. Bochicchio, The large-N Yang-Mills S matrix is ultraviolet finite, but the large-N QCD S matrix is only renormalizable, Phys. Rev. D 95 (2017) 054010, arXiv:1701.07833 [hep-th].
[13] M. Bochicchio, Renormalization in large-N QCD is incompatible with open/closed string duality, Phys. Lett. B 783 (2018) 341, arXiv:1703.10176 [hep-th].
[14] M. Becchetti, M. Bochicchio, OPE and a low-energy theorem in QCD-like theories, JHEP 03 (2019) 088, arXiv:1810.08527 [hep-th].
[15] G. 't Hooft, A planar diagram theory for strong interactions, Nucl. Phys. B 72 (1974) 461.
[16] G. Veneziano, Some Aspects of a Unified Approach to Gauge, Dual and Gribov Theories, Nucl. Phys. B 117 (1976) 519.
[17] A. A. Migdal, Multicolor QCD as Dual Resonance Theory, Annals Phys. 109 (1977) 365.
[18] E. Witten, Baryons in the 1/N expansion, Nucl. Phys. B 160 (1979) 57.
[19] G. 't Hooft, Can We Make Sense Out of Quantum Chromodynamics?, Subnucl. Ser. 15 (1979) 943.
[20] A. L. Kataev, K. V. Stepanyantz, NSVZ scheme with the higher derivative regularization for N = 1 SQED, Nucl. Phys. B 875 (2013) 459, arXiv:1305.7094 [hep-th].
[21] A. L. Kataev, K. V. Stepanyantz, The NSVZ beta-function in supersymmetric theories with different regularizations and renormalization prescriptions, Theor. Math. Phys. 181 (2014) 1531, arXiv:1405.7598 [hep-th].
[22] I. O. Goriachuk, A. L. Kataev, K. V. Stepanyantz, A class of the NSVZ renormalization schemes for N = 1 SQED, Phys. Lett. B 785 (2018) 561, arXiv:1808.02050 [hep-th].
[23] I. O. Goriachuk, A. L. Kataev, Exact β-Function in Abelian and non-Abelian N = 1 Supersymmetric Gauge Models and Its Analogy with the QCD β-Function in the C-scheme, JETP Lett. 111 no. 12 (2020) 663, arXiv:2005.03445 [hep-th].
[24] I. O. Goriachuk, A. L. Kataev, Riemann ζ(4) function contributions to O(α_s^5) terms of the Adler D-function and Bjorken polarized sum rule in SU(N_c) QCD: results and consequences, arXiv:2011.14746 [hep-ph].
[25] A. V. Garkusha, A. L. Kataev, V. S. Molokoedov, Renormalization scheme and gauge (in)dependence of the generalized Crewther relation: what are the real grounds of the β-factorization property?, JHEP 02 (2018) 161, arXiv:1801.06231 [hep-ph].
[26] V. A. Novikov, M. A. Shifman, A. I. Vainshtein, V. I. Zakharov, Exact Gell-Mann-Low Function of Supersymmetric Yang-Mills Theories from Instanton Calculus, Nucl. Phys. B 229 (1983) 381.
[27] J. C. Collins, Renormalization, Cambridge University Press (1984).
[28] C. G. Callan, Broken Scale Invariance in Scalar Field Theory, Phys. Rev. D 2 (1970) 1541.
[29] K. Symanzik, Small Distance Behaviour in Field Theory and Power Counting, Comm. Math. Phys. 18 (1970) 227.
[30] C. Itzykson, J. B. Zuber, Quantum Field Theory, McGraw-Hill (1985).
[31] M. E. Peskin, D. V. Schroeder, An Introduction to Quantum Field Theory, Westview Press (1995).
[32] S. D. Joglekar, B. W. Lee, General Theory of Renormalization of Gauge Invariant Operators, Annals Phys. 97 (1976) 160.
[33] M. Henneaux, Remarks on the renormalization of gauge invariant operators in Yang-Mills theory, Phys. Lett. B 313 (1993) 35 [Erratum: ibid. 316 (1993) 633], arXiv:hep-th/9306101.
[34] J. C. Collins, R. J. Scalise, Renormalization of composite operators in Yang-Mills theories using a general covariant gauge, Phys. Rev. D 50 (1994) 4117, arXiv:hep-ph/9403231.
[35] H. Sonoda, The rule of operator mixing, Nucl. Phys. B 366 (1991) 629.
[36] F. J. Wegner, Corrections to scaling laws, Phys. Rev. B 5 (1972) 4529.
[37] V. M. Braun, G. P. Korchemsky, D. Müller, The Uses of Conformal Symmetry in QCD, Prog. Part. Nucl. Phys. 51 (2003) 311, arXiv:hep-ph/0306057 [hep-ph].
[38] V. Gurarie, Logarithmic operators in conformal field theory, Nucl. Phys. B 410 (1993) 535, arXiv:hep-th/9303160.
[39] M. Hogervorst, M. Paulos, A. Vichi, The ABC (in any D) of Logarithmic CFT, JHEP 1710 (2017) 201, arXiv:1605.03959 [hep-th].
[40] D. Simmons-Duffin, TASI Lectures on the Conformal Bootstrap, contribution to TASI 2015, arXiv:1602.07982 [hep-th].
[41] V. M. Braun, A. N. Manashov, S. Moch, M. Strohmaier, Two-loop conformal generators for leading-twist operators in QCD, JHEP 03 (2016) 142, arXiv:1601.05937 [hep-ph].
[42] V. M. Braun, A. N. Manashov, S. Moch, M. Strohmaier, Three-loop evolution equation for flavor-nonsinglet operators in off-forward kinematics, JHEP 06 (2017) 037, arXiv:1703.09532 [hep-ph].
[43] U. Aglietti, M. Becchetti, M. Bochicchio, M. Papinutto, F. Scardino, Operator mixing, UV asymptotics of nonplanar/planar 2-point correlators, and nonperturbative large-N expansion of QCD-like theories, arXiv:2105.11262 [hep-th].
[44] J. A. Gracey, Eight dimensional QCD at one loop, Phys. Rev. D 97 (2018) 025009, arXiv:1712.02565 [hep-th].
[45] J. A. Gracey, Classification and one loop renormalization of dimension-six and dimension-eight operators in quantum gluodynamics, Nucl. Phys. B 634 (2002) 192 [Erratum: ibid. B 696 (2004) 295], arXiv:hep-ph/0204266.
| []
|
[
"The rainbow-spectrum of RNA secondary structures",
"The rainbow-spectrum of RNA secondary structures"
]
| [
"Thomas J X Li ",
"· Christian ",
"M Reidys "
]
| []
| []
| In this paper we analyze the length-spectrum of rainbows in RNA secondary structures. A rainbow in a secondary structure is a maximal arc with respect to the partial order induced by nesting. We show that there is a significant gap in this length-spectrum. We shall prove that there asymptotically almost surely exists a unique longest rainbow of length at least n − O(n 1/2 ) and that with high probability any other rainbow has finite length. We show that the distribution of the length of the longest rainbow converges to a discrete limit law and that, for finite k, the distribution of rainbows of length k, becomes for large n a negative binomial distribution. We then put the results of this paper into context, comparing the analytical results with those observed in RNA minimum free energy structures, biological RNA structures and relate our findings to the sparsification of folding algorithms. | 10.1007/s11538-018-0411-9 | [
"https://arxiv.org/pdf/1806.03333v1.pdf"
]
| 3,904,024 | 1806.03333 | 6e8a5d76f59fc64e91da7add9fc7ce40851b4049 |
The rainbow-spectrum of RNA secondary structures
Thomas J X Li · Christian M Reidys

Received: date / Accepted: date

Keywords Secondary structure · Rainbow · Length-spectrum · Gap · Arc · Generating function · Singularity analysis

Mathematics Subject Classification (2000) 05A16 · 92E10 · 92B05
In this paper we analyze the length-spectrum of rainbows in RNA secondary structures. A rainbow in a secondary structure is a maximal arc with respect to the partial order induced by nesting. We show that there is a significant gap in this length-spectrum. We shall prove that there asymptotically almost surely exists a unique longest rainbow of length at least n − O(n 1/2 ) and that with high probability any other rainbow has finite length. We show that the distribution of the length of the longest rainbow converges to a discrete limit law and that, for finite k, the distribution of rainbows of length k, becomes for large n a negative binomial distribution. We then put the results of this paper into context, comparing the analytical results with those observed in RNA minimum free energy structures, biological RNA structures and relate our findings to the sparsification of folding algorithms.
Introduction
RNA is a biomolecule involved in a plethora of functions, ranging from catalytic activity to gene expression. A single-stranded RNA molecule has a backbone consisting of nucleotides and can be described by its primary sequence, i.e., a linear, oriented sequence of the bases {A, U, G, C}. In contrast to DNA, an RNA strand folds into a helical configuration of its primary sequence by forming hydrogen bonds between pairs of nucleotides according to Watson-Crick A-U, C-G and wobble U-G base-pairing rules. These structures play a variety of biochemical roles within cells such as: transcription and translation (mRNA links DNA and proteins to convey genetic information with the assistance of tRNA (McCarthy and Holland, 1965)), catalyzing reactions (ribozymes catalyze diverse biological reactions as proteins (Kruger et al, 1982)), gene regulation (miRNA functions in RNA silencing and ncRNA in directing post-transcriptional regulation of gene expression (Eddy, 2001)).
The most prominent class of coarse grained RNA structures are the RNA secondary structures. These are contact structures without any reference of spatial embedding, whose contacts are base pairs subject to certain restrictions. First, their base pairs are canonical pairings: Watson-Crick as well as wobble base pairs. Bonding information such as non-canonical interactions, coaxial stacking of helices, major and minor groove triplexes, and interactions with other molecules are not considered. Secondly, any two base pairs are non-crossing: representing the contact structure as a diagram, by drawing its sequence on a horizontal line and each base pair as an arc in the upper half-plane, two arcs $(i_1, j_1)$ and $(i_2, j_2)$ cross if the nucleotides appear in the order $i_1 < i_2 < j_1 < j_2$ in the primary sequence. In this representation, an RNA secondary structure contains exclusively non-crossing arcs, see Fig. 1.
The combinatorics of RNA secondary structures was pioneered by Waterman et al. more than three decades ago (Waterman, 1978, 1979; Smith and Waterman, 1978; Howell et al, 1980; Schmitt and Waterman, 1994; Penner and Waterman, 1993). A variety of dynamic programming (DP) algorithms, predicting the minimum free energy (mfe) conformation for RNA molecules, have been derived (Zuker and Sankoff, 1984; Waterman and Smith, 1986; Zuker, 1989; Hofacker et al, 1994). Sparsification is a particular method facilitating a speed up of these DP-routines (Wexler et al, 2007; Salari et al, 2010; Backofen et al, 2011). The method employs the fact that certain matrices of the DP routines are sparse, a fact that greatly simplifies the computation. The theoretical analysis (Wexler et al, 2007) concludes a linear reduction in time complexity based on a specific property of arcs in RNA molecules. This property is called the polymer-zeta property and originates from studies of bonds in proteins. Polymer-zeta asserts that two nucleotides of distance m form a base pair with probability $b\,m^{-c}$ for some constants $b > 0$, $c > 1$, implying that long-distance base pairs have low probability. Subsequent analysis revealed that the polymer-zeta property does not hold for general RNA molecules (Backofen et al, 2011), and that sparsification provides only a constant, however significant, reduction (Huang and Reidys, 2012).
We shall provide a detailed understanding of the longest, as well as the second-longest arc in RNA secondary structures. This paper is furthermore motivated by the question of how to interpret the "information" contained in non-coding DNA sequences. In Barrett et al (2017) a sequence-structure correlation of RNA is studied, implying the potential of RNA structure to play a critical role in providing such an interpretation. Accordingly, transcription would not only facilitate the generation of protein (for coding sequences) but also the interpretation of DNA data via forming RNA structure. In other words, it would not be the actual sequence of nucleotides alone but the structures compatible with such sequences that contain crucial information, changing the paradigm of sequence alignments. In this context it becomes relevant to analyze distances between two paired nucleotides in RNA structures. A particular class of such bonds are rainbows. A rainbow in a secondary structure is a maximal arc with respect to the partial order induced by nesting, i.e. the closing arc of a stem-loop, see Fig. 2. The length of a rainbow (i, j), defined as j − i, reflects the size of the corresponding stem-loop. In this paper, we study the length-spectrum of rainbows (rainbow-spectrum) in RNA secondary structures.
Rainbows have been studied in Jin and Reidys (2010a,b) in the context of k-noncrossing RNA structures. The authors show that the expected number of rainbows is finite and that the endpoint of a rainbow is more likely to occur at the end of the sequence, hinting at the existence of a unique longest rainbow. Another notion closely connected to that of rainbows is the 5'-3' distance, i.e. the number of rainbows plus the number of unpaired, external nucleotides. The finiteness of the 5'-3' distance has first been studied in Yoffe et al (2011), where the expected number of rainbows in RNA secondary structures has been obtained. Remarkably, the 5'-3' distance of biological RNA structures is also observed to be finite, indicating that certain features of random structures can also be observed in biological structures. Han and Reidys (2012) studies rainbows of RNA secondary structures in the context of the 5'-3' distance. It is shown that this distance satisfies a discrete limit law, implying the finiteness of the 5'-3' distance of uniformly sampled RNA structures. Clote et al (2012) shows that the expected distance between 5' and 3' ends of a specific RNA sequence is finite, with respect to the Turner energy model. More importantly, the finiteness of the 5'-3' distance and the existence of a long rainbow both lead to the effective circularization of linear RNA, which plays an important role in many biological processes (Yoffe et al, 2011).

Fig. 2 The secondary structure of the P.li.LSUI2 intron (Robart et al, 2014). The structure has six rainbows of lengths 338, 66, 76, 40, 33 and 32, respectively. The figure is generated with the assistance of PseudoViewer3 (Byun and Han, 2009).

The second longest rainbow has, in the limit of long sequences, with high probability, finite length. In other words, for any fixed probability $0 < q < 1$, we find a finite $k(q)$ such that with probability $q$ a random RNA secondary structure has a second longest rainbow of length at most $k(q)$. However, with probability $o(1)$, there are RNA secondary structures that exhibit a second longest rainbow of order $O(n^{1/2})$ or higher. In fact we shall show that the expected length of the second longest rainbow is $O(n^{1/2})$.
The key results of this paper are the following:
1. in uniformly generated RNA secondary structures the length of the longest rainbow tends, in the limit of long sequences, to a discrete limit law, having an expectation value $n - O(n^{1/2})$. That is, there is a gap in the length-sequence of rainbows, i.e. there exists a unique longest rainbow,
2. with high probability any other rainbow has finite length, $k$,
3. in the limit of long sequences, the distribution of rainbows of length $k$ tends to a negative binomial distribution,
4. mfe-structures also exhibit a unique longest rainbow of order $n - O(n^{1/2})$, and furthermore, with high probability, any other rainbow has finite length.
As for biological structures, in Fig. 2 we display the P.li.LSUI2 intron (Robart et al, 2014) RNA structure, containing a unique longest rainbow.
In order to obtain the limit distributions of the length of the longest rainbow, we study the generating function of secondary structures having a restricted length of their longest rainbow. This analysis will allow us to compute in Lemma 1 the expectation and variance of the length of the longest rainbow. Having established this, we proceed computing the limit distribution in Theorem 3. As for analyzing rainbows of finite length, we consider a bivariate generating function distinguishing the number of rainbows of length k and establish a discrete limit law using the subcritical paradigm (Flajolet and Sedgewick, 2009) of singularity analysis. This paper is organized as follows: In Section 2, we provide some basic facts of RNA secondary structures. In Section 3, we compute the expectation and variance of the longest rainbow in RNA secondary structures and compute the discrete limit law. In Section 4, we first observe that with high probability we can restrict our analysis to rainbows of finite length and then proceed computing the associated limit distribution. In Section 5, we integrate our results and discuss them in the context of the 5'-3' distance and RNA mfe-structures.
Basic facts
RNA secondary structure can be represented as a diagram, a labeled graph over the vertex set {1, . . . , n} whose vertices are arranged on a horizontal line and arcs are drawn in the upper half-plane, see Fig. 1. Clearly, vertices correspond to nucleotides in the primary sequence and arcs correspond to the Watson-Crick as well as wobble base pairs. The length of the structure is defined as the number of nucleotides. The length of an arc (i, j) is defined as j − i and an arc of length k is called a k-arc. The backbone of a diagram is the sequence of consecutive integers (1, . . . , n) together with the edges $\{\{i, i+1\} \mid 1 \le i \le n-1\}$. We shall distinguish the backbone edge {i, i + 1}, representing a phosphodiester bond, from the arc (i, i + 1), which we refer to as a 1-arc. Two arcs $(i_1, j_1)$ and $(i_2, j_2)$ are crossing if $i_1 < i_2 < j_1 < j_2$. An RNA secondary structure is defined as a diagram satisfying the following three conditions (Waterman, 1978):

1. non-existence of 1-arcs: if (i, j) is an arc, then j − i ≥ 2,
2. non-existence of base triples: any two arcs do not have a common vertex,
3. non-existence of pseudoknots: any two arcs are non-crossing, i.e., for two arcs $(i_1, j_1)$ and $(i_2, j_2)$ where $i_1 < i_2$, $i_1 < j_1$, and $i_2 < j_2$, we have either $i_1 < j_1 < i_2 < j_2$ or $i_1 < i_2 < j_2 < j_1$.

A stack of length r is a maximal sequence of "parallel" arcs, $\big((i, j), (i+1, j-1), \dots, (i+(r-1), j-(r-1))\big)$. Stacks of length one are energetically unstable and we find typically stacks of length at least two or three in biological structures (Waterman, 1978). A secondary structure, S, is r-canonical if it has minimum stack-length r.
Given an RNA secondary structure, S, an arc is called a rainbow if it is maximal with respect to the partial order
$$(i, j) \le (i', j') \iff i' \le i < j \le j'.$$
I.e. a rainbow is the closing arc of a stem-loop. A secondary structure is called irreducible if it contains a rainbow connecting the first and the last vertex of the structure.
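For illustration, here is a small helper (our own sketch, not part of the paper) that checks the three defining conditions on a set of arcs and extracts the rainbows with respect to the nesting order defined above; all names are ours.

```python
# Sketch: validate a diagram as a secondary structure and list its rainbows.
def is_secondary_structure(arcs, min_arc=2):
    ends = [v for (i, j) in arcs for v in (i, j)]
    if len(ends) != len(set(ends)):          # base triples: shared vertex
        return False
    for (i, j) in arcs:
        if j - i < min_arc:                  # 1-arcs excluded when min_arc = 2
            return False
    for (i1, j1) in arcs:
        for (i2, j2) in arcs:
            if i1 < i2 < j1 < j2:            # pseudoknot: crossing arcs
                return False
    return True

def rainbows(arcs):
    # an arc is a rainbow iff no other arc encloses it in the nesting order
    return [(i, j) for (i, j) in arcs
            if not any(i2 <= i and j <= j2 and (i2, j2) != (i, j)
                       for (i2, j2) in arcs)]

arcs = [(1, 8), (2, 7), (3, 5), (10, 14)]
print(is_secondary_structure(arcs), rainbows(arcs))  # True [(1, 8), (10, 14)]
```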
We consider RNA secondary structures filtered by minimum arc-length and minimum stack-length. This filtration is motivated by the fact that, for energetic reasons, RNA secondary structures exhibit a minimum arc-length of four and a minimum stack-length of two or three. The former is a consequence of the rigidity of the molecule's backbone (Stein and Waterman, 1979) and the latter a mesomery effect of parallel Watson-Crick or U-G base pairs (Hunter and Sanders, 1990; Šponer et al, 2001, 2013).
Let $s^{[r]}_\lambda(n)$ and $f^{[r]}_\lambda(n)$ denote the numbers of r-canonical secondary structures and irreducible secondary structures over n nucleotides with minimum arc-length λ, respectively. We shall simplify notation by writing s(n) and f(n) instead of $s^{[r]}_\lambda(n)$ and $f^{[r]}_\lambda(n)$. The generating functions S(x) and F(x) are given by
$$S(x) = \sum_{n \ge 0} s(n)\, x^n, \qquad F(x) = \sum_{n \ge 1} f(n)\, x^n,$$
where f(1) = 1 represents a single nucleotide, which is irreducible by convention.
These two generating functions have been computed in Waterman (1978); Hofacker et al (1998); Barrett et al (2016).
Theorem 1 For any $\lambda, r \in \mathbb{N}$, the generating functions S(x) and F(x) satisfy the functional equations
$$S(x) = \frac{1}{1 - F(x)}, \qquad F(x) - x = \frac{x^{2r}}{1 - x^2}\,\Big(S(x) - F(x) + x - \sum_{i=0}^{\lambda-2} x^i\Big).$$
The generating function S(x) satisfies the functional equation
$$x^{2r}\, S(x)^2 - B(x)\, S(x) + A(x) = 0, \qquad \text{where } A(x) = 1 - x^2 + x^{2r}, \quad B(x) = (1-x)\,A(x) + x^{2r} \sum_{i=0}^{\lambda-2} x^i.$$
Explicitly, we have
$$S(x) = \frac{B(x) - \sqrt{B(x)^2 - 4\,x^{2r} A(x)}}{2\,x^{2r}}.$$
The key idea to prove the functional equation of Theorem 1 is the following: any secondary structure can be decomposed into a sequence of irreducible structures, and any irreducible structure is either a single vertex or the stack containing the rainbow together with the enclosed reducible structure, see Fig. 3. Singularity analysis of S(x) (Waterman, 1978; Hofacker et al, 1998; Barrett et al, 2016) implies

Theorem 2 For $1 \le \lambda \le 4$ and $1 \le r \le 3$, the dominant singularity ρ of F(x) is the minimal positive, real solution of $B(x)^2 - 4x^{2r}A(x) = 0$. The singular expansion of F(x) is given by
$$F(x) = \tau + \delta\,(\rho - x)^{1/2} + \theta\,(\rho - x) + O\big((\rho - x)^{3/2}\big), \quad \text{as } x \to \rho,$$
where τ = F(ρ), and δ and θ are constants that can be explicitly computed. Furthermore, the coefficients of F(x) satisfy
$$[x^n]\,F(x) = c\, n^{-3/2}\, \rho^{-n}\,\big(1 + O(n^{-1})\big), \quad \text{as } n \to \infty,$$
where c is the positive constant $c = -\delta\,\rho^{1/2}\,\Gamma(-\tfrac12)^{-1}$.
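To make Theorem 1 concrete, the following minimal sketch (our own, in Python; not part of the paper) expands S(x) in the simplest case r = λ = 1. There the quadratic above reduces to $x^2 S^2 - (1-x)S + 1 = 0$, i.e. $S = 1 + xS + x^2S^2$, and the coefficients s(n) are the Motzkin numbers, consistent with ρ = 1/3 in Corollary 1 below.

```python
# Minimal sketch (assumes r = lambda = 1): expand S(x) from S = 1 + xS + x^2 S^2,
# i.e. s(n) = s(n-1) + sum_k s(k) s(n-2-k): a structure is empty, starts with an
# unpaired vertex, or starts with an arc enclosing a smaller structure.
def secondary_structure_counts(n_max):
    s = [1]  # s(0) = 1: the empty structure
    for n in range(1, n_max + 1):
        total = s[n - 1]            # first vertex unpaired
        for k in range(n - 1):      # arc (1, k+2) enclosing a structure of size k
            total += s[k] * s[n - 2 - k]
        s.append(total)
    return s

print(secondary_structure_counts(8))  # Motzkin numbers: 1, 1, 2, 4, 9, 21, 51, 127, 323
```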
The longest rainbow
Our analysis assumes the uniform distribution over all RNA secondary structures of n nucleotides, i.e. the distribution in which each structure has probability $\frac{1}{s(n)}$. We shall analyze the random variable $Y_n$, representing the length of the longest rainbow in an RNA secondary structure of n nucleotides. The generating function of structures whose rainbows have length less than or equal to m is given by
$$S_{\le m+1}(x) = \frac{1}{1 - F_{\le m+1}(x)}, \qquad \text{where } F_{\le m}(x) = \sum_{1 \le i \le m} f(i)\, x^i.$$
By construction, we have
$$P(Y_n \le m) = \frac{[x^n]\,S_{\le m+1}(x)}{[x^n]\,S(x)}.$$
In the following we shall derive an asymptotic estimate of $[x^n]S_{\le m+1}(x)$. This will imply that the random variable $n - Y_n$ asymptotically almost surely (a.a.s.) converges to a discrete limit law.
To this end we derive first and second order information about Y n , which will allow us to apply a large deviation result, instrumental for the proof of our main result.
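Since the truncated generating functions are explicit, the exact law of $Y_n$ can be tabulated for moderate n. The sketch below is our own (it reuses secondary_structure_counts from the sketch after Theorem 2 and again assumes r = λ = 1, where $F(x) = x + x^2 S(x)$, so f(1) = 1 and f(m) = s(m−2) for m ≥ 2); it expands $S_{\le m}(x) = 1/(1 - F_{\le m}(x))$ by convolution.

```python
# Hypothetical sketch: exact distribution of Y_n for small n (r = lambda = 1).
def longest_rainbow_law(n):
    s = secondary_structure_counts(n)
    f = [0, 1] + [s[m - 2] for m in range(2, n + 1)]   # f(m) = s(m-2), m >= 2
    def trunc_count(m):
        # number of structures all of whose irreducible pieces have length <= m,
        # via c[i] = sum_{j <= min(i, m)} f(j) c[i-j]
        c = [1] + [0] * n
        for i in range(1, n + 1):
            c[i] = sum(f[j] * c[i - j] for j in range(1, min(i, m) + 1))
        return c[n]
    # Y_n <= m  <=>  every irreducible segment has length <= m + 1
    return {m: (trunc_count(m + 1) - trunc_count(m)) / s[n] for m in range(1, n)}

law = longest_rainbow_law(30)
print(max(law, key=law.get))  # the mode sits below n, at roughly n - O(sqrt(n))
```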
Lemma 1 The expectation and variance of $Y_n$ are given by
$$E[Y_n] = n - \alpha\, n^{1/2}\,\big(1 + o(1)\big), \qquad V[Y_n] = \beta\, n^{3/2}\,\big(1 + o(1)\big), \quad \text{as } n \to \infty,$$
where $\alpha = \frac{2\,\delta\,\rho^{1/2}}{\sqrt{\pi}\,(1 - F(\rho))}$ and $\beta = \big(1 - \frac{\pi}{4}\big)\,\alpha$ are positive constants.
Proof We consider $P(Y_n = n-k)$; by construction, we have
$$P(Y_n = n-k) = \frac{[x^n]\big(S_{\le n-k+1}(x) - S_{\le n-k}(x)\big)}{[x^n]\,S(x)}.$$

Claim 1: For $k \le \frac{n}{2}$, we have
$$P(Y_n = n-k) = \frac{[x^{k-1}]\,\Phi'(F(x)) \cdot [x^{n-k+1}]\,F(x)}{[x^n]\,S(x)}, \qquad (1)$$
where $\Phi(x) = \frac{1}{1-x}$.

Proof of Claim 1: The Taylor expansion of $S_{\le n-k}(x) = \Phi(F_{\le n-k}(x))$ is given by
$$S_{\le n-k}(x) = \Phi(F_{\le n-k}(x)) = \sum_{i \ge 0} \frac{\Phi^{(i)}(F(x))}{i!}\,\big(F_{\le n-k}(x) - F(x)\big)^i. \qquad (2)$$
Note that $[x^n]\big(F_{\le n-k}(x) - F(x)\big)^i = 0$ for $i \ge 2$, since $k \le \frac{n}{2}$ and $\deg\big(F_{\le n-k}(x) - F(x)\big) > \frac{n}{2}$. By taking the coefficient of $x^n$ in eq. (2), we obtain
$$[x^n]\,S_{\le n-k}(x) = [x^n]\Big(\Phi(F(x)) + \Phi'(F(x))\,\big(F_{\le n-k}(x) - F(x)\big)\Big). \qquad (3)$$
Similarly, eq. (3) holds for $[x^n]\,S_{\le n-k+1}(x)$. Therefore, we arrive at
$$P(Y_n = n-k) = \frac{[x^n]\big(\Phi'(F(x))\,(F_{\le n-k+1}(x) - F_{\le n-k}(x))\big)}{[x^n]\,S(x)} = \frac{[x^n]\big(\Phi'(F(x))\, f(n-k+1)\, x^{n-k+1}\big)}{[x^n]\,S(x)} = \frac{[x^{k-1}]\,\Phi'(F(x)) \cdot [x^{n-k+1}]\,F(x)}{[x^n]\,S(x)}.$$
Claim 2:
$$\sum_{1 \le k \le \frac{n}{2}} (k-1)\, P(Y_n = n-k) = \alpha\, n^{1/2}\,\big(1 + o(1)\big), \quad \text{as } n \to \infty. \qquad (4)$$
Proof of Claim 2: We shall first derive an estimate of $P(Y_n = n-k)$ from Claim 1. By Theorem 2, we have the singular expansions of F(x), $S(x) = \Phi(F(x))$ and $\Phi'(F(x))$:
$$F(x) = \tau + \delta\,(\rho - x)^{1/2} + \theta\,(\rho - x) + O\big((\rho - x)^{3/2}\big),$$
$$\Phi(F(x)) = \Phi(\tau) + \Phi'(\tau)\,\delta\,(\rho - x)^{1/2} + \theta_1\,(\rho - x) + O\big((\rho - x)^{3/2}\big),$$
$$\Phi'(F(x)) = \Phi'(\tau) + \Phi''(\tau)\,\delta\,(\rho - x)^{1/2} + \theta_2\,(\rho - x) + O\big((\rho - x)^{3/2}\big),$$
where θ, θ₁, θ₂ are constants, and the singular expansions of $\Phi(F(x))$ and $\Phi'(F(x))$ are obtained by combining the regular expansions of $\Phi(x)$ and $\Phi'(x)$ with the singular expansion of F(x) (the subcritical case, see Flajolet and Sedgewick (2009), pp. 411). The Transfer Theorem (Flajolet and Sedgewick (2009), pp. 390) then implies
$$\frac{[x^{n-k+1}]\,F(x)}{[x^n]\,S(x)} = \frac{(n-k+1)^{-3/2}\,\rho^{-n+k-1}\,\big(1+O((n-k)^{-1})\big)}{\Phi'(\tau)\, n^{-3/2}\, \rho^{-n}\,\big(1+O(n^{-1})\big)} = \frac{\big(1 - \frac{k-1}{n}\big)^{-3/2}\,\rho^{k-1}}{\Phi'(\tau)}\,\big(1+O(n^{-1})\big), \quad \text{as } n \to \infty,\ k \le \frac{n}{2}, \qquad (5)$$
$$[x^{k-1}]\,\Phi'(F(x)) = \frac{\delta\,\rho^{1/2}\,\Phi''(\tau)}{-\Gamma(-\frac12)}\,(k-1)^{-3/2}\,\rho^{-k+1}\,\big(1+O(k^{-1})\big), \quad \text{as } k \to \infty. \qquad (6)$$
Inserting this into eq. (1) and using $\tau = F(\rho)$, we obtain
$$P(Y_n = n-k) = \frac{[x^{k-1}]\,\Phi'(F(x)) \cdot [x^{n-k+1}]\,F(x)}{[x^n]\,S(x)} = \frac{-\,\delta\,\rho^{1/2}\,\Phi''(\tau)}{\Gamma(-\frac12)\,\Phi'(\tau)}\,\Big(1 - \frac{k-1}{n}\Big)^{-3/2}(k-1)^{-3/2}\,\big(1+O(k^{-1})\big)\big(1+O(n^{-1})\big), \qquad (7)$$
as $k \to \infty$, $n \to \infty$ and $k \le \frac{n}{2}$.

In view of the fact that the probability $P(Y_n = n-k)$ is at most 1, we have $\sum_{1 \le k \le n^{1/8}} (k-1)\,P(Y_n = n-k) = O(n^{1/8} \cdot n^{1/8}) = o(n^{1/2})$. Furthermore, for large k, we have eq. (7). This motivates to split the summation of eq. (4) and to consider the term $\sum_{n^{1/8} \le k \le n/2} (k-1)\,P(Y_n = n-k)$ separately, as this allows to employ eq. (7). This leads to
$$\begin{aligned}
\sum_{n^{1/8} \le k \le \frac{n}{2}} (k-1)\, P(Y_n = n-k)
&= \frac{\delta\,\rho^{1/2}\,\Phi''(\tau)}{-\Gamma(-\frac12)\,\Phi'(\tau)} \sum_{n^{1/8} \le k \le \frac{n}{2}} \Big(1-\frac{k-1}{n}\Big)^{-3/2} (k-1)^{-1/2}\,\big(1+O(k^{-1})\big)\big(1+O(n^{-1})\big) \\
&= \frac{\delta\,\rho^{1/2}\,\Phi''(\tau)}{-\Gamma(-\frac12)\,\Phi'(\tau)}\; n^{1/2} \sum_{n^{1/8} \le k \le \frac{n}{2}} \Big(1-\frac{k-1}{n}\Big)^{-3/2} \Big(\frac{k-1}{n}\Big)^{-1/2}\frac{1}{n}\,\big(1+O(k^{-1})\big)\big(1+O(n^{-1})\big) \\
&= \frac{\delta\,\rho^{1/2}\,\Phi''(\tau)}{-\Gamma(-\frac12)\,\Phi'(\tau)}\; n^{1/2} \int_0^{1/2} (1-x)^{-3/2}\, x^{-1/2}\, dx\;\big(1+o(1)\big)\big(1+O(n^{-1})\big) \\
&= \alpha\, n^{1/2}\,\big(1+o(1)\big), \quad \text{as } n \to \infty, \qquad (8)
\end{aligned}$$
where α is given by
$$\alpha = \frac{\delta\,\rho^{1/2}\,\Phi''(\tau)}{-\Gamma(-\frac12)\,\Phi'(\tau)} \int_0^{1/2} (1-x)^{-3/2}\, x^{-1/2}\, dx = \frac{2\,\delta\,\rho^{1/2}}{\sqrt{\pi}\,\big(1-F(\rho)\big)}.$$
To see the third equality in eq. (8), we first derive
$$\begin{aligned}
\sum_{n^{1/8} \le k \le \frac{n}{2}} \Big(1-\frac{k-1}{n}\Big)^{-3/2}\Big(\frac{k-1}{n}\Big)^{-1/2}\frac{1}{n}
&= \sum_{1 \le k \le \frac{n}{2}} \Big(1-\frac{k-1}{n}\Big)^{-3/2}\Big(\frac{k-1}{n}\Big)^{-1/2}\frac{1}{n} - \sum_{1 \le k \le n^{1/8}} \Big(1-\frac{k-1}{n}\Big)^{-3/2}\Big(\frac{k-1}{n}\Big)^{-1/2}\frac{1}{n} \\
&= \int_0^{1/2} (1-x)^{-3/2} x^{-1/2}\, dx\;\big(1+o(1)\big) - \int_0^{\frac{n^{1/8}-1}{n}} (1-x)^{-3/2} x^{-1/2}\, dx\;\big(1+o(1)\big) \qquad (9) \\
&= \int_0^{1/2} (1-x)^{-3/2} x^{-1/2}\, dx\;\big(1+o(1)\big) + O(n^{-7/16}) \\
&= \int_0^{1/2} (1-x)^{-3/2} x^{-1/2}\, dx\;\big(1+o(1)\big), \quad \text{as } n \to \infty.
\end{aligned}$$
Here we can estimate the two sums by the integrals in eq. (9), since the integrals converge. The error term $O(k^{-1})$ is absorbed in the same way, so that
$$\sum_{n^{1/8} \le k \le \frac{n}{2}} \Big(1-\frac{k-1}{n}\Big)^{-3/2}\Big(\frac{k-1}{n}\Big)^{-1/2}\frac{1}{n}\,\big(1+O(k^{-1})\big) = \int_0^{1/2}(1-x)^{-3/2}x^{-1/2}\,dx\;\big(1+o(1)\big),$$
as $n \to \infty$. Combining eq. (8) and $\sum_{1 \le k \le n^{1/8}} (k-1)\,P(Y_n = n-k) = o(n^{1/2})$, we derive eq. (4).
Claim 3:
$$\sum_{\frac{n}{2} < k \le n} (k-1)\, P(Y_n = n-k) = o(n^{1/2}), \quad \text{as } n \to \infty.$$

Proof of Claim 3: We compute
$$\sum_{\frac{n}{2} < k \le n} (k-1)\, P(Y_n = n-k) \le n \sum_{\frac{n}{2} < k \le n} P(Y_n = n-k) = n\Big(1 - \sum_{1 \le k \le \frac{n}{2}} P(Y_n = n-k)\Big) = n\Big(1 - \sum_{1 \le k < n^{2/5}} P(Y_n = n-k) - \sum_{n^{2/5} \le k \le \frac{n}{2}} P(Y_n = n-k)\Big). \qquad (10)$$
We choose $k = n^{2/5}$ as the cutoff, in order to employ eq. (7) for large k, and as a result the error term of the estimate is of order $o(n^{1/2})$. We proceed by computing $\sum_{1 \le k < n^{2/5}} P(Y_n = n-k)$ and $\sum_{n^{2/5} \le k \le n/2} P(Y_n = n-k)$.

For any $1 \le k < n^{2/5}$, we derive from eq. (1) together with eq. (5)
$$P(Y_n = n-k) = \frac{[x^{k-1}]\,\Phi'(F(x)) \cdot [x^{n-k+1}]\,F(x)}{[x^n]\,S(x)} = \frac{\rho^{k-1}\,[x^{k-1}]\,\Phi'(F(x))}{\Phi'(\tau)}\Big(1-\frac{k-1}{n}\Big)^{-3/2}\big(1+O(n^{-1})\big) = c\, b_k\, \rho^{k-1}\big(1+O(n^{-3/5})\big)\big(1+O(n^{-1})\big) = c\, b_k\, \rho^{k-1}\big(1+o(n^{-1/2})\big), \quad \text{as } n \to \infty, \qquad (11)$$
where $c = \Phi'(F(\rho))^{-1}$ and $b_k = [x^{k-1}]\,\Phi'(F(x))$. The third equation follows from $\big(1-\frac{k-1}{n}\big)^{-3/2} = 1 + O(n^{-3/5})$, since $k < n^{2/5}$. Thus, inserting eq. (11) into the first sum in eq. (10), we obtain
$$\begin{aligned}
\sum_{1 \le k < n^{2/5}} P(Y_n = n-k) &= \sum_{1 \le k < n^{2/5}} c\, b_k\, \rho^{k-1}\,\big(1+o(n^{-1/2})\big) \\
&= \Big(c \sum_{k \ge 1} b_k\, \rho^{k-1} - c \sum_{k \ge n^{2/5}} b_k\, \rho^{k-1}\Big)\big(1+o(n^{-1/2})\big) \\
&= \Big(1 - c \sum_{k \ge n^{2/5}} b_k\, \rho^{k-1}\Big)\big(1+o(n^{-1/2})\big) \qquad (12) \\
&= \Big(1 - c\,\frac{\delta\,\rho^{1/2}\,\Phi''(\tau)}{-\Gamma(-\frac12)} \sum_{k \ge n^{2/5}} k^{-3/2}\big(1+O(k^{-1})\big)\Big)\big(1+o(n^{-1/2})\big) \qquad (13) \\
&= \Big(1 - 2c\,\frac{\delta\,\rho^{1/2}\,\Phi''(\tau)}{-\Gamma(-\frac12)}\, n^{-1/5}\big(1+O(n^{-2/5})\big)\Big)\big(1+o(n^{-1/2})\big) \qquad (14) \\
&= \big(1 - \alpha\, n^{-1/5}\big(1+O(n^{-2/5})\big)\big)\big(1+o(n^{-1/2})\big) \qquad (15) \\
&= 1 - \alpha\, n^{-1/5} + o(n^{-1/2}), \quad \text{as } n \to \infty.
\end{aligned}$$
Eq. (12) follows from $\sum_{k \ge 1} b_k\, \rho^{k-1} = \Phi'(F(\rho)) = c^{-1}$ and eq. (13) follows from eq. (6), since $k \ge n^{2/5}$ tends to infinity. For eq. (14), we know $\sum_{k \ge n^{2/5}} k^{-3/2} = \zeta(3/2, n^{2/5})$, where $\zeta(s, n) = \sum_{i=0}^{\infty} (n+i)^{-s}$ is the Hurwitz zeta function. It is well known that, for $s > 1$ and real $n \to \infty$, the Hurwitz zeta function has the asymptotic expansion $\zeta(s, n) = \frac{n^{1-s}}{s-1}\big(1+O(n^{-1})\big)$. Then we derive $\sum_{k \ge n^{2/5}} k^{-3/2} = 2\,n^{-1/5}\big(1+O(n^{-2/5})\big)$.

As for the second sum, we have
$$\begin{aligned}
\sum_{n^{2/5} \le k \le \frac{n}{2}} P(Y_n = n-k)
&= \frac{\delta\,\rho^{1/2}\,\Phi''(\tau)}{-\Gamma(-\frac12)\,\Phi'(\tau)} \sum_{n^{2/5} \le k \le \frac{n}{2}} \Big(1-\frac{k-1}{n}\Big)^{-3/2}(k-1)^{-3/2}\big(1+O(k^{-1})\big)\big(1+O(n^{-1})\big) \qquad (16) \\
&= \frac{\alpha}{2} \sum_{n^{2/5} \le k \le \frac{n}{2}} \Big(1-\frac{k-1}{n}\Big)^{-3/2}(k-1)^{-3/2}\big(1+O(k^{-1})\big)\big(1+O(n^{-1})\big) \\
&= \frac{\alpha}{2} \cdot 2\,n^{-1/5}\big(1+O(n^{-2/5})\big)\,\big(1+O(n^{-1})\big) \qquad (17) \\
&= \alpha\, n^{-1/5} + o(n^{-1/2}), \quad \text{as } n \to \infty.
\end{aligned}$$
Eq. (16) follows from eq. (7), as $k \to \infty$, $n \to \infty$ and $k \le \frac{n}{2}$. In eq. (17), the summation is approximated by the Euler-Maclaurin summation formula (see, for example, Graham, Knuth and Patashnik (1994)) as follows:
$$\sum_{n^{2/5} \le k \le \frac{n}{2}} \Big(1-\frac{k-1}{n}\Big)^{-3/2}(k-1)^{-3/2} = \int_{n^{2/5}}^{n/2} f(x)\,dx - \frac12 f(x)\Big|_{n^{2/5}}^{n/2} + \frac{1}{12} f'(x)\Big|_{n^{2/5}}^{n/2} + R_2(n) = 2n^{-1/5}\big(1+O(n^{-3/5})\big) - \frac12 O(n^{-3/5}) + \frac{1}{12} O(n^{-1}) + O(n^{-1}) = 2n^{-1/5}\big(1+O(n^{-2/5})\big), \quad \text{as } n \to \infty,$$
where $f(x) = \big(1-\frac{x}{n}\big)^{-3/2} x^{-3/2}$ and the remainder satisfies $R_2(n) = O(n^{-1})$. Putting everything together, we arrive at
$$\sum_{\frac{n}{2} < k \le n} (k-1)\, P(Y_n = n-k) \le n\Big(1 - \sum_{1 \le k < n^{2/5}} P(Y_n = n-k) - \sum_{n^{2/5} \le k \le \frac{n}{2}} P(Y_n = n-k)\Big) = n\Big(1 - \big(1 - \alpha n^{-1/5} + o(n^{-1/2})\big) - \big(\alpha n^{-1/5} + o(n^{-1/2})\big)\Big) = o(n^{1/2}), \quad \text{as } n \to \infty.$$
Now we are in position to compute
$$\begin{aligned}
E[Y_n] &= \sum_{k=1}^{n} (n-k)\, P(Y_n = n-k) = n - 1 - \sum_{k=1}^{n} (k-1)\, P(Y_n = n-k) \\
&= n - 1 - \sum_{1 \le k \le \frac{n}{2}} (k-1)\, P(Y_n = n-k) - \sum_{\frac{n}{2} < k \le n} (k-1)\, P(Y_n = n-k) \\
&= n - 1 - \alpha\, n^{1/2}\big(1+o(1)\big) - o(n^{1/2}) = n - \alpha\, n^{1/2}\big(1+o(1)\big).
\end{aligned}$$

As for the variance,
$$V[Y_n] = E[Y_n^2] - E[Y_n]^2 = \sum_{k=1}^{n} (n-k)^2\, P(Y_n = n-k) - E[Y_n]^2 = \sum_{k=1}^{n} (k-1)^2\, P(Y_n = n-k) + (n-1)^2 - 2(n-1) \sum_{k=1}^{n} (k-1)\, P(Y_n = n-k) - E[Y_n]^2.$$
It is clear from the above computations that $\sum_{k=1}^{n} (k-1)\,P(Y_n = n-k) = \alpha n^{1/2}(1+o(1))$ and $E[Y_n]^2 = \big(n - \alpha n^{1/2} + o(n^{1/2})\big)^2$. In this case, we have an analogue of eq. (8):
$$\sum_{n^{1/8} \le k \le \frac{n}{2}} (k-1)^2\, P(Y_n = n-k) = \frac{\delta\,\rho^{1/2}\,\Phi''(\tau)}{-\Gamma(-\frac12)\,\Phi'(\tau)}\; n^{3/2} \int_0^{1/2} (1-x)^{-3/2}\, x^{1/2}\, dx\;\big(1+o(1)\big)\big(1+O(n^{-1})\big) = \beta\, n^{3/2}\big(1+o(1)\big), \quad \text{as } n \to \infty,$$
where
$$\beta = \frac{\delta\,\rho^{1/2}\,\Phi''(\tau)}{-\Gamma(-\frac12)\,\Phi'(\tau)} \int_0^{1/2} (1-x)^{-3/2}\, x^{1/2}\, dx = \Big(1 - \frac{\pi}{4}\Big)\alpha.$$
Following the same line of arguments, we obtain $\sum_{k=1}^{n} (k-1)^2\, P(Y_n = n-k) = \beta n^{3/2}(1+o(1))$. As a result,
$$V[Y_n] = \beta n^{3/2}\big(1+o(1)\big) + O(n) = \beta n^{3/2}\big(1+o(1)\big),$$
completing the proof of the lemma.

Fig. 4 The longest rainbow in random RNA secondary structures: we compare the expectation value and standard deviation (blue) with the average length (red) observed in uniformly sampled structures. Minimum arc- and stack-length are r = λ = 1, n = 100 × i where 1 ≤ i ≤ 4 and sample size is 10^4, respectively. Error bars represent one standard deviation.

Fig. 5 The longest rainbow in a random RNA secondary structure: dependency on minimum stack- (LHS) and arc-length (RHS), for 100 ≤ n ≤ 500.
In Fig. 4, we contrast our asymptotic estimate of the expectation and the average length of the longest rainbow in random RNA secondary structures. Fig. 5 shows that the parameter α of the expectation value of the longest rainbow increases if the minimum stack-length increases or the minimum arc-length decreases.

Remark: Lemma 1 shows that the length of the longest rainbow is $n - O(n^{1/2})$ with a standard deviation of $O(n^{3/4})$. As a result, the distribution of $Y_n$ becomes for larger and larger n more and more concentrated.
Theorem 3 We have for any $t > \frac34$
$$\lim_{n \to \infty} P\big(n - Y_n \ge \Omega(n^t)\big) = 0 \qquad (18)$$
and for any $k = o(n)$
$$\lim_{n \to \infty} P(n - Y_n = k) = c\, b_k\, \rho^{k-1}, \qquad (19)$$
where $c = \Phi'(F(\rho))^{-1}$, $b_k = [x^{k-1}]\,\Phi'(F(x))$ and $\Phi(x) = \frac{1}{1-x}$. Consequently the distribution of $n - Y_n$ a.a.s. converges to a discrete limit law.

Fig. 6 The longest rainbow: we contrast the limit distribution (squares) with the distribution (dots) of uniformly sampled structures. Minimum arc- and stack-length are r = λ = 1, n = 400 for a sample size of 10^4 structures.

Proof According to Lemma 1 we have that $Y_n$ is concentrated at $n - \alpha n^{1/2}$ and the variance is $O(n^{3/2})$. Chebyshev's inequality guarantees
$$P\big(\big|E[Y_n] - Y_n\big| \ge a\big) \le \frac{V[Y_n]}{a^2}.$$
Accordingly, for $a = \Omega(n^t)$ with $t > \frac34$, the right hand side tends to zero as n tends to infinity, whence eq. (18). To establish eq. (19) we inspect that the proof of eq. (11) in Lemma 1 holds for $k = o(n)$ and eq. (19) follows. Eq. (18) implies that a.a.s. we may assume $k = o(n)$, in which case eq. (19) guarantees that $n - Y_n = k$ satisfies a discrete limit law.
In Fig. 6, we compare our theoretical result with the distribution of the length of the longest rainbow in uniformly generated structures. Fig. 7 shows that the decrease of $P(Y_n = n-k)$, for increasing k, depends on minimum stack- and arc-length.
The spectrum of rainbows
In the previous section we established that there exists a.s. a unique longest rainbow in an RNA secondary structure. We shall call this rainbow the long rainbow and refer to any other rainbow as short. In this section we study the length-distribution of short rainbows.
To this end we first prove that with high probability we can assume that any short rainbow is actually finite. That is, we show

Corollary 1 Given any $\epsilon > 0$, there exists an integer $t(\epsilon)$ such that
$$\lim_{n \to \infty} P\big(Y_n \ge n - t(\epsilon)\big) \ge 1 - \epsilon.$$
In particular, for $r = 1$, $\lambda = 1$, we have $\rho = \frac13$ and $c = \frac19$, and for $r = 2$, $\lambda = 4$, we have $\rho = 0.540857$ and $c = 0.107902$.

Proof We observe that $\Phi'(F(x))$ converges at ρ. I.e. for any $\epsilon > 0$, there exists an integer $t(\epsilon)$ such that
$$\sum_{k > t(\epsilon)} c\, b_k\, \rho^{k-1} < \epsilon,$$
where $c = \big(\Phi'(F(x))\big|_{x=\rho}\big)^{-1}$ and $b_k = [x^{k-1}]\,\Phi'(F(x))$. According to Theorem 3, we have
$$\lim_{n \to \infty} P\big(Y_n \ge n - t(\epsilon)\big) = \lim_{n \to \infty} \sum_{k \le t(\epsilon)} P(Y_n = n-k) = \sum_{k \le t(\epsilon)} c\, b_k\, \rho^{k-1} = \sum_{k \ge 1} c\, b_k\, \rho^{k-1} - \sum_{k > t(\epsilon)} c\, b_k\, \rho^{k-1} \ge c\, \Phi'(F(x))\big|_{x=\rho} - \epsilon = 1 - \epsilon.$$
In case of $r = 1$ and $\lambda = 1$ we obtain $\rho = \frac13$ and $c = \frac19$, and in case of $r = 2$ and $\lambda = 4$ we have $\rho = 0.540857$ and $c = 0.107902$. This follows by direct computation using Theorem 3.
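The constants above can be checked numerically. The following sketch (our own, not from the paper) locates the minimal positive real root of $B(x)^2 - 4x^{2r}A(x)$ from Theorem 2; for r = λ = 1 the discriminant is $1 - 2x - 3x^2$, so ρ = 1/3 exactly, and $c = S(\rho)^{-2} = 1/9$ since $S(1/3) = 3$.

```python
# Sketch: dominant singularity rho of Theorem 2, via the discriminant roots.
import numpy as np

def rho(r, lam):
    x = np.polynomial.Polynomial([0, 1])
    A = 1 - x**2 + x**(2 * r)
    B = (1 - x) * A + x**(2 * r) * sum(x**i for i in range(lam - 1))
    roots = (B**2 - 4 * x**(2 * r) * A).roots()
    real = roots[np.abs(roots.imag) < 1e-9].real
    return min(real[real > 1e-12])

print(rho(1, 1))  # should give ~0.333333
print(rho(2, 4))  # should give ~0.540857, matching Corollary 1
```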
In the following, we shall study the distribution of rainbows of finite length. For fixed k, let $s_k(n, b)$ denote the number of r-canonical secondary structures with minimum arc-length λ, filtered by the number b of rainbows of length k. Let $S_k(x, u) = \sum_{n,b} s_k(n,b)\, x^n u^b$ denote the corresponding bivariate generating function.

Lemma 2 The bivariate generating function of the number of r-canonical secondary structures with minimum arc-length λ, filtered by rainbows of length k, is given by
$$S_k(x, u) = \frac{1}{1 - F(x) - (u-1)\, f(k+1)\, x^{k+1}}.$$

Remark: The idea here is to enhance the combinatorial construction underlying the proof of Theorem 1, by marking each rainbow of length k. That is, we label each irreducible structure of length k+1 using the term $(u-1)\, f(k+1)\, x^{k+1}$.

Next we analyze $X_{k,n}$, the r.v. counting the number of rainbows of length k in a random RNA secondary structure over n nucleotides. By construction, we have
$$P(X_{k,n} = b) = \frac{s_k(n, b)}{s(n)} = \frac{[x^n u^b]\, S_k(x, u)}{[x^n]\, S(x)}.$$
Theorem 4 For fixed k, $X_{k,n}$ satisfies the discrete limit law
$$\lim_{n \to \infty} P(X_{k,n} = b) = (b+1)\, t^b\, (1-t)^2,$$
where $\tau = F(\rho)$ and $t = \frac{f(k+1)\,\rho^{k+1}}{1 - \tau + f(k+1)\,\rho^{k+1}}$. That is, the limit law of $X_{k,n}$ is a negative binomial distribution NB(2, t) and the probability generating function of the limit distribution is given by
$$p_k(u) = \Big(\frac{1-t}{1-tu}\Big)^2.$$
Proof Since $\Phi(x) = \frac{1}{1-x}$ and $h(x, u) := F(x) + (u-1)\,f(k+1)\,x^{k+1}$ have nonnegative coefficients and $h(0,0) = 0$, the composition $\Phi(h(x,u))$ is a well-defined formal power series. In view of Lemma 2, $S_k(x,u)$ can be expressed as
$$S_k(x, u) = \Phi(h(x, u)).$$
We verify that $S_k(x,u)$ has the same dominant singularity ρ as F(x), by checking that there exists a neighborhood U of 1 such that $h(\rho, u) < 1$ for all u in U. As a result, the composition $S_k(x,u) = \Phi(h(x,u))$ belongs to the subcritical case of singularity analysis (Flajolet and Sedgewick, 2009). Based on the singular expansion of F(x), we can compute the singular expansion of $h(x,u)$ at ρ:
$$h(x, u) = \tau + (u-1)\,f(k+1)\,\rho^{k+1} + \delta\,(\rho - x)^{1/2}\,\big(1 + o(1)\big),$$
where $\tau = F(\rho)$. Combining this with the regular expansion of Φ(x) at $\tau_1 = \tau + (u-1)\,f(k+1)\,\rho^{k+1}$,
$$\Phi(x) = \Phi(\tau_1) + \Phi'(\tau_1)\,(x - \tau_1)\,\big(1 + o(1)\big),$$
we derive the singular expansion of $S_k(x,u)$ at ρ:
$$S_k(x, u) = \Phi(\tau_1) + \Phi'(\tau_1)\,\delta\,(\rho - x)^{1/2}\,\big(1 + o(1)\big).$$
The transfer theorem (Flajolet and Sedgewick, 2009) then guarantees
$$[x^n]\, S_k(x, u) = \Phi'(\tau_1)\,\delta\, c_k\, n^{-3/2}\, \rho^{-n}\,\big(1 + o(1)\big).$$
Now we are in position to compute
$$p_k(u) = \lim_{n \to \infty} \sum_b P(X_{k,n} = b)\, u^b = \lim_{n \to \infty} \frac{\sum_b [x^n u^b]\, S_k(x,u)\, u^b}{[x^n]\, S(x)} = \lim_{n \to \infty} \frac{[x^n]\, S_k(x,u)}{[x^n]\, S(x)} = \frac{\Phi'(\tau_1)}{\Phi'(\tau)} = \Big(\frac{1-t}{1-tu}\Big)^2,$$
where $t = \frac{f(k+1)\rho^{k+1}}{1-\tau+f(k+1)\rho^{k+1}}$. Extracting the coefficient of $u^b$ in $p_k(u)$, we arrive at
$$\lim_{n \to \infty} P(X_{k,n} = b) = [u^b]\, p_k(u) = \binom{b+1}{b}\, t^b\, (1-t)^2.$$
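A hypothetical numerical check of Theorem 4 (our own sketch, again for r = λ = 1 and reusing secondary_structure_counts from the sketch after Theorem 2): expand $S_k(x, u)$ of Lemma 2 exactly, storing coefficients as polynomials in u, and compare with the negative binomial limit. For moderate n the exact law should already be fairly close to the limit.

```python
# Sketch: exact law of X_{k,n} versus the NB(2, t) limit (r = lambda = 1).
import numpy as np

def exact_vs_limit(n, k, b_max=4):
    s = secondary_structure_counts(n)
    f = [0, 1] + [s[m - 2] for m in range(2, n + 1)]   # f(m) = s(m-2), m >= 2
    # c[i] holds the u-coefficient vector of [x^i] S_k(x, u), from S_k = 1 + h S_k
    c = [np.zeros(b_max + 2) for _ in range(n + 1)]
    c[0][0] = 1.0
    for i in range(1, n + 1):
        acc = np.zeros(b_max + 2)
        for j in range(1, i + 1):
            hj = np.zeros(b_max + 2)
            hj[1 if j == k + 1 else 0] = f[j]   # h_{k+1} = u f(k+1), else f(j)
            acc += np.convolve(hj, c[i - j])[: b_max + 2]
        c[i] = acc
    exact = c[n][: b_max + 1] / s[n]
    rho, tau = 1 / 3, 2 / 3
    t = f[k + 1] * rho ** (k + 1) / (1 - tau + f[k + 1] * rho ** (k + 1))
    limit = [(b + 1) * t**b * (1 - t) ** 2 for b in range(b_max + 1)]
    return exact, limit

print(exact_vs_limit(40, 2))
```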
Corollary 2 For fixed k, expectation and variance of $X_{k,n}$ are asymptotically given by
$$\lim_{n \to \infty} E(X_{k,n}) = \frac{2\, f(k+1)\,\rho^{k+1}}{1 - \tau}, \qquad \lim_{n \to \infty} V(X_{k,n}) = \frac{2\, f(k+1)\,\rho^{k+1}\,\big(1 - \tau + f(k+1)\,\rho^{k+1}\big)}{(1 - \tau)^2}, \qquad (20)$$
where $\tau = F(\rho)$.
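Eq. (20) is straightforward to evaluate; a short sketch (ours, for r = λ = 1, where ρ = 1/3 and τ = F(ρ) = 2/3, reusing secondary_structure_counts from above):

```python
# Sketch: limiting mean and variance of the number of rainbows of length k.
rho, tau = 1 / 3, 2 / 3
s = secondary_structure_counts(20)
for k in (1, 2, 5, 10):
    w = s[k - 1] * rho ** (k + 1)            # f(k+1) rho^{k+1}, f(m) = s(m-2)
    mean = 2 * w / (1 - tau)
    var = 2 * w * (1 - tau + w) / (1 - tau) ** 2
    print(k, mean, var)                      # e.g. k = 2 gives mean 2/9
```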
Discussion
We have shown that the length-spectrum of rainbows in random RNA secondary structures has a gap. By Lemma 1 the longest rainbow is a.a.s. of size $n - O(n^{1/2})$ and Corollary 1 shows that with high probability the second largest rainbow has finite size. In any case, there exists a.a.s. a unique longest rainbow. In Theorem 3 we analyze the limit distribution of the size of the unique longest rainbow and show that it satisfies a discrete limit law. In Theorem 4 we identify the distribution of rainbows of finite size k, in the limit of long sequences, as a negative binomial.

Fig. 9 We uniformly sample 10^4 structures with minimum arc- and stack-length r = λ = 1 and sequence length n = 100 × i where 1 ≤ i ≤ 4. (A) displays the expectation and standard deviation of the length of the second longest rainbow. (B) displays, for the same sample, $k_1 + k_2$, where $k_i$ is the average length of the i-th longest rainbow. Error bars represent one standard deviation.

The analysis in Section 3 can be generalized to the lengths of the second and third longest rainbows in uniformly generated structures. One can show that $E[Y_{2,n}] = \alpha\, n^{1/2}\,(1+o(1))$ and $E[Y_{3,n}] = o(n^{1/2})$, where $Y_{2,n}$ and $Y_{3,n}$ denote the lengths of the second and third longest rainbows in RNA secondary structures. Suppose the longest rainbow, $Y_n$, has length $n - k$. Taking out the enclosed irreducible structure, the remaining structure has length k. While the rainbow may cut the structure into two distinct intervals of equal orders, the resulting number of structures is far less than the number of structures over a single interval of size $k' = k - o(k)$. In this case, Lemma 1 guarantees that the second longest rainbow has average length $k' + O(k'^{1/2})$, since it is then effectively the longest rainbow of the remaining structure. Therefore, $E[Y_{2,n}]$ is
$$\sum_k \big(k' + O(k'^{1/2})\big)\, P(Y_n = n-k) = \sum_k \big(k + o(k)\big)\, P(Y_n = n-k),$$
which is $\alpha\, n^{1/2}\,(1+o(1))$ employing Claim 2 and Claim 3 of Lemma 1. Fig. 9 (A) confirms that the length of the second longest rainbow is $O(n^{1/2})$. Corollary 1 implies that $Y_{2,n}$ is finite with high probability as n tends to infinity. However, we also have $E[Y_{2,n}] = O(n^{1/2})$, which means that on a set of measure tending to zero, $Y_{2,n}$ is infinite. To illustrate this, consider X with $P(X = k) = C_k\, 4^{-k}$ for $k \ge 1$, where $C_k = \frac{1}{k+1}\binom{2k}{k}$. Then for any $\epsilon > 0$, there exists $k_0$ such that $P(X \le k_0) \ge 1 - \epsilon$; in other words, X is finite with high probability. However, we have
$$E[X] = \sum_k k\, P(X = k) = \sum_k k\, C_k\, 4^{-k} = \infty.$$
Our results are connected with the distribution of the 5'-3' distance in RNA structures, whose finite expectation has been reported in Yoffe et al (2011). Han and Reidys (2012) computed the distribution of these distances, proving that they satisfy a discrete limit law. While there is still a set of limit measure zero, composed of structures in which the longest rainbow is not of length n − k, for finite k, our results show that with high probability we obtain a unique longest rainbow and several short rainbows of finite length. Accordingly, we can provide further insight into the discrete limit law, as the latter does not specify arc-lengths. Fig. 9 (B) shows that the longest and second longest rainbows leave only $o(n^{1/2})$ of the nucleotides uncovered. This finding is in accordance with the result of Han and Reidys (2012), who established that the 5'-3' distance is finite.

Fig. 10 The longest rainbow in RNA secondary structures: we compare the limit distribution of random RNA secondary structures having minimum arc- and stack-length four (blue) with the distribution of mfe-structures (red) obtained by ViennaRNA (Lorenz et al, 2011) of 10^4 random sequences of length 1000.
According to Wexler et al (2007), sparsification achieves linear speed-up if the polymer-zeta property holds. Our results show that in random RNA structures, with high probability, the longest rainbow has almost the length of the sequence. Thus the polymer-zeta property does not hold for RNA secondary structures, unless one considers particular classes of natural RNA structures such as mRNA (Wexler et al, 2007). Having a closer look at the number of stems in random RNA structures, we observe a central limit theorem (thus having an expectation value of order O(n)). This suggests that the expected size of stems is O(1) and thus we find O(1) arcs of length $n - O(n^{1/2})$.
Let us put our results into context with mfe-structures. To this end we compare in Fig. 10 the limit distribution of the length of the longest rainbow with that of mfe-structures. We can report that the longest rainbow in mfe-structures satisfies a similar distribution. Closer inspection reveals that, compared to random structures, mfe-structures exhibit fewer rainbows of length between 980 and 1000 and more rainbows of length between 400 and 980. Increasing the minimum stack-size in random structures has the effect that the distributions of lengths of the longest rainbow in random and mfe-structures become more and more similar, see Fig. 7 (LHS). This makes sense as mfe-structures typically form stacks of larger size. As for the expectation value, Fig. 11 shows that the longest rainbow of mfe-structures is also close to, but smaller than, that in random structures.

Fig. 11 Expectation value and standard deviation: we compare the theoretical estimate for random structures having minimum arc- and stack-length four (blue) with the distribution of mfe-structures (red), obtained by ViennaRNA (Lorenz et al, 2011) of 10^4 random sequences of length 500 ≤ n ≤ 1000.

Table 1 The probability of having a long rainbow in RNA structures: we contrast our theoretical result in Corollary 1 for r = 4 and λ = 4 and the probabilities obtained from 10^4 random mfe-structures of length 1000.

We have shown in Corollary 1 that random structures exhibit with high probability a longest rainbow of size n − k, for finite k. In Table 1 we study this phenomenon in mfe-structures. In fact we observe that this probability is higher in mfe-structures than in random structures, indicating that for mfe-structures the gap in the sequence of lengths of rainbows is more pronounced.
In Fig. 12 we display that eq. (20) provides a good approximation for the expected number of short rainbows of length ≥ 25 in mfe-structures. However, mfe-structures have fewer rainbows of length between 5 and 15 and more rainbows of length ≥ 15 than random structures. As we observed in the context of the length distribution of the longest rainbow, increasing the minimum stack-size in random structures results in a better and better approximation of short rainbows in mfe-structures. This seems plausible, as mfe-structures, in order to achieve minimum energy, tend to form long stems.
Finally, we discuss our findings in the context of rainbows observed in structures contained in RNA databases (from the RCSB PDB database (Berman et al, 2000) and the comparative RNA web (CRW) site (Cannone et al, 2002)). The observed average ratio of the length of the longest rainbow relative to the length of the sequence varies with different RNA families. For tRNA (76-90 nt) this ratio is 0.928 (±0.048), implying a long rainbow. This is a result of the fact that tRNA typically forms the cloverleaf structure (Kim et al, 1974; Robertus et al, 1974). A similar ratio is also observed for transfer-messenger RNA (tmRNA, 300-400 nt) and 5S rRNA (120 nt). 23S rRNAs of Escherichia coli and Thermus thermophilus (2904 nt) exhibit a ratio of 0.999. However 16S rRNAs of the same species (1542 nt) have a ratio of 0.584 for the longest rainbow and a ratio of 0.308 for the second longest (Woese et al, 1980). This shows a general tendency of natural RNA structures to have a unique longest rainbow, but there are exceptions: specific functionalities lead to structures having a small number of long rainbows.

Fig. 12 The expected number of rainbows of length k in RNA secondary structures: we contrast eq. (20) for random structures having minimum arc- and stack-length four (orange) and mfe-structures, of 10^4 random sequences of lengths n = 600, 800, 1000, respectively.
As for future work we are concerned with the implications of the results of this paper for the entire arc-spectrum of RNA secondary structures. We argue here that, for n sufficiently large, using the fact that arcs are non-crossing, we can employ the results on the longest rainbow in order to compute the entire arc-spectrum in a recursive manner. Namely, once the stack concerning the longest rainbow is removed, we obtain an induced, nested, reducible RNA secondary structure. With respect to this structure we then iterate the argument working our way from top to bottom.
Fig. 1 An RNA secondary structure represented as a contact graph (a) and as a diagram (b).

Fig. 3 The decomposition of a secondary structure and an irreducible structure (reducible structures are colored in blue).

Fig. 7 The longest rainbow: dependency on minimum stack- (LHS) and arc-length (RHS), where n = 400.

Fig. 8 The expectation value of rainbows of length k for different minimum stack- and arc-lengths. Fig. 8 illustrates the dependency of the expectation value of rainbows of length k on minimum stack- and minimum arc-length.
Table 1:

P(Y_1000 ≥ 1000 − k) | k = 100 | k = 200 | k = 300 | k = 400 | k = 500
mfe                  | 0.6333  | 0.7779  | 0.8574  | 0.9207  | 0.9775
uniform              | 0.7179  | 0.7936  | 0.8295  | 0.8514  | 0.8666
Acknowledgments

We would like to thank the reviewers for their comments and suggestions, and specifically for pointing out a gap in the proof of Lemma 1. We gratefully acknowledge the help of Kevin Shinpaugh and the computational support team at BI. Many thanks to Christopher L. Barrett and Henning Mortveit for discussions. The second author is a Thermo Fisher Scientific Fellow in Advanced Systems for Information Biology and acknowledges their support of this work.
References

Backofen R, Tsur D, Zakov S, Ziv-Ukelson M (2011) Sparse RNA folding: Time and space efficient algorithms. Journal of Discrete Algorithms 9(1):12-31
Barrett C, Huang F, Reidys C (2017) Sequence-structure relations of biopolymers. Bioinformatics 33(3):382-389
Barrett C, Li T, Reidys C (2016) RNA secondary structures having a compatible sequence of certain nucleotide ratios. J Comput Biol 23(11):857-873
Berman H, Westbrook J, Feng Z et al (2000) The Protein Data Bank. Nucleic Acids Res 28:235-242
Byun Y, Han K (2009) PseudoViewer3: generating planar drawings of large-scale RNA structures with pseudoknots. Bioinformatics 25(11):1435-1437
Cannone J, Subramanian S, Schnare M et al (2002) The Comparative RNA Web (CRW) Site: an online database of comparative sequence and structure information for ribosomal, intron, and other RNAs. BMC Bioinformatics 3:2
Clote P, Ponty Y, Steyaert J (2012) Expected distance between terminal nucleotides of RNA secondary structures. Journal of Mathematical Biology 65(3):581-599
Eddy S (2001) Non-coding RNA genes and the modern RNA world. Nature Reviews Genetics 2(12):919-929
Flajolet P, Sedgewick R (2009) Analytic Combinatorics. Cambridge University Press, New York
Graham R, Knuth D, Patashnik O (1994) Concrete Mathematics: A Foundation for Computer Science. Addison-Wesley Professional, Reading Mass
Han H, Reidys C (2012) The 5'-3' Distance of RNA Secondary Structures. Journal of Computational Biology 19(7):868-878
Hofacker I, Schuster P, Stadler P (1998) Combinatorics of RNA secondary structures. Discrete Appl Math 88(1-3):207-237
Hofacker I, Fontana W, Stadler P, Bonhoeffer L, Tacker M, Schuster P (1994) Fast folding and comparison of RNA secondary structures. Monatshefte für Chemie / Chemical Monthly 125(2):167-188
Howell J, Smith T, Waterman M (1980) Computation of Generating Functions for Biological Molecules. SIAM J Appl Math 39(1):119-133
Huang F, Reidys C (2012) On the combinatorics of sparsification. Algorithms for Molecular Biology 7(1):1-15
Hunter C, Sanders J (1990) The nature of π-π interactions. J Am Chem Soc 112(14):5525-5534
Jin E, Reidys C (2010a) Irreducibility in RNA structures. Bull Math Biol 72:375-399
Jin E, Reidys C (2010b) On the decomposition of k-noncrossing RNA structures. Adv in Appl Math 44:53-70
Kim S, Sussman J, Suddath F et al (1974) The General Structure of Transfer RNA Molecules. Proceedings of the National Academy of Sciences of the United States of America 71(12):4970-4974
Kruger K, Grabowski P, Zaug A, Sands J, Gottschling D, Cech T (1982) Self-splicing RNA: Autoexcision and autocyclization of the ribosomal RNA intervening sequence of tetrahymena. Cell 31(1):147-157
Lorenz R, Bernhart S, Höner zu Siederdissen C, Tafer H, Flamm C, Stadler P, Hofacker I (2011) ViennaRNA Package 2.0. Algorithms Mol Biol 6:26
McCarthy B, Holland J (1965) Denatured DNA as a direct template for in vitro protein synthesis. Proceedings of the National Academy of Sciences of the United States of America 54(3):880-886
Penner R, Waterman M (1993) Spaces of RNA secondary structures. Adv Math 217:31-49
Robart A, Chan R, Peters J, Rajashankar K, Toor N (2014) Crystal structure of a eukaryotic group II intron lariat. Nature 514(7521):193-197
Robertus J, Ladner J, Finch J, Rhodes D, Brown R, Clark B, Klug A (1974) Structure of yeast phenylalanine tRNA at 3Å resolution. Nature 250(5467):546-551
Salari R, Möhl M, Will S, Sahinalp S, Backofen R (2010) Time and Space Efficient RNA-RNA Interaction Prediction via Sparse Folding. In: Berger B (ed) Research in Computational Molecular Biology, no. 6044 in Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp 473-490
Schmitt W, Waterman M (1994) Linear trees and RNA secondary structure. Disc Appl Math 51:317-323
Smith T, Waterman M (1978) RNA secondary structure. Math Biol 42:31-49
Stein P, Waterman M (1979) On some new sequences generalizing the Catalan and Motzkin numbers. Discrete Math 26(3):261-272
Šponer J, Leszczynski J, Hobza P (2001) Electronic properties, hydrogen bonding, stacking, and cation binding of DNA and RNA bases. Biopolymers 61(1):3-31
Šponer J, Šponer J, Mládek A, Jurečka P, Banáš P, Otyepka M (2013) Nature and magnitude of aromatic base stacking in DNA and RNA: Quantum chemistry, molecular mechanics, and experiment. Biopolymers 99(12):978-988
Waterman M (1978) Secondary structure of single-stranded nucleic acids. In: Rota GC (ed) Studies on foundations and combinatorics, Advances in mathematics supplementary studies, Academic Press N.Y., vol 1, pp 167-212
Waterman M (1979) Combinatorics of RNA Hairpins and Cloverleaves. Stud Appl Math 60(2):91-98
Waterman M, Smith T (1986) Rapid dynamic programming algorithms for RNA secondary structure. Advances in Applied Mathematics 7(4):455-464
Wexler Y, Zilberstein C, Ziv-Ukelson M (2007) A Study of Accessible Motifs and RNA Folding Complexity. Journal of Computational Biology 14(6):856-872
Woese C, Magrum L, Gupta R, Siegel R, Stahl D, Kop J, Crawford N, Brosius J, Gutell R, Hogan J, Noller H (1980) Secondary structure model for bacterial 16S ribosomal RNA: phylogenetic, enzymatic and chemical evidence. Nucleic Acids Research 8(10):2275-2293
Yoffe A, Prinsen P, Gelbart W, Ben-Shaul A (2011) The ends of a large RNA molecule are necessarily close. Nucleic Acids Research 39(1):292-299
Zuker M (1989) On finding all suboptimal foldings of an RNA molecule. Science 244(4900):48-52
Zuker M, Sankoff D (1984) RNA secondary structures and their prediction. Bulletin of Mathematical Biology 46(4):591-621
| []
|
[
"NUMERICAL STUDY OF AN ANISOTROPIC VLASOV EQUATION ARISING IN PLASMA PHYSICS",
"NUMERICAL STUDY OF AN ANISOTROPIC VLASOV EQUATION ARISING IN PLASMA PHYSICS"
]
| [
"Baptiste Fedele ",
"Claudia Negulescu "
]
| []
| []
| Goal of this paper is to investigate several numerical schemes for the resolution of two anisotropic Vlasov equations. These two toy-models are obtained from a kinetic description of a tokamak plasma confined by strong magnetic fields. The simplicity of our toy-models permits to better understand the features of each scheme, in particular to investigate their asymptotic-preserving properties, in the aim to choose then the most adequate numerical scheme for upcoming, more realistic simulations. | 10.3934/krm.2018055 | [
"https://arxiv.org/pdf/1610.01592v2.pdf"
]
| 119,745,862 | 1610.01592 | 3dcececab8a8df1331f77baf80fc119937be6334 |
NUMERICAL STUDY OF AN ANISOTROPIC VLASOV EQUATION ARISING IN PLASMA PHYSICS
Baptiste Fedele · Claudia Negulescu

Keywords: Plasma modelling, kinetic equations, gyro-kinetic equations, asymptotic limit, numerical schemes, simulation, asymptotic-preserving schemes
Goal of this paper is to investigate several numerical schemes for the resolution of two anisotropic Vlasov equations. These two toy-models are obtained from a kinetic description of a tokamak plasma confined by strong magnetic fields. The simplicity of our toy-models permits to better understand the features of each scheme, in particular to investigate their asymptotic-preserving properties, in the aim to choose then the most adequate numerical scheme for upcoming, more realistic simulations.
Introduction
The present paper addresses a new approach for an efficient numerical resolution of anisotropic transport models, which in simplified form are of the type
$$\partial_t f^\epsilon + \frac{u}{\epsilon} \cdot \nabla f^\epsilon = 0, \quad \forall (t, x, y) \in [0, T] \times \Omega, \qquad f^\epsilon(0, x, y) = f_{in}(x, y), \qquad (1.1)$$
subject to appropriate boundary conditions (here periodic ones). The unknown $f^\epsilon$ stands for the quantity (distribution function) which is advected along the given (or self-consistently computed) field u in the domain $\Omega := [0, L_x] \times [0, L_y]$, and the small scaling parameter $\epsilon \ll 1$ indicates that we have to deal with very strong advection fields u or, equivalently, with the long-time asymptotics of $f^\epsilon$. Such anisotropic transport models arise very often in physics, as simplifications of more complex systems. In Section 2 we detail some examples coming from plasma physics, as the Vlasov equation for the ion dynamics in the gyrokinetic regime. There are however several other examples arising in physics and leading to a simplified transport equation as (1.1), for example when one studies the long-time asymptotics of the incompressible 2D Euler equations, (1.1) representing then the vorticity equation, which has to be coupled (via u) with a Poisson equation for the stream-function computation [19].
A numerical resolution of problems of the type (1.1) is rather challenging in the regime ǫ ≪ 1, due to the singularity of the mathematical problem as ǫ → 0. Certainly, the exact solution of the simple transport case (1.1) is known for ǫ > 0, however not in general situations, when u is self-consistently computed via f^ǫ and when other (non-stiff) terms are present. These general situations then require an efficient numerical treatment of (1.1). From a physical point of view we can say that we have to cope with a multiscale problem, the parameter ǫ being the stiffness parameter. Standard schemes (explicit hyperbolic approaches) require very restrictive CFL-conditions (dependent on ǫ) in order to accurately account for the microscopic ǫ-scales. Very often in such situations the stiff term is treated implicitly [8], in order to avoid these too restrictive CFL-conditions. This can work in some situations, for example when the grid is aligned with the anisotropy, and only for a certain range of ǫ-values. However, in more general configurations, with non-aligned grids and ǫ-values covering the whole interval [0, 1], impliciting the stiff term is no longer sufficient, as shall be seen in this paper. We propose thus in this work a new numerical procedure, based on Asymptotic-Preserving arguments, able to solve (1.1) in an efficient manner, uniformly accurate and stable in ǫ, and this on a simple, Cartesian grid. Asymptotic-Preserving methods are efficient, as they are designed to mimic on the discrete level the asymptotic behavior of the singularly perturbed problem solutions (see [15, 22] for a detailed introduction).

This paper was initiated by the repeated remarks/questions one of the authors received during conferences, suggesting that impliciting the stiff term in (1.1) is enough to get an efficient AP-scheme which behaves well even in the limit ǫ → 0. The aim of this paper is to prove the contrary: AP-schemes are more than impliciting the stiff term. In order to understand in detail the main features of the AP-scheme proposed here, we preferred to keep the investigated model as simple as possible, so that a detailed numerical analysis is possible, permitting to perceive the differences of our scheme when compared to standard (implicit) schemes. We hope that in doing so, we are able to resolve some of the confusion that surrounds AP-schemes. However, even if the results presented here are based on a simplified model such as (1.1), the same Asymptotic-Preserving approach can be used for more involved anisotropic transport problems, such as those presented in Section 2, which shall be the objective of an upcoming work.

The AP-procedure we propose here was employed in other contexts by the authors (elliptic [6, 7], parabolic [20]). The present setting is more stimulating, as we have to cope with highly oscillating problems when ǫ ≪ 1 and no longer with dissipative ones. In the present oscillating case, the (weak) limit ǫ → 0 is more challenging, and has to be defined adequately. We refer the reader to [3, 4, 16] for other AP-scheme references.

This paper is organized as follows. Section 2 deals with the presentation of a physical situation leading, after scaling and simplification, to the anisotropic transport equation (1.1). Two simplified models, which will be studied in the following, are presented. Section 3 reviews the mathematical framework necessary to study the first toy model and investigates the asymptotic limit ǫ → 0.
Section 4 introduces several numerical schemes that we shall apply for the resolution of the first toy model. Then, we present the numerical results obtained with these schemes in Section 5 and the numerical analysis in Section 6. The last section is dedicated to the mathematical and numerical study of the second toy model which considers variable coefficients. A conclusion gives some hints for our upcoming work, concerning the more realistic Vlasov equation (2.4).
2. Physical motivation and toy models
Let us briefly say a few words about the physical motivation of the present work and introduce the two simplified models we shall investigate numerically in the next sections. These simplified models are caricatures of typical asymptotic regimes encountered in plasma physics, as for example the gyro-kinetic regime, and contain all the numerical difficulties arising in the more complex real physical systems.
The core tokamak plasma can be considered as collisionless, such that the most appropriate model for the description of its dynamics is the Vlasov equation for each particle species (α = e for electrons and α = i for ions), i.e.
$$\partial_t f_\alpha + v\cdot\nabla_x f_\alpha + \frac{e_\alpha}{m_\alpha}\,(E + v\times B)\cdot\nabla_v f_\alpha = 0\,, \qquad (2.2)$$
where e α = ±e resp. m α are the particle elementary charge resp. mass and E(t, x) resp. B(t, x) are the electric respectively magnetic fields, determined self-consistently from Maxwell's equations. In the electrostatic case (given field B), Maxwell's equations have to be replaced by Poisson's equation
$$-\epsilon_0\,\Delta\Phi = \rho\,, \qquad \rho(t,x) := \sum_\alpha e_\alpha \int_{\mathbb{R}^3} f_\alpha(t,x,v)\,dv\,, \qquad (2.3)$$
where Φ is the electrostatic potential, related to the electric field E by E(t, x) = −∇Φ(t, x). For more details about the modelling of magnetically confined fusion plasmas, we refer the interested reader to the textbooks [2,10,13].
From a numerical point of view, solving the system (2.2)-(2.3) is rather arduous, due among others to its high dimensionality (6-dimensional in the phase space (x, v)) and to the presence of several time and space scales in the dynamics, introduced for example by the strong magnetic field B which confines the plasma in the tokamak. We shall be concerned in the present work with the multi-scale aspects of the kinetic problem, difficulties which are described mathematically by the following rescaling of the Vlasov equation for the ions (see [1,9,11,12,21] for the gyrokinetic scaling)
$$\partial_t f + v\cdot\nabla_x f + \Big(E + \frac{1}{\epsilon}\,v\times B\Big)\cdot\nabla_v f = 0\,, \qquad (2.4)$$
where ǫ stands for the ratio of the particle cyclotron period to the observation time. The electrons experience the appearance of a second small parameter, related to the small electron-to-ion mass ratio m_e/m_i, leading to an additional numerical burden which we shall not consider here (see [5]). The effect of the intense magnetic field on the particle dynamics is that it introduces a strong anisotropy, the motion of the charged particles being split into a fast gyration around the magnetic field lines and a slow dynamics along these lines, a separation which necessarily causes numerical complications.
Let us introduce now two simplified toy models, which contain all the numerical difficulties of the initial model. In the rest of this paper we shall consider a homogeneous magnetic field $B = b\,\mathbf{b}$ with fixed direction $\mathbf{b} := e_z$ and constant magnitude $|B| = b \equiv 1$. Furthermore, let us also introduce the following notation
$$v_{\|} = (0,0,v_z)^t\,, \qquad v_\perp = (v_x,v_y,0)^t\,, \qquad {}^\perp v := (v_y,-v_x,0)^t = v\times B\,.$$
Sometimes it is more convenient to shift in (2.4) from Cartesian coordinates to polar coordinates for the velocity, i.e.
$$v = (v_x,v_y,v_z) \;\Leftrightarrow\; (r,\theta,v_z)\,, \qquad v_x := r\cos\theta\,,\quad v_y := r\sin\theta\,,\qquad \theta\in[0,2\pi)\,,\ r\ge 0\,.$$
The Vlasov equation (2.4), written in polar coordinates, has then the form
$$\partial_t F + v_z\,\partial_z F + E_z\,\partial_{v_z} F + (E_x\cos\theta + E_y\sin\theta)\,\partial_r F - \frac{1}{r}\,(E_x\sin\theta - E_y\cos\theta)\,\partial_\theta F + r\,(\cos\theta\,\partial_x F + \sin\theta\,\partial_y F) - \frac{1}{\epsilon}\,\partial_\theta F = 0\,, \qquad (2.5)$$
where the unknown is now F(t, x, y, z, r, θ, v_z).
The two formulations, (2.4) resp. (2.5), corresponding to a Cartesian (not field-aligned) resp. polar (field-aligned) configuration, are different from a numerical point of view, and different numerical schemes are usually employed for their resolution. To understand this difference better, we shall investigate in the present work in detail some numerical schemes for simplified versions of (2.4) and (2.5). We deliberately simplified these equations in order to be able to do a complete numerical analysis and to understand in all detail the features of the here introduced AP-schemes.
2.1. First toy model - Polar, field-aligned configuration. Let us start from the Vlasov equation (2.4), assume here that E ≡ 0, B = e_z and consider furthermore only the dynamics in the perpendicular plane (x, y), i.e.
$$\partial_t f + v_\perp\cdot\nabla_x f + \frac{1}{\epsilon}\,(v\times B)\cdot\nabla_v f = 0\,, \qquad (2.6)$$
where ǫ ≪ 1 accounts as usual for very strong magnetic fields. In order to simplify the computations, one often shifts to polar coordinates for the velocity, leading to
$$\partial_t F + r\cos\theta\,\partial_x F + r\sin\theta\,\partial_y F - \frac{1}{\epsilon}\,\partial_\theta F = 0\,, \qquad (2.7)$$
where the unknown now is F (t, x, y, r, θ). We recognize thus a simple 3D anisotropic transport equation, the variable r being considered as a parameter in (2.7).
Choosing an initial condition F_in independent of the variable y leads to an even simpler 2D transport model
$$\partial_t F + r\cos\theta\,\partial_x F - \frac{1}{\epsilon}\,\partial_\theta F = 0\,. \qquad (2.8)$$
This problem represents the simplest example of an anisotropic advection equation, to be understood in detail before designing an efficient scheme for the resolution of the Vlasov equation in the gyrokinetic regime (2.4). It is sufficiently difficult in order to study the behavior of the various schemes we shall introduce, and shall be the starting point of Section 3.
2.2. Second toy model - Cartesian, not field-aligned configuration. In this second part, we shall simplify our Vlasov equation differently, in order to study a different behavior. In particular, setting E ≡ 0, B ≡ e_z and taking an initial condition independent of the space variable yields the following 2D equation, in Cartesian coordinates,
$$\partial_t f + \frac{1}{\epsilon}\,(v\times B)\cdot\nabla_v f = 0\,, \qquad (2.9)$$
or equivalently
$$\partial_t f + \frac{v_y}{\epsilon}\,\partial_{v_x} f - \frac{v_x}{\epsilon}\,\partial_{v_y} f = 0\,. \qquad (2.10)$$
The difference of this model with respect to the previous one is that this time the characteristics are no longer straight lines but curves, such that the numerical schemes will behave differently. As mentioned earlier, these two models correspond to simplified versions of a field-aligned, polar-coordinate framework, as well as of a not field-aligned, Cartesian framework, both associated with the Vlasov equation (2.4) in the gyro-kinetic regime.
2.3. Aim of the present paper. The main points we are interested in within this study are the following:
• design of AP-schemes for an efficient numerical resolution of anisotropic Vlasov equations of type (2.8), (2.9). Important properties we ask from the schemes are: (a) stability independent of ǫ; (b) numerical diffusion/accuracy independent of ǫ; (c) discretization of the limit model as ǫ → 0;
• show that taking the stiff term $\frac{1}{\epsilon}\,(v\times B)\cdot\nabla_v f$ in (2.4) implicitly is not sufficient for having an AP-scheme, meaning that AP-schemes are more than taking "implicitly" the suitable terms. AP-schemes have to mimic at the discrete level the precise asymptotic behavior of the solution in the limit ǫ → 0;
• perform a detailed numerical analysis of the presented schemes in the framework of the two simplified toy models (2.8), (2.9), and identify exactly the particularities of each scheme and each equation;
• understand the difference between a field-aligned framework (2.8) and a Cartesian framework (2.9), and this from a numerical point of view;
• prepare the foundation for a future, more realistic work, dealing with the resolution of the initial Vlasov equation (2.4) in the gyro-kinetic regime.
Finally, let us say some words about Asymptotic-Preserving schemes. In general, inaccuracy in numerical simulations can result from applying unstable algorithms to well-conditioned problems, or stable algorithms to ill-conditioned problems. Dealing with singularly-perturbed problems is a hard task, as they are ill-conditioned from the beginning. A standard, stable discretization (implicit in this case) often results in inaccurate results. The essence of AP-procedures is to replace singularly-perturbed problems by equivalent, regularly perturbed, well-conditioned problems, leading to uniformly accurate results if stable algorithms are used (AP-approach).
3. First anisotropic Vlasov toy model and its mathematical study
Let us investigate now in detail the following simplified toy model, corresponding to a field-aligned anisotropic Vlasov equation
$$(V)^\epsilon \quad \begin{cases} \partial_t f^\epsilon + a\,\partial_x f^\epsilon + \dfrac{b}{\epsilon}\,\partial_y f^\epsilon = 0\,, & \forall (t,x,y)\in[0,T]\times[0,L_x]\times[0,L_y]\,,\\[4pt] f^\epsilon(0,x,y) = f_{in}(x,y)\,, \end{cases} \qquad (3.11)$$
where f_in is a given initial condition, a > 0 and b > 0 are for the moment constants, and 0 < ǫ ≪ 1 is a parameter representing the strong anisotropy/stiffness of the problem. Our computational domain is a doubly periodic box Ω := [0, L_x] × [0, L_y].
We shall review here some standard numerical schemes as well as introduce some new ones for the resolution of such a singularly perturbed problem, and finally discuss their advantages and disadvantages. In particular, one is interested in numerical schemes capable of solving (3.11) uniformly accurately in ǫ, so-called "Asymptotic-Preserving" schemes. Let us however start with a detailed mathematical study of the behavior of (3.11).
3.1. Singularly perturbed problem. Equation (3.11) is a simple advection problem, whose exact solution is given by the characteristic method, i.e.
$$f^\epsilon(t,x,y) = f_{in}\Big(x - a\,t\,,\; y - \frac{b}{\epsilon}\,t\Big)\,, \qquad \forall (t,x,y)\in[0,T]\times\Omega\,. \qquad (3.12)$$
Remark that this function is L_x-periodic in the variable x and L_y-periodic in the variable y. Concerning the time variable, two time-scales are present in the problem: a slow time-scale t and a rapid one, t/ǫ.
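These two scales are easy to visualize from the explicit formula (3.12). The following minimal sketch (assuming NumPy is available; all parameter values are illustrative) samples the exact solution at a fixed point for several values of ǫ:

```python
import numpy as np

Lx, Ly, a, b = 2 * np.pi, 2 * np.pi, 0.1, 1.0

def f_in(x, y):
    # smooth, doubly periodic initial condition (the one used in Section 5)
    return np.sin(x) * np.cos(2 * y) + 1.0

def f_exact(t, x, y, eps):
    # characteristic solution (3.12): f_in(x - a t, y - (b/eps) t)
    return f_in(x - a * t, y - b * t / eps)

t = np.linspace(0.0, 10.0, 5001)
for eps in (1.0, 0.1, 0.01):
    trace = f_exact(t, Lx / 2, Ly / 2, eps)
    # the oscillation frequency in t grows like 1/eps, while the time average
    # approaches the y-average of f_in (here the constant 1), illustrating
    # the weak-* convergence towards the limit solution
    print(f"eps={eps:5.2f}: mean={trace.mean():.3f}, "
          f"min={trace.min():.3f}, max={trace.max():.3f}")
```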
The term $\frac{b}{\epsilon}\,\partial_y f^\epsilon$ in (3.11) is the dominant term in the case ǫ ≪ 1, such that passing formally to the limit ǫ → 0 yields
$$(R) \quad \begin{cases} \partial_y f = 0\,, & \forall (t,x,y)\in[0,T]\times[0,L_x]\times[0,L_y]\,,\\[2pt] f(0,x,y) = f_{in}(x,y)\,. \end{cases} \qquad (3.13)$$
This system, called "reduced system", is ill-posed. Depending on the initial condition, it admits either an infinite number of solutions (if ∂_y f_in = 0) or no regular solution (if ∂_y f_in ≠ 0). From a numerical point of view, this ill-posedness in the limit is translated into the singularity of the matrix of the linear system obtained by discretization of this problem. In particular, trying to solve (3.11) in a standard manner will necessarily lead to a linear system which degenerates in the limit ǫ → 0. This shall induce severe numerical problems. More adequate schemes are hence necessary for an efficient resolution of (3.11), as for example "Asymptotic-Preserving" schemes, which are uniformly stable and accurate independently of the small parameter ǫ, and are additionally able to capture the limit model as ǫ → 0.
3.2. Limit model. For a better comprehension of our singularly-perturbed problem, as well as for the construction of efficient "Asymptotic-Preserving" schemes, we have to identify the limit problem (V)^0 of (3.11) and its solution, denoted by f^0. The information we get from the reduced model is that the limit function f^0 has to be y-independent. With this information we now introduce the average of the function f^ǫ with respect to the direction y:
$$\bar f^\epsilon(t,x) := \frac{1}{L_y}\int_0^{L_y} f^\epsilon(t,x,y)\,dy\,.$$
Integration of the equation (3.11) with respect to y yields $\partial_t \bar f^\epsilon + a\,\partial_x \bar f^\epsilon = 0$, which is an ǫ-independent problem. Passing then to the limit ǫ → 0 leads to the advection equation
$$(V)^0 \quad \begin{cases} \partial_t f^0 + a\,\partial_x f^0 = 0\,, & \forall (t,x)\in[0,T]\times[0,L_x]\,,\\[2pt] f^0(0,x) = \bar f_{in}(x)\,, & \forall x\in[0,L_x]\,, \end{cases} \qquad (3.14)$$
with solution
$$f^0(t,x) = \bar f_{in}(x - a\,t)\,, \qquad \forall (t,x)\in[0,T]\times[0,L_x]\,.$$
The system (V ) 0 is what we call "limit-system" of the anisotropic Vlasov equation (V ) ǫ , as shall be proved in the next section.
3.3. Weak convergence. So far, we proved the existence of a unique solution f ǫ for the system (V ) ǫ resp. f 0 for the limit system (V ) 0 . The next step is now to show the weak-convergence of f ǫ towards f 0 as ǫ → 0, and this in a certain sense. To define this sense, we have to introduce the right mathematical framework. In the sequel the symbol ♯ shall underline the periodicity of the considered space.
Theorem 3.1. Let the initial condition $f_{in}\in H^1_\sharp(\Omega)$. Then the unique solutions to $(V)^\epsilon$ resp. $(V)^0$ satisfy $f^\epsilon \in W^{1,\infty}(0,T;L^2_\sharp(\Omega)) \cap L^\infty(0,T;H^1_\sharp(\Omega))$ resp. $f^0 \in W^{1,\infty}(0,T;L^2_\sharp(0,L_x)) \cap L^\infty(0,T;H^1_\sharp(0,L_x))$. Moreover, we have the weak-⋆ limit
$$f^\epsilon \overset{\ast}{\rightharpoonup} f^0 \quad \text{in } L^\infty(0,T;L^2_\sharp(\Omega))\,, \qquad \epsilon\to 0\,. \qquad (3.15)$$
Proof. To prove (3.15), which signifies
$$\int_0^T\!\!\int_\Omega \big(f^\epsilon(t,x,y) - f^0(t,x)\big)\,\phi(t,x,y)\,dx\,dy\,dt \underset{\epsilon\to 0}{\longrightarrow} 0 \qquad \forall \phi\in L^1(0,T;L^2_\sharp(\Omega))\,,$$
we shall first introduce a primitive of the function $f_{in}(x,\cdot) - \bar f_{in}(x)$, i.e.
$$g(x,y) := \int_0^y \big(f_{in}(x,z) - \bar f_{in}(x)\big)\,dz\,.$$
It follows that the function g belongs to $H^1_\sharp(\Omega)$, such that $g^\epsilon(t,x,y) := g(x-at,\, y - b\,t/\epsilon)$ belongs to $W^{1,\infty}(0,T;L^2_\sharp(\Omega)) \cap L^\infty(0,T;H^1_\sharp(\Omega))$. The $L_y$-periodicity of g is seen by the simple computation
$$g(x,y+L_y) = \int_0^{y+L_y}\!\big(f_{in}(x,z)-\bar f_{in}(x)\big)\,dz = \int_{-L_y}^{y}\! f_{in}(x,z)\,dz - \bar f_{in}(x)\,(y+L_y) = \int_0^{y}\! f_{in}(x,z)\,dz - \bar f_{in}(x)\,y + \int_0^{L_y}\! f_{in}(x,z)\,dz - \bar f_{in}(x)\,L_y = g(x,y)\,.$$
Taking now an arbitrary test function $\phi\in C^1_0(0,T;L^2_\sharp(\Omega))$ and introducing, for simplicity, for each $f,g\in L^2_\sharp(\Omega)$ the bracket $\langle f,g\rangle := \int_\Omega f\,g\,dx\,dy$, we have
$$\int_0^T \Big\langle f_{in}\big(x-at,\, y-\tfrac{b}{\epsilon}t\big) - \bar f_{in}(x-at)\,,\ \phi(t)\Big\rangle\,dt = \int_0^T \Big\langle \partial_y g\big(x-at,\, y-\tfrac{b}{\epsilon}t\big)\,,\ \phi(t)\Big\rangle\,dt$$
$$= -\frac{\epsilon}{b}\int_0^T \Big\langle \partial_t g\big(x-at,\, y-\tfrac{b}{\epsilon}t\big) + a\,\partial_x g\big(x-at,\, y-\tfrac{b}{\epsilon}t\big)\,,\ \phi(t)\Big\rangle\,dt$$
$$= \frac{\epsilon}{b}\int_0^T \Big\langle g\big(x-at,\, y-\tfrac{b}{\epsilon}t\big)\,,\ \phi'(t)\Big\rangle\,dt - \frac{\epsilon a}{b}\int_0^T \Big\langle \partial_x g\big(x-at,\, y-\tfrac{b}{\epsilon}t\big)\,,\ \phi(t)\Big\rangle\,dt\,.$$
As $g^\epsilon\in W^{1,\infty}(0,T;L^2_\sharp(\Omega)) \cap L^\infty(0,T;H^1_\sharp(\Omega))$, we can estimate, for all $\phi\in C^1_0(0,T;L^2_\sharp(\Omega))$,
$$\Big|\int_0^T \Big\langle f_{in}\big(x-at,\, y-\tfrac{b}{\epsilon}t\big) - \bar f_{in}(x-at)\,,\ \phi(t)\Big\rangle\,dt\Big| \le C\,\epsilon\,,$$
where C > 0 is a constant independent of ǫ. Therefore,
$$\int_0^T \Big\langle f_{in}\big(x-at,\, y-\tfrac{b}{\epsilon}t\big) - \bar f_{in}(x-at)\,,\ \phi(t)\Big\rangle\,dt \underset{\epsilon\to 0}{\longrightarrow} 0 \qquad \forall \phi\in C^1_0(0,T;L^2_\sharp(\Omega))\,,$$
which concludes the proof, due to the dense injection $C^1_0(0,T;L^2_\sharp(\Omega)) \subset L^1(0,T;L^2_\sharp(\Omega))$.
4. Numerical schemes for the anisotropic Vlasov equation
In this section we shall now introduce several numerical schemes for the resolution of (3.11) and examine them in more detail. Firstly, different time semi-discretizations will be presented, and then some words will be said about a standard upwind space discretization. The time discretization is the most important step in the construction of AP-schemes. For this, let us first introduce the following homogeneous discretizations of our time interval [0, T] as well as of our simulation domain Ω = [0, L_x] × [0, L_y]:
$$\Delta t := T/N_t\,,\ N_t\in\mathbb{N}\,; \qquad t^n := n\,\Delta t\,,\ n = 0,\dots,N_t\,,$$
$$\Delta x := L_x/(N_x-1)\,,\ N_x\in\mathbb{N}\,; \qquad x_i := (i-1)\,\Delta x\,,\ i = 1,\dots,N_x\,,$$
$$\Delta y := L_y/(N_y-1)\,,\ N_y\in\mathbb{N}\,; \qquad y_j := (j-1)\,\Delta y\,,\ j = 1,\dots,N_y\,. \qquad (4.16)$$
We denote by $Q_h$ the index domain $Q_h := [0,N_t]\times[1,N_x]\times[1,N_y] \subset \mathbb{N}^3$.
We shall further denote by $f^{\epsilon,n}$ resp. $f^{\epsilon,n}_{ij}$ the numerical approximation of $f^\epsilon(t^n,x,y)$ resp. $f^\epsilon(t^n,x_i,y_j)$. Recall also that we consider a doubly-periodic framework, such that
$$f^{\epsilon,n}_{0,j} = f^{\epsilon,n}_{N_x-1,j}\,,\quad f^{\epsilon,n}_{1,j} = f^{\epsilon,n}_{N_x,j}\,,\quad f^{\epsilon,n}_{i,0} = f^{\epsilon,n}_{i,N_y-1}\,,\quad f^{\epsilon,n}_{i,1} = f^{\epsilon,n}_{i,N_y}\,,\qquad \forall (n,i,j)\in Q_h\,.$$
4.1. Semi-discretization in time.
4.1.1. IMEX scheme.
The first time semi-discretization we shall study is the implicit-explicit (IMEX) Euler method, where the stiff term is taken implicitly, i.e.
$$(IMEX)^\epsilon \qquad \frac{f^{\epsilon,n+1} - f^{\epsilon,n}}{\Delta t} + a\,\partial_x f^{\epsilon,n} + \frac{b}{\epsilon}\,\partial_y f^{\epsilon,n+1} = 0\,, \qquad \forall n\ge 0\,. \qquad (4.17)$$
To study the behavior of this scheme as ǫ becomes smaller, let us formally let ǫ go to zero in (4.17) and get
$$\partial_y f^{0,n+1}(x,y) = 0\,, \qquad \forall (x,y)\in\Omega\,.$$
This equation admits an infinite number of solutions, namely all periodic functions depending only on x. This formal analysis hence permits to conclude that the IMEX scheme cannot be an AP-scheme, as it does not capture correctly the asymptotic behavior of the problem, which is rather given by the limit problem (V)^0. This property shall be tested numerically in Section 5.
4.1.2. Fourier method/Micro-Macro method.
A different way to solve (3.11) is to use a partial Fourier transform in the variable y, which is possible here, as we are in a simplified periodic context with constant coefficients. Denoting indeed the Fourier coefficients by
$$\hat f^\epsilon_k(t,x) := \frac{1}{L_y}\int_0^{L_y} f^\epsilon(t,x,y)\,e^{-i\,\omega_y k\,y}\,dy\,,\qquad \forall k\in\mathbb{Z}\,,\quad \omega_y := \frac{2\pi}{L_y}\,,$$
one has
$$f^\epsilon(t,x,y) = \sum_{k=-\infty}^{\infty} \hat f^\epsilon_k(t,x)\,e^{i\,\omega_y k\,y}\,, \qquad (4.18)$$
where the Fourier coefficients are solutions of the system
$$\begin{cases} \partial_t \hat f^\epsilon_0 + a\,\partial_x \hat f^\epsilon_0 = 0\,, & \forall (t,x)\in[0,T]\times[0,L_x]\,,\\[4pt] \partial_t \hat f^\epsilon_k + a\,\partial_x \hat f^\epsilon_k + i\,\omega_y k\,\dfrac{b}{\epsilon}\,\hat f^\epsilon_k = 0\,, & \forall k\neq 0\,,\ \forall (t,x)\in[0,T]\times[0,L_x]\,. \end{cases} \qquad (4.19)$$
A simple discretization of this problem can be
$$(F)^\epsilon \quad \begin{cases} \dfrac{\hat f^{\epsilon,n+1}_0 - \hat f^{\epsilon,n}_0}{\Delta t} + a\,\partial_x \hat f^{\epsilon,n}_0 = 0\,, & \forall n\ge 0\,,\\[8pt] \dfrac{\hat f^{\epsilon,n+1}_k - \hat f^{\epsilon,n}_k}{\Delta t} + a\,\partial_x \hat f^{\epsilon,n}_k + i\,\omega_y k\,\dfrac{b}{\epsilon}\,\hat f^{\epsilon,n+1}_k = 0\,, & \forall k\neq 0\,,\ \forall n\ge 0\,. \end{cases}$$
Solving this system and using the inverse Fourier transform (4.18) permits to get the desired result, i.e. the values of the unknowns f ǫ,n ij , solution of (3.11).
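A minimal sketch of this procedure (assuming NumPy; grid sizes and parameter values are illustrative) treats the y-direction with an FFT, the x-transport with an explicit upwind step, and inverts the stiff factor mode by mode:

```python
import numpy as np

Lx = Ly = 2 * np.pi; Nx = Ny = 64; Nt = 200; T = 1.0
a, b, eps = 0.1, 1.0, 1e-6
dx, dy, dt = Lx / Nx, Ly / Ny, T / Nt
x, y = np.arange(Nx) * dx, np.arange(Ny) * dy
X, Y = np.meshgrid(x, y, indexing="ij")
f = np.sin(X) * np.cos(2 * Y) + 1.0              # f_in

k = 2 * np.pi * np.fft.fftfreq(Ny, d=dy)          # angular wavenumbers omega_y * k
fhat = np.fft.fft(f, axis=1)                      # Fourier coefficients in y
for _ in range(Nt):
    dfdx = (fhat - np.roll(fhat, 1, axis=0)) / dx     # upwind in x (a > 0)
    fhat = (fhat - dt * a * dfdx) / (1.0 + 1j * k[None, :] * b * dt / eps)
f = np.real(np.fft.ifft(fhat, axis=1))

# as eps -> 0 all modes k != 0 are damped; the spread of f in y becomes ~ 0
print(np.ptp(f, axis=1).max())
```

Note that the stiff factor is inverted exactly per mode, so the step remains stable uniformly in ǫ; only the CFL condition of the explicit x-transport remains.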
Let us now investigate the behavior of this system when ǫ → 0. Formally we get
$$(F)^0 \quad \begin{cases} \dfrac{\hat f^{\epsilon,n+1}_0 - \hat f^{\epsilon,n}_0}{\Delta t} + a\,\partial_x \hat f^{\epsilon,n}_0 = 0\,, & \forall n\ge 0\,,\\[8pt] \hat f^{\epsilon,n+1}_k = 0\,, & \forall k\neq 0\,,\ \forall n\ge 0\,. \end{cases}$$
Therefore, we find a discretized version of the Vlasov limit problem (V ) 0 , signifying that this method will be "Asymptotic-Preserving".
The Fourier method is very nice; however, it can be applied only in a simplified periodic framework with constant coefficients. As a sort of generalization one can think of the micro-macro method [3], which is based on the decomposition of each quantity into its mean part over the variable y, denoted by $H^\epsilon$ or simply $\bar f^\epsilon$, and the fluctuation part $h^\epsilon$ or simply $(f^\epsilon)'$, defined as follows:
$$H^\epsilon(t,x) := \frac{1}{L_y}\int_0^{L_y} f^\epsilon(t,x,y)\,dy\,, \qquad h^\epsilon(t,x,y) := f^\epsilon(t,x,y) - H^\epsilon(t,x)\,, \qquad \bar h^\epsilon = 0\,.$$
Taking now the average of the advection equation (3.11) over y, and subtracting the resulting equation from the initial one, yields a system to be solved for the unknowns (H^ǫ, h^ǫ), i.e.
$$(MM)^\epsilon \quad \begin{cases} \partial_t H^\epsilon + a\,\partial_x H^\epsilon = 0\,, & \forall (t,x)\in[0,T]\times[0,L_x]\,,\\[4pt] \partial_t h^\epsilon + a\,\partial_x h^\epsilon + \dfrac{b}{\epsilon}\,\partial_y h^\epsilon = 0\,, & \forall (t,x,y)\in[0,T]\times\Omega\,,\\[4pt] \bar h^\epsilon = 0\,, & \forall (t,x)\in[0,T]\times[0,L_x]\,. \end{cases} \qquad (4.20)$$
Let us study now the behavior of this system when ǫ → 0. We have formally
$$(MM)^0 \quad \begin{cases} \partial_t H^0 + a\,\partial_x H^0 = 0\,, & \forall (t,x)\in[0,T]\times[0,L_x]\,,\\[4pt] \partial_y h^0 = 0\,, & \forall (t,x,y)\in[0,T]\times\Omega\,,\\[4pt] \bar h^0 = 0\,, & \forall (t,x)\in[0,T]\times[0,L_x]\,. \end{cases} \qquad (4.21)$$
The two last equations establish that h^0 ≡ 0. Hence the system (MM)^0 is nothing else than the Vlasov limit system (V)^0. Again, we have created a scheme which is a regular perturbation of the asymptotic limit model, and which shall hence be "Asymptotic-Preserving". This method is rather similar to the Fourier method, however more general, as it can be applied in rather broad contexts. To understand this similarity, remark that H^ǫ is nothing else than the first Fourier coefficient $\hat f^\epsilon_0$, and the fluctuation h^ǫ regroups the remaining Fourier modes. However, there is still a disadvantage or difficulty, namely the implementation of the constraint $\bar h^\epsilon = 0$, which is crucial for the passage to the limit ǫ → 0. It is this constraint which permits, in the limit, to get a unique h^0 and to have thus a well-posed limit problem (MM)^0. But averaging along the anisotropy lines can be very difficult in more general contexts, for example when these lines are not aligned with the axes.
4.1.3. Lagrange-multiplier method. The Lagrange-multiplier method is based on the idea of replacing the stiff, dominant term $\frac{b}{\epsilon}\,\partial_y f$ by a smoother one, $b\,\partial_y q$, yielding the system
$$(La)^\epsilon \quad \begin{cases} \partial_t f^\epsilon + a\,\partial_x f^\epsilon + b\,\partial_y q^\epsilon = 0\,, & \forall (t,x,y)\in[0,T]\times\Omega\,,\\[4pt] \partial_y f^\epsilon = \epsilon\,\partial_y q^\epsilon\,, & \forall (t,x,y)\in[0,T]\times\Omega\,,\\[4pt] q^\epsilon|_{\Gamma_{in}} = 0\,, \end{cases} \qquad (4.22)$$
where the inflow boundary is defined as $\Gamma_{in} := \{(x,y)\in\partial\Omega \,/\, y = 0\}$. In the limit ǫ → 0 one remarks that q^ǫ is a sort of Lagrange multiplier corresponding to the constraint ∂_y f^0 = 0, whence the name of the method.
First, we will prove the equivalence between the system (La)^ǫ and the Vlasov equation (V)^ǫ, proving thus the well-posedness of the reformulation (La)^ǫ. For this, let us first consider the unique solution f^ǫ of (V)^ǫ and prove the existence of a function q^ǫ such that (f^ǫ, q^ǫ) solves (La)^ǫ. Since $f_{in}\in H^1_\sharp(\Omega)$, we have $f^\epsilon\in V := W^{1,\infty}(0,T;L^2_\sharp(\Omega)) \cap L^\infty(0,T;H^1_\sharp(\Omega))$. The kernel of the dominant operator $\frac{b}{\epsilon}\,\partial_y$, denoted by G, reads $G := \{f\in V\,,\ \partial_y f = 0\}$.
Then we shall decompose f^ǫ in the following manner, which is somehow similar to a Hilbert Ansatz:
$$f^\epsilon = p^\epsilon + \epsilon\, q^\epsilon\,, \qquad (p^\epsilon, q^\epsilon)\in G\times V\,. \qquad (4.23)$$
To have a unique decomposition, we have to single out the G-part of q^ǫ, by fixing for example q^ǫ on the inflow boundary Γ_in, choosing q^ǫ ∈ Q with
$$Q := \{q\in V\,,\ q|_{\Gamma_{in}} = 0\}\,.$$
Obviously, we have $G\cap Q = \{0_V\}$, implying the uniqueness of the decomposition (4.23).
Replacing now this decomposition in the system (V)^ǫ, we obtain directly the system (La)^ǫ, which proves the existence of a solution to (La)^ǫ. The converse is trivial, meaning that for (f^ǫ, q^ǫ) ∈ V × Q solution to (La)^ǫ, f^ǫ solves (V)^ǫ. Altogether, we have proved the equivalence between both systems. Now let us consider the limit problem of (La)^ǫ, obtained by letting formally ǫ → 0 in (4.22):
$$(La)^0 \quad \begin{cases} \partial_t f^0 + a\,\partial_x f^0 + b\,\partial_y q^0 = 0\,, & \forall (t,x,y)\in[0,T]\times\Omega\,,\\[4pt] \partial_y f^0 = 0\,, & \forall (t,x,y)\in[0,T]\times\Omega\,,\\[4pt] q^0|_{\Gamma_{in}} = 0\,. \end{cases} \qquad (4.24)$$
The second equation leads to $f^0 = \bar f^0$. Then, averaging the first equation of (4.24) in the y-variable yields
$$\partial_t \bar f^0 + a\,\partial_x \bar f^0 = 0\,, \qquad (4.25)$$
where we used that q^0 is L_y-periodic. This equation permits the determination of the limit function f^0. Furthermore, the remaining well-posed system
$$\begin{cases} b\,\partial_y q^0 = -\partial_t f^0 - a\,\partial_x f^0\,, & \forall (t,x,y)\in[0,T]\times\Omega\,,\\[4pt] q^0|_{\Gamma_{in}} = 0\,, & \forall (t,x)\in[0,T]\times[0,L_x]\,, \end{cases} \qquad (4.26)$$
can be solved to finally ensure the existence of the unique solution (f^0, q^0) of the limit problem (La)^0. The Lagrangian scheme seems to be the most "far-reaching" AP-scheme. The only disadvantage of this method is that we now have two unknowns, and hence two equations to be solved, meaning longer simulation times. However, we are no longer forced to follow the anisotropy lines, and can choose coarse Cartesian, not field-aligned grids.
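To make the structure of this coupled problem concrete, the following minimal sketch (assuming NumPy/SciPy; all grid names and parameter values are illustrative, not taken from any code of the paper) assembles one implicit time step of the Lagrange-multiplier scheme for a single x-slice, the constraint q = 0 at the first grid point singling out the G-part of q^ǫ:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

Ny = 64; Ly = 2 * np.pi; dy = Ly / Ny
a, b, eps, dt, dx = 0.1, 1.0, 1e-8, 1e-2, 1e-1

# cyclic backward-difference matrix (periodic in y): (D v)_j = (v_j - v_{j-1})/dy
D = (sp.eye(Ny) - sp.eye(Ny, k=-1)
     - sp.coo_matrix(([1.0], ([0], [Ny - 1])), shape=(Ny, Ny))) / dy
D = sp.csr_matrix(D)
I = sp.eye(Ny, format="csr")

# block system:  f^{n+1} + dt*b*D q^{n+1} = rhs ,   D f^{n+1} - eps*D q^{n+1} = 0,
# with the first constraint row replaced by q_1 = 0
A21, A22 = D.tolil(), (-eps * D).tolil()
A21[0, :] = 0.0
A22[0, :] = 0.0
A22[0, 0] = 1.0
A = sp.bmat([[I, dt * b * D], [A21.tocsr(), A22.tocsr()]], format="csc")

y = np.arange(Ny) * dy
fn = np.cos(2 * y) + 1.0            # current slice f^n_{i,.} (illustrative data)
fn_left = fn.copy()                 # neighbouring slice f^n_{i-1,.}
rhs = np.concatenate([fn - a * dt / dx * (fn - fn_left), np.zeros(Ny)])
sol = spla.spsolve(A, rhs)
f_new, q_new = sol[:Ny], sol[Ny:]
print(np.ptp(f_new))                # the y-fluctuation of f is removed as eps -> 0
```

In exact arithmetic this assembly remains solvable even for ǫ = 0, in contrast with the IMEX matrix of Section 4.2, which degenerates in that limit.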
4.2. Space discretization for the IMEX scheme. For each of the numerical schemes presented above, we decided to use the standard upwind method to discretize the transport terms in the equation (3.11). The idea behind this choice is that the space discretization is not the important step in the construction of an AP-scheme, such that we opted for a simple discretization, in order not to embroil the further numerical analysis, as well as the understanding of the main ideas of our methods. The same arguments incited us to select only first-order discretizations in time. A Runge-Kutta scheme coupled to a second-order space discretization would naturally be more accurate, but changes nothing in the essential concept of our AP-strategies. As mentioned earlier, in a forthcoming paper we shall be concerned with a realistic fusion plasma situation, such that we shall adapt the most adequate of the here presented schemes to more accurate second-order techniques, to gain in accuracy. Now, let us recall the first-order upwind formulae
$$a\,\partial_x f^{\epsilon,n}_{i,j} \approx a\,\frac{f^{\epsilon,n}_{i,j} - f^{\epsilon,n}_{i-1,j}}{\Delta x}\,,\ \text{if } a > 0\,, \qquad a\,\partial_x f^{\epsilon,n}_{i,j} \approx a\,\frac{f^{\epsilon,n}_{i+1,j} - f^{\epsilon,n}_{i,j}}{\Delta x}\,,\ \text{if } a < 0\,, \qquad \forall (n,i,j)\in Q_h\,.$$
We have analogous formulae for the partial derivative in the y-variable. Denoting now $\alpha := \frac{a\,\Delta t}{\Delta x} > 0$ and $\beta := \frac{b\,\Delta t}{\Delta y} > 0$ and using the periodicity, i.e.
$$f^{\epsilon,n}_{0,j} = f^{\epsilon,n}_{N_x-1,j}\,,\quad f^{\epsilon,n}_{1,j} = f^{\epsilon,n}_{N_x,j}\,,\quad f^{\epsilon,n}_{i,0} = f^{\epsilon,n}_{i,N_y-1}\,,\quad f^{\epsilon,n}_{i,1} = f^{\epsilon,n}_{i,N_y}\,,\qquad \forall (n,i,j)\in Q_h\,,$$
the completely discretized IMEX scheme finally writes:
$$(IMEX)^\epsilon \qquad (\epsilon+\beta)\,f^{\epsilon,n+1}_{i,j} - \beta\,f^{\epsilon,n+1}_{i,j-1} = \epsilon\,(1-\alpha)\,f^{\epsilon,n}_{i,j} + \epsilon\,\alpha\,f^{\epsilon,n}_{i-1,j}\,,$$
for all $(n,i,j)\in[0,N_t-1]\times[1,N_x-1]\times[1,N_y-1]$. We remark that we can rewrite this scheme as a system of $N_x-1$ equations:
$$A\,F^{n+1}_i = B^n_i\,, \qquad \forall n\ge 0\,,\ \forall i\in[1,N_x-1]\,, \qquad (4.27)$$
where
$$A = \begin{pmatrix} \epsilon+\beta & 0 & \cdots & 0 & -\beta\\ -\beta & \epsilon+\beta & \ddots & & 0\\ 0 & \ddots & \ddots & \ddots & \vdots\\ \vdots & & \ddots & \ddots & 0\\ 0 & \cdots & 0 & -\beta & \epsilon+\beta \end{pmatrix}, \qquad F^{n+1}_i = \begin{pmatrix} f^{\epsilon,n+1}_{i,1}\\ f^{\epsilon,n+1}_{i,2}\\ \vdots\\ f^{\epsilon,n+1}_{i,N_y-2}\\ f^{\epsilon,n+1}_{i,N_y-1} \end{pmatrix}, \qquad B^n_i = \begin{pmatrix} \epsilon(1-\alpha)\,f^{\epsilon,n}_{i,1} + \epsilon\alpha\,f^{\epsilon,n}_{i-1,1}\\ \epsilon(1-\alpha)\,f^{\epsilon,n}_{i,2} + \epsilon\alpha\,f^{\epsilon,n}_{i-1,2}\\ \vdots\\ \epsilon(1-\alpha)\,f^{\epsilon,n}_{i,N_y-2} + \epsilon\alpha\,f^{\epsilon,n}_{i-1,N_y-2}\\ \epsilon(1-\alpha)\,f^{\epsilon,n}_{i,N_y-1} + \epsilon\alpha\,f^{\epsilon,n}_{i-1,N_y-1} \end{pmatrix}.$$
At each time step, we solve this system for all i ∈ [1, N_x−1] to get the unknowns $f^{\epsilon,n+1}_{i,j}$. Remark that $A = \epsilon\,Id + C_\beta$ is a regular perturbation of a singular, cyclic matrix $C_\beta$.
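A minimal sketch (assuming NumPy/SciPy; sizes and parameter values are illustrative) of the assembly of $A = \epsilon\,Id + C_\beta$ and of one solve of (4.27):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

Ny = 200; dy = 2 * np.pi / (Ny - 1); dx = 2 * np.pi / 200
a, b, eps, dt = 0.1, 1.0, 1e-3, 1e-2
alpha, beta = a * dt / dx, b * dt / dy

# C_beta: beta on the diagonal, -beta on the subdiagonal and in the corner
C = beta * (sp.eye(Ny) - sp.eye(Ny, k=-1)
            - sp.coo_matrix(([1.0], ([0], [Ny - 1])), shape=(Ny, Ny)))
A = (eps * sp.eye(Ny) + C).tocsc()     # regular perturbation of the singular C_beta

y = np.arange(Ny) * dy
f_i = np.cos(2 * y) + 1.0              # slice f^n_{i,.} (illustrative data)
f_im1 = f_i.copy()                     # slice f^n_{i-1,.}
B = eps * ((1 - alpha) * f_i + alpha * f_im1)
f_new = spla.spsolve(A, B)             # one column update F^{n+1}_i
print(f_new.mean(), np.ptp(f_new))     # mean preserved, fluctuations damped
```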
5. Numerical simulations
In this part, we shall test numerically every scheme introduced in the previous Section for the resolution of the anisotropic Vlasov equation (3.11). The homogeneous time and phase-space discretization was previously introduced in (4.16) and we choose in the sequel the following parameters: T = 1, L x = 2π, L y = 2π, N t = 101, N x = N y = 201, a = 0.1 and b = 1. Changes in these parameters shall be explicitly mentioned. The initial condition we adopt is given by :
$$f_{in}(x,y) := \sin(x)\,\cos(2y) + 1\,, \qquad \forall (x,y)\in\Omega := [0,L_x]\times[0,L_y]\,.$$
We recall that the exact solution of (3.11) is known and reads, for each ǫ > 0:
$$f^\epsilon_{ex}(t,x,y) = \sin(x - a\,t)\,\cos\Big(2\big(y - \tfrac{b}{\epsilon}\,t\big)\Big) + 1\,, \qquad \forall (t,x,y)\in[0,T]\times\Omega\,.$$
In Figure 1 we show two plots, containing on the one hand f_in and on the other hand f^ǫ_ex at the final time T = 1. Furthermore, in order to better illustrate our problem, we plotted in Figure 2 the exact solution of the limit Vlasov system (3.14) at the final time T, i.e. $f^0_{ex}(T,x) = \bar f_{in}(x - aT)$. Remark that this solution is homogeneous in the y-variable.
Finally, we show in Figure 3 the time evolution of the exact solution f^ǫ_ex at one point only, i.e. (x_{N_x−1}, y_{N_y−1}). We distinguish easily, on the left plot (A) of Fig. 3, the two periods, one linked with the x-variable and the other one corresponding to the y-variable. This last one is ǫ-dependent, and we see that the smaller ǫ is, the higher the frequency of the time oscillations becomes. As the 2D situation is not so telling, we eliminated the x-variable from the problem and considered also a 1D problem, keeping only the term containing the parameter ǫ (i.e. a = 0). The time evolution of the exact solution at the point y_{N_y−1} is now plotted in Fig. 3 (B). One observes here more easily that, as ǫ becomes smaller, the frequency of the time oscillations increases. In the limit ǫ → 0, f^ǫ(t, y_{N_y−1}) converges weakly towards the average, which is here the constant 1.
5.1. Some results obtained with our schemes. Now we examine how the different numerical schemes introduced above cope with such an asymptotic behavior.
5.1.1. IMEX scheme. We start by showing in Fig. 4, as well as in the left plot of Fig. 5, the numerical solution f^ǫ obtained via the IMEX scheme, for three different values of ǫ, namely ǫ = 1, ǫ = 0.1 and ǫ = 10^{−10}, all of them at the final time T = 1. For ǫ = 1, we recognize an approximation of the exact solution (see Figure 1), and for ǫ = 10^{−10} the limit solution is clearly obtained (see Figure 2). Briefly, one can say that the numerical solution follows the weak-⋆ convergence $f^\epsilon \overset{\ast}{\rightharpoonup} f^0$ as ǫ becomes smaller and smaller. But one can remark a numerical diffusion which leads to a loss of amplitude, especially visible in the non-limit cases ǫ = 1 or ǫ = 10^{−1}. To observe this numerical diffusion better, we show in the right plot of Fig. 5 the time evolution of just one point of the numerical solution, corresponding again to a 1D situation like the one plotted on the right of Fig. 3, and this for several values of ǫ. As one can observe, the damping is more and more pronounced as ǫ → 0. For small ǫ-values the numerical solution recovers almost immediately the weak limit solution, here the constant 1. This damping phenomenon will be understood from the numerical analysis we shall carry out in Section 6.
5.1.2. Fourier, Micro-Macro and Lagrange-multiplier schemes. Let us now present analogous results for the remaining schemes, namely the Fourier, Micro-Macro and Lagrange-multiplier schemes. The 2D plots are rather similar to the ones presented for the IMEX scheme (see Fig. 4-5). To examine the difference between these methods, we preferred to plot in Fig. 6 only the time evolution of the numerical solution, in the 1D context again. We remark that the damping of the Fourier method is slower than that of the IMEX scheme, as well as of the Micro-Macro and Lagrange-multiplier schemes (which are completely overlapping). But once again we observe that in the limit ǫ/t → 0 the fluctuations are completely damped out, and we recover the weak limit solution.
5.2. Convergence of the schemes for fixed ǫ > 0. Let us now study the convergence of the here presented schemes with respect to time and space, for fixed ǫ > 0, permitting to show their validity in the large-ǫ regime. For this, fix ǫ > 0 and consider the error between the exact and the numerical solution, as a function of the mesh size, at the final time T. Firstly, concerning the convergence with respect to ∆t, we choose small space steps (N_x = N_y = 501), such that the space errors are much smaller than the time error, and then vary the time step. We apply the same strategy for the convergence with respect to ∆x and ∆y, by fixing a time step corresponding to N_t = 501. In all cases, the parameter ǫ is fixed to 1. In Figure 7 we have plotted curves in log-log scale, showing the evolution of the errors as functions of ∆x, ∆y and ∆t, respectively.
As expected, we observe that all schemes are first order in time and space. Some comments are however necessary to understand Figure 7. First, the slope of the curves gets smaller than 1 in the small-grid ranges. This is due to the fact that the error to be investigated (for example in ∆t) becomes as small as the fixed error term (in ∆x, ∆y) and saturates. Secondly, the slope of the curves becomes also smaller in the large-grid ranges. This is usual, as for large discretization steps the remainder terms in the Taylor series of the error analysis can no longer be neglected. Finally, we would like to draw the attention of the reader to the Fourier error curve, which has a constant slope in (B). This is completely natural, as the Fourier method has spectral accuracy.
5.3. Asymptotic behavior as ǫ → 0. To begin the study of the asymptotic behavior, we define the following two errors:
$$\eta^\epsilon(t) = \max_{i,j}\,\big|f^\epsilon_{ex,i,j} - f^\epsilon_{num,i,j}\big|(t)\,, \qquad \gamma^\epsilon(t) = \max_{i,j}\,\big|f^\epsilon_{num,i,j} - f^0_{ex,i,j}\big|(t)\,,$$
where η ǫ (t) represents the L ∞ -error between the exact and the numerical solution at instant t, for fixed ǫ > 0, whereas γ ǫ (t) denotes the L ∞ -error at instant t between the numerical solution f ǫ num and the exact limit solution f 0 ex .
We are interested in the evolution of these two errors at the final time T as functions of ǫ. The curves corresponding to the different schemes are plotted in Figure 8. As expected, we observe a decrease of η ǫ (T ) and an increase of γ ǫ (T ) when ǫ → 1. For ǫ → 0 the converse behavior is observed. This plot shows that each scheme approximates well either the exact solution f ǫ ex for large ǫ, or the exact limit solution f 0 ex for small ǫ.
What can be said as a conclusion is that all schemes seem to have the right asymptotic behavior in this simple test case. Indeed, for fixed ǫ > 0, each numerical solution f^ǫ_num converges to the expected solution f^ǫ_ex as the grid is refined (Fig. 7). For fixed discretization steps, all numerical solutions f^ǫ_num converge towards the limit solution f^0 as ǫ becomes smaller and smaller, underlining the AP property of our methods. It is worth mentioning, however, that the IMEX scheme no longer works for ǫ smaller than 10^{−14}: the matrix A of the IMEX linear system (4.27), namely
$$A\,F^{n+1}_i = B^n_i\,, \qquad A = \epsilon\,Id + C_\beta\,, \qquad \det C_\beta = 0\,,$$
becomes numerically singular in the limit ǫ → 0. This is not the case for the Micro-Macro as well as Lagrange-multiplier schemes, which give accurate results even for ǫ = 0. This difference in behavior can also be observed from the study of the condition number of the discretization matrices, paying particular attention to the ǫ-dependence. Remark here that an "Asymptotic-Preserving scheme" must have an ǫ-independent condition number, depending merely on the discretization parameters ∆x, ∆y.
In Fig. 9 we plotted thus the matrix condition number cond(A) := ||A^{−1}||_2 ||A||_2 corresponding to the three schemes (IMEX, Micro-Macro and Lagrange-multiplier) as a function of ǫ. What can be observed is that for the Micro-Macro and Lagrange-multiplier schemes the condition number is ǫ-independent (for ǫ ≤ 10^{−2}), which is a hint of the well-posedness of these two problems in the limit ǫ → 0, namely of (MM)^0 resp. (La)^0. On the other hand, for the IMEX scheme cond(A) is proportional to 1/ǫ (the slope of the curve is approximately −1). This circumstance is the translation on the discrete level of the fact that the reduced model (3.13), obtained on the continuous level by letting formally ǫ → 0 in the IMEX time discretization, is ill-posed, admitting an infinite number of solutions.
However, even if these arguments clearly show that the IMEX method should behave badly for very small ǫ-values, this is not the case in our simplified toy model; in particular, it does not seem to be affected by the bad condition number. This will no longer be the case in our second toy model. To understand in detail what happens, a more refined error study is profitable and shall be done in the next section. The final interpretation is postponed to Section 6.3, after having estimated the truncation error. One can only say here that the good functioning of the IMEX scheme is due to the fact that the investigated problem is very simple, and specifically that the anisotropy is aligned with the Cartesian mesh.
6. Numerical analysis
Let us now perform a numerical analysis of the schemes introduced for the resolution of (3.11), permitting to understand in detail the behavior observed in the last section. In particular, we shall detail only the error analysis of the standard IMEX scheme and of the Asymptotic-Preserving Lagrange-multiplier scheme. The error study of the other schemes is very similar. See [14,17] for more details on this analysis part.
6.1. IMEX scheme. We begin by recalling the fully discretized form of the IMEX scheme:
$$(IMEX)^\epsilon \qquad \frac{f^{\epsilon,n+1}_{i,j} - f^{\epsilon,n}_{i,j}}{\Delta t} + a\,\frac{f^{\epsilon,n}_{i,j} - f^{\epsilon,n}_{i-1,j}}{\Delta x} + \frac{b}{\epsilon}\,\frac{f^{\epsilon,n+1}_{i,j} - f^{\epsilon,n+1}_{i,j-1}}{\Delta y} = 0\,, \qquad \forall (n,i,j)\in Q_h\,. \qquad (6.28)$$
Theorem 6.1. The IMEX scheme (6.28) is consistent with the Vlasov equation (3.11), and first order accurate in space and time. Furthermore, the local truncation error writes
$$T_I(t^n,x_i,y_j,\Delta t,\Delta x,\Delta y) = -\nabla\cdot(D_I\,\nabla f^\epsilon) + O(\Delta t^2) + O(\Delta x^2) + O(\Delta y^2)\,.$$
In particular, the IMEX scheme (6.28) is a second-order scheme for the modified Vlasov equation
$$\partial_t g^\epsilon + a\,\partial_x g^\epsilon + \frac{b}{\epsilon}\,\partial_y g^\epsilon - \frac{a\Delta x}{2}\,(1-\alpha)\,\partial_{xx} g^\epsilon - \frac{b\Delta y}{2\epsilon}\Big(1 + \frac{\beta}{\epsilon}\Big)\,\partial_{yy} g^\epsilon = 0\,. \qquad (6.29)$$
Proof: The local truncation error of the method (6.28) is defined by
$$T_I(t,x,y,\Delta t,\Delta x,\Delta y) = \frac{f^\epsilon(t+\Delta t,x,y) - f^\epsilon(t,x,y)}{\Delta t} + a\,\frac{f^\epsilon(t,x,y) - f^\epsilon(t,x-\Delta x,y)}{\Delta x} + \frac{b}{\epsilon}\,\frac{f^\epsilon(t+\Delta t,x,y) - f^\epsilon(t+\Delta t,x,y-\Delta y)}{\Delta y}\,.$$
Supposing that f^ǫ is sufficiently smooth in order to apply a Taylor expansion, we find
$$T_I(t^n,x_i,y_j,\Delta t,\Delta x,\Delta y) = \partial_t f^\epsilon + \frac{\Delta t}{2}\,\partial_{tt} f^\epsilon + \frac{b\Delta t}{\epsilon}\,\partial_{yt} f^\epsilon + a\,\partial_x f^\epsilon - \frac{a\Delta x}{2}\,\partial_{xx} f^\epsilon + \frac{b}{\epsilon}\,\partial_y f^\epsilon - \frac{b\Delta y}{2\epsilon}\,\partial_{yy} f^\epsilon + O(\Delta t^2) + O(\Delta x^2) + O(\Delta y^2)\,,$$
where f ǫ is taken in (t n , x i , y j ). Since f ǫ satisfies the Vlasov equation (3.11), the O(1) terms drop out. Moreover, by differentiating the Vlasov equation along t, y and x, we express the partial derivatives ∂ tt f and ∂ ty f as functions of ∂ xx f and ∂ yy f . We find thus
$$\partial_{tt} f^\epsilon = a^2\,\partial_{xx} f^\epsilon + \frac{2ab}{\epsilon}\,\partial_{xy} f^\epsilon + \frac{b^2}{\epsilon^2}\,\partial_{yy} f^\epsilon\,, \qquad \partial_{yt} f^\epsilon = -a\,\partial_{xy} f^\epsilon - \frac{b}{\epsilon}\,\partial_{yy} f^\epsilon\,.$$
The local truncation error finally writes
$$T_I(t^n,x_i,y_j,\Delta t,\Delta x,\Delta y) = -\frac{a\Delta x}{2}\,(1-\alpha)\,\partial_{xx} f^\epsilon - \frac{b\Delta y}{2\epsilon}\Big(1+\frac{\beta}{\epsilon}\Big)\,\partial_{yy} f^\epsilon + O(\Delta t^2,\Delta x^2,\Delta y^2)\,.$$
Remark 6.2. The modified equation (6.29) is an advection/diffusion equation. Note that the diffusion is stronger in the y-direction, due to the factor 1/ǫ. These diffusion terms are responsible for the damping that we observed in the numerical simulations (see Fig. 5 (B)), damping which tends towards infinity in the y-direction as ǫ → 0. Note also that the diffusion coefficient is positive if α ≤ 1. This is precisely the stability condition of the upwind scheme, as we will see afterwards. If this condition is not respected, the diffusion becomes negative, leading to an ill-posed problem with exponentially growing solutions.
Theorem 6.3. The IMEX scheme is stable in the Von Neumann sense if and only if the CFL condition $\frac{a\,\Delta t}{\Delta x} \le 1$ is satisfied.
Proof: To study the stability of our scheme, let us inject in (6.28), for fixed n ∈ ℕ, a plane wave of the form
$$f^{\epsilon,n}_{i,j} = e^{ikx_i}\,e^{ily_j}\,, \qquad \forall (i,j)\,,$$
with k, l ∈ ℤ two arbitrary modes, and look at how it evolves from one time step to the next. Let us denote by ξ_I the amplification factor for the passage t^n → t^{n+1}, meaning
$$f^{\epsilon,n+1}_{i,j} = \xi_I\,f^{\epsilon,n}_{i,j} = \xi_I\,e^{ikx_i}\,e^{ily_j}\,, \qquad \forall (i,j)\,.$$
Inserting now these terms in the discretized equation (6.28) yields, after simplification,
$$\xi_I\,\Big[1 + \frac{b\Delta t}{\epsilon\Delta y}\,\big(1 - e^{-il\Delta y}\big)\Big] = 1 - \frac{a\Delta t}{\Delta x}\,\big(1 - e^{-ik\Delta x}\big)\,.$$
A scheme is said to be stable in the Von Neumann sense if the amplification factor satisfies |ξ_I| ≤ 1, such that the modes are not amplified from one time step to the next. Straightforward computations yield
$$|\xi_I| = \frac{\epsilon\,\sqrt{1 - 4\alpha(1-\alpha)\,\sin^2\big(\tfrac{k\Delta x}{2}\big)}}{\sqrt{\epsilon^2 + 4\beta(\epsilon+\beta)\,\sin^2\big(\tfrac{l\Delta y}{2}\big)}}\,, \qquad \forall k,l\in\mathbb{Z}\,.$$
Then, a necessary and sufficient condition for Von Neumann stability is $\frac{a\,\Delta t}{\Delta x} \le 1$.
Remark 6.4. Note that in the case l ≠ 0, when ǫ tends towards 0, the amplification factor converges towards 0. This means that for injected waves with mode l ≠ 0 the scheme becomes more and more diffusive and attenuates the oscillations completely.
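A minimal sketch (assuming NumPy) checking Theorem 6.3 and Remark 6.4 numerically, by sampling the amplification factor over the modes:

```python
import numpy as np

def xi_imex(k_dx, l_dy, alpha, beta, eps):
    # amplification factor of the IMEX scheme (see the proof of Theorem 6.3)
    num = 1.0 - alpha * (1.0 - np.exp(-1j * k_dx))
    den = 1.0 + (beta / eps) * (1.0 - np.exp(-1j * l_dy))
    return num / den

phases = np.linspace(0.0, 2 * np.pi, 721)
K, L = np.meshgrid(phases, phases, indexing="ij")
for alpha in (0.5, 1.0, 1.2):
    g = np.abs(xi_imex(K, L, alpha, beta=1.0, eps=1e-2)).max()
    print(f"alpha={alpha}: max |xi_I| = {g:.4f}")   # exceeds 1 exactly when alpha > 1
```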
6.2. Lagrange-multiplier scheme. We now do the same work for the Lagrange-multiplier scheme, i.e.
$$(La)^\epsilon \quad \begin{cases} \dfrac{f^{\epsilon,n+1}_{i,j} - f^{\epsilon,n}_{i,j}}{\Delta t} + a\,\dfrac{f^{\epsilon,n}_{i,j} - f^{\epsilon,n}_{i-1,j}}{\Delta x} + b\,\dfrac{q^{\epsilon,n+1}_{i,j} - q^{\epsilon,n+1}_{i,j-1}}{\Delta y} = 0\,, & \forall (n,i,j)\in Q_h\,,\\[10pt] \dfrac{f^{\epsilon,n+1}_{i,j} - f^{\epsilon,n+1}_{i,j-1}}{\Delta y} = \epsilon\,\dfrac{q^{\epsilon,n+1}_{i,j} - q^{\epsilon,n+1}_{i,j-1}}{\Delta y}\,, & \forall (n,i,j)\in Q_h\,,\\[10pt] q^{\epsilon,n}_{i,1} = 0\,, & \forall (n,i)\in[0,N_t]\times[0,N_x]\,. \end{cases} \qquad (6.30)$$
Theorem 6.5. The Lagrange-multiplier scheme (6.30) is consistent with the Vlasov equation (3.11), and first order accurate in space and time. Furthermore, the local truncation error writes
$$T_L(t^n,x_i,y_j,\Delta t,\Delta x,\Delta y) = -\nabla\cdot(D_L\,\nabla f^\epsilon) + O(\Delta t^2) + O(\Delta x^2) + O(\Delta y^2)\,.$$
Proof: In order to prove the result, we write the local truncation error of the first equation. We find that
$$T_L(t^n,x_i,y_j,\Delta t,\Delta x,\Delta y) = \frac{\Delta t}{2}\,\partial_{tt} f^\epsilon - \frac{a\Delta x}{2}\,\partial_{xx} f^\epsilon - \frac{b\Delta y}{2}\,\partial_{yy} q^\epsilon + b\,\Delta t\,\partial_{yt} q^\epsilon + O(\Delta t^2) + O(\Delta x^2) + O(\Delta y^2)\,.$$
Since the first equation of (6.30) is verified by (f^ǫ, q^ǫ), we have
$$\partial_{tt} f^\epsilon = -a\,\partial_{xt} f^\epsilon - b\,\partial_{yt} q^\epsilon\,, \qquad \partial_{xt} f^\epsilon = -a\,\partial_{xx} f^\epsilon - b\,\partial_{xy} q^\epsilon\,, \qquad \partial_{ty} f^\epsilon = -a\,\partial_{xy} f^\epsilon - b\,\partial_{yy} q^\epsilon\,.$$
Then,
$$T_L(t^n,x_i,y_j,\Delta t,\Delta x,\Delta y) = \frac{\Delta t}{2}\,\big(a^2\,\partial_{xx} f^\epsilon + ab\,\partial_{xy} q^\epsilon\big) - \frac{a\Delta x}{2}\,\partial_{xx} f^\epsilon - \frac{b\Delta y}{2}\,\partial_{yy} q^\epsilon + \frac{b\Delta t}{2}\,\partial_{ty} q^\epsilon + O(\Delta t^2) + O(\Delta x^2) + O(\Delta y^2)\,.$$
Since the second equation of (6.30) is verified, we have
$$\partial_{ty} q^\epsilon = \frac{1}{\epsilon}\,\partial_{ty} f^\epsilon\,, \qquad \partial_{yy} q^\epsilon = \frac{1}{\epsilon}\,\partial_{yy} f^\epsilon\,, \qquad \partial_{xy} q^\epsilon = \frac{1}{\epsilon}\,\partial_{xy} f^\epsilon\,,$$
such that we find the same expression as for the IMEX scheme, i.e.
$$T_L(t^n,x_i,y_j,\Delta t,\Delta x,\Delta y) = -\nabla\cdot(D_L\,\nabla f^\epsilon) + O(\Delta t^2) + O(\Delta x^2) + O(\Delta y^2)\,.$$
The result just proved confirms what we have observed in the numerical plots. Indeed, the IMEX and Lagrange-multiplier schemes have the same behavior as regards convergence and asymptotic behavior.
Theorem 6.6. The Lagrange-multiplier scheme is stable in the Von Neumann sense if and only if the CFL condition $\frac{a\,\Delta t}{\Delta x} \le 1$ is satisfied.
Proof: Here we have two unknown functions, f^ǫ and q^ǫ. To study the Von Neumann stability, we write
$$q^{\epsilon,n+1}_{i,j} = \xi_q\,q^{\epsilon,n}_{i,j}\,, \qquad f^{\epsilon,n+1}_{i,j} = \xi_f\,f^{\epsilon,n}_{i,j}\,,$$
with the two amplification factors ξ_q and ξ_f. As usual, we insert these expressions in the discretized Lagrange-multiplier equations. We obtain a linear system where the unknowns are ξ_q and ξ_f. This system writes
$$\begin{pmatrix} 1 & \beta\,\big(1 - e^{-il\Delta y}\big)\\ 1 & -\epsilon \end{pmatrix} \begin{pmatrix} \xi_f\\ \xi_q \end{pmatrix} = \begin{pmatrix} 1 - \alpha\,\big(1 - e^{-ik\Delta x}\big)\\ 0 \end{pmatrix}\,,$$
and is easy to invert. Computing the amplification factor ξ f , we remark that it is identical to the one calculated for the IMEX scheme.
6.3. AP-properties. We are now able to explain the numerical results obtained in Section 5, and in particular why the IMEX scheme, even though it is not an AP-scheme, gives good results in this simple field-aligned test case down to a value of ǫ = 10^{−14}.
For this, let us recall that two types of errors arise during a numerical resolution of the Vlasov equation (3.11). First of all, we have the truncation errors, estimated in the last subsections; secondly, one has also the round-off errors, arising at each elementary computation. To be more precise, one has to consider the three linear systems corresponding to (4.27):
$$A\,F_{ex} = B + \epsilon\,T\,, \qquad A\,F = B\,, \qquad (A + \delta A)\,F_{num} = B + \delta B\,,$$
where, to simplify notation, we omitted all the time and space indices. Here we denoted by F_ex the exact solution of the Vlasov equation (3.11), satisfying the linear system (4.27) up to a truncation error T; F is the exact solution of the linear system (4.27), supposing exact arithmetic; and finally F_num is the solution of the linear system (4.27) obtained via a computer, hence contaminated with round-off errors. The error we are interested in can be estimated as follows:
$$\|F_{ex} - F_{num}\| \le \|F_{ex} - F\| + \|F - F_{num}\|\,.$$
Stability and consistency permit to show that the first error term is of the order of the truncation error. For the estimate of the second error term, we have to take into account the condition number of the matrix; in particular, one has the estimate [23]
$$\frac{\|F - F_{num}\|}{\|F\|} \le \frac{\text{cond}(A)}{1 - \|A^{-1}\|\,\|\delta A\|}\,\Big(\frac{\|\delta A\|}{\|A\|} + \frac{\|\delta B\|}{\|B\|}\Big)\,.$$
Performing our computations in double precision (machine accuracy of 10^{−16}), and as long as the condition number does not exceed a value of 10^{12} (see Fig. 9), the second error term is not so dangerous. For larger condition numbers, this term can give rise to erroneous results. In our test case it is, however, rather the first error term which leads to trouble, as the truncation error is 1/ǫ-dependent. In the first toy model (3.11), the large truncation error impacts only the y-direction, leading to a large diffusion along the axis-aligned anisotropy, and hence to the limit model. We shall see a drastic difference in the second, not field-aligned toy model.
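The 1/ǫ-growth of the IMEX condition number is easy to reproduce; a minimal sketch (assuming NumPy/SciPy, with an illustrative matrix size):

```python
import numpy as np
import scipy.sparse as sp

Ny, beta = 100, 1.0
C = beta * (sp.eye(Ny) - sp.eye(Ny, k=-1)
            - sp.coo_matrix(([1.0], ([0], [Ny - 1])), shape=(Ny, Ny)))
for eps in (1e-2, 1e-4, 1e-6, 1e-8):
    A = (eps * sp.eye(Ny) + C).toarray()    # A = eps*Id + C_beta, det(C_beta) = 0
    print(f"eps={eps:.0e}: cond_2(A) = {np.linalg.cond(A):.3e}")   # grows like 1/eps
```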
7. Second Vlasov toy model with variable coefficients
Finally, let us come now in this section to the second Vlasov toy model, given by
$$\partial_t f^\epsilon + \frac{1}{\epsilon}\,(v\times B)\cdot\nabla_v f^\epsilon = 0\,, \qquad (7.31)$$
with ǫ ≪ 1 and the magnetic field B = e_z. This model is a simplified version of the anisotropic Vlasov equation (2.4) in not field-aligned Cartesian coordinates. Denoting, for notational simplicity, the velocity variable as v = (x, y, z), we have v × B = (y, −x, 0)^t, such that the previous equation writes:
$$(G)^\epsilon \quad \begin{cases} \partial_t f^\epsilon + \dfrac{y}{\epsilon}\,\partial_x f^\epsilon - \dfrac{x}{\epsilon}\,\partial_y f^\epsilon = 0\,, & \forall (t,x,y)\in[0,T]\times\Omega\,,\\[6pt] f^\epsilon(0,x,y) = f_{in}(x,y)\,, & \forall (x,y)\in\Omega\,, \end{cases} \qquad (7.32)$$
where this time our velocity domain is given by Ω := [−L_x, L_x] × [−L_y, L_y]. Again we will consider a doubly periodic framework.
7.1. Exact solution by the characteristic method. The exact solution of the equation (7.32) is simply determined via the characteristic method. The characteristic curve $C^{x,y}_\epsilon(s) := (X(s), Y(s))$ passing at instant t through (x, y) solves the ODE
$$\dot X = \frac{Y}{\epsilon}\,, \qquad \dot Y = -\frac{X}{\epsilon}\,, \qquad (X(t), Y(t)) = (x, y)\,.$$
We can write this system in matrix form:
$$\begin{pmatrix} \dot X\\ \dot Y \end{pmatrix} = \frac{1}{\epsilon}\,A \begin{pmatrix} X\\ Y \end{pmatrix}\,, \qquad A := \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix}\,, \qquad \text{leading to} \quad C^{x,y}_\epsilon(s) := \begin{pmatrix} X\\ Y \end{pmatrix}(s) = e^{A\,\frac{s-t}{\epsilon}} \begin{pmatrix} x\\ y \end{pmatrix}\,.$$
Denoting the rotation matrix by $R_\epsilon(y) := e^{A\,\frac{y}{\epsilon}}$, one has
$$R_\epsilon(s-t) = e^{A\,\frac{s-t}{\epsilon}} = \begin{pmatrix} \cos\big(\tfrac{s-t}{\epsilon}\big) & \sin\big(\tfrac{s-t}{\epsilon}\big)\\[4pt] -\sin\big(\tfrac{s-t}{\epsilon}\big) & \cos\big(\tfrac{s-t}{\epsilon}\big) \end{pmatrix}\,.$$
We can easily verify that the characteristic curve passing through the point (x, y) is a spiral, whose projection on the (x, y)-plane is the circle of radius $R := \sqrt{x^2+y^2}$ and center (0, 0). All characteristics are 2πǫ-periodic (in t).
The exact solution f^ǫ of (7.32) is now simply the advection of the initial condition along these characteristic curves, such that
$$f^\epsilon(t,x,y) = f_{in}\big(X(0;t,x,y),\, Y(0;t,x,y)\big) = f_{in}\Big(\cos\big(\tfrac{t}{\epsilon}\big)\,x - \sin\big(\tfrac{t}{\epsilon}\big)\,y\,,\ \sin\big(\tfrac{t}{\epsilon}\big)\,x + \cos\big(\tfrac{t}{\epsilon}\big)\,y\Big)\,.$$
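A minimal sketch (assuming NumPy/SciPy) of this characteristic representation, checking that the radius, and hence any radial quantity, is conserved along the rotating characteristics:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, 0.0]])     # generator of the rotation, as in (7.32)
eps, t = 0.25, 0.0
p0 = np.array([1.0, 0.5])                   # starting point (x, y)

def f_in(p):
    # radial Gaussian (the test case of Section 7.4), constant along circles
    return np.exp(-(p[0] ** 2 + p[1] ** 2) / (2 * 0.5 ** 2))

for s in np.linspace(0.0, 2 * np.pi * eps, 5):   # one full period 2*pi*eps
    ps = expm(A * (s - t) / eps) @ p0            # C^{x,y}_eps(s)
    print(f"s={s:6.3f}: radius={np.hypot(*ps):.6f}, f_in={f_in(ps):.6f}")
```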
7.2. Limit solution of the problem. The next step is to obtain the limit solution of the problem (7.32), as ǫ → 0. Keeping in mind that f^ǫ is constant along the characteristic curves, we integrate (7.32) along $C^{x,y}_\epsilon$, to get
$$\partial_t \int_{C^{x,y}_\epsilon} f^\epsilon\,d\sigma + \frac{1}{\epsilon}\int_{C^{x,y}_\epsilon} (y,-x)^t\cdot\nabla f^\epsilon(t,x,y)\,d\sigma = 0\,,$$
leading to
$$\partial_t \int_{C^{x,y}_\epsilon} f^\epsilon\,d\sigma + \frac{1}{\epsilon}\int_t^{t+2\pi\epsilon} (Y(s), -X(s))^t\cdot\nabla f^\epsilon(t, X(s), Y(s))\,\frac{\sqrt{x^2+y^2}}{\epsilon}\,ds = 0\,.$$
Furthermore,
$$\int_t^{t+2\pi\epsilon} (Y(s), -X(s))^t\cdot\nabla f^\epsilon(t, X(s), Y(s))\,ds = \epsilon\int_t^{t+2\pi\epsilon} \frac{d}{ds}\,f^\epsilon(t, X(s), Y(s))\,ds = 0\,,$$
which comes from the periodicity of the characteristics. Denoting the average along a curve by $\overline{f^\epsilon} := \frac{1}{|C^{x,y}_\epsilon|}\int_{C^{x,y}_\epsilon} f^\epsilon\,d\sigma$, with $|C^{x,y}_\epsilon| = 2\pi\epsilon$, we thus have
$$\partial_t \overline{f^\epsilon} = 0\,.$$
Letting now formally ǫ → 0, we obtain the following limit problem associated with (7.32):
$$(G)^0 \qquad f^0 = \overline{f_{in}}\,.$$
7.3. Numerical schemes for the second Vlasov toy model. Let us now discretize the second Vlasov toy model (7.32) via the IMP (fully implicit scheme this time) and Lagrange-multiplier schemes. The time semi-discretizations read
$$(IMP)^\epsilon \qquad \frac{f^{\epsilon,n+1} - f^{\epsilon,n}}{\Delta t} + \frac{y}{\epsilon}\,\partial_x f^{\epsilon,n+1} - \frac{x}{\epsilon}\,\partial_y f^{\epsilon,n+1} = 0\,, \qquad \forall n\ge 0\,,\ \forall (x,y)\in\Omega\,, \qquad (7.34)$$
as well as
$$(La)^\epsilon \quad \begin{cases} \dfrac{f^{\epsilon,n+1} - f^{\epsilon,n}}{\Delta t} + y\,\partial_x q^{\epsilon,n+1} - x\,\partial_y q^{\epsilon,n+1} = 0\,,\\[8pt] y\,\partial_x f^{\epsilon,n+1} - x\,\partial_y f^{\epsilon,n+1} = \epsilon\,\big(y\,\partial_x q^{\epsilon,n+1} - x\,\partial_y q^{\epsilon,n+1}\big) - (\Delta x\Delta y)^\gamma\, q^{\epsilon,n+1}\,, \end{cases} \qquad \forall n\ge 0\,. \qquad (7.35)$$
The term $(\Delta x\Delta y)^\gamma\, q^{\epsilon,n+1}$ in (7.35) is a stabilization term permitting to have the uniqueness of the solution (f^ǫ, q^ǫ). In the former "field-aligned" example we fixed q^ǫ on the anisotropy lines by setting $q^\epsilon|_{\Gamma_{in}} = 0$, but here this is more arduous from a numerical point of view. The stabilization aims equally to fix q^ǫ, however in a different manner. It is very delicate to choose the magnitude of this term in order not to destroy the problem; in particular, we took here γ = 0.91. First, it is a small perturbation of the equation, of the order of the truncation error. Secondly, averaging the second equation of the Lagrange-multiplier scheme along the anisotropy lines permits to obtain $(\Delta x\Delta y)^\gamma\,\overline{q^{\epsilon,n+1}} = 0$, which means that q^ǫ is unique, having zero average along the field lines. A more detailed study of this stabilization technique was performed in [18] for the elliptic framework. For the spatial discretization we use again an upwind scheme, observing that this time the equation no longer has constant coefficients. Thus, we define:
$$x_i^+ := \max(x_i, 0)\,,\quad x_i^- := \min(x_i, 0)\,,\quad y_j^+ := \max(y_j, 0)\,,\quad y_j^- := \min(y_j, 0)\,,\qquad \forall (i,j)\in\mathbb{N}^2\,.$$
The full discretization of the IMP scheme now writes
$$(IMP)^\epsilon \qquad f^{\epsilon,n+1}_{i,j} + \frac{1}{\epsilon}\Big[\big(r_x\,(y_j^+ - y_j^-) + r_y\,(x_i^+ - x_i^-)\big)\,f^{\epsilon,n+1}_{i,j} - r_x\,\big(y_j^+\,f^{\epsilon,n+1}_{i-1,j} - y_j^-\,f^{\epsilon,n+1}_{i+1,j}\big) - r_y\,\big(x_i^+\,f^{\epsilon,n+1}_{i,j+1} - x_i^-\,f^{\epsilon,n+1}_{i,j-1}\big)\Big] = f^{\epsilon,n}_{i,j}\,, \qquad \forall (n,i,j)\in Q_h\,,$$
with $r_x = \frac{\Delta t}{\Delta x}$ and $r_y = \frac{\Delta t}{\Delta y}$. And for the Lagrange-multiplier scheme, we have:
$$(La)^\epsilon \quad \begin{cases} f^{\epsilon,n+1}_{i,j} + \big(r_x\,(y_j^+ - y_j^-) + r_y\,(x_i^+ - x_i^-)\big)\,q^{\epsilon,n+1}_{i,j} - r_x\,\big(y_j^+\,q^{\epsilon,n+1}_{i-1,j} - y_j^-\,q^{\epsilon,n+1}_{i+1,j}\big) - r_y\,\big(x_i^+\,q^{\epsilon,n+1}_{i,j+1} - x_i^-\,q^{\epsilon,n+1}_{i,j-1}\big) = f^{\epsilon,n}_{i,j}\,,\\[10pt] \dfrac{1}{\Delta t}\Big[\big(r_x\,(y_j^+ - y_j^-) + r_y\,(x_i^+ - x_i^-)\big)\,f^{\epsilon,n+1}_{i,j} - r_x\,\big(y_j^+\,f^{\epsilon,n+1}_{i-1,j} - y_j^-\,f^{\epsilon,n+1}_{i+1,j}\big) - r_y\,\big(x_i^+\,f^{\epsilon,n+1}_{i,j+1} - x_i^-\,f^{\epsilon,n+1}_{i,j-1}\big)\Big]\\[8pt] \qquad = \dfrac{\epsilon}{\Delta t}\Big[\big(r_x\,(y_j^+ - y_j^-) + r_y\,(x_i^+ - x_i^-)\big)\,q^{\epsilon,n+1}_{i,j} - r_x\,\big(y_j^+\,q^{\epsilon,n+1}_{i-1,j} - y_j^-\,q^{\epsilon,n+1}_{i+1,j}\big) - r_y\,\big(x_i^+\,q^{\epsilon,n+1}_{i,j+1} - x_i^-\,q^{\epsilon,n+1}_{i,j-1}\big)\Big] - (\Delta x\Delta y)^\gamma\,q^{\epsilon,n+1}_{i,j}\,, \end{cases}$$
for all $(n,i,j)\in Q_h$.
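To illustrate the structure of these linear systems, the following minimal sketch (assuming NumPy/SciPy; grid parameters and names are illustrative) assembles the IMP matrix with sparse periodic shift operators and performs a few implicit steps; it reproduces the strong ǫ-dependent damping discussed below:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

N, L = 80, 3.0
h = 2 * L / N
x = -L + np.arange(N) * h
X, Y = np.meshgrid(x, x, indexing="ij")
dt, eps = 1.0 / 64, 1e-6
rx = ry = dt / h

def shift(di, dj):
    # periodic shift on the flattened (i, j) grid: picks f_{i+di, j+dj}
    Sx = sp.csr_matrix(np.roll(np.eye(N), di, axis=1))
    Sy = sp.csr_matrix(np.roll(np.eye(N), dj, axis=1))
    return sp.kron(Sx, Sy, format="csr")

yp, ym = np.maximum(Y, 0).ravel(), np.minimum(Y, 0).ravel()
xp, xm = np.maximum(X, 0).ravel(), np.minimum(X, 0).ravel()
A = (sp.diags(1.0 + (rx * (yp - ym) + ry * (xp - xm)) / eps)
     - sp.diags(rx * yp / eps) @ shift(-1, 0)    # f_{i-1,j}
     + sp.diags(rx * ym / eps) @ shift(+1, 0)    # f_{i+1,j}
     - sp.diags(ry * xp / eps) @ shift(0, +1)    # f_{i,j+1}
     + sp.diags(ry * xm / eps) @ shift(0, -1)).tocsc()

f = np.exp(-(X ** 2 + Y ** 2) / (2 * 0.5 ** 2)).ravel()   # Gaussian f_in
for _ in range(10):
    f = spla.spsolve(A, f)
print(f.max())    # the 1/eps numerical diffusion damps the profile as eps -> 0
```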
7.4. Numerical simulations. Here we present our simulations corresponding to both numerical schemes. We consider Ω = [−3, 3]², T = 1 and the discretization parameters N_t = 64 and N_x = N_y = 160. The initial datum is a Gaussian function:
$$f_{in}(x,y) = \exp\Big(-\frac{x^2+y^2}{2\sigma^2}\Big)\,, \qquad \sigma = 0.5\,, \qquad \forall (x,y)\in\Omega\,.$$
As we showed before, the exact solution is known thanks to the characteristic method.
In the present simple test case, one can easily prove that
$$f^\epsilon_{ex}(t,x,y) = f_{in}(x,y) = \exp\Big(-\frac{x^2+y^2}{2\sigma^2}\Big)\,, \qquad (7.36)$$
in other words, the exact solution is a stationary solution, independent of ǫ, the initial condition being constant along the anisotropy field. This simple test case permits, in a very simple way, to compare both methods with respect to the ǫ-dependence of the results, and in particular to show that the IMP scheme is not an Asymptotic-Preserving scheme. We shall investigate in a future paper a more involved, physical test case, where we shall adapt the here introduced Lagrange-multiplier method (which seems to be the most appropriate method for our singularly-perturbed Vlasov problem (2.4)) to second-order schemes and test its AP-properties more thoroughly.
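A minimal sketch (assuming NumPy) verifying (7.36) numerically: rotating the arguments back along the characteristics leaves the radial Gaussian unchanged, up to round-off:

```python
import numpy as np

sigma, eps = 0.5, 1e-3
f_in = lambda x, y: np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))

def f_exact(t, x, y):
    # exact solution of (7.32): advect f_in backwards along the rotation
    c, s = np.cos(t / eps), np.sin(t / eps)
    return f_in(c * x - s * y, s * x + c * y)

xs = np.linspace(-3, 3, 101)
Xg, Yg = np.meshgrid(xs, xs)
print(np.abs(f_exact(1.0, Xg, Yg) - f_in(Xg, Yg)).max())   # ~ machine precision
```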
In Figure 10 we first plot the condition number cond(A) := ||A^{−1}||_2 ||A||_2 associated with the two schemes. As for the first toy model, one remarks the ǫ-independent condition number of the Lagrange-multiplier scheme, whereas, as expected, the IMP scheme has a 1/ǫ-dependent condition number.
Then, in Figure 12, we show the numerical solution f^ǫ at the final time T, computed for several values of ǫ with both the IMP and Lagrange-multiplier schemes. For ǫ = 1 and ǫ = 0.1 we do not distinguish any difference. However, for smaller ǫ-values, the solution obtained with the Lagrange-multiplier scheme seems to be ǫ-independent, contrary to the IMP scheme, which diffuses more and more as ǫ → 0. Indeed, the IMP solution is completely damped as ǫ → 0 and tends towards the zero solution, whereas the Lagrange-multiplier scheme keeps the form of the Gaussian, with the usual ǫ-independent (∆x, ∆y)-diffusion. This permits to conclude that the Lagrange-multiplier scheme is an AP-scheme, contrary to the IMP scheme.
In order to distinguish this AP-property much better, we plot in Figure 11 a cut of the previous curves at the point x = 0. We clearly observe the ǫ-dependent diffusion of the IMP scheme, contrary to the Lagrange-multiplier scheme.
7.5. Numerical analysis. The aim of this section is to explain the plots presented before. In particular, we will investigate why the IMP scheme does not work for small ǫ-values, whereas the Lagrange-multiplier scheme preserves the asymptotics. First of all, we compute the local truncation error of both schemes. We shall consider only the case x ≥ 0 and y ≥ 0, the remaining cases changing nothing in the following reasoning.
7.5.1. IMP scheme. We begin by recalling the fully discretized form of this scheme:
$$(IMP)^\epsilon \qquad \frac{f^{\epsilon,n+1}_{i,j} - f^{\epsilon,n}_{i,j}}{\Delta t} + \frac{y_j}{\epsilon}\,\frac{f^{\epsilon,n+1}_{i,j} - f^{\epsilon,n+1}_{i-1,j}}{\Delta x} - \frac{x_i}{\epsilon}\,\frac{f^{\epsilon,n+1}_{i,j+1} - f^{\epsilon,n+1}_{i,j}}{\Delta y} = 0\,, \qquad \forall (n,i,j)\in Q_h\,. \qquad (7.37)$$
Theorem 7.1. The IMP scheme (7.37) is consistent with the second Vlasov problem (7.31), first order accurate in time and in space. Moreover, the local truncation error writes
$$T_I(t^n,x_i,y_j,\Delta t,\Delta x,\Delta y) = -\nabla\cdot\big(D_I\,\nabla f^\epsilon\big) + O(\Delta t^2) + O(\Delta x^2) + O(\Delta y^2)\,,$$
where
$$D_I := \frac{1}{\epsilon}\begin{pmatrix} \dfrac{y_j\Delta x}{2}\Big(1 + \dfrac{\alpha_j}{\epsilon}\Big) & -\dfrac{x_i y_j\Delta t}{2\epsilon}\\[10pt] -\dfrac{x_i y_j\Delta t}{2\epsilon} & \dfrac{x_i\Delta y}{2}\Big(1 + \dfrac{\beta_i}{\epsilon}\Big) \end{pmatrix}, \qquad \alpha_j := \frac{y_j\Delta t}{\Delta x}\,,\quad \beta_i := \frac{x_i\Delta t}{\Delta y}\,.$$
Proof: This proof is very similar to the proof of Theorem 6.1.
Remark 7.2. Contrary to the first toy model, where the diffusion was 1/ǫ-dependent only in the anisotropy direction, which was aligned with the coordinate system, in the present case the diffusion matrix is scaled by a 1/ǫ factor, meaning that this time we have a very strong 1/ǫ-dependent diffusion in all directions. This large diffusion rapidly damps the solution towards zero as ǫ becomes smaller, and leads thus to completely erroneous results.
7.5.2. Lagrange-multiplier scheme. We use the same reasoning for the Lagrange-multiplier scheme
$$(La)^\epsilon \quad \begin{cases} \partial_t f^\epsilon + y\,\partial_x q^\epsilon - x\,\partial_y q^\epsilon = 0\,,\\[4pt] y\,\partial_x f^\epsilon - x\,\partial_y f^\epsilon = \epsilon\,\big(y\,\partial_x q^\epsilon - x\,\partial_y q^\epsilon\big) - (\Delta x\Delta y)^\gamma\, q^\epsilon\,. \end{cases} \qquad (7.38)$$
Supposing y ≥ 0 and x ≥ 0, we have the full discretization of (La)^ǫ:
$$(La)^\epsilon \quad \begin{cases} \dfrac{f^{\epsilon,n+1}_{i,j} - f^{\epsilon,n}_{i,j}}{\Delta t} + y_j\,\dfrac{q^{\epsilon,n+1}_{i,j} - q^{\epsilon,n+1}_{i-1,j}}{\Delta x} - x_i\,\dfrac{q^{\epsilon,n+1}_{i,j+1} - q^{\epsilon,n+1}_{i,j}}{\Delta y} = 0\,,\\[10pt] y_j\,\dfrac{f^{\epsilon,n+1}_{i,j} - f^{\epsilon,n+1}_{i-1,j}}{\Delta x} - x_i\,\dfrac{f^{\epsilon,n+1}_{i,j+1} - f^{\epsilon,n+1}_{i,j}}{\Delta y} = \epsilon\,\Big(y_j\,\dfrac{q^{\epsilon,n+1}_{i,j} - q^{\epsilon,n+1}_{i-1,j}}{\Delta x} - x_i\,\dfrac{q^{\epsilon,n+1}_{i,j+1} - q^{\epsilon,n+1}_{i,j}}{\Delta y}\Big) - (\Delta x\Delta y)^\gamma\, q^{\epsilon,n+1}_{i,j}\,. \end{cases} \qquad (7.39)$$
Theorem 7.3. The Lagrange-multiplier scheme (7.39) is consistent with the second Vlasov model (7.31) and first order accurate in time and in space. Furthermore, the local truncation error writes
$$\begin{pmatrix} T_{L_1}\\ T_{L_2} \end{pmatrix} = \begin{pmatrix} \nabla\cdot & 0\\ 0 & \nabla\cdot \end{pmatrix} \begin{pmatrix} 0 & D_{L_1}\\ D_{L_2} & -\epsilon\,D_{L_2} \end{pmatrix} \begin{pmatrix} \nabla f^\epsilon\\ \nabla q^\epsilon \end{pmatrix} + O(\Delta t^2,\Delta x^2,\Delta y^2) = \begin{pmatrix} \nabla\cdot\big(D_{L_1}\nabla q^\epsilon\big)\\ \nabla\cdot\big(D_{L_2}\nabla f^\epsilon\big) - \epsilon\,\nabla\cdot\big(D_{L_2}\nabla q^\epsilon\big) \end{pmatrix} + O(\Delta t^2,\Delta x^2,\Delta y^2)\,,$$
where
$$D_{L_1} := \begin{pmatrix} \dfrac{y_j\Delta x}{2}\Big(1 + \dfrac{\alpha_j}{\epsilon}\Big) & -\dfrac{x_i y_j\Delta t}{2\epsilon}\\[10pt] -\dfrac{x_i y_j\Delta t}{2\epsilon} & \dfrac{x_i\Delta y}{2}\Big(1 + \dfrac{\beta_i}{\epsilon}\Big) \end{pmatrix}, \qquad D_{L_2} := \begin{pmatrix} \dfrac{y_j\Delta x}{2} & 0\\[6pt] 0 & \dfrac{x_i\Delta y}{2} \end{pmatrix}\,.$$
Proof: We begin with the computation of the term $T_{L_1}$. Supposing sufficient regularity for the functions f^ǫ and q^ǫ, we use Taylor expansions to get
$$T_{L_1}(t^n,x_i,y_j,\Delta t,\Delta x,\Delta y) = \frac{\Delta t}{2}\,\partial_{tt} f^\epsilon + y_j\,\Delta t\,\partial_{xt} q^\epsilon - \frac{y_j\Delta x}{2}\,\partial_{xx} q^\epsilon - \frac{x_i\Delta y}{2}\,\partial_{yy} q^\epsilon - x_i\,\Delta t\,\partial_{ty} q^\epsilon + O(\Delta t^2) + O(\Delta x^2) + O(\Delta y^2)\,.$$
Since the first equation of (7.38) is verified, we can write $\partial_{tt} f^\epsilon = -y_j\,\partial_{xt} q^\epsilon + x_i\,\partial_{yt} q^\epsilon$, and we differentiate the second equation of (7.38) in time to obtain
$$T_{L_1}(t^n,x_i,y_j,\Delta t,\Delta x,\Delta y) = \frac{\Delta t}{2\epsilon}\,\partial_t\big(y_j\,\partial_x f^\epsilon - x_i\,\partial_y f^\epsilon\big) - \frac{y_j\Delta x}{2}\,\partial_{xx} q^\epsilon - \frac{x_i\Delta y}{2}\,\partial_{yy} q^\epsilon + O(\Delta t^2) + O(\Delta x^2) + O(\Delta y^2)\,.$$
We have the following relations:
$$\partial_{tx} f^\epsilon = x\,\partial_{yx} q^\epsilon - y\,\partial_{xx} q^\epsilon + \partial_y q^\epsilon\,, \qquad \partial_{ty} f^\epsilon = x\,\partial_{yy} q^\epsilon - \partial_x q^\epsilon - y\,\partial_{xy} q^\epsilon\,.$$
The local truncation error finally writes
$$T_{L_1}(t^n,x_i,y_j,\Delta t,\Delta x,\Delta y) = \frac{\Delta t}{2\epsilon}\,\big(y_j\,\partial_y q^\epsilon + x_i\,\partial_x q^\epsilon + 2\,x_i y_j\,\partial_{xy} q^\epsilon\big) - \frac{y_j\Delta x}{2}\Big(1 + \frac{\alpha_j}{\epsilon}\Big)\,\partial_{xx} q^\epsilon - \frac{x_i\Delta y}{2}\Big(1 + \frac{\beta_i}{\epsilon}\Big)\,\partial_{yy} q^\epsilon + O(\Delta t^2) + O(\Delta x^2) + O(\Delta y^2)\,.$$
With an analogous reasoning, we compute the truncation error of the second equation:
$$T_{L_2}(t^n,x_i,y_j,\Delta t,\Delta x,\Delta y) = -\frac{y_j\Delta x}{2}\,\partial_{xx} f^\epsilon - \frac{x_i\Delta y}{2}\,\partial_{yy} f^\epsilon - \epsilon\,\Big(\frac{y_j\Delta x}{2}\,\partial_{xx} q^\epsilon + \frac{x_i\Delta y}{2}\,\partial_{yy} q^\epsilon\Big) + O(\Delta t^2) + O(\Delta x^2) + O(\Delta y^2)\,.$$
Remark 7.4. In contrast to the first Vlasov toy model (3.11), the IMP and Lagrange-multiplier schemes do not have the same behavior with respect to the local truncation error. More particularly, the dependence on ǫ is very different. The IMP scheme is diffusive in all directions, with a diffusion proportional to 1/ǫ. The only 1/ǫ-dependent diffusion in the Lagrange-multiplier scheme arises in relation with the auxiliary unknown q^ǫ, i.e. in the term $\nabla\cdot(D_{L_1}\nabla q^\epsilon)$. And one can immediately verify that the 1/ǫ-dependence arises only along the anisotropy field lines, and not perpendicular to them. Indeed, one gets immediately for the diffusion along resp. perpendicular to the field lines:
$$(y,-x)\,D_{L_1}\,(y,-x)^T = \frac{y^3\Delta x}{2} + \frac{x^3\Delta y}{2} + \frac{\Delta t}{2\epsilon}\,(x^2+y^2)^2\,, \qquad (x,y)\,D_{L_1}\,(x,y)^T = \frac{x\,y}{2}\,\big[x\,\Delta x + y\,\Delta y\big]\,.$$
8. Concluding remarks
To conclude, let us summarize here the knowledge we acquired about the resolution of anisotropic Vlasov equations of the type (2.4), arising in fusion plasma modelling. Two types of techniques can be adopted from the beginning. One can decide to pass directly to polar coordinates in velocity, and hence get a field-aligned formulation such as (2.7). In this case, a simple IMEX scheme is the most appropriate scheme to be used, being simple enough and giving accurate results down to a sufficiently small ǫ-value. However, the disadvantage is that one has to change the coordinate system, which can be rather cumbersome if the magnetic field is variable in time and space. The second technique is rather simple, as it avoids passing to field-aligned coordinates and remains in a convenient Cartesian framework. The drawback is that in this case it is no longer sufficient to take the stiff term implicitly and the other terms explicitly. Indeed, for small ǫ-values (already ǫ = 10^{−4}), meaning strong magnetic fields as in tokamak plasmas, an IMEX scheme would lead to erroneous results. An Asymptotic-Preserving reformulation like our "Lagrange-multiplier method" is more adequate, leading in the limit ǫ → 0 towards the right limit problem. This Lagrange-multiplier method is indeed usable for all ǫ ≥ 0, and gives accurate and stable results independently of ǫ. There is however a disadvantage, namely the fact that it is more time-consuming, as it involves an additional unknown q^ǫ. Solving an anisotropic Vlasov equation of the type (2.4) hence requires an a priori decision as to which of these two techniques to follow. The first technique is at the moment the basis of several codes. The second technique has not been tested up to now, and its rigorous validation and comparison with the first one will be the aim of a forthcoming paper, in a more physical context.

E-mail address: [email protected], [email protected]
Figure 1. Representation of the initial condition f_in (A) and the exact solution f^ǫ_ex at the final time T = 1 (B). Here ǫ = 1.
Figure 2. Representation of the exact limit solution f^0_ex(t, x) at the final time T.
Figure 3. Time evolution of the exact solution at the point (x_{N_x−1}, y_{N_y−1}) in the two-dimensional case (A), with T = 12 and N_t = 501; resp. at the point y_{N_y−1} in the one-dimensional case, with T = 10, a = 0 and N_t = 501 (B).
Figure 4 .
4Representation of the numerical solution f ǫ for two values of ǫ, and at the final time T , corresponding to the IMEX scheme.
(b) f ǫ (t, y Ny− 1 )
1
Figure 5 .
5Left (A): Plot of the num. sol. f ǫ for ǫ = 10 −10 , at the final time T . Right (B): Time-evolution of the IMEX scheme sol. at point y Ny−1 in the 1D case for T = 10 and several ǫ. We have added the exact solution for ǫ = 1.
Figure 6 .
6Time-evolution of the solution via Fourier (A) and IMEX, MMresp. Lagrange-multiplier schemes (B), at y Ny−1 in 1D with T = 10, a = 0, N t = 501. We have added in both cases the exact solution for ǫ = 1.
Figure 7 .
7Evolution of the L ∞ -error between f ǫ ex (t, ·) and f ǫ (t, ·) at final time T = 1 and for ǫ = 1, as a function of ∆x (with N y = 15001, N t = 15001), ∆y (with N x = 15001, N t = 15001) and ∆t (with N x = N y = 1001).
Figure 8 .
8Evolution of η ǫ (T ) and γ ǫ (T ) as a function of ǫ for each scheme.
Figure 9. Condition number cond(A) as a function of ǫ in log-log scale. The three curves correspond to the IMEX, Micro-Macro and Lagrange-multiplier schemes.
Theorem 6.1. The IMEX scheme (6.28) is consistent with the Vlasov equation (3.11), and first-order accurate in space and time. Furthermore, the local truncation error reads
$$ T_I(t^n, x_i, y_j, \Delta t, \Delta x, \Delta y) = -\nabla \cdot \left(D_I \nabla f^\epsilon\right) + \mathcal{O}(\Delta t^2) + \mathcal{O}(\Delta x^2) + \mathcal{O}(\Delta y^2), $$
Theorem 6.3. The IMEX scheme is stable in the von Neumann sense if and only if the CFL condition $a \Delta t / \Delta x \leq 1$ is satisfied.
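To make the stability claim concrete, here is a sketch of the von Neumann argument under our reading of the scheme (explicit upwind in $x$, implicit upwind in $y$; the actual scheme (6.28) may differ in details). Inserting the plane-wave ansatz $f^n_{jk} = g^n e^{i(j\theta_x + k\theta_y)}$ yields the amplification factor
$$ g = \frac{1-\lambda\,(1-e^{-i\theta_x})}{1+\mu\,(1-e^{-i\theta_y})}, \qquad \lambda := \frac{a\,\Delta t}{\Delta x}, \quad \mu := \frac{\Delta t}{\epsilon\,\Delta y}. $$
The denominator satisfies $|1+\mu(1-e^{-i\theta_y})| \geq 1$ for all $\theta_y$, while $|1-\lambda(1-e^{-i\theta_x})|^2 = 1 - 2\lambda(1-\lambda)(1-\cos\theta_x)$ stays below 1 for all $\theta_x$ exactly when $\lambda \leq 1$ (take $\theta_x = \pi$, $\theta_y = 0$ to see the failure for $\lambda > 1$). This recovers the CFL condition of the theorem, uniformly in $\epsilon$.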
Theorem 6.5. The Lagrange-multiplier scheme (6.30) is consistent with the Vlasov equation (3.11), and first-order accurate in space and time. Furthermore, the local truncation error reads
$$ T_L(t^n, x_i, y_j, \Delta t, \Delta x, \Delta y) = -\nabla \cdot \left(D_L \nabla f^\epsilon\right) + \mathcal{O}(\Delta t^2) + \mathcal{O}(\Delta x^2) + \mathcal{O}(\Delta y^2), $$
(7.31) with ǫ ≪ 1 and the magnetic field B = e z . This model is a simplified version of the anisotropic Vlasov equation (2.4) in non-field-aligned Cartesian coordinates. Denoting, for notational simplicity, the velocity variable as v = (x, y, z), we have v × B = (y, −x, 0)^t, such that the previous equation reads:
(7.32) where this time our velocity domain is given by Ω := [−L x , L x ] × [−L y , L y ]. Again we will consider a doubly-periodic framework.

7.1. Exact solution by the characteristic method. The exact solution of equation (7.32) is simply determined via the characteristic method. The characteristic curve C x,y ǫ (s) := (X(s), Y(s)) passing at instant t through (x, y) solves the ODE
$$ X'(s) = \frac{1}{\epsilon}\,Y(s), \qquad Y'(s) = -\frac{1}{\epsilon}\,X(s), \qquad (X(t), Y(t)) = (x, y). $$
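Assuming the ODE reconstruction above (which follows from the advection field v × B = (y, −x, 0)^t), the characteristics are uniform rotations in the velocity plane, so transporting the initial datum $f_{in}$ backwards along them gives the exact solution explicitly:
$$ f^\epsilon(t, x, y) = f_{in}\big(x\cos(t/\epsilon) - y\sin(t/\epsilon),\; x\sin(t/\epsilon) + y\cos(t/\epsilon)\big). $$
For $\epsilon \ll 1$ the solution thus rotates rapidly in velocity space, which is exactly what makes a naive explicit discretization stiff.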
7.2. Numerical schemes for the second Vlasov toy model. Let us now discretize the second Vlasov toy model (7.32) via the IMP (fully implicit scheme this time) and Lagrange-multiplier schemes. The time semi-discretizations read
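The semi-discrete formulas themselves did not survive extraction. As an indication of their shape only (our sketch, not necessarily the paper's exact formulas), the fully implicit IMP step for (7.32) would read
$$ \frac{f^{n+1} - f^{n}}{\Delta t} + \frac{1}{\epsilon}\left( y\,\partial_x f^{n+1} - x\,\partial_y f^{n+1} \right) = 0, $$
while the Lagrange-multiplier variant additionally carries the auxiliary unknown $q^\epsilon$ mentioned in the conclusion.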
Figure 10. Condition number cond(A) as a function of ǫ in log-log scale.
Figure 11. Representation of a cut at x = 0 of f ǫ num at the final time T for the IMP and Lagrange-multiplier schemes, and several values of ǫ.
Figure 12. Representation of the function f ǫ at the final time T for the IMP and Lagrange-multiplier schemes, with several values of ǫ.
Acknowledgments. The authors would like to acknowledge support from the ANR PEPPSI (Plasma Edge Physics and Plasma-Surface Interactions, 2013-2017). Furthermore, this work has been carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training program 2014-2018 under grant agreement No 633053. The views and opinions expressed herein do not necessarily reflect those of the European Commission.
|
[
"Josephson spin-valve realization in the magnetic nodal-line topological semimetal Fe 3 GeTe 2",
"Josephson spin-valve realization in the magnetic nodal-line topological semimetal Fe 3 GeTe 2"
]
| [
"O O Shvetsov \nInstitute of Solid State Physics\nRussian Academy of Sciences\nChernogolovka, Moscow District\n",
"Yu S Barash \nInstitute of Solid State Physics\nRussian Academy of Sciences\nChernogolovka, Moscow District\n",
"A V Timonina \nInstitute of Solid State Physics\nRussian Academy of Sciences\nChernogolovka, Moscow District\n",
"N N Kolesnikov \nInstitute of Solid State Physics\nRussian Academy of Sciences\nChernogolovka, Moscow District\n",
"E V Deviatov \nInstitute of Solid State Physics\nRussian Academy of Sciences\nChernogolovka, Moscow District\n",
"\nAcademician Ossipyan str\n142432Russia\n"
]
| [
"Institute of Solid State Physics\nRussian Academy of Sciences\nChernogolovka, Moscow District",
"Institute of Solid State Physics\nRussian Academy of Sciences\nChernogolovka, Moscow District",
"Institute of Solid State Physics\nRussian Academy of Sciences\nChernogolovka, Moscow District",
"Institute of Solid State Physics\nRussian Academy of Sciences\nChernogolovka, Moscow District",
"Institute of Solid State Physics\nRussian Academy of Sciences\nChernogolovka, Moscow District",
"Academician Ossipyan str\n142432Russia"
]
| []
| Three-dimensional van der Waals ferromagnet Fe3GeTe2 (FGT) is regarded as a candidate for the magnetic topological nodal line semimetal. We investigate lateral electron transport between two 3 µm spaced superconducting In leads beneath a thick three-dimensional FGT exfoliated flake. At low 30 mK temperature, we observe Josephson supercurrent that exhibits unusual critical current Ic suppression by the magnetic field B. The overall Ic(B) pattern is asymmetric with respect to the B sign. We demonstrate that the asymmetry is defined by the magnetic field sweep direction, so the Ic(B) pattern is strictly reversed (as B to −B inversion) for the opposite sweeps. We also observe an interplay between maximum and minimum in Ic(B) in normal magnetic fields, while there are fast aperiodic Ic(B) fluctuations for the in-plane ones. These effects cannot be expected for homogeneous superconductor-ferromagnet-superconductor junctions, while they are known for Josephson spin valves. The most plausible scenario for the Josephson spin valve realization in FGT is a misalignment of the spin polarizations of the Fermi arc surface states and the ferromagnetic FGT bulk, but we also discuss the possible influence of spin-dependent transport between magnetic domains. | 10.1134/s0021364022100101 | [
"https://export.arxiv.org/pdf/2108.13761v1.pdf"
]
| 237,363,393 | 2108.13761 | b563a1536c5818643a8d787ac4fe66ad9ae408a6 |
Josephson spin-valve realization in the magnetic nodal-line topological semimetal Fe 3 GeTe 2
Aug 2021
O O Shvetsov
Institute of Solid State Physics
Russian Academy of Sciences
Chernogolovka, Moscow District
Yu S Barash
Institute of Solid State Physics
Russian Academy of Sciences
Chernogolovka, Moscow District
A V Timonina
Institute of Solid State Physics
Russian Academy of Sciences
Chernogolovka, Moscow District
N N Kolesnikov
Institute of Solid State Physics
Russian Academy of Sciences
Chernogolovka, Moscow District
E V Deviatov
Institute of Solid State Physics
Russian Academy of Sciences
Chernogolovka, Moscow District
Academician Ossipyan str
142432, Russia
Josephson spin-valve realization in the magnetic nodal-line topological semimetal Fe 3 GeTe 2
Aug 2021 (Dated: January 6, 2022). PACS numbers: 73.40.Qv, 71.30.+h
Three-dimensional van der Waals ferromagnet Fe3GeTe2 (FGT) is regarded as a candidate for the magnetic topological nodal-line semimetal. We investigate lateral electron transport between two 3 µm spaced superconducting In leads beneath a thick three-dimensional FGT exfoliated flake. At low 30 mK temperature, we observe a Josephson supercurrent that exhibits an unusual critical current Ic suppression by the magnetic field B. The overall Ic(B) pattern is asymmetric with respect to the B sign. We demonstrate that the asymmetry is defined by the magnetic field sweep direction, so the Ic(B) pattern is strictly reversed (as B to −B inversion) for the opposite sweeps. We also observe an interplay between maximum and minimum in Ic(B) in normal magnetic fields, while there are fast aperiodic Ic(B) fluctuations for the in-plane ones. These effects cannot be expected for homogeneous superconductor-ferromagnet-superconductor junctions, while they are known for Josephson spin valves. The most plausible scenario for the Josephson spin valve realization in FGT is a misalignment of the spin polarizations of the Fermi arc surface states and the ferromagnetic FGT bulk, but we also discuss the possible influence of spin-dependent transport between magnetic domains.
I. INTRODUCTION
Recently, Fe 3 GeTe 2 (FGT) has attracted significant attention as a promising platform for novel physical phenomena connected with non-trivial magnetic and electronic topology. FGT is an itinerant van der Waals ferromagnet characterized by an out-of-plane magnetocrystalline anisotropy both for three-dimensional single crystals and down to the two-dimensional limit, which has been confirmed by theoretical and experimental investigations 1-6. Experimentally, FGT shows large anomalous Hall 7,8 and Nernst 9 effects, a topological Hall effect 10, and Kondo lattice physics 11. From the point of view of the electronic band structure, three-dimensional FGT is a unique candidate for a ferromagnetic nodal-line semimetal 8, hosting spin-polarized Fermi arc surface states 12.
Different realizations of spin valves are known for magnetic materials. Usually, spin valves are realized as ferromagnetic multilayers 13,14 with layers of different thickness. The multilayer resistance depends on the mutual orientation of the layers' magnetizations due to spin-dependent scattering, so the resistance can be affected by an external magnetic field or a high current density. Due to the different spin polarization of the Fermi arc surface states and the ferromagnetic bulk, magnetic topological materials should also demonstrate spin-valve transport properties 15,16, i.e. they can be regarded as a natural realization of spin valves. In this case, the spin-polarized Fermi arcs and the ferromagnetic bulk represent the thin (free) and thick (reference) layers, respectively 17-20.
In proximity with a superconductor, topological surface (or edge) states are able to carry supercurrents over extremely large distances 21-25. For magnetic topological materials this naturally implies spin-triplet superconductivity, which is the mutual effect of superconductivity, exchange interaction, and spin-orbit coupling 26-31. A triplet supercurrent can be expected, e.g., for a Josephson spin valve 32-36 (JSV), where a ferromagnetic multilayer is sandwiched between two superconducting electrodes. In the majority of devices the Josephson current is directed perpendicular to the layers, but the spin-valve effects and, in particular, the generation of the triplet supercurrent can also occur in systems where the supercurrent flows along the planes 37.
In JSVs, the supercurrent is defined mainly by the relative orientation of the layers' magnetizations, while in conventional Josephson junctions it is modulated by the magnetic flux. The strength of the singlet-triplet conversion substantially depends on the particular configuration of the magnetization misalignment in the Josephson spin valve. For a supercurrent flowing perpendicular to the layers, such a dependence on the relative orientations of the layers' magnetizations was studied in detail and has recently been used for the experimental identification of the relative weights of the singlet and triplet amplitudes constituting the net supercurrent 32. Due to this natural spin-valve realization, magnetic topological semimetals like FGT may be regarded as a platform for planar JSV investigations.
Symmetry analysis and first-principles calculations have shown that inversion symmetry breaking can occur at the FGT interface 38. Noncentrosymmetric interfacial effects are known to substantially influence the charge transport in magnetic systems, in particular via the spin-orbit torque, and to result in unidirectional transport properties 16,39-42. In proximity with superconductivity, broken inversion and time-reversal symmetry can generally lead to asymmetries of the Josephson current with respect to magnetic field reversal, e.g., due to chiral properties of the topologically protected states 43,44. In superconducting heterostructures with noncoplanar magnetization textures, breaking the magnetization reversal symmetry can result in a direct coupling between the magnetic moment and the supercurrent, and in the anomalous Josephson effect 45-49.
Here, we investigate lateral electron transport between two 3 µm spaced superconducting In leads beneath a thick three-dimensional FGT exfoliated flake. At the low temperature of 30 mK, we observe a Josephson supercurrent that exhibits an unusual critical current I c suppression by the magnetic field B. The overall I c (B) pattern is asymmetric with respect to the B sign. We demonstrate that the asymmetry is defined by the magnetic field sweep direction, so the I c (B) pattern is strictly reversed (as B to −B inversion) for the opposite sweeps. We also observe an interplay between a maximum and a minimum in I c (B) in normal magnetic fields, while there are fast aperiodic I c (B) fluctuations for the in-plane ones. These effects cannot be expected for homogeneous superconductor-ferromagnet-superconductor junctions, while they are known for Josephson spin valves. The most plausible scenario for the Josephson spin valve realization in FGT is a misalignment of the spin polarizations of the Fermi arc surface states and the ferromagnetic FGT bulk, but we also discuss the possible influence of spin-dependent transport between magnetic domains.
II. SAMPLES AND TECHNIQUE
Fe 3 GeTe 2 was synthesized from the elements in an evacuated silica ampule in a two-step process. In the first step, the load was heated up to 470 °C at a 10 deg/h rate and the ampule was held at this temperature for 50 h. In the second step, the temperature was increased up to 970 °C at the same rate. After a 140 h exposure, the ampule was cooled down to room temperature at a 5 deg/h rate. X-ray diffraction data indicate that the iron tellurides FeTe and FeTe2 were also present in the obtained material, in addition to the expected Fe 3 GeTe 2 compound.
To obtain Fe 3 GeTe 2 single crystals, the synthesized mixture was sealed in an evacuated silica ampule with some admixture of iodine. The transport reaction was carried out for 240 h with temperatures of 530 °C and 410 °C in the hot and cold zones, respectively. Afterward, the ampule was quenched in liquid nitrogen. Water-soluble iron and tellurium iodides were removed from the obtained Fe 3 GeTe 2 single crystals in hot distilled water, and X-ray diffraction analysis confirmed the strict Fe 3 GeTe 2 composition.
Non-trivial surface properties are known for three-dimensional topological semimetal single crystals 50. Thus, we use thick (1 µm) FGT flakes, which are obtained by mechanical cleavage from the initial single crystal. Fig. 1(a) shows a top-view image of a FGT flake with underlying indium leads. The leads pattern is formed by a lift-off technique after thermal evaporation of 100 nm In on the insulating SiO 2 substrate. The 10 µm wide leads are separated by 3 µm intervals. One FGT flake is transferred onto the substrate with the predefined In leads pattern and pressed slightly onto the leads. No stress is needed for the flake to stay on the In leads afterward. This procedure allows creating transparent FGT-In interfaces 51-53 without mechanical polishing or chemical treatment, and protects the relevant (bottom) FGT surface from any oxidation or contamination.
To confirm the FGT quality, magnetoresistance measurements were also performed in the standard Hall bar geometry for reference samples with normal (Au) leads. In Fig. 1(b), the longitudinal magnetoresistance R xx is monotonic and negative in normal magnetic fields (red curve, right axis), while it shows a kink at 3.5 T for the in-plane configuration (blue curve, left axis). This behavior coincides well with previously reported results 10. Moreover, a large anomalous Hall effect is shown in Fig. 1(c) for the normal field orientation, while hysteresis in R xy is also known for the in-plane field as a novel planar Hall effect, see Fig. 1(d). The latter has also been recognized as a topological Hall effect related to the complicated spin structures in FGT 10.

We study electron transport between two neighboring In leads in a standard four-point technique, see Fig. 1(a). All the wire resistances are excluded, which is necessary for low-impedance samples. To obtain dV/dI(I) characteristics, the dc current is additionally modulated by a low 2 µA (below the dc current step) ac component at a 1107 Hz frequency. We measure the ac component of the potential drop (∼ dV/dI) with a lock-in amplifier. The signal is confirmed to be independent of the modulation frequency within the 100 Hz - 10 kHz range, which is defined by the applied filters. The measurements are performed within the 30 mK - 1.2 K temperature range.
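The lock-in detection just described can be summarized in a few lines. The sketch below is our own idealized illustration (the function name and the bare demodulation are assumptions; real lock-in amplifiers add phase locking and low-pass filtering):

```python
import numpy as np

def lockin_dvdi(v_t, t, f_ref, i_ac):
    """Idealized lock-in estimate of dV/dI: demodulate the measured voltage
    trace v_t (sampled at times t) at the modulation frequency f_ref and
    divide by the ac current amplitude i_ac. Assumes t spans an integer
    number of modulation periods."""
    ref = np.sin(2.0 * np.pi * f_ref * t)
    v_ac = 2.0 * np.mean(v_t * ref)  # in-phase Fourier amplitude at f_ref
    return v_ac / i_ac
```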
III. EXPERIMENTAL RESULTS
Fig. 2 clearly demonstrates the Josephson effect for two different samples, referred to as S1 and S2. The qualitative behavior is similar, despite strongly different critical current I c and normal resistance values. As expected, the zero-resistance state appears below some critical temperature, which is about 0.88 K and 0.34 K for the devices in Fig. 2 (a) and (b). These In-FGT-In junctions are characterized by different maximum supercurrent values I c = 0.17 mA (S1) and 0.018 mA (S2).
The high-temperature curves in Fig. 2 are typical for Andreev reflection. The superconducting gap positions are defined by symmetric resistive dV/dI features at low currents; they are denoted by dashed lines in Fig. 2. For S1, ∆ S1 = 0.42 meV is obtained from the ±0.22 mA dV/dI feature positions and the 1.9 Ω resistance level in Fig. 2 (a). ∆ S2 can similarly be estimated as 0.28 meV in Fig. 2 (b). These gap values are reasonable for In-FGT-In junctions, since the bulk 0.5 meV In gap should be partially suppressed by the intrinsic FGT magnetization.
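As a rough consistency check (our reading of how the quoted numbers combine; the actual gap-extraction procedure may be more involved), the gap value is just the feature current times the resistance level, converted to an energy:
$$ \Delta_{S1} \approx e \cdot I \cdot R = e \times 0.22\ \mathrm{mA} \times 1.9\ \Omega \approx 0.42\ \mathrm{meV}. $$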
Since FGT is an uniaxial ferromagnet, which is confirmed by the Hall curves in Fig. 1 (c,d), it seems to be reasonable to investigate Josephson effect in differently oriented magnetic fields. On the other hand, In-FGT junctions are known to be badly reproduced in different coolings, which restricts the possibilities to remount a sample in the dilution refrigerator. For these reasons, qualitatively similar samples S1 and S2 are initially mounted in the in-plane and normal field orientations, respectively, to avoid unwanted influence of the cooling procedure on the experimental data. Fig. 3 demonstrates the influence of external in-plane (a) and normal (b) magnetic fields on sample resistance at T = 30 mK for S1 and S2, respectively. The result is qualitatively similar for both field orientations: the zero-resistance state is suppressed by the external field, dV /dI(B) curves are not symmetric with respect to the zero field value.
Most importantly, the observed dV/dI(B) asymmetry depends on the magnetic field sweep direction. Moreover, all the dV/dI features are mirrored for the opposite (blue and red colors) field sweeps, so the dV/dI(B) curves are strictly reversed for the two sweep directions in Figs. 3 (a) and (b). This curve reversal cannot be expected for a superconductor-ferromagnet-superconductor (SFS) junction with a homogeneous magnetization of the central ferromagnetic layer. In contrast, it is known to be a fingerprint of complicated spin structures, like ferromagnetic domains or the multilayers in Josephson spin valves 32-36. Fig. 3 also excludes any possibility of unwanted shorting of the In leads, since a simple In-In junction cannot demonstrate the observed dV/dI(B) reversal.
The dV/dI(B) reversal can also be demonstrated by the colormaps in Figs. 4 (a,b) and (d,e) for samples S1 and S2, respectively. The colormaps are obtained from dV/dI(I) curves at fixed magnetic field values, which are changed point-by-point in the up or down direction. To establish a definite sample magnetization state, every magnetic field sweep cycle begins from the high field value B = ±100 mT. Due to this procedure, the dV/dI(B) reversal is not connected with any time-dependent relaxation. The panels (a) and (b) in Fig. 4 differ by the magnetic field sweep direction, which is from negative to positive values in (a) and just opposite in (b). The previously described dV/dI(B) reversal can be clearly seen, e.g., by the asymmetric black feature at ±9 mT in Figs. 4 (a,b). The reversal effect is even more pronounced in (d) and (e) for normal magnetic fields.
For the Josephson effect, important information can be obtained from the suppression of the maximum supercurrent I c. In principle, the zero-resistance black region in the colormaps reflects the critical current I c (B) suppression pattern, as emphasized by the white envelope curves in Figs. 4 (a,b) and (d,e). To obtain I c with high accuracy at fixed B, we sweep the current ten times from zero (i.e. from the superconducting dV/dI = 0 state) to some value well above I c (the resistive dV/dI > 0 state) and then determine I c as the average of the dV/dI breakdown positions. The result is presented in Figs. 4 (c) and (f) for the two magnetic field orientations, respectively. The general I c (B) shape is asymmetric in both cases, and the asymmetry is reversed for the up (blue) and down (red) field sweeps. I c (B) also does not exhibit a conventional Fraunhofer pattern 54,55.
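For completeness, the averaging procedure can be sketched as follows. This is our illustrative implementation only: the resistance threshold and the exact breakdown criterion are assumptions, as the text only states that the breakdown positions of ten sweeps are averaged:

```python
import numpy as np

def critical_current(i_sweeps, dvdi_sweeps, r_threshold):
    """Estimate Ic as the mean current at which dV/dI first exceeds a small
    resistance threshold, averaged over repeated zero-to-high current sweeps.
    i_sweeps and dvdi_sweeps are lists of 1-D arrays (one pair per sweep)."""
    ic_values = []
    for i, dvdi in zip(i_sweeps, dvdi_sweeps):
        above = np.flatnonzero(dvdi > r_threshold)
        if above.size:  # first point where the junction turns resistive
            ic_values.append(i[above[0]])
    return np.mean(ic_values), np.std(ic_values)
```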
There are also some features in Fig. 4 that differ between the two magnetic field orientations. For the in-plane magnetic fields, I c (B) shows fast aperiodic fluctuations in Fig. 4 (a-c). No distinct period could be detected, at least for a field step as small as ∆B = 0.01 mT. We checked that our procedure gives I c values that are perfectly stable at fixed magnetic field, as demonstrated in Fig. 5 (a). The maximum I c deviation over 1000 curves is about 0.005 mA at B = -1.4 mT, which is negligible compared to the observed fluctuation amplitude of 0.05 mA in Fig. 4 (c). Thus, the fluctuations are controlled by the external magnetic field, although they are found to be aperiodic. On the contrary, no noticeable fluctuations can be observed for the normal magnetic field orientation, see Figs. 4 (d-f). The curves for the up (blue) and down (red) sweeps are reversed, but in addition there is an interplay between the maximum and minimum in I c (B) at ±12 mT, which is well known for Josephson spin valves 32-36.
The temperature dependence of the critical current I c (T) is shown in Fig. 5(b). It closely resembles the temperature dependencies observed in half-metallic CrO 2 -based long Josephson junctions 26, where a large contribution of spin-triplet supercurrent was implied.
IV. DISCUSSION
As a result, we observe an I c (B) pattern asymmetry and its reversal depending on the magnetic field sweep direction. This effect is observed for both magnetic field orientations, while in normal magnetic fields there is also a prominent change of the I c (B) shape upon remagnetization.
This behavior cannot be expected for usual SFS junctions with a homogeneous magnetization of the central ferromagnetic layer, where remagnetization can only shift the I c (B) pattern position in magnetic field 26,33. On the other hand, the observed behavior is a known fingerprint of Josephson spin valves 32-36. While in conventional Josephson junctions the supercurrent is modulated by the magnetic flux, in JSVs it is mainly defined by the relative orientation of the magnetic layers, giving rise to the I c (B) asymmetry and reversal.
A conventional spin valve, in its simplest form, is a layered structure consisting of a thick (fixed) and a thin (free) ferromagnetic layer 13,14. The spin valve resistance is defined by the relative angle between the magnetizations of the layers due to spin-dependent scattering, and it can be tuned by a field or a flowing current. A spin valve can be naturally realized in different types of topological materials and their heterostructures with ferromagnets 17-20. In this case, the spin-polarized topological surface state acts as one layer of the spin valve, while the role of the other is played by the ferromagnetic lead or by the ferromagnetic sample's bulk 17,19. The spin-polarized surface state acts as a source of spin, while spin-dependent scattering within the ferromagnet results in a different resistance depending on the magnetization direction 16.
In the case of FGT, the presence of spin-polarized topological Fermi arcs has been demonstrated by ARPES 8, while spin-momentum locking 56 was inferred to be responsible for the antisymmetric magnetoresistance in FGT/graphite/FGT heterostructures 12. Thus, a FGT flake may be regarded as a spin valve. This scenario is independently verified by the magnetoresistance of a single Au-FGT junction for the reference Hall bar sample in Fig. 5(c), where a typical spin-valve hysteresis is observed 13,14. Moreover, Fig. 4 shows asymmetric resistive features even at high currents, i.e. with superconductivity suppressed (2-10 Ω junction resistance). These features are also reversed for the two magnetic field sweep directions, which confirms the spin-valve behavior in FGT.
If a spin valve is sandwiched between two superconducting electrodes 32-36, see Fig. 6, the asymmetric I c (B) pattern should be reversed after remagnetization. The effectiveness of the singlet-triplet conversion depends on the magnetization misalignment, so the I c (B) pattern depends on the spin-valve configuration. Due to the hysteresis in the magnetization of a spin valve 13,14, I c (B) demonstrates a mirror reversal for the opposite field sweeps 32,33.
The observed interplay between the I c (B) maximum and minimum after remagnetization in Fig. 4(d-f) is very unusual. Generally, this behavior requires the breaking of certain symmetries. For FGT, inversion symmetry breaking is known at the interface 38. This is supported by a number of experimental observations of skyrmion-like spin textures, e.g., Bloch-type 57 and Néel-type 38,58,59 skyrmions, domain wall twists 60, and chiral spin textures 61,62. Inversion symmetry breaking in a system with large spin-orbit interaction gives rise to the spin-orbit torque, comprising terms that are even and odd in the magnetization. The relative signs of these terms change under remagnetization, violating the reversal of the I c (B) pattern in Fig. 4(d-f). This interplay in I c (B) is not observed in Fig. 4 (a,b,c), since the FGT magnetization is not collinear with the out-of-plane current-induced polarization in this case. We note that one cannot ascribe the observed interplay to the spin-valve memory effect 32,34, since every remagnetization process starts from the same B = ±100 mT in our experiment.
Regarding the effects of the domain structure, one should note that the presence of several ferromagnetic domains between the superconducting leads could generally give rise to essentially the same physics as in a JSV 63,64. In particular, asymmetric non-Fraunhofer I c (B) patterns in SFS junctions with a complex multi-domain structure have been reported before 54,55. However, the domain structure effects are hardly responsible for the results obtained in this paper. Sufficiently thick FGT samples in the low-temperature range ≲ 5 K contain several types of domains 65,66, among which bubble-like domains with comparatively small sizes, about a few hundred nanometers, are randomly distributed over the surface, introducing a substantial stochastic component to the domain structure 66. We emphasize that the asymmetric dV/dI(B) curves and dV/dI(B, I) colormaps in Figs. 3, 4 are highly reproducible, and therefore should not be attributed to any stochastic interfacial domain structures, which would prevent reproducing the results with the observed accuracy. Nevertheless, the noncoplanar spin textures 10 could noticeably contribute to the aperiodic variations presented in Figs. 4 (a,b,c) for the in-plane field orientation.
Thus, our experimental results can be regarded as a demonstration of a JSV realized in the magnetic nodal-line topological semimetal FGT. Moreover, surface transport has been widely invoked to carry the Josephson current over long distances in junctions based on topological materials 21-25, which supports the overall interpretation.
V. CONCLUSION
As a conclusion, we investigate lateral electron transport between two 3 µm spaced superconducting In leads beneath a thick three-dimensional FGT exfoliated flake. At the low temperature of 30 mK, we observe a Josephson supercurrent that exhibits an unusual critical current I c suppression by the magnetic field B. The overall I c (B) pattern is asymmetric with respect to the B sign. We demonstrate that the asymmetry is defined by the magnetic field sweep direction, so the I c (B) pattern is strictly reversed (as B to −B inversion) for the opposite sweeps. We also observe an interplay between a maximum and a minimum in I c (B) in normal magnetic fields, while there are fast aperiodic I c (B) fluctuations for the in-plane ones. These effects cannot be expected for homogeneous superconductor-ferromagnet-superconductor junctions, while they are known for Josephson spin valves. The most plausible scenario for the Josephson spin valve realization in FGT is a misalignment of the spin polarizations of the Fermi arc surface states and the ferromagnetic FGT bulk, but we also discuss the possible influence of spin-dependent transport between magnetic domains.
FIG. 1. (Color online) (a) A top-view image of the sample with electrical connections. A thick (1 µm) single-crystal FGT flake is placed by its flat bottom surface on the predefined superconducting In leads. The right inset shows the initial leads pattern, which consists of 10 µm wide indium stripes separated by 3 µm intervals. Electron transport is investigated between two neighboring In leads in a standard four-point technique; all the wire resistances are excluded. Arrows indicate the in-plane B || and normal B ⊥ magnetic field orientations for Figs. 3 and 4. (b,c,d) Magnetoresistance measurements, to confirm FGT quality, for a reference sample with Au leads in the standard Hall bar geometry. (b) Longitudinal magnetoresistance Rxx for the in-plane (left axis, blue curve) and the normal (right axis, red curve) fields. (c,d) Hall Rxy(B) hysteresis loops in normal and in-plane fields, respectively, usually ascribed to anomalous and topological Hall effects in FGT 10. The arrows denote magnetic field sweep directions.
FIG. 2. (Color online) Josephson effect for two different samples with In-FGT-In junctions (S1 and S2 in (a) and (b), respectively). The qualitative behavior is similar, despite strongly different critical currents (Ic = 0.17 mA in (a) and 0.018 mA in (b)) and normal resistance values. The zero-resistance state appears below 0.88 K for S1 and 0.34 K for S2. The high-temperature curves are typical for Andreev reflection. The superconducting gap positions are denoted by the dashed lines (see the main text); they should not be confused with the asymmetric jumps in dV/dI at much higher currents. The data are presented for zero magnetic field.
FIG. 3. (Color online) Influence of the external in-plane (a) and normal (b) magnetic fields on the Josephson effect at T = 30 mK for S1 and S2, respectively. The dV/dI(B) curves are not symmetric with respect to zero field; the observed asymmetry depends on the magnetic field sweep direction, which is denoted by arrows of the corresponding color. All the dV/dI features are mirrored for the opposite field sweeps, so the dV/dI(B) curves are strictly reversed for the two sweep directions. This curve reversal cannot be expected for a superconductor-ferromagnet-superconductor junction with a homogeneous magnetization of the ferromagnetic layer, but it is a fingerprint of complicated spin structures. The data are obtained at 30 mK.
FIG. 4. (Color online) Colormaps of dV/dI(I, B) for samples S1 and S2 in (a,b) and (d,e), respectively. The panels (a,d) and (b,e) differ by the magnetic field sweep direction, which is from negative to positive values in (a) and just opposite in (b). All the data are obtained at 30 mK. The colormaps are obtained from dV/dI(I) curves at fixed magnetic field values, which are changed point-by-point in the up or down direction. To establish a definite sample magnetization state, every magnetic field sweep cycle begins from the high field value B = ±100 mT (the sign depends on the sweep direction). The dV/dI(B) reversal from Fig. 3 can be clearly seen, e.g., by the asymmetric black feature at ±9 mT in (a,b). The reversal effect is even more pronounced in (d,e) for normal magnetic fields. (c,f) Ic(B) dependences for the in-plane and normal magnetic field orientations, respectively. The general Ic(B) shapes are asymmetric in both cases, and the asymmetry is reversed for the up (blue) and down (red) field sweeps. For the in-plane magnetic fields, Ic(B) shows well-reproducible aperiodic fluctuations in (c). On the contrary, no noticeable fluctuations can be observed in (f). In normal magnetic fields, there is an interplay between the maximum and minimum in Ic(B) at ±12 mT, which is well known for Josephson spin valves 32-36.
FIG. 5. (Color online) (a) Stability of Ic, as demonstrated for 1000 sequentially recorded curves at a fixed in-plane field value B = -1.4 mT. The maximum Ic deviation is about 0.005 mA, which is negligible compared to the fluctuation amplitude of 0.05 mA in Fig. 4 (c). (b) Temperature dependence of Ic for S2 in zero field, which supports a large contribution of triplet supercurrent in our In-FGT-In junctions 26. (c) Typical spin-valve hysteresis 13,14 in the magnetoresistance of a single Au-FGT junction for the reference FGT flake. Blue and red curves correspond to the up and down magnetic field sweeps, respectively. In FGT, the spin-polarized surface state acts as a source of spin, while spin-dependent scattering within the ferromagnet results in a resistance that depends on the magnetization direction.

FIG. 6. (Color online) Sketch of the Josephson spin valve realized in In-FGT-In junctions due to the spin-polarized surface state in the magnetic nodal-line topological semimetal FGT. The supercurrent (partially) flows through the spin-polarized surface state (grey region) with a complicated spin polarization, while spin-dependent scattering with the magnetized FGT bulk is responsible for the spin-valve behavior.
ACKNOWLEDGMENTS

We wish to thank V. T. Dolgopolov and A. S. Melnikov for fruitful discussions, and S. S. Khasanov for the X-ray sample characterization. We gratefully acknowledge financial support by the RF State task.
Y. Deng, Y. Yu, Y. Song, J. Zhang, N. Z. Wang, Z. Sun, Y. Yi, Y. Z. Wu, S. Wu, J. Zhu, J. Wang, X. H. Chen, and Y. Zhang, Nature 563, 94 (2018). https://doi.org/10.1038/s41586-018-0626-9
J.-J. Guo, Q.-L. Xia, X.-G. Wang, Y.-Z. Nie, R. Xiong, and G.-H. Guo, J. Magn. Magn. Mater. 527, 167719 (2021). doi:10.1016/j.jmmm.2020.167719
C. Tan, J. Lee, S. G. Jung, T. Park, S. Albarakati, J. Partridge, M. R. Field, D. G. McCulloch, L. Wang, and C. Lee, Nat. Commun. 9, 1554 (2018). https://doi.org/10.1038/s41467-018-04018-w
H. L. Zhuang, P. R. C. Kent, and R. G. Henning, Phys. Rev. B 93, 134407 (2016).
L. Cai, C. Yu, L. Liu, W. Xia, H.-A. Zhou, L. Zhao, Y. Dong, T. Xu, Z. Wang, Y. Guo, Y. Zhao, J. Zhang, L. Yang, L. Yang, and W. Jiang, Appl. Phys. Lett. 117, 192401 (2020). https://doi.org/10.1063/5.0030607
B. Chen, J. H. Yang, H. D. Wang, M. Imai, H. Ohta, C. Michioka, K. Yoshimura, and M. H. Fang, J. Phys. Soc. Jpn. 82, 124711 (2013).
Y. Wang, C. Xian, J. Wang, B. Liu, L. Ling, L. Zhang, L. Cao, Z. Qu, and Y. Xiong, Phys. Rev. B 96, 134428 (2017).
K. Kim, J. Seo, E. Lee, K.-T. Ko, B. S. Kim, Bo G. Jang, J. M. Ok, J. Lee, Y. J. Jo, W. Kang, J. H. Shim, C. Kim, H. W. Yeom, B. I. Min, B.-J. Yang, and J. S. Kim, Nat. Mater. 17, 794 (2018).
J. Xu, W. A. Phelan, and C.-L. Chien, Nano Lett. 19, 8250 (2019).
Y. You, Y. Gong, H. Li, Z. Li, M. Zhu, J. Tang, E. Liu, Y. Yao, G. Xu, F. Xu, and W. Wang, Phys. Rev. B 100, 134441 (2019).
Y. Zhang, H. Lu, X. Zhu, S. Tan, W. Feng, Q. Liu, W. Zhang, Q. Chen, Y. Liu, X. Luo, D. Xie, L. Luo, Z. Zhang, and X. Lai, Sci. Adv. 4, eaao6791 (2018). https://doi.org/10.1126/sciadv.aao6791
S. Albarakati, C. Tan, Z. Chen, J. G. Partridge, G. Zheng, L. Farrar, E. L. H. Mayes, M. R. Field, C. Lee, Y. Wang, Y. Xiong, M. Tian, F. Xiang, A. R. Hamilton, O. A. Tretiakov, D. Culcer, Y. Zhao, and Y. Wang, Sci. Adv. 5, eaaw0409 (2019). https://doi.org/10.1126/sciadv.aaw0409
M. Tsoi, A. G. M. Jansen, J. Bass, W.-C. Chiang, M. Seck, V. Tsoi, and P. Wyder, Phys. Rev. Lett. 80, 4281 (1998).
E. B. Myers, D. C. Ralph, J. A. Katine, R. N. Louie, and R. A. Buhrman, Science 285, 867 (1999).
Y.-T. Liu, C.-C. Huang, K.-H. Chen, Y.-H. Huang, C.-C. Tsai, T.-Y. Chang, and C.-F. Pai, arXiv:2108.01272 (2021).
J. Železný, Z. Fang, K. Olejník, J. Patchett, F. Gerhard, C. Gould, L. W. Molenkamp, C. Gomez-Olivella, J. Zemen, T. Tichý, T. Jungwirth, and C. Ciccarelli, arXiv:2102.13441.
V. D. Esin, D. N. Borisenko, A. V. Timonina, N. N. Kolesnikov, and E. V. Deviatov, Phys. Rev. B 101, 155309 (2020).
A. Kononov, O. O. Shvetsov, A. V. Timonina, N. N. Kolesnikov, and E. V. Deviatov, JETP Lett. 109, 180 (2019).
O. O. Shvetsov, V. D. Esin, A. V. Timonina, N. N. Kolesnikov, and E. V. Deviatov, Europhys. Lett. 127, 57002 (2019).
J. Tian, I. Miotkowski, S. Hong, and Y. P. Chen, Sci. Rep. 5, 14293 (2015).
J. H. Lee, G.-H. Lee, J. Park, J. Lee, S.-G. Nam, Y.-S. Shin, J. S. Kim, and H.-J. Lee, Nano Lett. 14, 5029 (2014).
A. Kononov, O. O. Shvetsov, S. V. Egorov, A. V. Timonina, N. N. Kolesnikov, and E. V. Deviatov, Europhys. Lett. 122, 27004 (2018).
C. Huang, B. T. Zhou, H. Zhang, B. Yang, R. Liu, H. Wang, Y. Wan, K. Huang, Z. Liao, E. Zhang, S. Liu, Q. Deng, Y. Chen, X. Han, J. Zou, X. Lin, Z. Han, Y. Wang, K. Tuen Law, and F. Xiu, Nat. Commun. 10, 2217 (2019).
O. O. Shvetsov, V. D. Esin, Yu. S. Barash, A. V. Timonina, N. N. Kolesnikov, and E. V. Deviatov, Phys. Rev. B 101, 035304 (2020).
Y. Wang, S. Yang, P. K. Sivakumar, B. R. Ortiz, S. M. L. Teicher, H. Wu, A. K. Srivastava, C. Garg, D. Liu, S. S. P. Parkin, E. S. Toberer, T. McQueen, S. D. Wilson, and M. N. Ali, arXiv:2012.05898.
R. S. Keizer, S. T. B. Goennenwein, T. M. Klapwijk, G. Miao, G. Xiao, and A. Gupta, Nature 439, 825 (2006).
F. S. Bergeret, A. F. Volkov, and K. B. Efetov, Phys. Rev. Lett. 86, 4096 (2001).
F. S. Bergeret, A. F. Volkov, and K. B. Efetov, Rev. Mod. Phys. 77, 1321 (2005).
P. Dutta, F. Parhizgar, and A. M. Black-Schaffer, Phys. Rev. B 101, 064514 (2020).
F. S. Bergeret and I. V. Tokatly, Phys. Rev. Lett. 110, 117003 (2013).
F. S. Bergeret and I. V. Tokatly, Phys. Rev. B 89, 134517 (2014).
O. M. Kapran, A. Iovan, T. Golod, and V. M. Krasnov, Phys. Rev. Research 2, 013167 (2020).
N. Banerjee, J. W. A. Robinson, and M. G. Blamire, Nat. Commun. 5, 4771 (2014).
B. M. Niedzielski, T. J. Bertus, J. A. Glick, R. Loloee, W. P. Pratt Jr., and N. O. Birge, Phys. Rev. B 97, 024517 (2018).
N. Satchell, P. M. Shepley, M. Algarni, M. Vaughan, E. Darwin, M. Ali, M. C. Rosamond, L. Chen, E. H. Linfield, B. J. Hickey, and G. Burnell, Appl. Phys. Lett. 116, 022601 (2020).
E. C. Gingrich, B. M. Niedzielski, J. A. Glick, Y. Wang, D. L. Miller, R. Loloee, W. P. Pratt Jr., and N. O. Birge, Nat. Phys. 12, 564 (2016).
T. Yu. Karminskaya, M. Yu. Kupriyanov, and A. A. Golubov, JETP Letters 87, 570 (2008).
T.-E. Park, L. Peng, J. Liang, A. Hallal, F. Yasin, X. Zhang, K. M. Song, S. J. Kim, K. Kim, M. Weigand, G. Schütz, S. Finizio, J. Raabe, K. Garcia, J. Xia, Y. Zhou, M. Ezawa, X. Liu, J. Chang, H. C. Koo, Y. D. Kim, M. Chshiev, A. Fert, H. Yang, X. Yu, and S. Woo, Phys. Rev. B 103, 104410 (2021).
K. Garello, I. M. Miron, C. O. Avci, F. Freimuth, Y. Mokrousov, S. Blügel, S. Auffret, O. Boulle, G. Gaudin, and P. Gambardella, Nature Nanotechnol. 8, 587 (2013).
X. Qiu, Z. Shi, W. Fan, S. Zhou, and H. Yang, Adv. Mater. 30, 1705699 (2018).
A. Manchon, J. Železný, I. M. Miron, T. Jungwirth, J. Sinova, A. Thiaville, K. Garello, and P. Gambardella, Rev. Mod. Phys. 91, 035004 (2019).
H. Kurebayashi, J. Sinova, D. Fang, A. C. Irvine, T. D. Skinner, J. Wunderlich, V. Novák, R. P. Campion, B. L. Gallagher, E. K. Vehstedt, L. P. Zârbo, K. Výborný, A. J. Ferguson, and T. Jungwirth, Nature Nanotechnol. 9, 211 (2014).
C.-Z. Chen, J. J. He, M. N. Ali, G.-H. Lee, K. C. Fong, and K. T. Law, Phys. Rev. B 98, 075430 (2018).
N. F. Q. Yuan and L. Fu, arXiv:2106.01909.
A. Buzdin, Phys. Rev. Lett. 101, 107005 (2008).
F. Konschelle and A. Buzdin, Phys. Rev. Lett. 102, 017001 (2009); Phys. Rev. Lett. 123, 169901(E) (2019).
I. Kulagina and J. Linder, Phys. Rev. B 90, 054504 (2014).
M. A. Silaev, I. V. Tokatly, and F. S. Bergeret, Phys. Rev. B 95, 184508 (2017).
Yu. M. Shukrinov, I. R. Rahmonov, K. Sengupta, and A. Buzdin, Appl. Phys. Lett. 110, 182407 (2017).
N. P. Armitage, E. J. Mele, and A. Vishwanath, Rev. Mod. Phys. 90, 015001 (2018).
O. O. Shvetsov, V. D. Esin, Yu. S. Barash, A. V. Timonina, N. N. Kolesnikov, and E. V. Deviatov, Phys. Rev. B 101, 035304 (2020).
O. O. Shvetsov, V. D. Esin, A. V. Timonina, N. N. Kolesnikov, and E. V. Deviatov, Phys. Rev. B 99, 125305 (2019).
O. O. Shvetsov, A. Kononov, A. V. Timonina, N. N. Kolesnikov, and E. V. Deviatov, JETP Letters 107, 774 (2018).
T. S. Khaire, W. P. Pratt Jr., and N. O. Birge, Phys. Rev. B 79, 094523 (2009).
M. A. Khasawneh et al., Supercond. Sci. Technol. 24, 024005 (2011).
S.-Y. Xu, C. Liu, S. K. Kushwaha, R. Sankar, J. W. Krizan, I. Belopolski, M. Neupane, G. Bian, N. Alidoust, T.-R. Chang, H.-T. Jeng, C.-Y. Huang, W.-F. Tsai, H. Lin, P. P. Shibayev, F.-C. Chou, R. J. Cava, and M. Z. Hasan, Science 347, 294 (2015).
B. Ding, Z. Li, G. Xu, H. Li, Z. Hou, E. Liu, X. Xi, F. Xu, Y. Yao, and W. Wang, Nano Lett. 20, 868 (2020).
Y. Wu, S. Zhang, J. Zhang, W. Wang, Y. L. Zhu, J. Hu, G. Yin, K. Wong, C. Fang, C. Wan, X. Han, Q. Shao, T. Taniguchi, K. Watanabe, J. Zang, Z. Mao, X. Zhang, and K. L. Wang, Nat. Commun. 11, 3860 (2020).
M. Yang, Q. Li, R. V. Chopdekar, R. Dhall, J. Turner, J. D. Carlström, C. Ophus, C. Klewe, P. Shafer, A. T. N'Diaye, J. W. Choi, G. Chen, Y. Z. Wu, C. Hwang, F. Wang, and Z. Q. Qiu, Sci. Adv. 6, eabb5157 (2020).
L. Peng, F. S. Yasin, T.-E. Park, S. J. Kim, X. Zhang, T. Nagai, K. Kimoto, S. Woo, and X. Yu, arXiv:2105.00468, doi:10.1002/adfm.202103583.
H. Wang, C. Wang, Y. Zhu, Z.-A. Li, H. Zhang, H. Tian, Y. Shi, H. Yang, and J. Li, arXiv:1907.08382.
M. J. Meijer, J. Lucassen, R. A. Duine, H. J. M. Swagten, B. Koopmans, R. Lavrijsen, and M. H. D. Guimarães, Nano Lett. 20, 8563 (2020).
A. Konstandin, J. Kopu, and M. Eschrig, Phys. Rev. B 72, 140501(R) (2005).
E. Bhatia, A. Srivastava, J. Devine-Stoneman, N. A. Stelmashenko, Z. H. Barber, J. W. A. Robinson, and K. Senapati, Nano Lett. 21, 3092 (2021).
N. León-Brito, E. D. Bauer, F. Ronning, J. D. Thompson, and R. Movshovich, J. Appl. Phys. 120, 083903 (2016).
G. D. Nguyen, J. Lee, T. Berlijn, Q. Zou, S. M. Hus, J. Park, Z. Gai, C. Lee, and A.-P. Li, Phys. Rev. B 97, 014425 (2018).
| []
|
[
"Variational Multi-Phase Segmentation using High-Dimensional Local Features",
"Variational Multi-Phase Segmentation using High-Dimensional Local Features"
]
| [
"Niklas Mevenkamp [email protected] \nRWTH Aachen University\n\n",
"Benjamin Berkels [email protected] \nRWTH Aachen University\n\n"
]
| [
"RWTH Aachen University\n",
"RWTH Aachen University\n"
]
| []
| We propose a novel method for multi-phase segmentation of images based on high-dimensional local feature vectors. While the method was developed for the segmentation of extremely noisy crystal images based on localized Fourier transforms, the resulting framework is not tied to specific feature descriptors. For instance, using local spectral histograms as features, it allows for robust texture segmentation. The segmentation itself is based on the multi-phase Mumford-Shah model. Initializing the high-dimensional mean features directly is computationally too demanding and ill-posed in practice. This is resolved by projecting the features onto a low-dimensional space using principle component analysis. The resulting objective functional is minimized using a convexification and the Chambolle-Pock algorithm. Numerical results are presented, illustrating that the algorithm is very competitive in texture segmentation with state-of-the-art performance on the Prague benchmark and provides new possibilities in crystal segmentation, being robust to extreme noise and requiring no prior knowledge of the crystal structure. | 10.1109/wacv.2016.7477729 | [
"https://arxiv.org/pdf/1902.09863v1.pdf"
]
| 12,668,380 | 1902.09863 | c39adfa3d72a1f438c83f7cab1df754fe11366b6 |
Variational Multi-Phase Segmentation using High-Dimensional Local Features
Niklas Mevenkamp [email protected]
RWTH Aachen University
Benjamin Berkels [email protected]
RWTH Aachen University
Variational Multi-Phase Segmentation using High-Dimensional Local Features
10.1109/WACV.2016.7477729. A definitive version was published in 2016 IEEE Winter Conference on Applications of Computer Vision (WACV) and is available at https://dx.
We propose a novel method for multi-phase segmentation of images based on high-dimensional local feature vectors. While the method was developed for the segmentation of extremely noisy crystal images based on localized Fourier transforms, the resulting framework is not tied to specific feature descriptors. For instance, using local spectral histograms as features, it allows for robust texture segmentation. The segmentation itself is based on the multi-phase Mumford-Shah model. Initializing the high-dimensional mean features directly is computationally too demanding and ill-posed in practice. This is resolved by projecting the features onto a low-dimensional space using principal component analysis. The resulting objective functional is minimized using a convexification and the Chambolle-Pock algorithm. Numerical results are presented, illustrating that the algorithm is very competitive in texture segmentation with state-of-the-art performance on the Prague benchmark and provides new possibilities in crystal segmentation, being robust to extreme noise and requiring no prior knowledge of the crystal structure.
Introduction
Image segmentation, i.e. the task of decomposing an image into disjoint regions that are roughly homogeneous in a suitable sense, is one of the fundamental image processing problems. If three or more regions are sought, one speaks of multi-phase segmentation. This problem has been studied thoroughly in the literature and entirely different concepts have been put forward as the basis for image segmentation, such as fuzzy region competition [21], contour detection [2], random walks [9], Markov random fields [22], just to name a few. Due to the variety of proposed methods, providing a comprehensive list is beyond the scope of this article, but we refer the interested reader to [31]. Then there is, of course, the class of variational approaches based on the famous Mumford-Shah energy [25].
The most straight-forward application in multi-phase segmentation is to divide images into regions based on their gray or color intensities [8]. A more complex task is to segment images based on their local structure. This has applications in texture segmentation [28], as well as many medical applications, such as the segmentation of blood vessels [14]. Algorithms for structure classification and segmentation usually extract local features from the image, which analyze important properties of the structures of interest, such as the image intensity, position and orientation of edges, or the local frequency spectrum [29]. In the case of texture segmentation, Gabor filters are arguably the most popular source of feature discrimination [32], often combined with other filters in so-called local spectral histograms [23]. Other methods rely on linear transforms, such as the short-time Fourier transform [3], wavelet transforms [7], or, more recently, the Stockwell transform [11]. While the part of this paper on texture segmentation uses well-proven spectral histograms to recognize regions, it differs from established methods by their integration into a variational framework, allowing to control the regions' connectedness.
Dealing with complex structures, such as textures, often implies high-dimensionality of the parameters describing the problem. However, in image segmentation, one is mostly interested in classifying structure into a few categories, potentially allowing for lower-dimensional representations. Dimension reduction of high-dimensional data is an immensely broad topic [30] and finds applications in many different areas of research. There exist different approaches, but the most widely used techniques are arguably clustering [27] and principal component analysis (PCA) [19]. The latter two are connected in the sense that the relaxed solution of k-means, one of the most popular clustering algorithms, is given by principal components [10]. PCA has been investigated in the context of variational image segmentation before, both as a means for dimension reduction [26] and to increase the contrast of color-texture indicators in natural images [17].
In materials science, an important application of structure-based segmentation is the analysis of crystals. Available methods are based on variational minimization of Mumford-Shah energies that require the local stencil of a reference crystal as prior knowledge [4,5,12].
Key contributions
• a widely applicable framework for image segmentation by structure is discussed, including a novel combination of PCA of high-dimensional features, Mumford-Shah and a robust initialization strategy, which allows for a broad choice of feature descriptors
• the framework is shown to work very well, even for extremely noisy data, in crystal segmentation, where it generalizes existing methods in the sense that no a priori information about the crystals is required
Variational multi-phase segmentation
In this section, we briefly recall the Mumford-Shah model [25] for multi-phase segmentation based on suitable indicator functions. Furthermore, we recall a convexification approach that enables an efficient numerical minimization of the model. Let $\Omega = [0,1]^2$. The task is to divide $\Omega$ into pairwise disjoint regions $\Omega_l$, $l = 1, \ldots, k$ based on given indicator functions $f_1, \ldots, f_k : \Omega \to \mathbb{R}_{\geq 0}$. $f_l(x)$ can be interpreted as the cost of putting a point $x \in \Omega$ into the set $\Omega_l$. For instance, if an image $g : \Omega \to \mathbb{R}$ is supposed to be segmented based on its gray values, possible indicator functions are $f_l(x) = (g(x) - c_l)^2$. Here, $c_l$ is the average gray value of $g$ in the $l$-th region.

A segmentation of $\Omega$ based on the indicator functions that guarantees a certain regularity of the segments can be achieved by minimizing the Mumford-Shah energy [25]:

$$\min_{(\Omega_l)_{l=1}^{k}} \sum_{l=1}^{k} \left( \int_{\Omega_l} f_l \, dx + \lambda \, \mathrm{Per}(\Omega_l, \Omega) \right). \tag{1}$$

Here, $\mathrm{Per}(\Omega_l, \Omega)$ denotes the perimeter of the set $\Omega_l$ in $\Omega$ [1]. Roughly speaking, the perimeter is the length of the boundary of $\Omega_l$, not counting the parts of the boundary that are also on the boundary of $\Omega$. This problem is hard to address numerically since the unknown variables are sets. In particular, its discrete counterpart, known as the Potts model, is NP-hard. Thus, various convex relaxation approaches have been proposed in the past. For the sake of simplicity, we use one of the most straightforward approaches, given in [34]. Let us stress that our framework does not rely on this particular choice, but can also be combined with more sophisticated convexification approaches. Let
$$E[u] := \sum_{l=1}^{k} \left( \int_{\Omega} f_l u_l \, dx + \lambda \, |u_l|_{TV(\Omega)} \right) \tag{2}$$

where $u \in \mathcal{U}$ is a vector-valued labeling function and

$$\mathcal{U} := \left\{ u \in BV(\Omega)^k : u \geq 0 \ \wedge \ \sum_{l=1}^{k} u_l = 1 \ \text{a.e. in } \Omega \right\} \tag{3}$$

is the admissible set. Here,

$$|u|_{TV(\Omega)} := \sup_{\substack{p \in C_c^1(\Omega, \mathbb{R}^2) \\ \|p\|_\infty \leq 1}} \int_{\Omega} u \, \mathrm{div}\, p \tag{4}$$

denotes the total variation and $BV(\Omega)$ is the space of functions of bounded variation, i.e. the space of Lebesgue integrable functions with finite total variation. Then, the convex relaxation of (1) is to minimize $E$ over the set $\mathcal{U}$.
The minimizer $u^*$ can be interpreted as a soft segmentation and can be converted into a hard segmentation by setting $\Omega_l := \{x \in \Omega : u_l^*(x) \geq u_j^*(x) \ \forall j \neq l\}$.
In order to address this minimization numerically, a discretization of the energy (2) and the admissible set (3) is required. Let $X = (x_i)_{i=1}^{n} \in \mathbb{R}^{2 \times n}$ be a regular 2D pixel grid. We use piecewise constant approximations $f_{li} = f_l(x_i)$ and $u_{li} = u_l(x_i)$ for $i = 1, \ldots, n$. The corresponding column vectors and matrices of all pixel values are denoted by a boldface letter, e.g. $\mathbf{u} = (u_1, \ldots, u_k) \in \mathbb{R}^{n \times k}$. Furthermore, we denote with $K : \mathbb{R}^n \to \mathbb{R}^{2 \times n}$ the discrete gradient operator corresponding to the grid $X$ and forward differences. Using this operator to discretize the total variation (4), the minimization of the discretized energy (2) can be posed as the following discrete saddle point problem:

$$\min_{\mathbf{u} \in \mathcal{U}_h} \ \max_{\bar{p}_{li} : \|\bar{p}_{li}\| \leq 1} \ \sum_{l=1}^{k} \sum_{i=1}^{n} \left\{ f_{li} u_{li} + \lambda \left\langle (K u_l)_i, \bar{p}_{li} \right\rangle \right\}, \tag{5}$$

where $\mathcal{U}_h$ is the discrete counterpart of $\mathcal{U}$ and $\bar{p}_{li} = (p_{li1}, p_{li2})$ discretizes $p$ from (4) for $u_l$ at node $x_i$. Problems of this form can be solved with the Chambolle-Pock algorithm [6], summarized in Algorithm 1. The required resolvent operators are given by
$$R_1(\mathbf{p}) = \left( p_{lij} / \max\{\|\bar{p}_{li}\|, 1\} \right)_{lij}, \tag{6}$$

$$R_2(\mathbf{u}) = \pi_{\mathcal{U}_h}\!\left( \mathbf{u} - \tfrac{\tau}{\lambda} \mathbf{f} \right). \tag{7}$$

Here, $\mathbf{f} = (f_1, \ldots, f_k)$ and $\pi_{\mathcal{U}_h}(\mathbf{u})$ denotes the orthogonal projection of $\mathbf{u}$ onto the set $\mathcal{U}_h$. This projection can be calculated with $O(k)$ operations using an iterative algorithm described in [24].

Algorithm 1 Chambolle-Pock Type 1
  $p^{(0)}_{lij} = 0$, $l = 1, \ldots, k$, $i = 1, \ldots, n$, $j = 1, 2$
  $\hat{u}^{(0)}_{li} = \frac{1}{k}$, $l = 1, \ldots, k$, $i = 1, \ldots, n$
  $u^{(0)} = \hat{u}^{(0)}$
  repeat
    $p^{(t+1)} = R_1(p^{(t)} + \sigma K u^{(t)})$
    $\hat{u}^{(t+1)} = R_2(\hat{u}^{(t)} - \tau K^* p^{(t+1)})$
    $u^{(t+1)} = \hat{u}^{(t+1)} + \theta(\hat{u}^{(t+1)} - \hat{u}^{(t)})$
    $t \leftarrow t + 1$
  until $\|\hat{u}^{(t+1)} - \hat{u}^{(t)}\| < \epsilon$ or $t > t_{\max}$

In this work, all numerical experiments use the parameters $\sigma = \tau = \frac{1}{8}$, $\theta = 0.7$, $\epsilon = 0.001$, $t_{\max} = 10000$. The regularization parameter is chosen as $\lambda = 0.01$ (Table 2), $\lambda = 0.005$ (Table 1, Figure 1) and $\lambda = 25$ (Figure 2).
Local features for structure characterization
Description and relation to Mumford-Shah
Our aim is to provide a method to segment images into regions of different structure based on the information from local features. In the discrete setting, local features corresponding to a pixel $x_i$ are encoded in the values of an input image $g$ in a $(2s+1) \times (2s+1)$ window $W_s(x_i)$ centered at $x_i$. Here, $s \in \mathbb{N}$ determines the scale that is still considered to be local. From these values, features are extracted by an operator of the form $F : \mathbb{R}^{(2s+1)^2} \to \mathbb{R}^m$ that should fulfill certain properties, which we will detail later. Applying $F$ to the $(2s+1) \times (2s+1)$ matrix $g(W_s(x_i))$ containing the image pixel values in the window $W_s(x_i)$ gives the feature vector corresponding to the pixel $x_i$:

$$F[g](x_i) := F(g(W_s(x_i))). \tag{8}$$
Let $\Omega_l^* \subset X$, $l = 1, \ldots, k$ denote the sought discrete regions, i.e. the true sets of pixels belonging to the different structure regions. Then, a suitable feature extractor (as defined in (8)) for discriminating regions of different structures can be characterized by the following two properties:

$$\max_{l = 1, \ldots, k} \ \max_{x, x' \in \Omega_l^*} \| F[g](x) - F[g](x') \| \ \text{is small}, \tag{9}$$

$$\min_{\substack{l, l' = 1, \ldots, k \\ l \neq l'}} \ \min_{x \in \Omega_l^*, \, x' \in \Omega_{l'}^*} \| F[g](x) - F[g](x') \| \ \text{is large}, \tag{10}$$

i.e. local features should vary as little as possible within each region and offer as much contrast as possible between different regions. Examples for robust feature extractors for texture and crystal segmentation will be discussed in Section 4. Given a suitable feature extractor and the true mean features within the different structure regions

$$c = \left( \frac{1}{|\Omega_l^*|} \sum_{x \in \Omega_l^*} F[g](x) \right)_{l=1}^{k} \in \mathbb{R}^{m \times k}, \tag{11}$$

the following indicator can be used for segmentation in (5):

$$f_{li} := \| F[g](x_i) - c_l \|^2. \tag{12}$$
In practice, the mean values $c$ are of course unknown. However, given some approximate guess $c^{\{t\}}$ for the mean values, Algorithm 1 can be applied, resulting in a segmentation $u^{\{t\}}$. Then, the following update rule can be applied to refine the mean features:

$$c_l^{\{t+1\}} = \frac{\sum_{i=1}^{n} F[g](x_i) \, u_{li}^{\{t\}}}{\sum_{i=1}^{n} u_{li}^{\{t\}}}. \tag{13}$$

This way, given some initial guess $c^{\{0\}}$, both the segmentation and the mean features can be refined in an alternating fashion. Note that we use curly brackets instead of round ones for the index here, to differentiate between the iterations within Algorithm 1 and these outer iterations. Unfortunately, the result of this alternating minimization strategy depends heavily on the initial guess $c^{\{0\}}$. In the literature, it is often suggested to approximate $c^{\{0\}}$ via clustering, which is equivalent to minimizing (1) for $\lambda = 0$ with (12) as indicator and with respect to both the regions $\Omega_l$ and $c$. This clustering problem is NP-hard itself, but efficient iterative solvers, such as k-means, are available and have proven to work well in the case of low-dimensional indicator functions (e.g. in color segmentation) [8]. However, robust feature extractors suitable for structure discrimination tend to be high-dimensional ($m$ greater than 100 or even 1000). In this case, clustering becomes infeasible in practice, because the available solvers are likely to get stuck in undesired local minima when applied in such high dimensions.
Dimension reduction and decorrelation
Clustering of high-dimensional data is a well studied problem in the literature [27]. It has been noted that often many of the dimensions are irrelevant for the core information expressed by a given data set and that they might mask the essential clusters due to noise. Therefore, several approaches for subspace clustering have been proposed to address this problem [20]. In our context, dimension reduction and decorrelation via principal component analysis (PCA) should work well: given a feature extractor $F$ fulfilling (9) & (10), $(F[g](x))_{x \in \Omega_l^*}$ is of low variance for any $l \in \{1, \ldots, k\}$ and, compared to this, for $l \neq l'$ the set $(F[g](x), F[g](x'))_{x \in \Omega_l^*, x' \in \Omega_{l'}^*}$ is of high variance.

Performing PCA on the matrix of mean-centralized features $A = (F[g](x_1) - \mu[g], \ldots, F[g](x_n) - \mu[g]) \in \mathbb{R}^{m \times n}$ with $\mu[g] = \frac{1}{n} \sum_{i=1}^{n} F[g](x_i)$ results in a lower-dimensional coefficient representation $\alpha^{(r)} = (U^{(r)})^T A \in \mathbb{R}^{r \times n}$, where $U^{(r)} \in \mathbb{R}^{m \times r}$ is the matrix of eigenvectors belonging to the largest $r$ eigenvalues of $AA^T$. Clustering the coefficients $\alpha^{(r)}$ into $k$ clusters gives a coefficient representation $\gamma^{(r)} \in \mathbb{R}^{r \times k}$, which results in the initial guess $c^{\{0\}} = U^{(r)} \gamma^{(r)} + \mu[g]$. Since we need $c \in \mathbb{R}^{m \times k}$, $r = k$ is a natural choice for dimension reduction.
In [33], it was noted that the clustering can get stuck in local minima due to effects caused by the inhomogeneity of the features across the boundary between two regions. Unlike purely point-wise indicators ($s = 0$), local feature extractors cause points within about half the window size of a region boundary (in 2D space) to spread between the two mean features corresponding to the regions adjacent to the boundary (in coefficient space). In order to prevent the k-means minimizer from getting stuck in between two such clusters, Yuan et al. proposed to disregard such boundary points when clustering by thresholding an edgeness indicator, given by finite differences of the features on the scale of the window size [33]. As this approach is only based on the assumption of homogeneity of the features within each structure region, it can be used for general feature extractors and allows us to adopt this technique for the initial clustering.
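A compact sketch of this PCA-based initialization, including the edgeness filter just discussed, might look as follows. This is a hedged illustration: scikit-learn's KMeans stands in for a generic clustering routine, np.roll wraps around the image borders (which differs slightly from the neighbor definition in Algorithm 2 below), and features are assumed to be stored in row-major grid order; all names are illustrative.

import numpy as np
from sklearn.cluster import KMeans

def pca_init(feats, k, H, W, s, delta=0.5):
    # feats: (n, m) local features on an H x W grid (row-major), n = H * W
    mu = feats.mean(axis=0)                        # mu[g]
    A = (feats - mu).T                             # (m, n) mean-centralized
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    Ur = U[:, :k]                                  # U^(r) with r = k
    alpha = Ur.T @ A                               # (k, n) coefficients
    # edgeness: finite differences of the coefficients at window-size scale
    a = alpha.T.reshape(H, W, k)
    dlt = np.zeros((H, W))
    for dy, dx in [(s, 0), (-s, 0), (0, s), (0, -s)]:
        dlt += np.linalg.norm(a - np.roll(a, (dy, dx), axis=(0, 1)), axis=-1)
    keep = (dlt < delta * dlt.mean()).ravel()      # drop likely boundary pixels
    km = KMeans(n_clusters=k, n_init=10).fit(alpha.T[keep])
    gamma = km.cluster_centers_.T                  # (k, k) coefficient means
    c0 = Ur @ gamma + mu[:, None]                  # initial guess c^{0}
    return alpha, gamma, c0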
While the above resembles a robust method to retrieve an initial value for $c$ in the full-dimensional feature space, the dimension reduction we now have at hand also suggests itself to reduce the noise of the high-dimensional feature vectors and to increase their inter-region contrast within the subsequent variational segmentation framework. First of all, let us point out that as $U^{(m)}$ forms an orthonormal basis of $\mathbb{R}^m$, we can express the indicator (12) and thus the fidelity term in (5) in terms of $\gamma^{(m)}$ and $\alpha^{(m)}$:

$$\| F[g](x_i) - c_l \|^2 = \| \alpha_i^{(m)} - \gamma_l^{(m)} \|^2 \tag{14}$$

Furthermore, definition (13) can be rewritten as

$$U^{(m)} \gamma_l^{(m)} = c_l - \mu[g] = U^{(m)} \, \frac{\sum_{i=1}^{n} \alpha_i^{(m)} u_{il}}{\sum_{i=1}^{n} u_{il}} \tag{15}$$

i.e. the mean values $\gamma$ can be updated using the coefficients $\alpha$ instead of the feature vectors $F[g](x_i)$. Reducing the dimension to $r < m$ introduces an error, which can be bounded by the eigenvalues $\lambda_1 \geq \cdots \geq \lambda_m$ of $AA^T$:

$$\left| f_{li} - \| \alpha_i^{(r)} - \gamma_l^{(r)} \|^2 \right| \leq 2 \sum_{j=r+1}^{m} \lambda_j \tag{16}$$
This inequality can be deduced by applying the triangle inequality to the difference of the left- and right-hand sides in (14), splitting $\alpha_i^{(m)}, \gamma_l^{(m)}$ into $\alpha_i^{(r)}, \gamma_l^{(r)}$ and remaining parts $\bar{\alpha}_i, \bar{\gamma}_l$, as well as representing $\bar{\alpha}_i, \bar{\gamma}_l$ as convex combinations of the columns of $(U^{(m)})^T A$ with non-zero coefficients in columns corresponding to $\lambda_{r+1}, \ldots, \lambda_m$.
Note that the error in (16) can be estimated without calculating all $m$ eigenvalues, which may become computationally expensive when the dimension $m$ becomes large:

$$\sum_{j=r+1}^{m} \lambda_j = \| A \|_F^2 - \sum_{j=1}^{r} \lambda_j. \tag{17}$$

Here, $\| A \|_F = \sqrt{\sum_{i=1}^{m} \sum_{j=1}^{n} |A_{ij}|^2}$ denotes the Frobenius norm. This way, the error in the fidelity term can still be monitored when the eigenvectors of $AA^T$ are calculated iteratively, e.g. using a deflation type of strategy. We use $r = k$ within the entire framework. Algorithm 2 summarizes the proposed method. All numerical experiments use $\tilde{t}_{\max} = 3$. The edgeness threshold on the finite differences before clustering in Algorithm 2 is chosen as $\delta = 0.5$ (Table 2), $\delta = 0.25$ (Table 1, Figure 1) and $\delta = 1.0$ (Figure 2).
Algorithm 2 Variational multi-phase feature segmentation
  $A = (F[g](x_i))_{i=1}^{n}$
  $\alpha = (U^{(k)})^T A$
  $\Delta_i = \sum_{j : \, x_j = x_i \pm (0,s) \, \vee \, x_j = x_i \pm (s,0)} \| \alpha_i - \alpha_j \|$
  $\bar{\alpha} = \left( \alpha_i : \Delta_i < \delta \cdot \frac{1}{n} \sum_{i=1}^{n} \Delta_i \right)$
  $(\gamma_l)_{l=1}^{k} = \text{k-means}(\bar{\alpha})$
  $u^{\{0\}}_{li} = \delta_{l, \, \arg\min_{l'} \| \alpha_i - \gamma_{l'} \|^2}$
  $p^{\{0\}} = 0$
  repeat
    $f_{li} = \| \alpha_i - \gamma_l \|^2$, $l = 1, \ldots, k$, $i = 1, \ldots, n$
    $(u^{\{\tilde{t}+1\}}, p^{\{\tilde{t}+1\}}) = \text{Algorithm 1}(f, u^{\{\tilde{t}\}}, p^{\{\tilde{t}\}})$
    $\gamma_l = \sum_{i=1}^{n} \alpha_i u^{\{\tilde{t}+1\}}_{il} / \sum_{i=1}^{n} u^{\{\tilde{t}+1\}}_{il}$
  until $\tilde{t} = \tilde{t}_{\max}$
  return $\text{mask}_i = \arg\max_{l=1,\ldots,k} u^{\{\tilde{t}_{\max}+1\}}_{il}$
Properties and advantages of the method
PCA has been used, for instance, as a concept for dimension reduction of PET data and subsequent variational segmentation [26], as well as a tool for increasing the contrast in the region descriptors for natural color-texture images in a variational segmentation framework [17]. Moreover, Yuan et al. [33] utilized the related concept of singular value decomposition to compute a low-rank factorization of a local spectral histogram based feature matrix and estimate subsequent template features via clustering. We want to stress that the initialization step in the proposed method shares the idea of dimension reduction and clustering of features, albeit differing slightly in the details, and is, in this regard, similar to [33]. However, our work embeds these ideas into a variational segmentation framework, which grants the following two main advantages:
First, the proposed method can be applied to a very general class of feature extractors, since it only relies on the natural properties (9), (10). In particular, in contrast to [33], it does not rely on the assumption that the feature vectors are linear combinations of the mean features in each region (this assumption and its consequences are discussed later in this section). Among others, the generality of our framework allows the usage of globally coupling, convolution based linear transforms. Functions of this type, such as the short-time Fourier transform [3], the Stockwell transform [11], or different types of wavelet transforms [7], have been studied for texture segmentation and have shown good performance.
Second, the dimension reduction of the fidelity term helps to increase the degree to which it fulfills (9), (10).
In particular, incorporating the PCA not only in the initial clustering, but also throughout the entire variational minimization, helps to suppress noise in the fidelity term. In Section 4.2, we will demonstrate how effectively this strategy performs in the case of extremely noisy crystal images, using the Fourier transform as the feature extractor.
Unlike [23,33], the general applicability of the proposed framework is tied to the need for a regularization of the segment boundaries, which is covered by the Mumford-Shah model. This need arises from an unexpected behavior of the indicator functions near segment boundaries. Due to the window size, the feature extractor sees a mixture of different segments near boundaries. For general feature extractors, this means that feature vectors near boundaries are not necessarily a linear combination of the cluster centers corresponding to the adjacent segments. In case $k > 2$, it might happen that the feature vector at a boundary between two regions $\Omega_1^*, \Omega_2^*$ is nearer to the mean feature vector of a third region $\Omega_3^*$ than it is to that of $\Omega_1^*$ or $\Omega_2^*$ itself. This means that the indicator $f$ cannot necessarily identify the correct segment within a distance of $s$ to the sought segments. Note that this effect does not arise for $k = 2$. As mentioned above, the perimeter regularization within the Mumford-Shah model addresses this problem for practical purposes. For input data where the regularization alone is not sufficient, we suggest to combine feature extractors of different window sizes.
Beyond this, the proposed method is an extension of [23,33] in the sense that the decoupling of the coefficient representation from the segmentation allows for an outer iteration to refine the mean features, whereas in [33] the clusters are solely computed from the feature matrix.
Let us point out that, since the method is based on local windows, it has the common limitations inherent to such methods. The feature scale is tied to the window size $s$, so the method can only reliably detect regions that are at least somewhat larger than the window $W_s(x)$. Furthermore, special care has to be taken close to the boundary, where the window $W_s(x)$ leaves the support of the image. Please note that the proposed method enforces the region boundaries to approach the image boundary orthogonally, which is due to the natural boundary conditions in the Euler-Lagrange equation of (2). This effect can be reduced by introducing ghost cells at the image boundary with a zero extension of all indicators, but it is still noticeable (cf. Figure 1).
Applications and numerical results
Texture segmentation
Apart from plain gray value or color intensities, among the most thoroughly studied types of structures in image segmentation are textures [18]. In the image processing sense, a texture essentially consists in a more or less strictly repetitive pattern of the spatial arrangement of the gray or color values in an image. Thus, indicators for texture segmentation need to take into account image information from a whole neighborhood, at least on the scale of the spatial distance between repetitions. There are two main classes of operators that have been proposed in the literature, namely 1) local spectral histograms combined with a suitable bank of filters and 2) localized linear transforms, both of which fall into the class of feature extractors described earlier. In the context of texture segmentation, we limit our analysis to the first class, while the second class will be utilized for crystal segmentation in the next section.
Local spectral histograms are defined as follows: first a bank of $p$ filters is selected and applied to the image, resulting in a sequence of filtered images $g_1, \ldots, g_p$. Then, the feature extractor is defined by

$$(F_{SH}[g](x))_{ij} = \frac{1}{|W_s(x)|} \sum_{x_k \in W_s(x)} \int_{z_{ij}}^{z_{i,j+1}} \delta(z - g_i(x_k)) \, dz \tag{18}$$

Here, $z_{ij}$, $i = 1, \ldots, p$, $j = 1, \ldots, q+1$ define the binning of the histograms and are often chosen such that $z_{i,1} = \min_x g_i(x)$, $z_{i,q+1} = \max_x g_i(x)$ and equidistant in between. Thus, the dimension of the extracted feature at every pixel $x$ is $m = pq$. The most popular filter used in this context is arguably the Gabor filter. Other commonly used filters are Gaussian filters, Laplacian of Gaussian filters, or just the intensity filter (i.e. the identity). For a thorough description of spectral histograms of filtered images and their application to texture segmentation, we refer to [23].
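To make (18) concrete, the following is a hedged NumPy/SciPy sketch of local spectral histograms for a small illustrative filter bank; each per-bin indicator image is averaged over the window with a box filter, which matches the normalized sum in (18) up to boundary handling (uniform_filter reflects at the image borders). The filter choices and names are illustrative, not the exact bank used in the experiments below.

import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter, gaussian_laplace

def spectral_histograms(img, s, q=11):
    # illustrative bank: intensity, Gaussian, Laplacian of Gaussian
    bank = [img,
            gaussian_filter(img, sigma=2.0),
            gaussian_laplace(img, sigma=2.0)]
    feats = []
    for g in bank:
        edges = np.linspace(g.min(), g.max(), q + 1)
        for j in range(q):
            hi = edges[j + 1] if j < q - 1 else np.inf  # include max in last bin
            ind = ((g >= edges[j]) & (g < hi)).astype(float)
            # windowed mean of the bin indicator = histogram entry in (18)
            feats.append(uniform_filter(ind, size=2 * s + 1))
    F = np.stack(feats, axis=-1)        # (H, W, p * q)
    return F.reshape(-1, F.shape[-1])   # (n, m) feature matrix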
In the following, we compare our approach to the Outex US 00000 test suite of the Outex texture database (http://www.outex.oulu.fi) and the Prague ICPR2014 contest [15] (http://mosaic.utia.cas.cz/icpr2014/).
On the Outex database, we provide a thorough comparison between the proposed method and Factorization-Based Texture Segmentation (FSEG) [33]. We chose to focus on FSEG here, because it 1) ranks best in the ICPR2014 contest among methods with available code and 2) is similar to our framework. Table 1 quantifies the mean segmentation performance and its standard deviation over all 100 texture mosaics from the Outex US 00000 test suite. Three different versions of FSEG, described in the caption of Table 1, are used for this comparison. Note that running FSEG without its TxtMerge post-processing (first row) makes sense in this case, since the number of segments is known. However, this also disables filling holes in the segments, as seen in the sixth column of Figure 1. While FSEG*-TxtMerge performs best in correct segmentation (CS), and FSEG* achieves the smallest omission error (O), our method ranks highest in all remaining measures. Note that in Table 1 we also compare our method to plain clustering in order to evaluate the benefit of 1) the improved initialization strategy via PCA and 2) the subsequent variational optimization including region boundary smoothing. Indeed, the proposed method performs significantly better than plain clustering. Finally, a visual inspection of Figure 1 indicates that the proposed method provides a good compromise between fidelity of region boundaries and reduction of artifacts (holes, missing regions).

Table 1. Gray-scale texture segmentation comparison on the Outex US 00000 test suite with known number of segments (columns: CS ↑, O ↓, C ↓, CA ↑, CO ↑, CC ↑). FSEG [33] uses the ICPR2014 code of FSEG with fixed number of segments (segn = 5). FSEG* and Algorithm 2 both combine spectral histograms with 11 bins, window sizes $s = 15$ and $s = 30$ (stacked with weights 0.8 and 0.2, respectively), and Gabor filters of kernel sizes $\sigma = 5, 7, 9$ and orientations $\theta = 0, \frac{1}{2}\pi, \frac{1}{4}\pi, -\frac{1}{4}\pi$. FSEG*-TxtMerge is the same as FSEG* but runs without FSEG's TxtMerge post-processing. Bold face highlights the best, a star the second-best result in each column.
Next, we compare the proposed method to results from the Prague ICPR2014 contest [15]. Here, we use the same feature extractors as above (on the lightness channel), except that the kernel size $\sigma = 9$ is omitted. Therefore, we add an intensity filter on all three channels (in L*a*b color space) to each spectral histogram. The number of segments is estimated as $k = \min\{k' : \frac{1}{n} \sum_{j=k'+1}^{m} \lambda_j < \omega\}$ with $\omega = 0.05$. Since this estimate is not precise (even for an optimal choice of $\omega$), we additionally employ FSEG's TxtMerge post-processing. Table 2 quantifies the mean segmentation quality over all 80 colored texture mosaics from the Prague ICPR2014 contest dataset. While our method produces larger over-segmentation (OS) than the other best-ranked methods, indicating stronger overestimation of the number of segments, it performs best for under-segmentation (US), indicating a good coverage of all ground truth segments, reflecting the good initialization strategy. Moreover, our method performs second-best for correct segmentation (CS), omission error (O), class accuracy (CA) and correct assignment (CO). Most notably, according to all other presented measures, and in total half of them, our method performs best among all competitors.
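The segment-count rule above is easy to implement from the eigenvalue spectrum of $AA^T$; a small illustrative sketch (assuming eigvals is sorted in descending order; the result is clamped to at least one segment, which the text does not specify):

import numpy as np

def estimate_num_segments(eigvals, n, omega=0.05):
    # k = min{k' : (1/n) * sum_{j > k'} lambda_j < omega}
    for kp in range(len(eigvals) + 1):
        if eigvals[kp:].sum() / n < omega:
            return max(kp, 1)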
Note that VRA-PMCFA resolves fine boundary features but produces labeling noise, whereas our method smooths region boundaries in favor of suppressing labeling noise. Thus, it depends on the application which of the two methods is likely to be more suitable.

Table 2. Color texture segmentation on the Prague ICPR2014 contest dataset with unknown number of segments (columns: CS ↑, OS ↓, US ↓, ME ↓, NE ↓, O ↓, C ↓, CA ↑, CO ↑, CC ↑). Bold face highlights the best, a star the second-best value in each column, and † indicates that no corresponding publication could be found at the time of writing.
Unsupervised crystal segmentation
A fundamental research topic in materials science is the analysis and modeling of crystals. Modern transmission electron microscopes (TEM) allow for imaging at atomic scale, which makes the crystal grid visible. In a perfect setting, the crystal is given by a Bravais lattice

$$L_{a_1, a_2} = \{ n_1 a_1 + n_2 a_2 : n_1, n_2 \in \mathbb{Z} \}, \tag{19}$$

where $a_1, a_2 \in \mathbb{R}^2$ are the lattice vectors defining the orientation and spacing of the crystal. However, crystals of interest usually exhibit a more complicated behavior, like discontinuous orientation changes along so-called grain boundaries. The fully automatic analysis of grain geometries in TEM images is subject of ongoing research [12]. Available variational approaches for grain segmentation [4,5,12] are built on the assumption that all grains can be characterized through transformations of a local stencil $q_1, \ldots, q_N \in \mathbb{R}^2$, corresponding to a reference crystal given by all linear combinations of $a_1, a_2$ with coefficients in $\{-1, 0, 1\}$. Then, the Mumford-Shah model with an indicator function of the following type can be used [4,5]:
$$f_l(x) := \frac{1}{N} \sum_{k=1}^{N} d\big( g(x), \, g(x + M(\alpha_l) q_k) \big), \tag{20}$$

Here, $d$ denotes a suitable intensity distance function and $M(\alpha_l) \in \mathbb{R}^{2 \times 2}$ is an orthogonal matrix, rotating the stencil by the angle of the $l$-th grain relative to the reference. The need for a-priori knowledge of the reference crystal structure inherent to indicators like (20) is a severe limitation of available methods. As we will show, this limitation can be overcome by using our proposed framework with the modulus of the 2D-FFT as the local feature extractor:

$$F_{FFT}[g](x) = \left( \big| \mathrm{FFT}[g(W_s(x))]_{ij} \big| \right)_{ij}. \tag{21}$$
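A hedged NumPy sketch of this feature extractor, evaluating (21) with a naive loop over all pixels where the window fits inside the image (names are illustrative; a real implementation would vectorize or reuse overlapping windows):

import numpy as np

def fft_features(img, s):
    # F_FFT[g](x) = (|FFT[g(W_s(x))]_ij|)_ij, for pixels where the window fits
    H, W = img.shape
    w = 2 * s + 1
    feats = np.zeros((H - 2 * s, W - 2 * s, w * w))
    for y in range(s, H - s):
        for x in range(s, W - s):
            win = img[y - s:y + s + 1, x - s:x + s + 1]
            feats[y - s, x - s] = np.abs(np.fft.fft2(win)).ravel()
    return feats.reshape(-1, w * w)   # (n_interior, m) with m = (2s+1)^2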
Let us assume that the window $W_s(x)$ is large enough to cover at least one period of the crystal in either direction at any location. Then, the modulus of the Fourier transform $F_{FFT}[g](x)$ automatically encodes the local stencil $(M(\alpha_l) q_k)_{k=1}^{N}$ within the positions of Bragg reflections. Assuming that the window size $s$ matches the period of the crystal and the unit cell is a square, i.e. the discrete signals $g(W_s(x))$ are exactly periodic, the translation of the window across the image causes a phase shift in frequency domain. This phase shift is canceled by the absolute value in the modulus, making the feature extractor $F_{FFT}[g](x)$ translation invariant inside crystal regions with fixed lattice parameters. Though in practice this assumption is not met, artifacts in frequency domain caused by window boundary effects are easily handled by the perimeter regularization, as long as the window size $s$ is chosen reasonably large. Furthermore, these are also reduced through the proposed dimension reduction of the fidelity term (14). Note that crystal images are usually far from periodic at the boundary ($s$ pixels in orthogonal direction) and thus $F_{FFT}[g](x)$ cannot be reasonably defined there. Here, we simply extend the segmentation constantly to cover the boundary region. Figure 2 shows segmentation results obtained by the proposed method with a 2D-FFT modulus based feature extractor of sizes $s = 15$ (rows 1-3) and $s = 20$ (last row). The crystals in the first three rows consist of regions differing only in crystal orientation. From visual inspection, the results for the noise-free images are exact up to inter-atomic distance. In the first row, despite the noisy grain (third column) being hardly recognizable due to high noise power (Gaussian noise with a standard deviation of 100% of the maximum noise-free image intensity), the segmentation deviates little from that of the noise-free grain. Similar results are observed for the three-phase scenario (second row). A lower noise power (66%) was chosen, because otherwise the small bottom grain could not be detected, likely due to its small size compared to the window size. The multi-phase segmentation also works very well for five regions (third row) under the presence of very strong noise (100%). Furthermore, as seen in the bottom row of Figure 2, the proposed Fourier-based segmentation is feasible and robust to large amounts of noise (100%), even if the individual grains have entirely different crystal lattices. This is a type of material of practical relevance to material scientists that cannot be handled by the stencil-based methods [4,5,12].

Figure 2. Segmentation of crystals without (left) and with (right) noise, computed by the proposed method using the 2D-FFT modulus (21) and visualized as boundary curves (red). Bottom row images courtesy of Paul Voyles.
Conclusions
We have discussed a variational framework for multi-phase image segmentation based on structural information from high-dimensional local features. The framework imposes no special constraints on the used indicator functions, except that they are suitable for structure discrimination in the sense that they should be roughly homogeneous inside the structures of interest and provide some contrast across the different regions of interest. A robust initialization strategy for the segmentation algorithm was presented in this context, based on dimension reduction and decorrelation via PCA, as well as edgeness detection and clustering. Numerical results for two applications were presented. For texture segmentation, the proposed framework provides very competitive results, including state-of-the-art performance on the Prague benchmark. Using the 2D-FFT as feature extractor, robust and unsupervised crystal segmentation can be achieved, including segmentation of crystals with entirely different structure from extremely noisy data and without a priori information about the crystals. We would like to point out that the proposed method can also be applied directly to high-dimensional data, for instance in spectroscopy.
The source code of the proposed method, including executables reproducing all presented results, is available at http://nmevenkamp.github.io/pcams.
Figure 1. Segmentations of the first three mosaics from the Outex US 00000 test suite. The first column shows the original image, the second the ground truth, and the remaining columns the results by FSEG [33], Clustering, FSEG*, FSEG*-TxtMerge and Algorithm 2.
Acknowledgments
The authors thank Paul Voyles for providing simulated STEM images of a two-phase crystal.
References
[1] L. Ambrosio, N. Fusco, and D. Pallara. Functions of bounded variation and free discontinuity problems. Oxford Mathematical Monographs. Oxford University Press, New York, 2000.
[2] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation. PAMI, 33(5):898-916, 2011.
[3] J. Barba and J. Gil. An iterative algorithm for cell segmentation using short-time Fourier transform. J. Microsc., 184(2):127-132, 1996.
[4] B. Berkels, A. Rätz, M. Rumpf, and A. Voigt. Extracting grain boundaries and macroscopic deformations from images on atomic scale. J. Sci. Comput., 35(1):1-23, 2008.
[5] M. Boerdgen, B. Berkels, M. Rumpf, and D. Cremers. Convex relaxation for grain segmentation at atomic scale. In VMV, 2010.
[6] A. Chambolle and T. Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vision, 40(1):120-145, 2011.
[7] D. Charalampidis and T. Kasparis. Wavelet-based rotational invariant roughness features for texture classification and segmentation. TIP, 11(8):825-837, 2002.
[8] H.-D. Cheng, X. Jiang, Y. Sun, and J. Wang. Color image segmentation: advances and prospects. Pattern Recognition, 34(12):2259-2281, 2001.
[9] M. D. Collins, J. Xu, L. Grady, and V. Singh. Random walks based multi-image segmentation: Quasiconvexity results and GPU-based solutions. In CVPR, 2012.
[10] C. Ding and X. He. K-means clustering via principal component analysis. In ICML, 2004.
[11] S. Drabycz, R. G. Stockwell, and J. R. Mitchell. Image texture characterization using the discrete orthonormal S-transform. J. Digit. Imaging, 22(6):696-708, 2009.
[12] M. Elsey and B. Wirth. Fast automated detection of crystal distortion and crystal defects in polycrystal images. Multiscale Modeling & Simulation, 12(1):1-24, 2014.
[13] P. F. Felzenszwalb and D. P. Huttenlocher. Efficient graph-based image segmentation. IJCV, 59(2):167-181, 2004.
[14] M. M. Fraz, P. Remagnino, A. Hoppe, B. Uyyanonvara, A. R. Rudnicka, C. G. Owen, and S. A. Barman. Blood vessel segmentation methodologies in retinal images - a survey. Comput. Methods Programs Biomed., 108(1):407-433, 2012.
[15] M. Haindl and S. Mikes. Unsupervised image segmentation contest. In ICPR, 2014.
[16] M. Haindl, S. Mikes, and P. Pudil. Unsupervised hierarchical weighted multi-segmenter. In J. A. Benediktsson, J. Kittler, and F. Roli, editors, Multiple Classifier Systems, volume 5519 of Lecture Notes in Computer Science, pages 272-282. Springer Berlin Heidelberg, 2009.
[17] Y. Han, X.-C. Feng, and G. Baciu. Variational and PCA based natural image segmentation. Pattern Recognition, 46(7):1971-1984, 2013.
[18] D. E. Ilea and P. F. Whelan. Image segmentation based on the integration of colour-texture descriptors - a review. Pattern Recognition, 44(10):2479-2501, 2011.
[19] N. Kambhatla and T. K. Leen. Dimension reduction by local principal component analysis. Neural Comput., 9(7):1493-1516, 1997.
[20] H.-P. Kriegel, P. Kröger, and A. Zimek. Clustering high-dimensional data: A survey on subspace clustering, pattern-based clustering, and correlation clustering. ACM Trans. Knowl. Discov. Data, 3(1):1:1-1:58, 2009.
[21] F. Li, M. K. Ng, T. Y. Zeng, and C. Shen. A multiphase image segmentation method based on fuzzy region competition. SIAM J. Imaging Sci., 3(3):277-299, 2010.
[22] J. Li, J. M. Bioucas-Dias, and A. Plaza. Spectral-spatial hyperspectral image segmentation using subspace multinomial logistic regression and Markov random fields. TGRS, 50(3):809-823, 2012.
[23] X. Liu and D. Wang. Image and texture segmentation using local spectral histograms. TIP, 15(10):3066-3077, 2006.
[24] C. Michelot. A finite algorithm for finding the projection of a point onto the canonical simplex of R^n. J. Optim. Theory Appl., 50(1):195-200, 1986.
[25] D. Mumford and J. Shah. Optimal approximations by piecewise smooth functions and associated variational problems. Commun. Pure Appl. Math., 42(5):577-685, 1989.
[26] B. Parker and D. D. Feng. Variational segmentation and PCA applied to dynamic PET analysis. In Pan-Sydney Area Workshop on Visual Information Processing, 2003.
[27] L. Parsons, E. Haque, and H. Liu. Subspace clustering for high dimensional data: A review. SIGKDD Explor. Newsl., 6(1):90-105, 2004.
[28] T. R. Reed and J. H. DuBuf. A review of recent texture segmentation and feature extraction techniques. CVGIP: Image Understanding, 57(3):359-372, 1993.
[29] C. Tai, X. Zhang, and Z. Shen. Wavelet frame based multiphase image segmentation. SIAM J. Imaging Sci., 6(4):2521-2546, 2013.
[30] L. J. van der Maaten, E. O. Postma, and H. J. van den Herik. Dimensionality reduction: A comparative review. JMLR, 10(1-41):66-71, 2009.
[31] S. R. Vantaram and E. Saber. Survey of contemporary trends in color image segmentation. J. Electron. Imaging, 21(4):040901, 2012.
[32] T. P. Weldon, W. E. Higgins, and D. F. Dunn. Efficient Gabor filter design for texture segmentation. Pattern Recognition, 29(12):2005-2015, 1996.
[33] J. Yuan, D. Wang, and A. Cheriyadat. Factorization-based texture segmentation. TIP, 24(11):3488-3497, 2015.
[34] C. Zach, D. Gallup, J.-M. Frahm, and M. Niethammer. Fast global labeling for real-time stereo using multiple plane sweeps. In VMV, 2008.
| []
|
[
"Improving Diversity with Adversarially Learned Transformations for Domain Generalization",
"Improving Diversity with Adversarially Learned Transformations for Domain Generalization"
]
| [
"Tejas Gokhale [email protected] \nArizona State University\n\n",
"Rushil Anirudh [email protected] \nLawrence Livermore National Laboratory\n\n",
"Jayaraman J Thiagarajan \nLawrence Livermore National Laboratory\n\n",
"Bhavya Kailkhura [email protected] \nLawrence Livermore National Laboratory\n\n",
"# Chitta Baral \nArizona State University\n\n",
"Yezhou Yang [email protected] \nArizona State University\n\n"
]
| [
"Arizona State University\n",
"Lawrence Livermore National Laboratory\n",
"Lawrence Livermore National Laboratory\n",
"Lawrence Livermore National Laboratory\n",
"Arizona State University\n",
"Arizona State University\n"
]
| []
| To be successful in single source domain generalization (SSDG), maximizing diversity of synthesized domains has emerged as one of the most effective strategies. Recent success in SSDG comes from methods that pre-specify diversity inducing image augmentations during training, so that it may lead to better generalization on new domains. However, naïve pre-specified augmentations are not always effective, either because they cannot model large domain shift, or because the specific choice of transforms may not cover the types of shift commonly occurring in domain generalization. To address this issue, we present a novel framework called ALT: adversarially learned transformations, that uses an adversary neural network to model plausible, yet hard image transformations that fool the classifier. ALT learns image transformations by randomly initializing the adversary network for each batch and optimizing it for a fixed number of steps to maximize classification error. The classifier is trained by enforcing a consistency between its predictions on the clean and transformed images. With extensive empirical analysis, we find that this new form of adversarial transformations achieves both objectives of diversity and hardness simultaneously, outperforming all existing techniques on competitive benchmarks for SSDG. We also show that ALT can seamlessly work with existing diversity modules to produce highly distinct, and large transformations of the source domain leading to state-of-the-art performance. | 10.1109/wacv56688.2023.00051 | [
"https://export.arxiv.org/pdf/2206.07736v2.pdf"
]
| 249,712,431 | 2206.07736 | 5cdd39c55284e5167cbb4c07a980118b339a05ae |
Improving Diversity with Adversarially Learned Transformations for Domain Generalization
Tejas Gokhale [email protected]
Arizona State University
Rushil Anirudh [email protected]
Lawrence Livermore National Laboratory
Jayaraman J Thiagarajan
Lawrence Livermore National Laboratory
Bhavya Kailkhura [email protected]
Lawrence Livermore National Laboratory
# Chitta Baral
Arizona State University
Yezhou Yang [email protected]
Arizona State University
Improving Diversity with Adversarially Learned Transformations for Domain Generalization
Code: https://github.com/tejas-gokhale/ALT
To be successful in single source domain generalization (SSDG), maximizing diversity of synthesized domains has emerged as one of the most effective strategies. Recent success in SSDG comes from methods that pre-specify diversity inducing image augmentations during training, so that it may lead to better generalization on new domains. However, naïve pre-specified augmentations are not always effective, either because they cannot model large domain shift, or because the specific choice of transforms may not cover the types of shift commonly occurring in domain generalization. To address this issue, we present a novel framework called ALT: adversarially learned transformations, that uses an adversary neural network to model plausible, yet hard image transformations that fool the classifier. ALT learns image transformations by randomly initializing the adversary network for each batch and optimizing it for a fixed number of steps to maximize classification error. The classifier is trained by enforcing a consistency between its predictions on the clean and transformed images. With extensive empirical analysis, we find that this new form of adversarial transformations achieves both objectives of diversity and hardness simultaneously, outperforming all existing techniques on competitive benchmarks for SSDG. We also show that ALT can seamlessly work with existing diversity modules to produce highly distinct, and large transformations of the source domain leading to state-of-the-art performance.
Introduction
Domain generalization is the problem of making accurate predictions on previously unseen domains, especially when these domains are very different from the data distribution on which the model was trained. This is a challenging problem that has seen steady progress over the last few years [4, 41, 33, 43, 31]. This paper focuses on the special case - single source domain generalization (SSDG) - where the model has access only to a single training domain, and is expected to generalize to multiple different testing domains. This is especially hard because of the limited information available to train the model with just a single source.

*Work done during internship at LLNL
When multiple source domains are available (MSDG), recent analysis [18] shows that even simple methods like minimizing empirical risk jointly on all domains, performs better than most existing sophisticated formulations. A corollary to this finding is that success in DG is dependent on diversity -i.e., exposing the model to as many potential training domains as possible. As the SSDG problem allows access only to a single training domain, such an exposure must come in the form of diverse transformations of the source domain that can simulate the presence of multiple domains, ultimately leading to low generalization error.
The idea of using diversity to train models has been sufficiently explored - [22,45,46,7] show that a diverse set of augmentations during training improves a model's robustness under distribution shifts. Specific augmentations can be used if the type of diversity encountered at test time is known; e.g., if it is known that the test set contains random combinations of rotation, translation, and scaling, using augmentations correlated with this domain shift would lead to good performance [2,42,16]. However, since we cannot assume knowledge of the test domain under the SSDG problem statement, the extent to which the model needs to be exposed to specific augmentations remains unclear. Augmentation methods impose a strong prior in terms of the types of diversity that the model is exposed to, which may not match with desirable test-time transformations. As we will show in this paper, data augmentation methods that produce good results on one dataset do not necessarily work on other datasets - in some cases, they may even hurt performance!

In addition to such a knowledge gap, unfortunately, such augmentation methods can only achieve invariance under small distribution shifts like unknown corruptions, noise, or adversarial perturbations, but do not work effectively when the distribution shift is large and of a semantic nature, as in the case of domain generalization. On the other hand, some recent methods have directly used randomized convolutions to synthesize diverse image manipulations [43], motivated by the large space of potentially realizable functions induced by a convolutional layer, which cannot be easily emulated using simple analytical functions.

Figure 1. ALT consists of a diversity module (data augmentation functions such as AugMix [22] or RandConv [43]) and an adversary network (to learn image transformations that fool the classifier). We show an example from the PACS benchmark under the single-source domain generalization setting, with real photos (P) as the source domain and art paintings (A), cartoons (C), and sketches (S) as the target domains. The plot summarizes our results: while diversity alone improves performance over the naive ERM baseline, adapting this diversity using adversarially learned transformations (ALT) provides a significant boost for domain generalization on multiple benchmarks.
In this paper we hypothesize that, while diversity is necessary for single-source domain generalization, diversity alone is insufficient -blindly exposing a model to a wide range of transformations may not guarantee greater generalization. Instead, we argue that carefully designed forms of diversity are needed -specifically those that can expose the model to unique and task-dependent transformations with large semantic changes that are otherwise unrealizable with plug-and-play augmentations as before. To this end, we introduce an adversary network whose objective is to find plausible image transformations that maximize classification error. This adversary network enables access to a much richer family of image transformations as compared to prior work on data augmentation. By randomly initializing the adversary network in each iteration, we ensure the adversarial transformations are unique and diverse themselves. We enforce a consistency between a diversity module and the adversary network during training along with the classifier's predictions, so that together they expose the model to learn from both diverse and challenging domains.
Our method, dubbed ALT (adversarially learned transformations), offers an interplay between diversity and adversity. Over time, a synergistic partnership between the diversity and adversary networks emerges, exposing the model to increasingly unique, challenging, and semantically diverse examples that are ideally suited for single source domain generalization. The adversary network benefits from the classifier being exposed to the diversity module, and as such avoids trivial adversarial samples with appropriate checks. This allows the adversarial maximization to explore a wider space of adversarial transformations that cannot be covered by prior work on pixel-level additive perturbations.
We demonstrate this advantage of our method empirically on multiple benchmarks -PACS [27], Office-Home [39], and Digits [41]. On each benchmark, we outperform the state-of-the-art single source domain generalization methods by a significant margin. Moreover, since our framework disentangles diversity and adversarial modules, we can combine it with various diversity enforcing techniques -we identify two such state-of-the-art methods with AugMix [22], and RandConv [43], and show that placing them inside our framework leads to significantly improved generalization performance over their vanilla counterparts. We illustrate this idea in Figure 1 where we show an image of a horse from the 'photo' training distribution in PACS and the different styles of cartoon/sketch/art painting horses that may be encountered at test time.
Contributions: We summarize our contributions below.
• We introduce a method, dubbed ALT, which produces adversarially learned image transformations that expose a classifier to a large space of image transformations for superior domain generalization performance. ALT performs adversarial training in the parameter space of an adversary network as opposed to pixel-level adversarial training.
• We show how ALT integrates diversity-inducing data augmentation and hardness-inducing adversarial training in a synergistic pipeline, leading to diverse transformations that cannot be realized by blind augmentation strategies or adversarial training methods on their own.
• We validate our methods empirically on three benchmarks (PACS, Office-Home, and Digits) demonstrating state-of-the-art performance and provide analysis of our approach.
Related Work
Multi-Source Domain Generalization. Domain generalization has been explored under both multi-source (MSDG) and single-source (SSDG) settings. For the MSDG task, multiple source domains are available for training and performance is evaluated on other unseen target domains. Techniques designed for MSDG seek to utilize these multiple domains to perform feature fusion [37], learning domain-invariant features [14], meta-learning [28], invariant risk minimization [1], learning mappings between multiple training domains [35], style randomization [31], and learning a conditional generator to synthesize novel domains using cycle-consistency [49]. Gulrajani et al. [18] provide an extensive comparative study of these approaches and report that simply performing ERM on the combination of source domains leads to the best performance. Many benchmarks have been proposed to evaluate MSDG performance such as PACS [27], OfficeHome [39], Digits [41], and WILDS [24], which is a compendium of MSDG datasets.
In the Single-Source Domain Generalization setting, only one domain is available for training; SSDG is harder since MSDG methods are infeasible, and most work has therefore focused on data augmentation. Notable among these methods is the idea of adversarial data augmentation - ADA [41] and M-ADA [33] apply pixel-level additive perturbations to the image in order to fool the classifier. Resulting images are used as augmented data to train the classifier. RandConv [43] shows that shape-preserving transformations in the form of random convolutions of images lead to impressive performance gains on Digits.
Adversarial Attack and Defense. Adversarial attack algorithms have been developed to successfully fool image classifiers via pixelwise perturbations [17,29,3,12]. Algorithms have been developed to defend against such adversarial attacks [29,11,44,23]. The scope of this paper is not to perform adversarial attack and defense, but to develop a framework to obtain adversarially generated samples that improve domain generalization performance.
Adversarial Training. In ALT, we emphasize the nature of the diversity that can be acquired during training, which is crucial in the single-source setting. ALT learns adversarial perturbations in the function space of neural network weights. This gives us access to a wider and richer space of augmentations compared to pixel-wise perturbations such as ADA and M-ADA, or combinatorial augmentation search methods such as ESDA [40]. The adversarial component in ALT allows the network to seek newer and harder transformations for every batch as training progresses, which cannot be achieved with static augmentations such as AugMix or RandConv, or by utilizing normalization layer statistics for style debiasing [31].
Robustness to Image Corruptions. There has also been interest in training classifiers that are robust to corruptions that occur in the real world, such as different types of noise and blur, artifacts due to compression techniques, and weather-related environments such as fog, rain, and snow. [38,15] show that training models with particular types of corruption augmentations does not guarantee robustness to other unseen types of corruptions or different levels of corruption severity. Hendrycks et al. [21] curate benchmarks (ImageNet-C and CIFAR-C) to test robustness along a fixed set of corruptions. They also provide a benchmark called ImageNet-P which tests robustness against other corruption types such as small tilts and changes in brightness. A similar benchmark for corruptions of handwritten digit images, MNIST-C [30] has also been introduced.
Data Augmentation has been an effective strategy for improving in-domain generalization using simple techniques such as random cropping, horizontal flipping [20], occlusion or removal of patches [10,48]. Data augmentation techniques have been shown to improve robustness against adversarial attacks and natural image corruptions [46,45,7]. Learning to augment data has been explored in the context of object detection [50] and image classification [34,6,47].
Proposed Approach
Under the single-source domain generalization setting, consider the training dataset D containing N image-label pairs $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{N}$, and a classifier f parameterized by neural network weights θ. The standard empirical risk minimization (ERM) approach seeks to learn θ by minimizing the in-domain risk measured by a suitable loss function such as the cross-entropy loss:

$$\mathcal{R}_{\mathrm{ERM}} = \mathbb{E}_{x \in \mathcal{D}}\, \mathcal{L}_{\mathrm{CE}}(f(x; \theta), y). \tag{1}$$
For SSDG, we are interested in a classifier that has the least risk on several unseen target domains D′ that are not observed during training. We consider SSDG under covariate shift, i.e., when P(X) changes but P(Y|X) remains the same. Our approach builds on diversity-based and adversarial augmentation approaches, which we outline next.

Generalization via Maximizing Diversity. A successful strategy to improve generalization on unseen domains is to utilize a set of pre-defined data augmentations F_div to emphasize the invariance properties that are important for f(θ) to learn. Such methods modify Equation 1 as:

$$\mathcal{R}_{\mathrm{div}} = \mathbb{E}_{x \in \mathcal{D}}\, \mathcal{L}_{\mathrm{CE}}(f(x; \theta), y) + \lambda_{KL}\, D_{KL}, \tag{2}$$
where D_KL is a consistency term, typically a divergence such as the KL-divergence, between the softmax probabilities of the classifier obtained with the clean and transformed data, respectively, e.g., D_KL = KL(f(x) ‖ f(F_div(x))). The choice of F_div leads to different types of augmentations; for instance, AugMix [22] utilizes a combination of pre-defined transformations such as shear, rotate, and color jitter, while Xu et al. [43] propose to apply a randomly initialized convolutional layer to the input image. Methods such as these are effective strategies to enforce diversity-based consistencies for generalization. Although they have the advantage of being simple pre-defined transformations that are dataset agnostic, they suffer from drawbacks under the SSDG setting: when executed on their own, they may not capture sufficient diversity in terms of large semantic shifts, such as when expecting generalization on sketches from a model trained on photos.

Generalization via Adversarial Perturbations. An alternative strategy is to augment the source domain with adversarial examples [41, 33]. This is commonly enforced by learning an additive noise vector which, when added, maximizes classifier cost. Unfortunately, in the case of domain generalization, these methods have failed to match the performance of diversity-only methods optimizing the cost outlined in Equation 2. This is in part because they lack sufficient diversity, and by design they can only guarantee robustness to small perturbations from the training domain, as opposed to large semantic and stylistic shifts, which are crucial for domain generalization.
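To make the diversity-based objective of Equation 2 concrete, the following is a minimal PyTorch sketch, assuming a generic `augment` callable that stands in for F_div (e.g., AugMix or a random convolution); the function and argument names are ours, not from any released code.

```python
import torch
import torch.nn.functional as F

def diversity_risk(model, augment, x, y, lambda_kl=0.75, eps=1e-8):
    """Cross-entropy on clean images plus a KL consistency term (Eq. 2)."""
    logits_clean = model(x)
    loss_ce = F.cross_entropy(logits_clean, y)

    # D_KL = KL(f(x) || f(F_div(x))): consistency between predictions
    # on the clean image and its diversity-transformed version.
    p_clean = F.softmax(logits_clean, dim=1)
    p_aug = F.softmax(model(augment(x)), dim=1)
    d_kl = F.kl_div(p_aug.clamp_min(eps).log(), p_clean, reduction="batchmean")

    return loss_ce + lambda_kl * d_kl
```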
ALT: Adversarially Learned Transformations
While diversity-only methods have shown promise, they are limited in their ability to generalize to domains with large shifts. On the other hand, techniques based purely on adversarial hardness are theoretically well-motivated but do not match the performance of diversity-based methods. In this paper, we propose a new approach that takes the best of these two worlds, using an adversary network that is trained to create semantically consistent image transformations that fool the classifier. These manipulated images are then used during training as examples on which the classifier must learn invariance. Since these perturbations are parameterized as learnable weights of a neural network, the network is free to choose large, complex transformations without being restricted to additive noise as in previous work [41]. Further, this network is randomly initialized for each batch, making the types of adversarial transformations discovered unique and diverse over the course of training. Formally, the adversary network g transforms the input image as
$$x_g = g(x), \quad \text{where } g : \mathbb{R}^{C \times H \times W} \to \mathbb{R}^{C \times H \times W}, \tag{3}$$
where C, H, and W are the number of channels, height, and width of the input images, and g is parameterized by weights ϕ. This adversary network forms the backbone of our method, ALT.
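For concreteness, one possible PyTorch realization of such an image-to-image adversary is sketched below; it mirrors the fully-convolutional design described later in the Implementation paragraph (3×3 convolutions with LeakyReLU activations), but the class name, hidden width, and defaults are our assumptions.

```python
import torch.nn as nn

class AdversaryNet(nn.Module):
    """Fully-convolutional image-to-image network g: R^{CxHxW} -> R^{CxHxW}."""

    def __init__(self, channels=3, hidden=32, num_layers=5):
        super().__init__()
        layers, c_in = [], channels
        for _ in range(num_layers - 1):
            layers += [nn.Conv2d(c_in, hidden, kernel_size=3, padding=1),
                       nn.LeakyReLU(0.2)]
            c_in = hidden
        # Final layer maps back to image space so x_g has the input's shape.
        layers.append(nn.Conv2d(c_in, channels, kernel_size=3, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)  # x_g = g(x; phi)
```

Because g is randomly re-initialized for every batch (see Algorithm 1), each batch explores a different region of transformation space.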
Algorithm 1 Adaptive Diversity via ALT
Input: Source dataset $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{N}$
Output: Network parameters θ*
1:  Initialize: θ ← θ_0                          ▷ weights of f()
2:  for each t ∈ {1 … T} do
3:    x_t, y_t ∼ D                               ▷ sample input batch
4:    if t < T_pre then
5:      θ ← θ − η ∇L_CE(f(x_t; θ), y_t)
6:    else
7:      ρ ← ρ_0, ϕ ← ϕ_0                         ▷ weights of r(), g()
8:      for each i ∈ 1 … m_adv do
9:        ŷ_g ← f(g(x; ϕ); θ)
10:       ϕ ← ϕ + ∇(L_cls(ŷ_g, y) − L_TV(x_g))
11:     end for each
12:     θ ← θ − η ∇L_ALT                         ▷ see Eq. 7
13:   end if
14: end for each

To train ALT, we set up an adversarial optimization problem with the goal of producing transformations which, when applied to the source domain, can fool the classifier f. While existing efforts dealing with robustness to small corruptions use ℓ_p norm-bounded pixel-level perturbations to fool the model, we find that this is not sufficient for domain generalization, as such methods do not allow searching for adversarial samples with semantic changes. Instead, we directly perform adversarial training in the space of ϕ, i.e., the neural network weights of ALT. Given input images x, parameters ϕ are randomly initialized, and the corresponding adversarial samples x_g are found as:

$$x_g = \max_{\phi}\; \mathcal{L}_{CE}(f(g(x; \phi); \theta), y) - \mathcal{L}_{TV}(g(x; \phi)). \tag{4}$$
The first term seeks to update ϕ to maximize the classifier loss, while L_TV (total variation) [36] acts as a smoothness regularization for the generated image x_g = g(x; ϕ). The maximization in Eq. 4 is solved by performing m_adv steps of gradient descent with learning rate η_adv. We note a few important aspects of ALT. Unlike existing methods that explicitly place an ℓ_p-norm constraint on the adversarial perturbations, we control the strength of the adversarial examples by limiting the number of optimization steps taken by g to maximize classification error. Next, since we randomly initialize g for each batch, the network is reset to a random function; in fact, when the number of adversarial steps is set to 0, g behaves similarly to RandConv [43], since it is only a set of convolutional layers with additional non-linearity. Finally, in addition to limiting the number of adversarial steps, we place a simple total variation loss on the generated image to force smoothness in the output. This naturally suppresses high-frequency noise-like artifacts and encourages realistic image transformations. It also prevents the optimization from resorting to learning trivial transformations in order to maximize classifier loss, such as noise addition or entirely removing or obfuscating the semantic content of the image.

Improving Diversity. The samples x_g obtained by solving Equation 4 represent hard adversarial images that can be leveraged by the model to generalize to domain shift. But our framework also lends itself to exploiting other forms of naïve diversity achieved by methods like RandConv and AugMix. We represent these "diversity modules" as r, which produce outputs x_r = r(x). Our method utilizes these samples in the training process by enforcing a consistency between the predictions of the classifier on the source image and its transformations from r and g. By including the diversity module in the optimization process, the invariances inferred by the classifier lead to stronger and more diverse adversarial examples in future epochs. Eventually, a synergistic partnership emerges between the diversity module and the adversary network to produce a wide range of image transformations that are significantly different from the source domain.
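A hedged sketch of the inner maximization of Eq. 4 follows: m_adv gradient steps on the adversary weights ϕ, trading off classifier loss against a total-variation penalty. The anisotropic TV formulation below is our choice; the paper only specifies total variation [36].

```python
import torch
import torch.nn.functional as F

def tv_loss(img):
    """Anisotropic total variation of a batch of images (B, C, H, W)."""
    dh = (img[:, :, 1:, :] - img[:, :, :-1, :]).abs().mean()
    dw = (img[:, :, :, 1:] - img[:, :, :, :-1]).abs().mean()
    return dh + dw

def adversarial_update(g, f, x, y, m_adv=10, eta_adv=5e-5):
    """Maximize L_CE(f(g(x)), y) - L_TV(g(x)) over the weights of g (Eq. 4)."""
    opt = torch.optim.SGD(g.parameters(), lr=eta_adv)
    for _ in range(m_adv):
        x_g = g(x)
        # Minimizing the negated objective performs gradient ascent on phi.
        loss = -(F.cross_entropy(f(x_g), y) - tv_loss(x_g))
        opt.zero_grad()
        loss.backward()  # note: f's grads are also populated; zero them before updating theta
        opt.step()
    return g(x).detach()
```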
Let p_c, p_r, and p_g denote the softmax prediction probabilities of classifier f on x, x_r, and x_g, respectively. Then the consistency between these predictions can be computed using the Kullback-Leibler divergence [25] as:

$$\mathcal{L}_{KL} = D_{KL}(p_{mix} \,\|\, p_c) + w_r\, D_{KL}(p_{mix} \,\|\, p_r) + (2 - w_r)\, D_{KL}(p_{mix} \,\|\, p_g), \tag{5}$$
where p_mix denotes the mixed prediction:

$$p_{mix} = \frac{p_c + w_r\, p_r + (2 - w_r)\, p_g}{3}. \tag{6}$$
The weight w_r ∈ [0, 2] controls the relative contribution of diversity and adversity to the consistency loss; w_r > 1 implies more weight on consistency with the diversity module, and w_r < 1 implies more weight on consistency with the adversary network. In our experiments, we use w_r = 1, i.e., diversity and adversity are given equal importance. Our final loss function for training the classifier is given as the convex combination of the consistency L_KL and the classifier loss L_cls = L_CE(f(g(x); θ), y), as shown below:
$$\mathcal{L}_{ALT} = (1 - \lambda_{KL})\, \mathcal{L}_{cls} + \lambda_{KL}\, \mathcal{L}_{KL}. \tag{7}$$
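The full objective can be assembled as in the sketch below, assuming the three sets of logits have already been computed; the function and variable names are illustrative.

```python
import torch.nn.functional as F

def alt_loss(logits_clean, logits_div, logits_adv, y,
             w_r=1.0, lambda_kl=0.75, eps=1e-8):
    """Combine the classifier loss with the 3-way KL consistency (Eqs. 5-7)."""
    p_c = F.softmax(logits_clean, dim=1)
    p_r = F.softmax(logits_div, dim=1)
    p_g = F.softmax(logits_adv, dim=1)

    # Eq. 6: mixture of the three predictive distributions.
    p_mix = (p_c + w_r * p_r + (2.0 - w_r) * p_g) / 3.0

    def kl(p, q):  # D_KL(p || q), clamped for numerical stability
        return F.kl_div(q.clamp_min(eps).log(), p, reduction="batchmean")

    # Eq. 5: consistency of each prediction with the mixture.
    l_kl = kl(p_mix, p_c) + w_r * kl(p_mix, p_r) + (2.0 - w_r) * kl(p_mix, p_g)

    # Eq. 7: convex combination with the classifier loss on x_g.
    l_cls = F.cross_entropy(logits_adv, y)
    return (1.0 - lambda_kl) * l_cls + lambda_kl * l_kl
```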
Implementation. Algorithm 1 shows how ALT is implemented. In our experiments, we use RandConv or AugMix as the diversity module r and a fully-convolutional image-to-image network as the adversary network g; g has 5 convolutional layers with kernel size 3 and LeakyReLU activations. We train the classifier for a total of T batch iterations, of which T_pre iterations are used for pre-training the classifier using standard ERM on only the source domain (with only L_cls). During each batch iteration t > T_pre, we randomly initialize the weights of both r and g with the "Kaiming Normal" strategy [19] as our starting point for producing diverse perturbations, and update g using the adversarial cost in Equation 4. After g has been adversarially updated for the given batch, we use the combination of classifier loss and consistency in Equation 7 to update the model parameters θ.
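The per-batch re-initialization can be sketched as below; the helper name is ours, and the tiny nn.Sequential stands in for the actual r and g networks.

```python
import torch.nn as nn

def reinit_kaiming(module):
    """Reset conv/linear weights with the "Kaiming Normal" strategy."""
    for m in module.modules():
        if isinstance(m, (nn.Conv2d, nn.Linear)):
            nn.init.kaiming_normal_(m.weight)
            if m.bias is not None:
                nn.init.zeros_(m.bias)

g = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(8, 3, 3, padding=1))
for _ in range(3):        # stand-in for batch iterations with t > T_pre
    reinit_kaiming(g)     # g restarts from a fresh random function each batch
    # ... run m_adv adversarial steps on g, then update theta with L_ALT
```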
Experiments
We validate our approach with extensive empirical analysis of ALT and its constituent parts, using three widely used domain generalization benchmarks.
Datasets. The SSDG setup is as follows: we train on a single source domain and evaluate performance on unobserved target (or test) domains, with no access to any data from them during training. We demonstrate the effectiveness of our approach using three popular domain generalization benchmark datasets: (a) PACS [27] consists of images belonging to 7 classes from 4 domains (photo, art painting, cartoon, sketch); we choose one domain as the source and the rest as target domains. (b) Office-Home [39] consists of images belonging to 65 classes from 4 domains (art, clipart, real, product); we choose one domain as the source and the rest as target domains. (c) Digits: we follow the setting from Volpi et al. [41] and use 10,000 images from MNIST [26] as the source dataset, and USPS [9], SVHN [32], MNIST-M, and SYNTH [13] as the target datasets.
Evaluation. For all datasets, we train models on each individual domain and test on the remaining domains. We provide fine-grained results on each test set as well as the average domain generalization performance. We compare with several state-of-the-art SSDG techniques and evaluate three variants of our method: ALT_g-only refers to the simplest form of our method that only uses the adversary network during training, without an explicit diversity module r. ALT_RandConv and ALT_AugMix utilize RandConv and AugMix, respectively, as the diversity module, where the consistency is placed as explained in Equation 5.
PACS
Baselines. Our baselines are JiGen [4], ADA [41], AugMix [22], RandConv [43], and SagNet [31], which is designed to reduce style bias using normalization techniques. We also implement a combination of RandConv and AugMix: instead of the ALT formulation of a diversity module plus our adversary network, we use two diversity modules (RandConv and AugMix) and enforce the same consistency as in Equation 5. This allows us to compare how effective the adversary network is relative to using two sources of diversity. We use ResNet18 [20] pre-trained on ImageNet as our model architecture and train all models for 2000 iterations with a batch size of 32, learning rate 0.004, SGD optimizer with a cosine annealing learning rate scheduler, weight decay of 0.0001, and momentum 0.9. For ALT, we set the consistency coefficient λ_KL = 0.75, adversarial learning rate η_adv = 5e-5, number of adversarial steps m_adv = 10, and w_r = 1.0.
Results. Results are shown in Table 1. We observe that ALT without a diversity module (ALT_g-only) surpasses the generalization performance of all prior methods, including the diversity methods RandConv and AugMix and the previous best SagNet [31]. ALT with adaptive diversity further improves the results, and ALT_AugMix establishes a new state-of-the-art accuracy of 64.7%. All three variants of ALT are better than the combination of RandConv+AugMix, providing further evidence that adversarially learned transformations are more effective than combinations of diversity-based augmentations. The Sketch (S) target domain (human-drawn black-and-white sketches of real objects) has been the most difficult for previous methods; the difficulty can be observed in terms of performance in columns A→S, C→S, and P→S. ALT significantly improves the performance on the sketch target domain. Generalizing from photos as source to C, S, A as targets is a very realistic setting, since large-scale natural image datasets such as ImageNet [8] are widely used and publicly available, while data for sketches, cartoons, and paintings are limited. ALT is the best model under this realistic setting.
Office-Home
Baselines. For Office-Home, we follow the protocol of the previous state-of-the-art SagNet [31] and use ResNet50 as the model architecture. Note that we do not perform any hyperparameter tuning for Office-Home and directly apply the identical training settings and hyperparameters from PACS.
Results. Table 2 shows the results on Office-Home. We observe that RandConv (previous best on Digits) and SagNet (previous best on PACS) perform worse than ERM on Office-Home, while AugMix is better by 2.44%. The combination of RandConv+AugMix is also worse than the ERM baseline. All three variants of ALT surpass prior results, with ALT_AugMix resulting in the best accuracy of 59.45%. The most difficult target domain for previous methods is Clipart (C), possibly because most clip-art images have white backgrounds, while real-world photos (R) and product images are naturally occurring. ALT improves performance in each case with C as the target domain. An observation similar to PACS can also be made here: ALT is the best model under the realistic setting of generalizing from widely available real photos (R) to other domains.
Digits
Baselines. Our baselines include a naïve "source-only" model trained using empirical risk minimization (ERM) on the source dataset, M-ADA [33], an adversarial data augmentation method, and AugMix [22] and RandConv [43], which exploit diversity through consistency constraints. We also compare with ESDA [40], an evolution-based search procedure over a pre-defined set of augmentations [6]. We use DigitNet [41] as the model architecture for all models for a fair comparison. All models are trained for T = 10000 iterations, with a batch size of 32 and a learning rate of 0.0001, using the Adam optimizer. For ALT, we set the consistency coefficient λ_KL = 0.75, adversarial learning rate η_adv = 5e-6, number of adversarial steps m_adv = 10, and equal weight w_r = 1.0 for the diversity and adversary networks.
Results. Table 3 shows that pixel-level adversarial training approaches (ADA and M-ADA) offer only marginal improvements over the naïve ERM baseline. The results for diversity-promoting data augmentation methods are mixed: while AugMix is only 1.09% better than ERM, RandConv provides a significant boost. Interestingly, the base version of our approach, ALT_g-only, which is exclusively based on adversarial training, is significantly better than pixel-level adversarial training. More importantly, it is also better than the diversity method AugMix, while performing lower than RandConv by a small margin of 0.39%. When we train ALT with adaptive diversity (ALT_RandConv and ALT_AugMix), we achieve the best performance, beating the previous state-of-the-art. SVHN and SYNTH are the hardest target domains as they contain real-world images of street signs or house number signs, whereas USPS is closely correlated with MNIST, both being black-and-white centered images of handwritten digits, and MNIST-M is derived from MNIST but with different backgrounds. AugMix fares poorly on both real-world datasets, but is able to generalize well to MNIST-M and USPS. Although AugMix results in an average accuracy of 54.59% on the target domains, when used in conjunction with ALT, ALT_AugMix leads to a large gain of 19.79%, highlighting the significance of the adversary network.

Table 3. Single-source domain generalization accuracy (%) on digit classification, with MNIST-10K as source and MNIST-M [13], SVHN [32], USPS [9], and SYNTH [13] as target domains. Note: ADA and M-ADA do not report standard deviation.
Analysis of ALT
In this section we study the various components of ALT, and provide insights into their impact on generalization.
ALT is better than naïve diversity.
Our first big insight is that ALT without an explicit diversity module (ALT_g-only) still outperforms all the top-performing methods across the benchmarks we evaluated on, indicating that learned adversarial transformations are a powerful way to train classifiers for generalization.
Our next observation is that ALT makes the choice of diversity module fairly arbitrary. We see this effect on multiple benchmarks: for example, on the Digits benchmark shown in Table 3, AugMix has relatively poor generalization performance when compared with the baseline ERM, whereas ALT_AugMix achieves state-of-the-art performance. This is again seen on the Office-Home benchmark shown in Table 2, where RandConv is worse than ERM, but ALT_RandConv is one of the best-performing methods. Thus, irrespective of the choice of diversity module, the adversarially learned transformations benefit generalization on all benchmarks.
In Figure 2 (left panel) we analyze the diversity introduced by ALT on the Digits benchmark, in comparison to the source distribution, the target (OOD) distribution, and the distribution of RandConv augmentations. While RandConv does simulate a domain shift compared to the source, most RandConv points are clustered close to each other. The diversity due to ALT, however, is considerably larger, and ALT samples are spread widely across the tSNE space. We believe this is because data augmentation functions have a fixed type of diversity (a random convolution filter in the case of RandConv), while ALT searches for adversarial transformations for each batch; this leads to novel types of diversity for each batch of training samples. We also show qualitative examples of the image transformations learned with ALT in Figure 2, and it is clear that ALT achieves far more diverse and larger transformations of the input images than previous data augmentation techniques. A similar comparison of ALT with AugMix [22] is shown in the supplementary material.
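The tSNE comparison can be reproduced along the following lines; here random arrays stand in for penultimate-layer features of the four image groups, so the snippet only illustrates the plumbing.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
groups = ["source", "randconv", "alt", "target"]
feats = {name: rng.normal(loc=i, size=(200, 64))  # placeholder features
         for i, name in enumerate(groups)}

X = np.concatenate([feats[name] for name in groups])
emb = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(X)
# Scatter `emb` per 200-row block to visualize how widely each set spreads.
```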
Effect of Varying ALT Hyperparameters.
The three main hyper-parameters that control ALT are:
(1) λ_KL, the coefficient that weights the KL-divergence consistency (Eq. 5) in the total loss (Eq. 7); (2) m_adv, the number of adversarial steps in the adversarial maximization of Eq. 4; and (3) w_r, the diversity weight which controls the interaction between the diversity module r() and the adversary network g() in Eq. 6. We investigate the effect of each of these on domain generalization accuracy in Figure 3. The first plot shows that the consistency coefficient λ_KL is impactful and a higher value leads to better generalization. However, at λ_KL = 1.0 the accuracy degenerates to random performance; this is expected, as the classifier loss gets 1 − λ_KL = 0 weight. From the second plot, we observe that the optimal number of adversarial steps is around 20.
Note that performance at all non-zero values of m_adv that we tried (5, 10, 15, 20, 25) is greater than the previous state-of-the-art. The importance of the adversarial module is evident from the third plot: performance at w_r = 0 (adversarial module only) is higher than performance at w_r = 2 (diversity module only), and the combination of both modules yields the best result. Clearly, the adversarial component is a critical factor behind the improvements in generalization.
Conclusion
In this paper, we address the problem of single-source domain generalization. Our approach, Adversarially Learned Transformations (ALT), updates a convolutional network to learn plausible image transformations of the source domain that can fool the classifier during training, and enforces a consistency constraint between the predictions on clean images and transformed images. ALT is a significant improvement over prior methods that utilize pixel-wise perturbations. We showed that this strategy outperforms all existing techniques, including standard data augmentation methods, on multiple benchmarks because it is able to generate a diverse set of large transformations of the source domain. We also find that ALT can be naturally combined with existing diversity modules like RandConv or AugMix to improve their performance. We studied the components of ALT through extensive ablations and analysis to obtain insights into its performance gains. Our studies indicate that naïve diversity alone is insufficient, and needs to be combined with adversarially learned transformations to maximize generalization performance.
Appendix

This appendix contains training settings, additional results, and visualizations to supplement the main paper. We also discuss the limitations of ALT and scope for future work in this direction. Code to reproduce experiments has been released publicly: https://github.com/tejas-gokhale/ALT.
A. Training Settings

Table 4 shows the training settings and hyperparameters used for experiments on each benchmark. See Algorithm 1 in the main paper for context and relevant equations.
B. Detailed Results
We provide detailed results including standard deviation values for our models on the PACS and Office-Home benchmarks, for each source domain. We compare these with RandConv [43], AugMix [22], and a combination of RandConv and AugMix which utilizes AugMix as one of the two augmentations in the consistency constraint of RandConv. Results of the PACS experiments are shown in Tables 6, 7, 8, and 9, when using P, A, C, and S as the source datasets, respectively. Results of the Office-Home experiments are shown in Tables 10, 11, 12, and 13, when using R, A, C, and P as the source datasets, respectively.
C. Visualizations
In this section, we provide additional visualizations and qualitative examples of augmented images generated by ALT, for Digits (Figure 7), PACS (Figures 8, 9, 10, 11), and Office-Home (Figures 12, 13, 14, 15). In each figure, the first row shows input images x, the second row shows the outputs of the diversity module r(x), and the third row shows the outputs of the adversary network g(x).
In Figures 5 and 6 we show an illustration of the diversity introduced by ALT in comparison to the source distribution, the target (OOD) distribution, and the distribution of RandConv augmentations, for the PACS and Office-Home benchmarks, respectively. The diversity introduced by ALT is much larger and more wide-spread than that of data augmentation techniques such as RandConv.
D. Limitations and Future Directions
In this paper we have explored the effectiveness of ALT on three standard domain generalization benchmarks. For fair comparison, every baseline model uses the same backbone architecture and training settings: for instance, ERM, RandConv, AugMix, and ALT are all trained with the same hyperparameters, as shown in Table 4. For significance of results, we have repeated each experiment (including those in the analyses) with 5 different seeds and report the mean and standard deviation.
D.1. Complexity of Adversary Network.
One limitation (and therefore scope for future work) is that we have considered one family of architecture for our adversary network g -fully convolutional image-to-image translation networks. We conduct additional analysis to understand how this choice affects generalization performance, and compare performance when using between 2 and 6 convolutional layers. We reuse all other training settings from our benchmark model ALT RandConv on both Digits and PACS. Results are shown in Table 5.
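A depth-parameterized builder along these lines is what such an ablation requires; the function below reuses the 3×3-conv/LeakyReLU recipe from the main text, with names of our choosing.

```python
import torch.nn as nn

def build_adversary(num_layers, channels=3, hidden=32):
    """Adversary g with a configurable number of convolutional layers (>= 2)."""
    assert num_layers >= 2
    layers = [nn.Conv2d(channels, hidden, 3, padding=1), nn.LeakyReLU(0.2)]
    for _ in range(num_layers - 2):
        layers += [nn.Conv2d(hidden, hidden, 3, padding=1), nn.LeakyReLU(0.2)]
    layers.append(nn.Conv2d(hidden, channels, 3, padding=1))
    return nn.Sequential(*layers)

adversaries = {d: build_adversary(d) for d in range(2, 7)}  # depths 2-6
```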
For PACS and Office-Home, we observe that all compared ALT models are better than previous baselines, including AugMix and RandConv. For Digits, we observe that the performance of ALT with a 2-layer g is close to RandConv, and exceeds all previous baselines at higher depths of the network. We do not see a clear correlation across datasets between the number of layers and the domain generalization performance. Investigating the dynamics of the model capacity of the adversary network, and how it may affect domain generalization, is an interesting direction for future work.
It may be possible that more complex generative architectures (i.e., capable of more complex transformations) are needed when the domain shift is larger, in order to model diversity and adversity for a given source domain. The choice of architecture for g is thus an interesting direction; nevertheless, in this paper we show that the simple fully-convolutional architecture gives us performance boosts on all three datasets.

We believe that the ideas presented in this paper, although evaluated on image classification, have the potential to be widely applicable to many other vision tasks for domain generalization. They may also be applied to other application areas such as audio or text, where the transformation function g may take different forms.
Figure 2. (Left) tSNE plot showing the discrepancy between the source distribution (MNIST) and the out-of-distribution datasets for the "Digits" benchmark. The diversity introduced by ALT is much larger and wide-spread than data augmentation techniques such as RandConv. (Right) Qualitative comparison of PACS images transformed by RandConv data augmentation vs. ALT (ALT_RandConv), illustrating the wide range of transformations learned by ALT.

Figure 3. Analysis: We study the effect of each hyper-parameter in ALT on the average accuracy using the Digits benchmark (shown as 1 standard deviation around the mean over 5 runs). We observe that (left) the consistency is generally important until a certain point, after which it becomes harmful; (middle) taking more adversarial steps improves performance; (right) surprisingly, the trade-off between diversity and adversity is non-trivial and dataset dependent. In our benchmarking experiments (Tables 1, 2, 3) we do not perform any hyper-parameter tuning, and set w_r = 1, i.e. equal weight to adversity and diversity.

Figure 4. tSNE plot showing the discrepancy between the source distribution and the out-of-distribution datasets for the Digits benchmark.

Figure 5. tSNE plot showing the discrepancy between the source distribution and the out-of-distribution datasets for the PACS benchmark.

Figure 6. tSNE plot showing the discrepancy between the source distribution and the out-of-distribution datasets for the OfficeHome benchmark.

Figure 7. Digits: Comparison of images transformed by RandConv and ALT_RandConv with MNIST10k as source dataset.

Figure 8. PACS: Comparison of images transformed by RandConv and ALT_RandConv with Photo as source dataset.

Figure 9. PACS: Comparison of images transformed by RandConv and ALT_RandConv with Art-Painting as source dataset.

Figure 10. PACS: Comparison of images transformed by RandConv and ALT_RandConv with Cartoon as source dataset.

Figure 11. PACS: Comparison of images transformed by RandConv and ALT_RandConv with Sketch as source dataset.

Figure 12. Office-Home: Comparison of images transformed by RandConv and ALT_RandConv with Real as source dataset.

Figure 13. Office-Home: Comparison of images transformed by RandConv and ALT_RandConv with Art as source dataset.

Figure 14. Office-Home: Comparison of images transformed by RandConv and ALT_RandConv with Clipart as source dataset.

Figure 15. Office-Home: Comparison of images transformed by RandConv and ALT_RandConv with Product as source dataset.
Method | A→C | A→S | A→P | C→A | C→S | C→P | S→A | S→C | S→P | P→A | P→C | P→S | Avg.
ERM | 62.3 | 49.0 | 95.2 | 65.7 | 60.7 | 83.6 | 28.0 | 54.5 | 35.6 | 64.1 | 23.6 | 29.1 | 54.3
JiGen [4] | 57.0 | 50.0 | 96.1 | 65.3 | 65.9 | 85.5 | 26.6 | 41.1 | 42.8 | 62.4 | 27.2 | 35.5 | 54.6
ADA [41] | 64.3 | 58.5 | 94.5 | 66.7 | 65.6 | 83.6 | 37.0 | 58.6 | 41.6 | 65.3 | 32.7 | 35.9 | 58.7
SagNet [31] | 67.1 | 56.8 | 95.7 | 72.1 | 69.2 | 85.7 | 41.1 | 62.9 | 46.2 | 69.8 | 35.1 | 40.7 | 61.9
RandConv [43] | 61.1 | 60.5 | 87.3 | 57.1 | 72.9 | 73.7 | 52.2 | 63.9 | 46.1 | 61.3 | 37.6 | 50.5 | 60.3
AugMix [22] | 68.4 | 54.6 | 95.2 | 74.3 | 66.7 | 87.3 | 40.0 | 57.4 | 46.8 | 67.3 | 26.8 | 41.4 | 59.6
RandConv+AugMix | 64.2 | 62.5 | 90.7 | 65.4 | 71.3 | 78.8 | 46.1 | 61.3 | 54.4 | 65.5 | 39.3 | 40.9 | 61.7
ALT_g-only | 63.5 | 63.8 | 94.9 | 68.9 | 74.4 | 84.6 | 39.7 | 61.1 | 49.3 | 68.8 | 43.4 | 50.8 | 63.6
ALT_RandConv | 63.6 | 65.8 | 92.5 | 69.1 | 75.1 | 84.5 | 40.1 | 61.7 | 50.8 | 68.4 | 43.4 | 55.2 | 64.2
ALT_AugMix | 65.7 | 68.2 | 93.2 | 71.9 | 74.2 | 86.0 | 40.2 | 62.9 | 49.1 | 68.5 | 43.5 | 53.3 | 64.7

Table 1. Single-source domain generalization accuracy (%) on PACS [5]. X→Y implies X is the source and Y is the target dataset. P: photo; A: art-painting; C: cartoon; S: sketch. Performance is reported as mean of 5 repetitions. Standard deviation values are in the appendix.

Method | A→C | A→P | A→R | C→A | C→P | C→R | P→A | P→C | P→R | R→A | R→C | R→P | Avg.
ERM | 42.61 | 59.18 | 69.45 | 48.37 | 56.09 | 59.38 | 46.07 | 40.18 | 68.19 | 63.12 | 45.13 | 74.34 | 56.00
SagNet [31] | 42.18 | 56.03 | 67.34 | 46.68 | 53.89 | 57.88 | 45.49 | 40.09 | 67.11 | 61.39 | 48.32 | 72.79 | 54.93
RandConv [43] | 43.98 | 55.28 | 67.31 | 45.49 | 56.58 | 59.03 | 43.80 | 43.19 | 66.50 | 57.62 | 48.26 | 72.97 | 55.00
AugMix [22] | 45.31 | 61.88 | 71.88 | 49.30 | 58.93 | 62.24 | 50.04 | 42.59 | 71.51 | 64.10 | 47.56 | 75.95 | 58.44
RandConv+AugMix | 42.61 | 54.43 | 65.62 | 43.70 | 55.04 | 57.91 | 43.24 | 41.71 | 65.52 | 59.17 | 48.18 | 71.17 | 53.94
ALT_g-only | 47.26 | 61.14 | 71.21 | 48.88 | 57.81 | 60.99 | 48.15 | 46.70 | 69.30 | 64.85 | 52.84 | 76.28 | 58.78
ALT_RandConv | 48.33 | 61.19 | 71.75 | 50.13 | 58.82 | 62.26 | 49.21 | 47.03 | 70.53 | 64.88 | 53.10 | 76.07 | 59.44
ALT_AugMix | 48.06 | 61.16 | 71.12 | 50.43 | 58.84 | 61.84 | 49.32 | 47.55 | 70.64 | 64.86 | 53.27 | 76.29 | 59.45

Table 2. Single-source domain generalization accuracy (%) on Office-Home [39]. X→Y implies X is the source and Y is the target dataset. R: real; A: art; C: clipart; P: product. Performance is reported as mean of 5 repetitions. Standard deviation values are in the appendix.
Variable | Digits | PACS | Office-Home
f architecture | DigitNet [41] | ResNet18 [20] | ResNet50 [20]
g architecture | {conv(kernel=3, stride=1, padding=1), LeakyReLU(p=0.2)} × 4 (all benchmarks)
ρ_0, ϕ_0 | Kaiming Normal Initialization [19] (all benchmarks)
T | 10000 | 2000 | 2000
T_pre | 1250 | 400 | 400
η | 1e-4 | 0.004 | 0.004
m_adv | 10 | 10 | 10
η_adv | 5e-6 | 5e-5 | 5e-5
w_r | 1.0 | 1.0 | 1.0
λ_KL | 0.75 | 0.75 | 0.75

Table 4. Training settings and hyper-parameters for experiments on each benchmark.
Method | Photo⋆ | Art-Painting | Cartoon | Sketch | Target Avg. | PACS Avg.
RandConv | 96.407±0.757 | 61.309±2.316 | 37.577±2.257 | 50.463±9.018 | 49.783±4.255 | 61.439±3.217
AugMix | 99.532±0.438 | 68.633±0.950 | 33.788±1.205 | 36.304±2.801 | 46.242±1.122 | 59.564±0.930
RandConv+AugMix | 98.363±0.438 | 65.527±3.060 | 39.300±6.237 | 40.901±5.073 | 48.576±4.031 | 61.023±3.001
ALT_g-only | 99.064±0.286 | 68.770±0.932 | 43.387±1.142 | 50.832±2.937 | 54.330±1.078 | 65.513±0.757
ALT_RandConv | 98.947±0.234 | 68.740±0.702 | 40.828±2.537 | 56.024±2.009 | 55.197±0.498 | 66.135±0.330
ALT_AugMix | 99.298±0.438 | 68.506±0.836 | 43.507±2.615 | 53.271±4.149 | 55.094±1.876 | 66.145±1.387

Table 6. SSDG performance on PACS for the P→ACS setting. ⋆ Source Domain. bold: best result.

Method | Photo | Art-Painting⋆ | Cartoon | Sketch | Target Avg. | PACS Avg.
RandConv | 87.281±0.796 | 85.437±0.532 | 61.143±2.752 | 60.519±4.050 | 69.648±2.152 | 73.595±1.582
AugMix | 95.317±0.422 | 93.077±1.276 | 64.061±0.361 | 55.027±2.195 | 71.469±0.637 | 76.871±0.581
RandConv+AugMix | 90.743±0.781 | 90.481±0.638 | 64.206±2.238 | 62.515±2.854 | 72.488±1.731 | 76.986±1.177
ALT_g-only | 94.934±0.269 | 91.058±0.720 | 63.524±1.821 | 63.813±2.249 | 74.090±1.086 | 78.332±0.845
ALT_RandConv | 93.593±0.328 | 92.596±1.036 | 64.044±0.635 | 65.991±1.130 | 74.543±0.537 | 79.056±0.609
ALT_AugMix | 93.174±0.437 | 91.442±0.638 | 65.683±1.656 | 68.226±2.453 | 75.694±1.214 | 79.631±0.856

Table 7. SSDG performance on PACS for the A→PCS setting. ⋆ Source Domain. bold: best result.

Method | Photo | Art-Painting | Cartoon⋆ | Sketch | Target Avg. | PACS Avg.
RandConv | 73.677±1.814 | 57.051±1.764 | 91.66±0.876 | 72.855±2.314 | 67.861±1.550 | 73.810±1.317
AugMix | 84.599±0.997 | 68.281±2.085 | 96.287±0.940 | 71.097±0.609 | 74.659±1.088 | 80.066±0.897
RandConv+AugMix | 78.790±0.975 | 65.400±1.611 | 93.840±1.020 | 71.285±2.730 | 71.825±1.315 | 77.329±1.105
ALT_g-only | 84.575±1.047 | 68.867±2.126 | 94.768±0.43 | 74.421±0.441 | 75.954±1.119 | 80.658±0.929
ALT_RandConv | 83.916±0.51 | 68.086±1.901 | 95.190±0.686 | 74.487±0.505 | 75.496±0.799 | 80.420±0.644
ALT_AugMix | 85.964±1.098 | 71.943±1.234 | 94.599±0.560 | 74.172±0.752 | 77.360±0.734 | 81.670±0.667

Table 8. SSDG performance on PACS for the C→PAS setting. ⋆ Source Domain. bold: best result.

Method | Photo | Art-Painting | Cartoon | Sketch⋆ | Target Avg. | PACS Avg.
RandConv | 46.132±4.879 | 52.168±1.623 | 63.942±2.219 | 94.264±0.673 | 54.081±1.959 | 64.126±1.465
AugMix | 46.731±2.916 | 37.852±1.878 | 58.575±1.747 | 94.221±0.711 | 47.719±1.723 | 59.345±1.268
RandConv+AugMix | 54.359±0.819 | 46.074±2.709 | 61.246±1.245 | 94.171±0.582 | 53.893±0.945 | 63.963±0.787
ALT_g-only | 49.305±2.775 | 39.658±3.423 | 61.109±1.853 | 94.573±0.466 | 50.024±2.408 | 61.161±1.726
ALT_RandConv | 51.305±0.866 | 41.787±1.174 | 62.773±1.089 | 94.724±0.527 | 51.955±0.791 | 62.647±0.571
ALT_AugMix | 49.078±2.072 | 40.186±2.494 | 62.901±0.358 | 94.271±0.624 | 50.721±1.414 | 61.609±1.103

Table 9. SSDG performance on PACS for the S→PAC setting. ⋆ Source Domain. bold: best result.

Method | Real⋆ | Art | Clipart | Product | Target Avg. | Office-Home Avg.
RandConv | 83.028±2.067 | 59.021±0.916 | 47.269±1.251 | 72.172±0.418 | 59.487±0.792 | 65.372±1.096
AugMix | 87.294±1.21 | 64.101±0.882 | 47.564±0.158 | 75.956±0.32 | 62.54±0.345 | 68.729±0.490
RandConv+AugMix | 81.514±0.515 | 59.167±0.722 | 48.180±1.024 | 71.166±0.445 | 59.504±0.256 | 65.007±0.226
ALT_g-only | 86.514±0.622 | 64.622±0.490 | 53.327±0.344 | 76.276±0.117 | 64.742±0.122 | 70.185±0.138
ALT_RandConv | 87.477±1.042 | 64.879±0.439 | 53.097±0.554 | 76.066±0.447 | 64.681±0.290 | 70.380±0.312
ALT_AugMix | 86.560±0.980 | 64.860±0.267 | 53.271±0.799 | 76.286±0.347 | 64.806±0.327 | 70.244±0.461

Table 10. SSDG performance on Office-Home for the R→ACP setting. ⋆ Source Domain. bold: best result.

Method | Real | Art⋆ | Clipart | Product | Target Avg. | Office-Home Avg.
RandConv | 66.915±1.069 | 72.428±2.066 | 42.387±1.405 | 55.045±1.547 | 54.782±1.297 | 59.194±1.427
AugMix | 71.887±0.432 | 80.494±1.342 | 45.314±0.768 | 61.882±0.382 | 59.694±0.427 | 64.894±0.369
RandConv+AugMix | 65.620±0.632 | 71.852±1.758 | 42.606±1.026 | 54.434±0.774 | 54.220±0.617 | 58.628±0.862
ALT_g-only | 71.193±0.308 | 78.930±1.146 | 47.340±0.331 | 61.151±0.561 | 59.895±0.283 | 64.654±0.259
ALT_RandConv | 71.754±0.286 | 78.025±1.181 | 48.328±0.787 | 61.186±0.429 | 60.423±0.280 | 64.823±0.474
ALT_AugMix | 71.122±0.540 | 79.095±1.634 | 48.058±0.632 | 61.156±0.813 | 60.112±0.590 | 64.858±0.518

Table 11. SSDG performance on Office-Home for the A→RCP setting. ⋆ Source Domain. bold: best result.

Method | Real | Art | Clipart⋆ | Product | Target Avg. | Office-Home Avg.
RandConv | 58.944±0.521 | 44.741±0.714 | 80.320±1.073 | 56.211±1.141 | 53.299±0.711 | 60.054±0.771
AugMix | 62.244±0.526 | 49.309±0.879 | 81.510±0.885 | 58.939±0.584 | 56.831±0.530 | 63.000±0.454
RandConv+AugMix | 57.914±0.730 | 43.698±0.511 | 77.986±1.087 | 55.040±0.683 | 52.217±0.249 | 58.660±0.417
ALT_g-only | 61.968±0.849 | 49.977±0.987 | 80.320±1.073 | 58.779±0.743 | 56.908±0.808 | 62.761±0.733
ALT_RandConv | 62.264±0.560 | 50.133±0.956 | 80.732±0.637 | 58.819±0.558 | 57.072±0.539 | 62.987±0.455
ALT_AugMix | 61.841±0.382 | 50.426±1.070 | 80.824±0.510 | 58.839±0.559 | 57.035±0.580 | 62.982±0.526

Table 12. SSDG performance on Office-Home for the C→RAP setting. ⋆ Source Domain. bold: best result.

Method | Real | Art | Clipart | Product⋆ | Target Avg. | Office-Home Avg.
RandConv | 66.318±0.240 | 43.524±0.664 | 43.365±1.058 | 90.135±0.643 | 51.069±0.607 | 60.836±0.372
AugMix | 71.515±0.706 | 50.041±0.688 | 42.596±0.619 | 91.622±0.263 | 54.717±0.518 | 63.943±0.453
RandConv+AugMix | 65.523±0.753 | 43.240±1.454 | 41.710±0.621 | 89.459±0.785 | 50.158±0.900 | 59.983±0.865
ALT_g-only | 70.082±0.532 | 48.842±0.648 | 46.877±0.552 | 91.306±0.544 | 55.267±0.302 | 64.277±0.171
ALT_RandConv | 70.530±0.359 | 49.208±0.418 | 47.025±0.498 | 91.577±0.506 | 55.588±0.300 | 64.585±0.212
ALT_AugMix | 70.637±0.301 | 49.318±1.008 | 47.554±0.458 | 91.396±0.798 | 55.837±0.383 | 64.726±0.361

Table 13. SSDG performance on Office-Home for the P→RAC setting. ⋆ Source Domain. bold: best result.
References

[1] Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. arXiv preprint arXiv:1907.02893, 2019.
[2] Gregory W. Benton, Marc Finzi, Pavel Izmailov, and Andrew Gordon Wilson. Learning invariances in neural networks from training data. In NeurIPS, 2020.
[3] Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pages 39-57. IEEE, 2017.
[4] Fabio M. Carlucci, Antonio D'Innocente, Silvia Bucci, Barbara Caputo, and Tatiana Tommasi. Domain generalization by solving jigsaw puzzles. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2229-2238, 2019.
[5] Gabriela Csurka. Domain adaptation for visual applications: A comprehensive survey. arXiv preprint arXiv:1702.05374, 2017.
[6] Ekin D. Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V. Le. AutoAugment: Learning augmentation strategies from data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 113-123, 2019.
[7] Ekin D. Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V. Le. RandAugment: Practical automated data augmentation with a reduced search space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 702-703, 2020.
[8] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Fei-Fei Li. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2009), pages 248-255. IEEE Computer Society, 2009.
[9] J. S. Denker, W. R. Gardner, H. P. Graf, D. Henderson, R. E. Howard, W. Hubbard, L. D. Jackel, H. S. Baird, and I. Guyon. Neural network recognizer for hand-written zip code digits. In Proceedings of the 1st International Conference on Neural Information Processing Systems, pages 323-331, 1988.
[10] Terrance DeVries and Graham W. Taylor. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017.
[11] Guneet S. Dhillon, Kamyar Azizzadenesheli, Zachary C. Lipton, Jeremy D. Bernstein, Jean Kossaifi, Aran Khanna, and Animashree Anandkumar. Stochastic activation pruning for robust adversarial defense. In International Conference on Learning Representations, 2018.
[12] Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, and Jianguo Li. Boosting adversarial attacks with momentum. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9185-9193, 2018.
[13] Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In International Conference on Machine Learning, pages 1180-1189. PMLR, 2015.
[14] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 17(1):2096-2030, 2016.
[15] Robert Geirhos, Carlos R. Medina Temme, Jonas Rauber, Heiko H. Schütt, Matthias Bethge, and Felix A. Wichmann. Generalisation in humans and deep neural networks. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pages 7549-7561, 2018.
[16] Tejas Gokhale, Rushil Anirudh, Bhavya Kailkhura, Jayaraman J. Thiagarajan, Chitta Baral, and Yezhou Yang. Attribute-guided adversarial training for robustness to natural perturbations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 7574-7582, 2021.
[17] Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In 3rd International Conference on Learning Representations (ICLR 2015), Conference Track Proceedings, 2015.
[18] Ishaan Gulrajani and David Lopez-Paz. In search of lost domain generalization. In ICML, 2021.
[19] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pages 1026-1034, 2015.
[20] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), pages 770-778. IEEE Computer Society, 2016.
[21] Dan Hendrycks and Thomas G. Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. In 7th International Conference on Learning Representations (ICLR 2019), 2019.
[22] Dan Hendrycks, Norman Mu, Ekin Dogus Cubuk, Barret Zoph, Justin Gilmer, and Balaji Lakshminarayanan. AugMix: A simple data processing method to improve robustness and uncertainty. In 8th International Conference on Learning Representations (ICLR 2020), 2020.
[23] Yunseok Jang, Tianchen Zhao, Seunghoon Hong, and Honglak Lee. Adversarial defense via learning to generate diverse attacks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2740-2749, 2019.
[24] Pang Wei Koh, Shiori Sagawa, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, Tony Lee, et al. WILDS: A benchmark of in-the-wild distribution shifts. In International Conference on Machine Learning, pages 5637-5664. PMLR, 2021.
[25] S. Kullback and R. A. Leibler. On information and sufficiency. Annals of Mathematical Statistics, 22(1):79-86, 1951.
[26] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
[27] Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy M. Hospedales. Deeper, broader and artier domain generalization. In Proceedings of the IEEE International Conference on Computer Vision, pages 5542-5550, 2017.
[28] Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy M. Hospedales. Learning to generalize: Meta-learning for domain generalization. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
[29] Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. DeepFool: A simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2574-2582, 2016.
[30] Norman Mu and Justin Gilmer. MNIST-C: A robustness benchmark for computer vision. arXiv preprint arXiv:1906.02337, 2019.
[31] Hyeonseob Nam, HyunJae Lee, Jongchan Park, Wonjun Yoon, and Donggeun Yoo. Reducing domain gap by reducing style bias. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8690-8699, 2021.
[32] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y. Ng. Reading digits in natural images with unsupervised feature learning. 2011.
[33] Fengchun Qiao, Long Zhao, and Xi Peng. Learning to learn single domain generalization. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2020), pages 12553-12562. IEEE, 2020.
[34] Alexander J. Ratner, Henry R. Ehrenberg, Zeshan Hussain, Jared Dunnmon, and Christopher Ré. Learning to compose domain-specific transformations for data augmentation. Advances in Neural Information Processing Systems, 30:3239, 2017.
[35] Alexander Robey, George J. Pappas, and Hamed Hassani. Model-based domain generalization. arXiv preprint arXiv:2102.11436, 2021.
[36] Leonid I. Rudin, Stanley Osher, and Emad Fatemi. Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60(1-4):259-268, 1992.
[37] William B. Shen, Danfei Xu, Yuke Zhu, Leonidas J. Guibas, Li Fei-Fei, and Silvio Savarese. Situational fusion of visual representation for visual navigation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2881-2890, 2019.
[38] Igor Vasiljevic, Ayan Chakrabarti, and Gregory Shakhnarovich. Examining the impact of blur on recognition by convolutional networks. arXiv preprint arXiv:1611.05760, 2016.
[39] Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. Deep hashing network for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5018-5027, 2017.
[40] Riccardo Volpi and Vittorio Murino. Addressing model vulnerability to distributional shifts over image transformation sets. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7980-7989, 2019.
[41] Riccardo Volpi, Hongseok Namkoong, Ozan Sener, John C. Duchi, Vittorio Murino, and Silvio Savarese. Generalizing to unseen domains via adversarial data augmentation. In Advances in Neural Information Processing Systems, pages 5339-5349, 2018.
[42] Eric Wong and J. Zico Kolter. Learning perturbation sets for robust machine learning. In International Conference on Learning Representations, 2020.
[43] Zhenlin Xu, Deyi Liu, Junlin Yang, Colin Raffel, and Marc Niethammer. Robust and generalizable visual representation learning via random convolutions. In International Conference on Learning Representations, 2020.
[44] Xiaoyong Yuan, Pan He, Qile Zhu, and Xiaolin Li. Adversarial examples: Attacks and defenses for deep learning. IEEE Transactions on Neural Networks and Learning Systems, 30(9):2805-2824, 2019.
[45] Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. CutMix: Regularization strategy to train strong classifiers with localizable features. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6023-6032, 2019.
[46] Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. In International Conference on Learning Representations, 2018.
[47] Xinyu Zhang, Qiang Wang, Jian Zhang, and Zhao Zhong. Adversarial AutoAugment. In International Conference on Learning Representations, 2019.
[48] Zhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, and Yi Yang. Random erasing data augmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 13001-13008, 2020.
[49] Kaiyang Zhou, Yongxin Yang, Timothy Hospedales, and Tao Xiang. Learning to generate novel domains for domain generalization. In European Conference on Computer Vision, pages 561-578. Springer, 2020.
[50] Barret Zoph, Ekin D. Cubuk, Golnaz Ghiasi, Tsung-Yi Lin, Jonathon Shlens, and Quoc V. Le. Learning data augmentation strategies for object detection. In European Conference on Computer Vision, pages 566-583. Springer, 2020.
| []
|
[
"Extending the reach of axion-photon regeneration experiments towards larger masses with phase shift plates",
"Extending the reach of axion-photon regeneration experiments towards larger masses with phase shift plates"
]
| [
"Joerg Jaeckel \nDeutsches Elektronen-Synchrotron DESY\nCentre for Particle Theory\nDurham University\nNotkestrasse 85DH1 3LE, D-22607Durham, HamburgUnited Kingdom, Germany\n",
"Andreas Ringwald \nDeutsches Elektronen-Synchrotron DESY\nCentre for Particle Theory\nDurham University\nNotkestrasse 85DH1 3LE, D-22607Durham, HamburgUnited Kingdom, Germany\n"
]
| [
"Deutsches Elektronen-Synchrotron DESY\nCentre for Particle Theory\nDurham University\nNotkestrasse 85DH1 3LE, D-22607Durham, HamburgUnited Kingdom, Germany",
"Deutsches Elektronen-Synchrotron DESY\nCentre for Particle Theory\nDurham University\nNotkestrasse 85DH1 3LE, D-22607Durham, HamburgUnited Kingdom, Germany"
]
| []
| We present a scheme to extend the sensitivity of axion-photon regeneration experiments towards larger masses with the help of properly chosen and placed phase shift plates. | 10.1016/j.physletb.2007.07.066 | [
"https://arxiv.org/pdf/0706.0693v1.pdf"
]
| 18,278,760 | 0706.0693 | 7cfa78e06959cdb0b27d29f0b0497a30bceb5a84 |
Extending the reach of axion-photon regeneration experiments towards larger masses with phase shift plates
February 1, 2008
Joerg Jaeckel
Deutsches Elektronen-Synchrotron DESY
Centre for Particle Theory
Durham University
Notkestrasse 85DH1 3LE, D-22607Durham, HamburgUnited Kingdom, Germany
Andreas Ringwald
Deutsches Elektronen-Synchrotron DESY
Centre for Particle Theory
Durham University
Notkestrasse 85DH1 3LE, D-22607Durham, HamburgUnited Kingdom, Germany
Extending the reach of axion-photon regeneration experiments towards larger masses with phase shift plates
February 1, 2008. IPPP/07/28; DCPT/07/56; DESY 07-081
We present a scheme to extend the sensitivity of axion-photon regeneration experiments towards larger masses with the help of properly chosen and placed phase shift plates.
Many proposals to embed the standard model of particle physics into a more general, unified framework predict a number of new very light particles which are very weakly coupled to ordinary matter. Typically, such light particles arise if there is a global continuous symmetry that is spontaneously broken in the vacuum - a notable example being the axion [1,2], a pseudoscalar particle arising from the breaking of a U(1) Peccei-Quinn symmetry [3] introduced to explain the absence of CP violation in strong interactions. Other examples of light spin-zero bosons beyond the standard model are familons [4], Majorons [5,6], the dilaton, and moduli, to name just a few. We will call them axion-like particles, ALPs, in the following.
At low energies, the coupling of such an ALP, whose corresponding quantum field we denote by φ, to photons is described by an effective Lagrangian,
$$\mathcal{L} = -\frac{1}{4}\, F_{\mu\nu} F^{\mu\nu} + \frac{1}{2}\,\partial_\mu\phi\,\partial^\mu\phi - \frac{1}{2}\, m_\phi^2\,\phi^2 - \frac{1}{4}\, g\,\phi\, F_{\mu\nu}\tilde F^{\mu\nu}\,, \qquad (1)$$
where $F_{\mu\nu}$ ($\tilde F_{\mu\nu}$) is the (dual) electromagnetic field strength tensor¹ and $m_\phi$ is the mass of the ALP. Correspondingly, in the presence of an external magnetic field, a photon of energy ω may oscillate into an ALP of small mass $m_\phi < \omega$, and vice versa [9,10].

Figure 1: Schematic view of ALP production through photon conversion in a magnetic field (left), subsequent travel through an optical barrier, and final detection through photon regeneration (right).
The exploitation of this mechanism is the basic idea behind ALP-photon regeneration - sometimes also called "light shining through a wall" - experiments [11][12][13] (cf. Fig. 1). Namely, if a beam of photons is shone across a magnetic field, a fraction of these photons will turn into ALPs. This ALP beam could then propagate freely through an optical barrier without being absorbed, and finally another magnetic field located on the other side of the wall could transform some of these ALPs into photons - apparently regenerating these photons out of nothing.
A pioneering experiment of this type was carried out in Brookhaven by the Brookhaven-Fermilab-Rochester-Trieste (BFRT) collaboration, using two prototype magnets for the Colliding Beam Accelerator [14,15]. Presently, there are worldwide several second generation ALP-photon regeneration experiments under construction or serious consideration (cf. Table 1; for a review, see Refs. [21,22]). These efforts are partially motivated by

Table 1: Experimental parameters of upcoming photon regeneration experiments: magnetic fields $B_i$ and their length $\ell_i$ on production (i = 1) and regeneration (i = 2) side (cf. Fig. 1).

Name        Laboratory   Magnets                                                 Laser
ALPS [16]   DESY/D       B1 = B2 = 5 T,   l1 = l2 = 4.21 m                       omega = 2.34 eV
BMV [17]    LULI/F       B1 = B2 = 11 T,  l1 = l2 = 0.25 m                       omega = 1.17 eV
LIPSS [18]  Jlab/USA     B1 = B2 = 1.7 T, l1 = l2 = 1 m                          omega = 1.17 eV
OSQAR [19]  CERN/CH      B1 = B2 = 11 T,  l1 = l2 = 7 m                          omega = 1.17 eV
PVLAS [20]  Legnaro/I    B1 = 5 T, l1 = 1 m;  B2 = 2.2 T, l2 = 0.5 m             omega = 1.17 eV
the report from the PVLAS collaboration of evidence for a non-zero apparent rotation of the polarization plane of a laser beam after passage through a magnetic field [23]. While the size of the observed effect greatly exceeds the expectations from quantum electrodynamics [24][25][26], it is compatible with the expectations [27] arising in the context of a photon-ALP oscillation hypothesis. Indeed, the rotation observed by PVLAS can be reconciled with the non-observation of a signal by BFRT, if there exists an ALP with a mass $m_\phi \sim$ meV and a coupling $g \sim 10^{-6}\ \mathrm{GeV}^{-1}$ [28]. Although these parameter values seem to be in serious conflict with bounds coming from astrophysical considerations, there are various ways to evade them [29][30][31][32][33][34][35]. Therefore, it is extremely important to check the ALP interpretation of PVLAS by purely laboratory experiments [32]. Moreover, it would be nice if in this way one might ultimately extend the laboratory search for ALPs to previously unexplored parameter values (see also Ref. [42]). In this letter, we propose a method to extend the sensitivity of the planned photon-regeneration experiments to higher ALP masses.
Let us start with an outline of the calculation of the photon → ALP conversion probability $P_{\gamma\to\phi}$, to lowest order in the coupling g. As emphasized in Ref. [13], this calculation amounts to solving the classical field equations following from Eq. (1),

$$\partial_\mu F^{\mu\nu} = g\,\partial_\mu\phi\,\tilde F^{\mu\nu}\,; \qquad \left(\partial_\mu\partial^\mu + m_\phi^2\right)\phi = g\,\vec E\cdot\vec B\,, \qquad (2)$$
to lowest order in $gB\ell$, where ℓ is the linear dimension associated with the extent of the magnetic field². This can be done by neglecting the modification of the electromagnetic field due to the presence of the pseudoscalar field (through the right hand side of the first equation above).

Figure 2: Two photon coupling g of the (pseudo-)scalar versus its mass $m_\phi$. Iso-contour of the regeneration probability $P_{\gamma\to\phi\to\gamma} = P_{\gamma\to\phi}\,P_{\phi\to\gamma}$, for the parameters of the ALPS experiment, i.e. magnetic fields $B_1 = B_2 = 5$ T, over a length $\ell_1 = \ell_2 = 4.21$ m, exploiting a green (λ = 532 nm) photon beam, corresponding to ω = 2.34 eV, in vacuum. Also shown in red are the 5 sigma allowed regions [28] from PVLAS data on rotation [23] plus BFRT data on rotation, ellipticity, and regeneration [15] plus Q&A data on rotation [38].

Solving for φ in the second equation yields [9,13]
$$\phi^{(\pm)}(\vec x, t) = e^{-i\omega t}\int d^3x'\;\frac{1}{4\pi}\,\frac{e^{\pm i k_\phi |\vec x - \vec x'|}}{|\vec x - \vec x'|}\; g\,\vec E(\vec x')\cdot\vec B(\vec x')\,, \qquad (3)$$
where the energy ω and the modulus of the three-momentum $k_\phi$ are related by $k_\phi = \sqrt{\omega^2 - m_\phi^2}$. This solution simplifies even more if we specialize to the usual experimental configuration of a laser photon beam sent along the x-axis with fixed linear polarization in the z direction. If the transverse extent of the magnetic field is much larger than that of the laser beam, the problem is effectively one-dimensional. In one dimension and taking into account only ALPs that propagate into the positive x-direction, Eq. (3) becomes,
$$\phi^{(+)}(x, t) = e^{-i(\omega t - k_\phi x)}\;\frac{ig}{2k_\phi}\int dx'\; e^{-ik_\phi x'}\,\vec E(x')\cdot\vec B(x')\,. \qquad (4)$$
Inserting in Eq. (4) furthermore the appropriate plane wave form $\vec E_0(\vec x, t) = \vec e_z\, E_0\, e^{i\omega(x - t)}$ for the electric field of the laser beam and assuming, as realized in all the proposed experiments, a magnetic field with fixed direction along the z-axis and possibly variable (as a function of x) magnitude, $\vec B_0(\vec x) = \vec e_z\, B_0(x)$, one ends up with the solution³

$$\phi^{(\pm)}(\vec x, t) = \frac{ig}{2k_\phi}\, E_0\, e^{-i(\omega t - k_\phi x)}\int dx'\; e^{iqx'}\, B_0(x')\,, \qquad (5)$$
where
$$q = k_\gamma - k_\phi = \omega - \sqrt{\omega^2 - m_\phi^2} \approx \frac{m_\phi^2}{2\omega} \qquad (6)$$
is the momentum transfer to the magnetic field, i.e. the modulus of the momentum difference between the photon and the ALP. The probability that a photon converts into an axion-like particle, and vice versa, can be read off from Eq. (5) and reads [9,13]

$$P_{\gamma\to\phi} = P_{\phi\to\gamma} = \frac{1}{4}\,\frac{\omega}{k_\phi}\; g^2 \left|\int dx'\; e^{iqx'}\, B_0(x')\right|^2\,, \qquad (7)$$
which reduces, for a constant magnetic field, $B_0(x') = \text{const}$, of linear extension ℓ, to

$$P_{\gamma\to\phi} \approx g^2 B_0^2\, \sin^2(q\ell/2)\,/\,q^2\,. \qquad (8)$$
Clearly, in the experimental setup considered, the maximum conversion probability, $P_{\gamma\to\phi} \approx g^2 B_0^2 \ell^2$, is attained at small momentum transfer, $q\,\ell = m_\phi^2\,\ell/(2\omega) \ll 1$, corresponding to a small ALP mass. For this mass range, the best limits are obtained in a straightforward manner by exploiting strong and long dipole magnets, as they are used for storage rings such as HERA [36] or LHC [37], cf. the experiments ALPS [16] and OSQAR [19], respectively (see Table 1). However, for larger masses, the sensitivity of this setup rapidly diminishes.
We illustrate this in Fig. 2, which displays an iso-contour of the light shining through a wall probability in the $g$-$m_\phi$ plane, exploiting the experimental parameters of the ALPS experiment [16]. Clearly, for this setup, the parameter region in g vs. $m_\phi$ suggested by the combination of BFRT plus Q&A exclusion and PVLAS evidence cannot be probed. This is even more dramatic for the OSQAR experiment, which exploits an LHC magnet. Moreover, increasing the refraction index by filling in buffer gas does not help, since it works in the wrong direction (contrary to the claim⁴ in the ALPS letter of intent [16]).
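To make the mass suppression in Eq. (8) concrete before discussing remedies, the following short Python sketch (our illustration, not part of the original analysis) evaluates the conversion probability for ALPS-like parameters from Table 1 and an assumed coupling g = 10^-6 GeV^-1 in the PVLAS-favoured range; the natural-unit conversion factors (1 T ≈ 195.35 eV², 1 m ≈ 5.068·10⁶ eV⁻¹) are standard values.

import numpy as np

# Standard natural-unit conversions (hbar = c = 1):
T_TO_EV2 = 195.35       # 1 tesla in eV^2
M_TO_INV_EV = 5.068e6   # 1 metre in eV^-1

def p_gamma_to_phi(g_inv_GeV, B_tesla, ell_m, m_phi_eV, omega_eV):
    """Photon -> ALP conversion probability of Eq. (8), uniform field."""
    g = g_inv_GeV * 1e-9                 # GeV^-1 -> eV^-1
    B = B_tesla * T_TO_EV2
    ell = ell_m * M_TO_INV_EV
    q = omega_eV - np.sqrt(omega_eV**2 - m_phi_eV**2)   # Eq. (6), exact form
    return g**2 * B**2 * np.sin(q * ell / 2.0)**2 / q**2

# ALPS-like parameters (Table 1); g in the PVLAS-favoured ballpark.
g, B, ell, omega = 1e-6, 5.0, 4.21, 2.34
for m_meV in [0.1, 0.5, 1.0, 2.0]:
    P = p_gamma_to_phi(g, B, ell, m_meV * 1e-3, omega)
    print(f"m_phi = {m_meV:4.1f} meV :  P(gamma->phi) = {P:.3e}")
# The factor sin^2(q l/2)/q^2 saturates for q l << 1 and oscillates and
# decays once q l/2 exceeds pi, i.e. for ALP masses in the meV range here.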
A simple possibility to probe the meV region⁵ in the ALPS setup is to reduce the effective length of the magnetic field region both on the production and detection side of the magnet by shortening the beam pipe on both sides. As can be seen in Fig. 3, this possibility enables one to extend the mass region probed by the experiment, however at the expense of sensitivity: one loses about one order of magnitude in the light shining through a wall probability.

Footnote 4: In a refractive medium, the laser beam has a phase velocity $1/n \equiv v \equiv \omega/k_\gamma$. The momentum transfer (6) then reads $q = n\,\omega - \sqrt{\omega^2 - m_\phi^2} \approx \frac{m_\phi^2}{2\omega} + (n-1)\,\omega$. The second term in this expression has the opposite sign as the corresponding term in Ref. [16]. Correspondingly, one would need a buffer gas with refraction index less than unity, i.e. a plasma, in order to decrease q (and thereby maximize the conversion probability (8)) rather than to increase it. A.R. would like to thank Aaron Chou for pointing out the correct sign.

Footnote 5: Another possibility to probe larger ALP masses even with a long magnet would be to exploit VUV or X-ray free-electron laser beams [39][40][41]. However, at the moment conventional lasers seem to offer better prospects (see also Ref. [42]).

Figure 3: Iso-contour of the regeneration probability, as in Fig. 2, but with reduced lengths of the magnetic field region. Note that the regeneration probability is reduced by a factor of 10.

Another idea to extend the sensitivity towards larger ALP masses was introduced in Ref. [13]. There, it was shown that a segmentation of the magnetic field into regions of alternating polarity gives a form factor $\int dx'\, e^{iqx'} B_0(x')$ that peaks at a nonzero value of q, thereby giving sensitivity to higher-mass pseudoscalars. In fact, the conversion probability (7) reads [13,43], in a magnet with N segments of alternating field direction (but the same magnitude $B_0$),
$$P_{\gamma\to\phi} \approx g^2 B_0^2\,\frac{\sin^2(qd/2)}{q^2}\,\left|\sum_{k=1}^{N}(-1)^k \exp\{i(2k-1)\,qd/2\}\right|^2 \qquad (9)$$

$$= \frac{g^2 B_0^2}{q^2}\times\begin{cases}\sin^2(q\ell/2)\,\tan^2\!\big(q\ell/(2N)\big) & \text{for } N \text{ even}\\ \cos^2(q\ell/2)\,\tan^2\!\big(q\ell/(2N)\big) & \text{for } N \text{ odd,}\end{cases}$$
where d = ℓ/N is the length of each of the N segments. For N > 1, this indeed gives rise to more sensitivity at non-zero values of q.
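A small numerical check of Eq. (9) (again our own sketch, with an arbitrary unit magnet length) shows how the peak of the form factor moves to nonzero q as the number N of alternating segments grows:

import numpy as np

def form_factor(q, ell, N):
    """|f(q)| of Eq. (9) up to the factor B_0: N alternating segments of
    length d = ell/N."""
    d = ell / N
    k = np.arange(1, N + 1)
    s = np.sum((-1.0)**k * np.exp(1j * (2*k - 1) * q * d / 2.0))
    return np.abs(np.sin(q * d / 2.0) / q * s)

ell = 1.0
qs = np.linspace(1e-3, 60.0, 20000)
for N in [1, 2, 4, 8]:
    vals = np.array([form_factor(q, ell, N) for q in qs])
    print(f"N = {N}: |f| peaks at q*ell ~ {qs[np.argmax(vals)]*ell:5.2f}, "
          f"peak |f|/ell = {vals.max():.3f}")
# For N = 1 the peak sits at q -> 0; for N > 1 it moves out, roughly
# towards q*ell ~ pi*N, i.e. to larger ALP masses, at a moderate cost
# in amplitude.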
In this letter, we will introduce a similar, but more practical possibility based on the use of phase shift plates. The idea is very simple. From our starting point, Eq. (4), we can see that what counts is actually E(x ′ ) · B(x ′ ). The configuration based on N alternating magnetic fields is therefore equivalent to a configuration with non-alternating magnetic field, however with N − 1 retardation plates with phase shift π ("λ/2" plates) inverting the sign of the electric field, placed equidistantly over the length ℓ of the magnet. In this case we have alternating signs of cos θ, where θ is the angle between E and B, instead of alternating signs of the magnetic field. But both cases have an identical profile of E(x ′ ) · B(x ′ ). In Fig. 4, we show that with a proper choice of the number and positions of such phase shifters, ALPS should easily cover the region of parameter space suggested by PVLAS + BFRT + Q&A. The same applies for OSQAR.
Let us now get a more intuitive understanding of how this works and see how we can do even better. The crucial part in Eq. (5) is the integral

$$f(q) = \int dx'\; e^{iqx'}\, B_0(x')\,. \qquad (10)$$

Figure 4: Iso-contour of the regeneration probability, as in Fig. 2. Here, we used one phase shift ("λ/2") plate each in the middle of the generation and the regeneration sides.
For a constant magnetic field of length ℓ the oscillating factor $e^{iqx'}$ suppresses the integral compared to the massless case with q = 0, where the integral is simply

$$|f(q)| < |f(0)| = \int_0^\ell dx'\; B_0 = B_0\,\ell\,, \quad \text{for } B_0(x) = \text{constant}. \qquad (11)$$
This suppression arises because coherent production of ALPs works only if the ALP and the photon are in phase. The factor e iqx ′ accounts for the phase difference between ALP and photon.
To improve the situation one would want to bring photon and ALP back into phase with each other. This can be achieved in a simple way by the introduction of phase shift plates. A simplified picture of the phase correction process is given in Fig. 5. At the beginning, photon (red) and ALP (black) are in phase. However, due to its mass, the ALP has a slightly larger wavelength than the photon. After a few oscillations photon and ALP are more and more out of phase. Then we insert the phase shift plate (turquoise). With refractive indices n > 1 we cannot make the wavelength larger. So it is not possible to "delay" the photon until the ALP has caught up. What we can do, however, is to increase the phase difference between ALP and photon such that it is exactly 2π (the photon does an extra wiggle in Fig. 5). Now, a phase shift of 2π is exactly equivalent to a phase shift of 0. Photon and ALP are in phase again. Therefore, we can keep photon and ALP in phase over quite long distances simply by inserting a suitable phase shift plate whenever the phase difference becomes too large, and we get coherent production over the whole length of the magnet.
Let us now understand more quantitatively how this works. To derive Eqs. (5) and (10) we have assumed that the photon is a plane wave. Therefore, we can identify $qx' = (k_\gamma - k_\phi)\,x'$ as the phase difference between the photon wave and the ALP wave at the point $x'$. In general we should write (cf. Eq. (4))

$$f(q) = \int dx'\; e^{i(\varphi_\gamma(x') - \varphi_\phi(x'))}\, B_0(x')\,, \qquad (12)$$

where $\varphi_{\gamma,\phi}$ are the phases of the photon and the ALP fields, respectively.
Let us imagine a situation where we insert N − 1 thin 6 , non-reflective 7 plates that accelerate the photon phase by κ at equidistant places sℓ/N = s∆x, s = 1 . . . N − 1, in a constant magnetic field of length ℓ. The plates affect only the photon. The ALP phase remains unaffected. Therefore, we have,
$$\varphi_\gamma(x) = k_\gamma x + s\kappa \quad \text{for } s\Delta x < x \le (s+1)\Delta x\,, \qquad \Delta x = \frac{\ell}{N}\,, \qquad (13)$$
$$\varphi_\phi(x) = k_\phi x\,.$$
Inserting this into Eq. (12), we find
$$f(q) = B_0 \sum_{s=0}^{N-1}\int_{s\Delta x}^{(s+1)\Delta x} dx'\; e^{i(qx' + s\kappa)} = B_0\; e^{\frac{i}{2}(q\ell + (N-1)\kappa)}\; \frac{2\sin\frac{q\Delta x}{2}}{q}\; \frac{\sin\frac{N}{2}(q\Delta x + \kappa)}{\sin\frac{1}{2}(q\Delta x + \kappa)}\,. \qquad (14)$$

Footnote 6: One might ask what happens to the photon-ALP system inside the material of the plate. One can check (cf., e.g., Ref. [10]) that for sufficiently large refractive index of the material, $n - 1 \gg m_\phi^2/(2\omega^2)$, the mixing between photon and ALP is effectively switched off compared to the mixing in vacuum. Photon and ALP simply propagate through the plate without changing their amplitudes (the phases change, of course). In other words, the thickness of the plates has to be subtracted from the total length of the production or regeneration region. That is why we require thin plates. For practical purposes, this is a rather mild constraint. For $n - 1 \sim 0.1$, the thickness of the plates required for a phase shift of the order of 2π is only $d \sim 10\,\lambda \sim 10\ \mu\mathrm{m}$, which is tiny compared to the typical lengths of the production/regeneration regions, which are of the order of a few m.

Footnote 7: Reflected photons are effectively lost.

Figure 6: Iso-contours of the regeneration probability, as in Fig. 2. In the left figure, we have used no phase correction (red), one plate with κ = π (green), and one plate with the optimal choice of κ according to Eq. (16) for $m_\phi = 1.2$ meV (blue). The black curve is for 20 plates with the optimal choice of κ. In the right figure, we have the same but with 3 plates for the green and blue curves.
We can now choose the number of plates N and the phase shift κ according to the recipe described above. First we choose N large enough such that
$$\frac{1}{2}\, q\,\Delta x \ll 1\,. \qquad (15)$$
And then we choose κ such that the phase difference that has accumulated over ∆x is "completed" to 2π,
$$\kappa = 2\pi - q\,\Delta x\,. \qquad (16)$$
Evaluating Eq. (14) in the limit N(q∆x + κ)/2 → 0 one finds
$$|f(q)| = B_0\,\Delta x\, N = B_0\,\ell\,. \qquad (17)$$
And we have coherent production over the whole length ℓ.
The potential of this approach is demonstrated in Fig. 6 for the example of the ALPS experiment. In the optimized mass region we get more than ten times as many regenerated photons as we would get if the length of the magnet is reduced as in Fig. 3.
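The coherence restoration of Eqs. (14)-(17) can also be checked directly. The sketch below (an illustration with assumed values of q, ℓ and N, not taken from the paper) evaluates f(q) with and without plates tuned according to Eq. (16):

import numpy as np

def f_with_plates(q, ell, N, kappa):
    """f(q) of Eqs. (12)-(14): N segments of length ell/N, the photon
    phase jumping by kappa at each of the N - 1 plates."""
    dx = ell / N
    s = np.arange(N)
    # exact integral of exp(i(q x' + s kappa)) over segment s
    seg = np.exp(1j * s * kappa) * np.exp(1j * q * s * dx) \
          * (np.exp(1j * q * dx) - 1.0) / (1j * q)
    return np.abs(seg.sum())            # |f(q)| up to the constant B_0

ell, N, q = 1.0, 40, 40.0               # a "large mass": q*ell/2 >> 1
kappa = 2 * np.pi - q * ell / N         # the tuning of Eq. (16)
print("no plates  :", abs((np.exp(1j * q * ell) - 1.0) / (1j * q)))  # ~ 1/q
print("with plates:", f_with_plates(q, ell, N, kappa))  # ~ ell, cf. Eq. (17)
# Here q*dx/2 = 0.5 is not yet << 1 in the sense of Eq. (15), so |f| comes
# out as ~0.96 ell rather than ell exactly; still a ~20-fold improvement.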
Another practical advantage of this method is that we can scan through a whole mass range. Performing several measurements with different phase shift plates, we can always choose for each q, i.e. for each $m_\phi$, plates with an appropriate κ such that it is close enough to its optimal value (16),
$$\frac{1}{2}\,\big|\,q(m_\phi)\,\ell - N\kappa\,\big| \ll 1\,. \qquad (18)$$
For an infinite number of plates this would allow one to extend the mass range all the way to the frequency ω of the photons⁸. In practice, we can insert only a finite number of phase shift plates and Eq. (18) cannot be fulfilled for too large masses. But already a small number of plates leads to a remarkable increase of the sensitivity for higher masses, as we can see from Fig. 7.

Figure 7: Iso-contours of the regeneration probability, as in Fig. 2. This figure demonstrates the potential for scanning through a whole range of masses by choosing the right κ for each $m_\phi$. The red curve is the sensitivity without phase correction. The black curve is obtained by using three plates but scanning through a whole range of κ. In other words, to obtain this curve one would insert the plates, measure, change the plates to a slightly different value of κ, and measure again; this is repeated for all values of κ in the range [0, 2π].
In summary: so-called "light shining through a wall" experiments are a promising tool to search for light particles coupled to photons. In this note we have shown how the reach of such an experiment can be extended towards larger masses by inserting properly chosen phase shift plates. Although our explicit discussion is for the case of spin-0 axion-like particles, the method works in general for particles exhibiting photon-particle-photon oscillations.
Figure 5: Illustration of the effect of a properly chosen and placed phase shift plate on the phase relation between photon and ALP (this simplified picture shows only the phase relation; the amplitudes of photon and ALP are not correct in this picture). Photon (red) and ALP (black) start in phase. Due to their different wavelengths they are, however, somewhat out of phase after several oscillations - say by an amount ζ. This is corrected by introduction of a phase shift plate that causes the photon to get an extra phase 2π − ζ. In other words the plate causes the photon to complete the extra wiggle.
Footnote 1: The effective Lagrangian (1) applies for a pseudoscalar ALP, i.e. a spin-zero boson with negative parity. In the case of a scalar ALP, the $F_{\mu\nu}\tilde F^{\mu\nu}$ in Eq. (1) is replaced by $F_{\mu\nu}F^{\mu\nu}$. For the more general case where φ ceases to be an eigenstate of parity [7], see Ref. [8].

Footnote 2: In the case of a scalar ALP, the term $\vec E\cdot\vec B$ in Eqs. (2), (3), and (4) is replaced by $\frac{1}{2}(\vec E^2 - \vec B^2)$.

Footnote 3: The solution (5) applies also in the case of a scalar ALP, if the magnetic field direction is chosen to point into the y direction, $\vec B_0(\vec x) = \vec e_y\, B_0(x)$ (or, alternatively, if the polarization of the laser is chosen to point in the y direction).

Footnote 8: Above the photon frequency, ALP production is energetically forbidden and q becomes imaginary.
Acknowledgments: We would like to thank Giovanni Cantatore, Aaron Chou, Marin Karuza, Axel Lindner, Giuseppe Ruoso, Pierre Sikivie, and Karl van Bibber for interesting discussions.
[1] S. Weinberg, Phys. Rev. Lett. 40 (1978) 223.
[2] F. Wilczek, Phys. Rev. Lett. 40 (1978) 279.
[3] R. D. Peccei and H. R. Quinn, Phys. Rev. Lett. 38 (1977) 1440.
[4] F. Wilczek, Phys. Rev. Lett. 49 (1982) 1549.
[5] Y. Chikashige, R. N. Mohapatra and R. D. Peccei, Phys. Lett. B 98 (1981) 265.
[6] G. B. Gelmini and M. Roncadelli, Phys. Lett. B 99 (1981) 411.
[7] C. T. Hill and G. G. Ross, Nucl. Phys. B 311 (1988) 253.
[8] Y. Liao, arXiv:0704.1961 [hep-ph].
[9] P. Sikivie, Phys. Rev. Lett. 51 (1983) 1415 [Erratum-ibid. 52 (1984) 695].
[10] G. Raffelt and L. Stodolsky, Phys. Rev. D 37 (1988) 1237.
[11] A. A. Anselm, Yad. Fiz. 42 (1985) 1480.
[12] M. Gasperini, Phys. Rev. Lett. 59 (1987) 396.
[13] K. Van Bibber, N. R. Dagdeviren, S. E. Koonin, A. Kerman and H. N. Nelson, Phys. Rev. Lett. 59 (1987) 759.
[14] G. Ruoso et al. [BFRT Collaboration], Z. Phys. C 56 (1992) 505.
[15] R. Cameron et al. [BFRT Collaboration], Phys. Rev. D 47 (1993) 3707.
[16] K. Ehret et al. [ALPS Collaboration], "Production and detection of axion-like particles in a HERA dipole magnet: Letter-of-intent for the ALPS experiment," arXiv:hep-ex/0702023.
[17] C. Rizzo for the [BMV Collaboration], 2nd ILIAS-CERN-CAST Axion Academic Training 2006, http://cast.mppmu.mpg.de/
[18] K. Baker for the [LIPSS Collaboration], 2nd ILIAS-CERN-CAST Axion Academic Training 2006, http://cast.mppmu.mpg.de/
[19] P. Pugnat et al. [OSQAR Collaboration], CERN-SPSC-2006-035, CERN-SPSC-P-331.
[20] G. Cantatore for the [PVLAS Collaboration], 2nd ILIAS-CERN-CAST Axion Academic Training 2006, http://cast.mppmu.mpg.de/
[21] A. Ringwald, arXiv:hep-ph/0612127.
[22] R. Battesti et al., arXiv:0705.0615 [hep-ex].
[23] E. Zavattini et al. [PVLAS Collaboration], Phys. Rev. Lett. 96 (2006) 110406 [arXiv:hep-ex/0507107].
[24] S. L. Adler, Annals Phys. 67 (1971) 599.
[25] S. L. Adler, J. Phys. A 40 (2007) F143 [arXiv:hep-ph/0611267].
[26] S. Biswas and K. Melnikov, Phys. Rev. D 75 (2007) 053003 [arXiv:hep-ph/0611345].
[27] L. Maiani, R. Petronzio and E. Zavattini, Phys. Lett. B 175 (1986) 359.
[28] M. Ahlers, H. Gies, J. Jaeckel and A. Ringwald, Phys. Rev. D 75 (2007) 035011 [arXiv:hep-ph/0612098].
[29] E. Masso and J. Redondo, JCAP 0509 (2005) 015 [arXiv:hep-ph/0504202].
[30] P. Jain and S. Mandal, Int. J. Mod. Phys. D 15 (2006) 2095 [arXiv:astro-ph/0512155].
[31] E. Masso and J. Redondo, Phys. Rev. Lett. 97 (2006) 151802 [arXiv:hep-ph/0606163].
[32] J. Jaeckel, E. Masso, J. Redondo, A. Ringwald and F. Takahashi, Phys. Rev. D 75 (2007) 013004 [arXiv:hep-ph/0610203].
[33] R. N. Mohapatra and S. Nasri, Phys. Rev. Lett. 98 (2007) 050402 [arXiv:hep-ph/0610068].
[34] P. Jain and S. Stokes, arXiv:hep-ph/0611006.
[35] P. Brax, C. van de Bruck and A. C. Davis, arXiv:hep-ph/0703243.
[36] A. Ringwald, Phys. Lett. B 569 (2003) 51 [arXiv:hep-ph/0306106].
[37] P. Pugnat et al., Czech. J. Phys. 55 (2005) A389; 56 (2006) C193.
[38] S. J. Chen, H. H. Mei and W. T. Ni [Q&A Collaboration], arXiv:hep-ex/0611050.
[39] A. Ringwald, arXiv:hep-ph/0112254.
[40] R. Rabadan, A. Ringwald and K. Sigurdson, Phys. Rev. Lett. 96 (2006) 110407 [arXiv:hep-ph/0511103].
[41] U. Kötz, A. Ringwald and T. Tschentscher, arXiv:hep-ex/0606058.
[42] P. Sikivie, D. B. Tanner and K. van Bibber, Phys. Rev. Lett. 98 (2007) 172002 [arXiv:hep-ph/0701198].
[43] A. V. Afanasev, O. K. Baker and K. W. McFarlane, arXiv:hep-ph/0605250.
| []
|
[
"Nonlinear matrix recovery using optimization on the Grassmann manifold",
"Nonlinear matrix recovery using optimization on the Grassmann manifold"
]
| [
"Florentin Goyens \nMathematical Institute\nUniversity of Oxford\nOxfordUnited Kingdom\n\nThe Alan Turing Institute\nLondonUnited Kingdom\n",
"Coralia Cartis \nMathematical Institute\nUniversity of Oxford\nOxfordUnited Kingdom\n\nThe Alan Turing Institute\nLondonUnited Kingdom\n",
"Armin Eftekhari \nThe Alan Turing Institute\nLondonUnited Kingdom\n\nDepartment of Mathematics and Mathematical Statistics\nUmeå University\nUmeåSweden\n"
]
| [
"Mathematical Institute\nUniversity of Oxford\nOxfordUnited Kingdom",
"The Alan Turing Institute\nLondonUnited Kingdom",
"Mathematical Institute\nUniversity of Oxford\nOxfordUnited Kingdom",
"The Alan Turing Institute\nLondonUnited Kingdom",
"The Alan Turing Institute\nLondonUnited Kingdom",
"Department of Mathematics and Mathematical Statistics\nUmeå University\nUmeåSweden"
]
| []
| We investigate the problem of recovering a partially observed high-rank matrix whose columns obey a nonlinear structure such as a union of subspaces, an algebraic variety or grouped in clusters. The recovery problem is formulated as the rank minimization of a nonlinear feature map applied to the original matrix, which is then further approximated by a constrained non-convex optimization problem involving the Grassmann manifold. We propose two sets of algorithms, one arising from Riemannian optimization and the other as an alternating minimization scheme, both of which include first-and second-order variants. Both sets of algorithms have theoretical guarantees. In particular, for the alternating minimization, we establish global convergence and worst-case complexity bounds. Additionally, using the Kurdyka-Lojasiewicz property, we show that the alternating minimization converges to a unique limit point. We provide extensive numerical results for the recovery of union of subspaces and clustering under entry sampling and dense Gaussian sampling. Our methods are competitive with existing approaches and, in particular, high accuracy is achieved in the recovery using Riemannian second-order methods. | null | [
"https://export.arxiv.org/pdf/2109.06095v2.pdf"
]
| 237,491,684 | 2109.06095 | e9d922e9a1671bc44dd8eb46d3d94240fb3bbfb6 |
Nonlinear matrix recovery using optimization on the Grassmann manifold
December 12, 2022
Florentin Goyens
Mathematical Institute
University of Oxford
OxfordUnited Kingdom
The Alan Turing Institute
LondonUnited Kingdom
Coralia Cartis
Mathematical Institute
University of Oxford
OxfordUnited Kingdom
The Alan Turing Institute
LondonUnited Kingdom
Armin Eftekhari
The Alan Turing Institute
LondonUnited Kingdom
Department of Mathematics and Mathematical Statistics
Umeå University
UmeåSweden
Nonlinear matrix recovery using optimization on the Grassmann manifold
December 12, 2022. Keywords: nonlinear matrix recovery, nonconvex optimization, Riemannian optimization, second-order methods
We investigate the problem of recovering a partially observed high-rank matrix whose columns obey a nonlinear structure such as a union of subspaces, an algebraic variety or grouped in clusters. The recovery problem is formulated as the rank minimization of a nonlinear feature map applied to the original matrix, which is then further approximated by a constrained non-convex optimization problem involving the Grassmann manifold. We propose two sets of algorithms, one arising from Riemannian optimization and the other as an alternating minimization scheme, both of which include first-and second-order variants. Both sets of algorithms have theoretical guarantees. In particular, for the alternating minimization, we establish global convergence and worst-case complexity bounds. Additionally, using the Kurdyka-Lojasiewicz property, we show that the alternating minimization converges to a unique limit point. We provide extensive numerical results for the recovery of union of subspaces and clustering under entry sampling and dense Gaussian sampling. Our methods are competitive with existing approaches and, in particular, high accuracy is achieved in the recovery using Riemannian second-order methods.
Introduction
In the matrix recovery problem, one tries to estimate a matrix $M \in \mathbb{R}^{n\times s}$ from partial information. The low-rank matrix recovery problem deals with instances where the matrix M is low-rank. This problem has received great attention in the literature, as applications abound in recommender systems and engineering (see [12] and the references therein for an overview). It was shown in [9] that solving a convex program allows one to recover the original matrix M with very high probability, provided enough samples are available. However, solving this convex semi-definite problem for large instances is very costly in time and memory allocation. This has sparked the search for alternative nonconvex formulations of the problem [27,40]. Riemannian optimization methods are used in some of the most efficient algorithms known to date for low-rank matrix completion. These methods solve optimization problems defined on smooth Riemannian manifolds, such as the manifold of fixed-rank matrices [39] or the Grassmann manifold [5].
All traditional approaches to matrix completion fail if the matrix M is high-rank. Our work is based on the recent discovery that an adaptation of traditional methods allows one to recover specific classes of high-rank matrices [17,32]. This problem is known as nonlinear matrix recovery (or high-rank matrix recovery). Recovering high-rank matrices requires one to make assumptions on the structure of M. Let $m_1, \ldots, m_s$ denote the columns of M. When the points $m_i \in \mathbb{R}^n$ belong to a low-dimensional subspace in $\mathbb{R}^n$, low-rank matrix recovery methods can be applied. Nonlinear matrix recovery attempts to recover M when the points $m_i$ are related in a nonlinear way.
Classically, for some integer $m < ns$, the matrix M satisfies m linear equations of the type $\langle A_i, M\rangle = b_i$ for given matrices $A_i \in \mathbb{R}^{n\times s}$ and a given vector $b \in \mathbb{R}^m$, where we use the usual inner product $\langle A_i, M\rangle = \operatorname{trace}(A_i^\top M)$. The matrices $A_i$ are assumed to be randomly drawn from a known distribution. One defines the linear operator

$$\mathcal{A} : \mathbb{R}^{n\times s} \to \mathbb{R}^m\,, \quad \mathcal{A}(M)_i = \langle A_i, M\rangle \qquad (1.1)$$
so as to have the compact notation $\mathcal{A}(M) = b$ for the measurements. When each matrix $A_i$ has exactly one non-zero entry, which is equal to 1, this is known as a matrix completion problem. The matrix M is then known on a subset Ω of the complete set of entries $\{1, \ldots, n\} \times \{1, \ldots, s\}$.
Without loss of generality we assume n ≤ s.
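As a concrete instance of (1.1), the following sketch (our illustration; all names and sizes are arbitrary) builds the entry-sampling operator of matrix completion:

import numpy as np

rng = np.random.default_rng(0)
n, s, m = 5, 8, 12
M = rng.standard_normal((n, s))

# Matrix completion: each A_i has one unit entry, so A(X)_i = X[row_i, col_i].
idx = rng.choice(n * s, size=m, replace=False)
rows, cols = np.unravel_index(idx, (n, s))

def A(X):
    """The linear measurement operator of Eq. (1.1) for entry sampling."""
    return X[rows, cols]

b = A(M)                      # the observed entries: A(M) = b
print(np.allclose(A(M), b))   # True: M satisfies the measurement constraint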
Problem description Nonlinear matrix recovery methods use features that map the columns of M to a space of higher dimension. The feature map is defined as
$$\varphi : \mathbb{R}^n \to \mathcal{F} : v \mapsto \varphi(v)\,, \qquad (1.2)$$
where $\mathcal{F}$ is a Hilbert space. If $\mathcal{F}$ is finite dimensional, we write $\mathcal{F} = \mathbb{R}^N$ where N is the dimension of the feature space, with $N \ge n$. We obtain the feature matrix $\Phi(M)$ by applying φ to each column of M,

$$\Phi(M) = \big[\varphi(m_1)\ \cdots\ \varphi(m_s)\big] \in \mathbb{R}^{N\times s}\,. \qquad (1.3)$$
The map φ is chosen using a priori knowledge of the data so that the features of the data points $\varphi(m_i)$, $i = 1, \ldots, s$, all belong to the same subspace in $\mathbb{R}^N$. The nonlinear structure in M will cause a rank deficiency in the feature matrix Φ(M), even though M may be full-rank. This is illustrated in Figure 1. If the features are infinite dimensional or N is very large, the feature map should be represented using a kernel, which is known as the kernel trick. The set $\mathcal{F}$ is then called a reproducing kernel Hilbert space. The kernel map represents the inner product between elements in the Hilbert space of features,

$$k : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R} : \quad k(x, y) = \langle \varphi(x), \varphi(y)\rangle_\mathcal{F}\,. \qquad (1.4)$$

This allows one to define the kernel matrix of the data $K(M, M) \in \mathbb{R}^{s\times s}$, with $K_{ij}(M, M) = k(m_i, m_j)$. Throughout, we assume that $r = \operatorname{rank}(\Phi(M))$ is known and smaller than $\min(N, s)$.
When $\mathcal{F} = \mathbb{R}^N$, we note that $K(M, M) = \Phi(M)^\top\Phi(M)$ and therefore $\operatorname{rank}(K(M, M)) = \operatorname{rank}(\Phi(M))$. We use the term embedding to denote a mapping to a higher-dimensional space, which may be performed using a kernel or a feature map. In [32], Ongie et al. use the monomial kernel for the completion of matrices whose columns belong to an algebraic variety (a set defined by a finite number of polynomial equations). This can notably be applied to a union of subspaces. In [17], Fan et al. use the monomial kernel and the Gaussian kernel on image inpainting problems. In Section 2, we detail why polynomial and Gaussian kernels may be used to model data which belongs to, respectively, an algebraic variety or a set of clusters.
Problem formulation For an appropriately chosen feature map, the nonlinear matrix recovery problem can be formulated as the rank minimization of the feature matrix under the measurements constraint

$$\min_{X\in\mathbb{R}^{n\times s}}\ \operatorname{rank}(\Phi(X)) \quad \text{subject to} \quad \mathcal{A}(X) = b\,. \qquad (1.5)$$

This seeks to find the matrix which fits the observations using a minimum number of independent features. As is the case for low-rank matrix recovery, minimizing the rank directly is NP-hard and should be avoided [9]. It is necessary to find a suitable approximation to the rank function.
Related work In essence, [17] and [32] apply different minimization algorithms to the Schatten p-norm of the features, which is defined by

$$\left(\sum_{i=1}^{\min(N,s)} \sigma_i(\Phi(X))^p\right)^{1/p} \quad \text{for } 0 < p \le 1\,. \qquad (1.6)$$
When p = 1, the sum of the singular values is the nuclear norm. Both [17] and [32] use a kernel representation of the features, so that the features are never computed explicitly. In [16], the authors introduce the algorithm NLMC, which applies a quasi-Newton method to minimize the Schatten p-norm. The Schatten p-norm for 0 < p ≤ 1 is nonsmooth. This has the benefit of encouraging sparsity in the singular values, but it might prevent fast convergence near a minimizer. The algorithm VMC, introduced in [32], minimizes a smooth approximation of the Schatten p-norm. It uses a kernelized version of an iterative reweighted least-squares algorithm (IRLS). The IRLS method was originally proposed in [23] and [29] for low-rank matrix recovery and rank minimization. The IRLS framework has the advantage that it generalizes seamlessly to the kernel setting. In [20] a truncated version of the Schatten norm is proposed, where only the smallest singular values are minimized,
$$\left(\sum_{i=r+1}^{\min(N,s)} \sigma_i(\Phi(X))^p\right)^{1/p} \quad \text{for } 0 < p \le 1\,, \qquad (1.7)$$
where r = rank(Φ(M )). They use the kernel trick and propose an algorithm which alternates between truncated singular value decompositions and a step of the Adam method with an additional tuning of the stepsizes. In [19], the authors propose an extension to handle outliers in the data. This is achieved by introducing a sparse matrix in the model, which absorbs the outliers. In [31], the authors build a tensor representation of the data and apply known matrix completion techniques in the tensor space. Their algorithm, LADMC, is a simple and efficient approach for which they are able to show that the sampling requirements nearly match the information theoretic lower bounds for recovery under a union of subspace model. This is remarkable as the sampling pattern in the tensor space is not random, and low-rank recovery results do not apply directly. Note that the approach in [31] is only applicable to matrix completion problems, not matrix sensing.
In [18], a new algorithm KFMC (kernelized factorization matrix completion) is proposed, which lends itself to online completion. In this setting, the columns of the matrix M are accessible as a stream and the matrix M is never stored in its entirety. They also develop a variant algorithm to deal with out of samples extensions. That is, how to complete a new column without recomputing the model. The offline formulation applies the kernel trick to
$$\begin{aligned} \min_{X, D, Z}\quad & \|\Phi(X) - \Phi(D)Z\|_F^2 + \alpha\,\|\Phi(D)\|_F^2 + \beta\,\|Z\|_F^2 \\ \text{subject to}\quad & X_{ij} = M_{ij}, \ (i, j) \in \Omega, \\ & D \in \mathbb{R}^{n\times r},\ Z \in \mathbb{R}^{r\times s},\ X \in \mathbb{R}^{n\times s}. \end{aligned} \qquad (1.8)$$
Contribution and outline of the paper In Section 2 we describe the approach taken to recover high-rank matrices. It consists in using a feature map (or kernel) that exploits the nonlinear structure present in the matrix. This is applied to data which follows algebraic variety models or grouped in clusters. For these, we respectively use the monomial kernel and the Gaussian kernel. We demonstrate that the Gaussian kernel can be used to perform clustering with missing data, which expands the use cases of nonlinear matrix recovery. In Section 3, we propose to use a new formulation for nonlinear matrix recovery. We use the feature map to write the recovery problem as a constrained nonconvex optimization problem on the Grassmann manifold. This extends the residual proposed in [15] in the context of low-rank matrix completion to the nonlinear case.
We propose to use Riemannian optimization methods to solve the recovery problem, which is new in the context of nonlinear matrix recovery. Riemannian optimization, as described in Section 4, provides a framework to design algorithms for problems with smooth constraints. This allows to seamlessly choose between standardized first-and second-order methods. The use of second-order methods allows to recover high-rank matrices up to high accuracy if desired.
Section 5 presents an alternating minimization algorithm to solve the recovery problem. First-and second-order variants of the alternating minimization are discussed. We prove global convergence of the algorithm to first-order stationary points in Section 6 and give a global complexity rate to achieve an arbitrary accuracy on the gradient norm from an arbitrary initial guess. In Section 7, we also show convergence of the sequence of iterates to a unique limit point using the Kurdyka-Lojasiewicz property. Our alternating minimization method is a similar approach to the method proposed in [20]. We provide extensive convergence analysis, which was not done in [20]. Section 8 summarizes the applications and algorithms covered in this paper with a framework to solve nonlinear matrix completion.
We conclude with an extensive set of numerical experiments that compare the performances of the optimization approaches and the quality of the solutions that can be obtained (Section 9). We discuss the influence of the complexity of the data and the role of model parameters on the recovery. Moreover, we showcase that our approach is very efficient at clustering data with missing information.
Notations Throughout the paper we use a notation consistent with [2] for the derivative of a function f defined on a Riemannian manifold. The unconstrained gradient of a function f is written $\nabla f$ when the domain of f is extended to an embedding Euclidean space. Conversely, we use $\operatorname{grad} f$ for the Riemannian gradient of f defined over a Riemannian manifold. For matrices $A, B \in \mathbb{R}^{n\times s}$, $\langle A, B\rangle = \operatorname{trace}(A^\top B)$ is the canonical inner product, $\operatorname{range}(A)$ is the column space of A, $\operatorname{null}(A)$ is the null space of A. The identity matrix of size n is denoted by $I_n$ and $\operatorname{Id}$ is the identity operator.
The feature map
As mentioned, our approach uses an embedding of the original matrix in a space of features, in the spirit of [17] [32]. Through the case studies below (2.1, 2.2 and 2.3), we describe the embeddings that we use and some of the data structures to which they apply.
A1. The feature map ϕ is chosen such that Φ(M ) is low rank. In addition, Φ(X) should be high rank if X does not exhibit the same geometrical structure as M .
The goal is to find an embedding that reveals the nonlinear relation between the points m i , the columns of M . In [32] the authors use the polynomial features for data sets represented by algebraic varieties.
Case study 2.1 (Algebraic varieties [11]). Let R[x] be the set of real valued polynomials over R n . A real (affine) algebraic variety is defined as the zero set of a system of polynomials P ⊂ R[x]: V (P ) = {x ∈ R n : p(x) = 0 for all p ∈ P }.
(2.1)
We say that the matrix M follows an algebraic variety model if every column of M belongs to the same algebraic variety.
Let N (n, d) = n + d n , (2.2) which reads n + d choose n, the number of monomials of degree d or less that can be formed with n variables. The monomial features ϕ d for some degree d are defined as
ϕ d : R n → R N (n,d) : ϕ d (x) = x α 1 x α 2 . . . x α N (n,d) (2.3)
where, for i = 1, 2, . . . , N (n, d), the exponent α i = (α i 1 , α i 2 , . . . , α i n ) is a multi-index of nonnegative integers; so that
x α i := x α i 1 1 x α i 2 2 .
. . x α i n n and α i 1 + α i 2 + · · · + α i n ≤ d. The dimension of the feature space N (n, d) increases exponentially in d. Therefore, a kernel implementation is usually used in practice for moderate and large dimensions, or more precisely, whenever s ≤ N (n, d). The monomial kernel of degree d is defined for any X, Y ∈ R n×s as
$$K_d(X, Y) = (X^\top Y + c\,\mathbf{1}_{s\times s})^{\odot d}\,, \qquad (2.4)$$

where the value $c \in \mathbb{R}$ is a parameter of the kernel, $\mathbf{1}_{s\times s}$ is a square matrix of size s full of ones, and $\odot d$ is an entry-wise exponent. If the equations describing the variety are known to be homogeneous, one can set c = 0. Note that the monomial kernel in (2.4) is not exactly the kernel associated with the monomial features in (2.3).
Instead, $K_d(X, X) = \tilde\Phi_d(X)^\top \tilde\Phi_d(X)$ for a map of monomials $\tilde\Phi_d$ that has non-unitary coefficients given by the multinomial theorem. For $x, y \in \mathbb{R}^n$, we have
$$k_d(x, y) = (x^\top y + c)^d = (x_1 y_1 + \cdots + x_n y_n + c)^d \qquad (2.5)$$
$$= \sum_{\alpha^i_1 + \alpha^i_2 + \cdots + \alpha^i_{n+1} = d} \frac{d!}{\alpha^i_1!\,\alpha^i_2!\cdots\alpha^i_{n+1}!}\; (x_1 y_1)^{\alpha^i_1}\cdots(x_n y_n)^{\alpha^i_n}\; c^{\alpha^i_{n+1}} \qquad (2.6)$$
$$= \sum_{\alpha^i_1 + \alpha^i_2 + \cdots + \alpha^i_{n+1} = d} \left(\frac{\sqrt{d!}\; x_1^{\alpha^i_1}\cdots x_n^{\alpha^i_n}\; \sqrt{c}^{\,\alpha^i_{n+1}}}{\sqrt{\alpha^i_1!\,\alpha^i_2!\cdots\alpha^i_{n+1}!}}\right) \left(\frac{\sqrt{d!}\; y_1^{\alpha^i_1}\cdots y_n^{\alpha^i_n}\; \sqrt{c}^{\,\alpha^i_{n+1}}}{\sqrt{\alpha^i_1!\,\alpha^i_2!\cdots\alpha^i_{n+1}!}}\right). \qquad (2.7)$$

It follows that $k_d(x, y) = \langle \tilde\varphi_d(x), \tilde\varphi_d(y)\rangle$ for a map $\tilde\varphi_d : \mathbb{R}^n \to \mathbb{R}^{N(n,d)}$ such that the entries of $\tilde\varphi_d(x)$ are of the form

$$\sqrt{d!}\; x_1^{\alpha^i_1} x_2^{\alpha^i_2}\cdots x_n^{\alpha^i_n}\; \sqrt{c}^{\,\alpha^i_{n+1}} \Big/ \sqrt{\alpha^i_1!\cdots\alpha^i_n!\,\alpha^i_{n+1}!} \qquad (2.8)$$

for some natural numbers $\alpha^i_1 + \alpha^i_2 + \cdots + \alpha^i_{n+1} = d$.
The meaningful consequence is that the kernel $K_d$ corresponds to features $\tilde\varphi_d$ which form a basis of the set of polynomials in n variables of degree at most d. Therefore, $\Phi_d(X)$ and $\tilde\Phi_d(X)$ have the same rank as $K_d(X, X)$, by virtue of $K_d(X, X) = \tilde\Phi_d(X)^\top\tilde\Phi_d(X)$.
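For intuition on the size of these objects, a small Python sketch (ours; the enumeration scheme below is one of several equivalent choices, not the ordering used in the paper) computes the dimension N(n, d) of (2.2) and an explicit monomial feature vector in the spirit of (2.3):

from itertools import combinations_with_replacement
from math import comb
import numpy as np

def N_monomials(n, d):
    """Dimension N(n, d) of Eq. (2.2): number of monomials of degree <= d."""
    return comb(n + d, n)

def monomial_features(x, d):
    """A monomial feature map in the spirit of Eq. (2.3): monomials of
    degree <= d are products of exactly d factors drawn (with repetition)
    from (1, x_1, ..., x_n), hence the appended constant entry."""
    z = np.concatenate(([1.0], x))
    return np.array([np.prod([z[i] for i in c])
                     for c in combinations_with_replacement(range(len(z)), d)])

x = np.array([2.0, 3.0])
print(N_monomials(2, 2), len(monomial_features(x, 2)))  # 6 6
print(monomial_features(x, 2))  # [1 2 3 4 6 9]: 1, x1, x2, x1^2, x1*x2, x2^2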
In [32], the authors argue why using the monomial embedding is appropriate when the points $m_i$ belong to an algebraic variety. Suppose the variety $V(P) \subset \mathbb{R}^n$ is defined by the set of polynomials $P = \{p_1, \ldots, p_k\}$ where each $p_i$ is at most of degree d. Then the columns of X belong to the variety V(P) if and only if there exists $C \in \mathbb{R}^{N\times k}$ such that $\Phi_d(X)^\top C = 0$, where the columns of C define the coefficients of the polynomials $p_i$ in the monomial basis. This implies that $\operatorname{rank}(\Phi_d(X)) \le \min(N - k, s)$. This justifies that $\Phi_d(X)$ is rank deficient when there are sufficiently many data points such that $s \ge N - k$, and X follows an algebraic variety model. The second case study below presents a union of subspaces as a particular type of algebraic variety.
Case study 2.2 (Union of subspaces). Given two affine subspaces $S_1, S_2 \subset \mathbb{R}^n$ of dimension $r_1$ and $r_2$ respectively, we can write $S_1 = \{x : q_i(x) = 0 \text{ for } i = 1, \ldots, n - r_1\}$ and $S_2 = \{x : p_j(x) = 0 \text{ for } j = 1, \ldots, n - r_2\}$ where the $q_i$ and $p_j$ are affine functions. The union $S_1 \cup S_2$ can be expressed as the set where all possible products $q_i(x)\,p_j(x)$ vanish. Therefore, $S_1 \cup S_2$ is the solution of a system of $(n - r_1)(n - r_2)$ quadratic polynomial equations. Similarly, a union of k affine subspaces of dimensions $r_1, \ldots, r_k$ is a variety described by a system of $\prod_{i=1}^k (n - r_i)$ polynomial equations of degree k.

Proposition 2.1 (Rank of monomial features [32]). If the columns of a matrix $X \in \mathbb{R}^{n\times s}$ belong to a union of p affine subspaces of dimension at most $\bar r$, then for any degree $d \ge 1$, the matrix $\Phi_d(X) \in \mathbb{R}^{N(n,d)\times s}$ of monomial features, with N(n, d) the dimension of the feature space defined in equation (2.2), satisfies
$$\operatorname{rank}\big(\Phi_d(X)\big) \le p\,\binom{\bar r + d}{d}\,. \qquad (2.9)$$
In practice, choosing the degree d of the monomial kernel is a tricky task. In Section 9, we discuss the practical choice of this degree and how it impacts the rank of the feature matrix and the possibility to recover M. Previous works using the monomial kernel to recover high-rank matrices all restricted themselves to degrees two or three [17,32]. Using a polynomial embedding of large degree would seem helpful to capture all the nonlinearity in some data sets. Unfortunately, increasing the degree will grow the dimensions of the optimization problem exponentially. Indeed, the dimension N(n, d) blows up with d for even moderate values of n, and the number of data points required is at least N(n, d) - k, where k is the number of polynomial equations that define the variety.
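The bound of Proposition 2.1 is easy to check numerically. The sketch below (our illustration, with arbitrary sizes and random linear subspaces) verifies it through the kernel matrix, whose rank equals that of $\Phi_d(X)$:

import numpy as np
from math import comb

rng = np.random.default_rng(1)
n, s, d, c = 6, 200, 2, 1.0

# Data on a union of p = 2 linear subspaces of dimension rbar = 2 in R^6.
p, rbar = 2, 2
bases = [np.linalg.qr(rng.standard_normal((n, rbar)))[0] for _ in range(p)]
X = np.hstack([B @ rng.standard_normal((rbar, s // p)) for B in bases])

K = (X.T @ X + c) ** d              # monomial kernel of Eq. (2.4)
svals = np.linalg.svd(K, compute_uv=False)
num_rank = int((svals > 1e-8 * svals[0]).sum())
print("numerical rank of K:", num_rank,
      " <= bound p*C(rbar+d, d) =", p * comb(rbar + d, d))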
We now define the Gaussian kernel which will be used when the columns of the matrix M are grouped in several clusters.
Case study 2.3 (Clusters). For $X, Y \in \mathbb{R}^{n\times s}$, the entry (i, j) of the Gaussian kernel $K^G : \mathbb{R}^{n\times s} \times \mathbb{R}^{n\times s} \to \mathbb{R}^{s\times s}$ is defined as

$$K^G_{ij}(X, Y) = \exp\!\left(-\frac{\|x_i - y_j\|_2^2}{2\sigma^2}\right), \qquad (2.10)$$
where σ > 0 is the width of the kernel. The Gaussian kernel acts as a proximity measure. For two columns of X, labelled $x_i$ and $x_j$, we observe that $x_i$ being close to $x_j$ gives $K^G_{ij}(X, X) \approx 1$, and if $x_i$ is far from $x_j$ then $K^G_{ij}(X, X) \approx 0$. Therefore, the rank of the Gaussian kernel approximately coincides with the number of clusters in X. More precisely, one can show that the singular values whose index exceeds the number of clusters decay rapidly [34]. The value of σ should be chosen appropriately depending on the size of the clusters.
In Figure 2, we present a small data set of 100 data points divided in two clusters in R 2 with the singular values of the Gaussian kernel (in log-scale). We see that the two largest singular values are much greater than the third one, and that the following singular values decrease at an approximately exponential rate. Therefore, the Gaussian kernel is near a low rank matrix for clustered data, which will allow us to complete such data sets from partial measurements. In [16], the Gaussian kernel was also used effectively on image inpainting and denoising problems.
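The rapid singular value decay is easy to reproduce. The following sketch (ours, with made-up cluster parameters) mirrors the experiment of Figure 2 on two synthetic clusters:

import numpy as np

rng = np.random.default_rng(2)
n, per, sigma = 2, 50, 0.5

# Two well-separated clusters of 50 points each in R^2 (cf. Figure 2).
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
X = np.hstack([c[:, None] + 0.3 * rng.standard_normal((n, per)) for c in centers])

# Gaussian kernel of Eq. (2.10) from pairwise squared distances.
sq = (X**2).sum(axis=0)
D2 = sq[:, None] + sq[None, :] - 2.0 * X.T @ X
K = np.exp(-D2 / (2.0 * sigma**2))

svals = np.linalg.svd(K, compute_uv=False)
print("sigma_1..5:", np.round(svals[:5], 4))
# The first two singular values dominate and the rest decay roughly
# exponentially, so K is numerically close to rank 2 = number of clusters.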
Nonlinear matrix recovery as an optimization problem
Noiseless measurements case In the noiseless measurements case, we would like to minimize the rank of the feature matrix, as in Equation (1.5). This is unfortunately known to be intractable, even in the case where M is low-rank [22]. We have to resort to approximations of this problem. The second difficulty is the nonlinearity of Φ.
As a nonconvex approximation to (1.5), we consider the formulation in [15] for low-rank problems, and extend it to the nonlinear case. Assuming that Φ(M ) has rank r leads to the following formulation
$$\min_{X,\,\mathcal{U}}\ f(X, \mathcal{U}) := \|\Phi(X) - P_\mathcal{U}\Phi(X)\|_F^2 \quad \text{subject to} \quad \mathcal{U} \in \operatorname{Grass}(N, r)\,, \ \ \mathcal{A}(X) = b\,, \qquad (3.1)$$
where $\operatorname{Grass}(N, r)$ is the Grassmann manifold, the set of all subspaces of dimension r in $\mathbb{R}^N$, $P_\mathcal{U}$ is the orthogonal projection onto the subspace $\mathcal{U}$, and $\|\cdot\|_F$ denotes the Frobenius norm. The linear measurements, $\mathcal{A}(X) = b$, are defined in equation (1.1). Given $U \in \mathbb{R}^{N\times r}$ such that $\operatorname{range}(U) = \mathcal{U}$ and $U^\top U = I_r$, the projection is given by $P_\mathcal{U} = UU^\top$. In (3.1), the objective function is expected to be nonconvex but smooth for practical choices of φ, such as case studies 2.1 and 2.3. If the variable $\mathcal{U}$ is additionally constrained to be the range of the r leading singular vectors of Φ(X), the cost function becomes $\sum_{i=r+1}^{\min(N,s)} \sigma_i(\Phi(X))^2$. The advantage of the formulation in (3.1) is that it is straightforward to express it as a finite sum of s terms over all the data points. This allows the use of stochastic sub-sampling algorithms that scale better to matrices with many columns (large s).
The advantage of using the Grassmann manifold, which is a quotient space, instead of the Stiefel manifold of orthogonal matrices $\operatorname{St}(N, r) := \{U \in \mathbb{R}^{N\times r} : U^\top U = I_r\}$, is that, due to the invariance of the cost function with respect to the matrix that represents the subspace $\mathcal{U}$, local optimizers cannot possibly be isolated in a formulation over $\operatorname{St}(N, r)$. Therefore, the fast local convergence rates of some second-order algorithms might not apply on $\operatorname{St}(N, r)$, while they would apply on the quotient manifold.
Consider $U_\perp$, a basis of $\mathcal{U}^\perp$, the orthogonal complement of $\mathcal{U}$ in $\mathbb{R}^N$. The variable $\mathcal{U}^\perp \in \operatorname{Grass}(N, N - r)$ has a nice interpretation since $U_\perp$ spans $\operatorname{null}(\Phi(X)^\top)$ when $f(X, \mathcal{U}) = 0$. In the case of algebraic varieties (case study 2.1), $U_\perp$ gives the coefficients of the polynomials defining the variety in the basis given by Φ. Recovering the equations of the variety is of interest in some applications [8,24].
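A direct way to evaluate the cost of (3.1), given an orthonormal representative U of the subspace, is sketched below (our illustration; the synthetic features are built to have exact rank r):

import numpy as np

def cost(PhiX, U):
    """f(X, U) of Eq. (3.1): squared Frobenius distance from the columns
    of Phi(X) to the subspace spanned by the orthonormal N x r matrix U."""
    R = PhiX - U @ (U.T @ PhiX)      # (I - P_U) Phi(X), without forming U U^T
    return np.sum(R**2)

rng = np.random.default_rng(3)
N, s, r = 20, 30, 4
B = np.linalg.qr(rng.standard_normal((N, r)))[0]
PhiX = B @ rng.standard_normal((r, s))           # features of exact rank r

U = np.linalg.qr(rng.standard_normal((N, r)))[0]
print("random  U:", cost(PhiX, U))               # strictly positive
print("U = basis:", cost(PhiX, B))               # zero up to round-off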
Noisy measurements case When the measurements are known to be noisy, i.e. A(M ) = b+η for some noise η ∈ R m , the measurement constraint can be lifted into the cost function as a penalty. This gives
$$\min_{X,\,\mathcal{U}}\ f_\lambda(X, \mathcal{U}) := \|\Phi(X) - P_\mathcal{U}\Phi(X)\|_F^2 + \lambda\,\|\mathcal{A}(X) - b\|_2^2 \quad \text{subject to} \quad \mathcal{U} \in \operatorname{Grass}(N, r)\,, \qquad (3.2)$$
where the parameter λ > 0 has to be adjusted. This allows to satisfy the noisy measurements approximately.
Kernel representation of the features
When working with a kernel instead of a feature map, as shown in (1.4), we want to find a cost function equivalent to that of (3.1) which uses the kernel instead of the feature map. We find that they are related in the following way.
Proposition 3.1. Given a feature map Φ and the associated kernel $K : (X, X) \mapsto \Phi(X)^\top\Phi(X)$, for $\mathcal{W} \in \operatorname{Grass}(s, r)$ we have

$$\|\Phi(X)^\top - P_\mathcal{W}\Phi(X)^\top\|_F^2 = \operatorname{trace}\big(K(X, X) - P_\mathcal{W}K(X, X)\big)\,. \qquad (3.3)$$
Proof. We write $P_{\mathcal{W}^\perp} = I_{s\times s} - P_\mathcal{W}$ and find

$$\operatorname{trace}\big(P_{\mathcal{W}^\perp}K(X, X)\big) = \operatorname{trace}\big(P_{\mathcal{W}^\perp}\Phi(X)^\top\Phi(X)\big) = \operatorname{trace}\big(\Phi(X)\,P_{\mathcal{W}^\perp}\Phi(X)^\top\big) = \operatorname{trace}\big(\Phi(X)\,(P_{\mathcal{W}^\perp})^\top P_{\mathcal{W}^\perp}\Phi(X)^\top\big) \qquad (3.4)$$
$$= \operatorname{trace}\big((P_{\mathcal{W}^\perp}\Phi(X)^\top)^\top\, P_{\mathcal{W}^\perp}\Phi(X)^\top\big) = \|P_{\mathcal{W}^\perp}\Phi(X)^\top\|_F^2\,.$$
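Proposition 3.1 is also easy to confirm numerically. The short check below (ours, with arbitrary random data) compares both sides of (3.3):

import numpy as np

rng = np.random.default_rng(4)
N, s, r = 15, 10, 3
Phi = rng.standard_normal((N, s))                   # stand-in for Phi(X)
W = np.linalg.qr(rng.standard_normal((s, r)))[0]    # basis of W in R^s
P = W @ W.T                                         # projection P_W

lhs = np.linalg.norm(Phi.T - P @ Phi.T, 'fro')**2   # acts on the rows of Phi
K = Phi.T @ Phi                                     # kernel matrix K(X, X)
rhs = np.trace(K - P @ K)
print(np.isclose(lhs, rhs))                         # True: Eq. (3.3) holds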
Using the kernel formula (3.3) corresponds to finding a subspace W of dimension r in the row space of Φ(X). When a kernel is used for the embedding, the following optimization problem is solved,
$$\min_{\mathcal{W},\,X}\ f(X, \mathcal{W}) := \operatorname{trace}\big(K(X, X) - P_\mathcal{W}K(X, X)\big) \quad \text{subject to} \quad \mathcal{W} \in \operatorname{Grass}(s, r)\,, \ \ \mathcal{A}(X) = b\,. \qquad (3.5)$$
Replacing the features Φ by the corresponding kernel becomes beneficial when the dimension N of the features is larger than the number of points s and when a convenient formula is available to compute the kernel and its derivatives. For example, in the case of clusters (case study 2.3), the features exist implicitly in an infinite dimensional space and we use the Gaussian kernel to represent them.
In the upcoming sections, we usually describe the algorithms and their properties using the notation of problem (3.1) with a feature map Φ and cost function f (X, U). Unless specified otherwise, the developments also apply to problem (3.5) and the use of a kernel.
Riemannian optimization algorithms
Riemannian optimization methods provide an elegant and efficient way to solve optimization problems with smooth nonlinear constraints. The field of Riemannian optimization has rapidly developed over the past two decades. In particular, Riemannian optimization methods have proved very efficient in low-rank matrix completion [5,39]. In this section, we investigate the use of Riemannian optimization methods to solve (3.1) or (3.5). For an overview of optimization algorithms on Riemannian manifolds, see [2,4]. In order to formally express (3.1) as a Riemannian optimization problem, we define a notation for the affine subspace that represents the measurements on the matrix M ,
$$\mathcal{L}_{A,b} = \{X \in \mathbb{R}^{n\times s} : \mathcal{A}(X) = b\}. \tag{4.1}$$
We form the product manifold
$$\mathcal{M} = \mathcal{L}_{A,b} \times \mathrm{Grass}(N, r), \tag{4.2}$$
so that Problem (3.1) can be viewed as the unconstrained minimization of a smooth cost function defined on the manifold $\mathcal{M}$,
$$\min_{(X,\,\mathcal{U})} \; \|\Phi(X) - P_{\mathcal{U}}\,\Phi(X)\|_F^2 \quad \text{s.t.} \quad (X, \mathcal{U}) \in \mathcal{M}. \tag{4.3}$$
We introduce the notation $z = (X, \mathcal{U}) \in \mathcal{M}$ to denote both variables that appear in the optimization problem. Riemannian optimization algorithms are feasible methods that iteratively exploit the local geometry of the feasible set. Analogously to unconstrained optimization in Euclidean spaces, each iteration of a Riemannian optimization algorithm uses derivatives to build a model that locally approximates the cost function. This model is then fully or approximately minimized. Most commonly, the model uses the gradient and possibly the Hessian or an approximation of it, which yields respectively a first- or second-order method. Riemannian optimization follows these principles, except that the model is defined on a local linearization of the manifold, namely, the tangent space. The Riemannian gradient, written $\operatorname{grad} f$, is a vector belonging to the tangent space of the manifold. The Riemannian Hessian, written $\operatorname{Hess} f$, is a symmetric operator on that tangent space. In Appendix A, we show how to compute the Euclidean gradient and Hessian for the cost function of (3.5) in the case of the kernels presented in Section 2 (monomial kernel, Gaussian kernel). Then, to find their Riemannian counterparts, $\nabla f$ and $\nabla^2 f$ are projected onto the tangent space of $\mathcal{M}$ using the tools defined later in this section. When exploring the tangent space, it is necessary to have a tool that allows one to travel on the manifold in a direction prescribed by a tangent vector. This operation is called a retraction [2, Def. 4.1.1].
Definition 4.1 (Retraction).
A retraction on a manifold M is a smooth mapping Retr from the tangent bundle TM to M with the following properties. Let Retr z : T z M → M denote the restriction of Retr to T z M.
(i) Retr z (0 z ) = z, where 0 z is the zero vector in T z M;
(ii) the differential of Retr z at 0 z , DRetr z (0 z ), is the identity map.
The retraction curves t → Retr z (tη) agree up to first order with geodesics passing through z with velocity η, around t = 0.
Let us detail the tools necessary to use Riemannian optimization methods on the two manifolds that compose our search space M. Note that the Cartesian product of two Riemannian manifolds is a Riemannian manifold. The geometry of L A,b is rather trivial because the manifold is affine. It must nonetheless be implemented so we will describe how to handle this constraint in a Riemannian way. More generally, this gives a straightforward way to deal with affine equality constraints.
Measurement subspace At any point X ∈ L A,b , the tangent space is the null space of A,
$$T_X\mathcal{L}_{A,b} = \mathrm{null}(\mathcal{A}) = \{\Delta \in \mathbb{R}^{n\times s} : \mathcal{A}(\Delta) = 0\}. \tag{4.4}$$
Since the tangent space does not depend on X, we write $T\mathcal{L}_{A,b}$. This tangent space inherits an inner product from the embedding space $\mathbb{R}^{n\times s}$,
$$\langle \Delta_1, \Delta_2\rangle = \operatorname{trace}(\Delta_1^\top \Delta_2) \quad \text{for all } \Delta_1, \Delta_2 \in T\mathcal{L}_{A,b}. \tag{4.5}$$
The Riemannian gradient is the orthogonal projection of the Euclidean gradient onto the tangent space $T\mathcal{L}_{A,b}$. From the fundamental theorem of linear algebra, $\mathrm{null}(\mathcal{A}) = \mathrm{range}(\mathcal{A}^*)^\perp$. Therefore we can express $P_{\mathrm{null}(\mathcal{A})} = \mathrm{Id} - P_{\mathrm{range}(\mathcal{A}^*)}$. The application $\mathcal{A}$ is represented by a flat matrix $A \in \mathbb{R}^{m\times ns}$ such that $\mathcal{A}(X) = AX(:) \in \mathbb{R}^m$, where $X(:)$ is a vector of length ns made of the columns of X taken from left to right and stacked on top of each other. Visually this gives,
$$\mathcal{A}(X) = \begin{bmatrix} \langle A_1, X\rangle \\ \langle A_2, X\rangle \\ \vdots \\ \langle A_m, X\rangle \end{bmatrix} = \begin{bmatrix} A_1(:)^\top \\ A_2(:)^\top \\ \vdots \\ A_m(:)^\top \end{bmatrix} X(:) =: AX(:). \tag{4.6}$$
The tall matrix $A^\top$ represents the linear application $\mathcal{A}^*$. Viewing $A^\top$ as the matrix of an overdetermined system of linear equations convinces us that $P_{\mathrm{range}(A^\top)} = A^\top(AA^\top)^{-1}A$. Equivalently, if $Q \in \mathbb{R}^{ns\times m}$ is an orthogonal basis for $\mathrm{range}(A^\top)$ (which can be obtained by a reduced QR factorization of $A^\top$), we can apply $P_{\mathrm{range}(A^\top)} = QQ^\top$. The projection is given by
$$P_{\mathrm{null}(A)} = I_{ns} - A^\top(AA^\top)^{-1}A = I_{ns} - QQ^\top. \tag{4.7}$$
Therefore, in terms of the vectorized representation,
$$P_{T\mathcal{L}_{A,b}} : \mathbb{R}^{n\times s} \to T\mathcal{L}_{A,b} : \quad P_{T\mathcal{L}_{A,b}}(\Delta)(:) = (I_{ns} - QQ^\top)\,\Delta(:), \tag{4.8}$$
where $Q \in \mathbb{R}^{ns\times m}$ is an orthogonal basis for $\mathrm{range}(A^\top)$. In the case of matrix completion, the operator $\mathcal{A}$ selects the known entries of M. The description of the feasible subspace is simplified. We write $\mathcal{L}_{\Omega,b} = \{X : X_{ij} = M_{ij},\ ij \in \Omega\}$ to make explicit that the measurements correspond to matrix completion. The tangent space is $T\mathcal{L}_{\Omega,b} = \{\Delta : \Delta_{ij} = 0 \text{ for } ij \in \Omega\}$ and the projection onto $T\mathcal{L}_{\Omega,b}$ simply amounts to setting the entries in Ω to zero,
$$P_{T\mathcal{L}_{\Omega,b}}(\Delta)_{ij} = \begin{cases} 0 & \text{for } ij \in \Omega \\ \Delta_{ij} & \text{for } ij \notin \Omega. \end{cases}$$
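As a minimal sketch of the projection (4.8), the following NumPy snippet builds Q from a reduced QR factorization of $A^\top$ and applies $(I_{ns} - QQ^\top)$ to a vectorized direction; dimensions and random data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, s, m = 4, 6, 5
A = rng.standard_normal((m, n * s))      # flat measurement matrix

Q, _ = np.linalg.qr(A.T)                 # Q in R^{ns x m}, basis of range(A^T)

def project_tangent(Delta):
    """Orthogonal projection of Delta (n x s) onto null(A)."""
    v = Delta.reshape(-1, order="F")     # Delta(:) stacks the columns
    v = v - Q @ (Q.T @ v)                # (I - QQ^T) Delta(:)
    return v.reshape(n, s, order="F")

Delta = rng.standard_normal((n, s))
P_Delta = project_tangent(Delta)
print(np.linalg.norm(A @ P_Delta.reshape(-1, order="F")))  # ~0: lies in null(A)
```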
The natural retraction on L A,b for X ∈ L A,b and ∆ ∈ TL A,b is given by
$$\mathrm{Retr}_X : T\mathcal{L}_{A,b} \to \mathcal{L}_{A,b} : \quad \mathrm{Retr}_X(\Delta) = X + \Delta, \tag{4.9}$$
because the manifold is flat. These tools are also needed for the Grassmann manifold and we follow the representation given in [5].
Grassmann manifold The Grassmann manifold, written Grass(N, r), is the set of all linear subspaces of dimension r in $\mathbb{R}^N$. A point $\mathcal{U} \in \mathrm{Grass}(N, r)$ is represented by a full-rank matrix $U \in \mathbb{R}^{N\times r}$ such that $\mathrm{range}(U) = \mathcal{U}$. For any orthogonal matrix $Y \in \mathbb{R}^{r\times r}$, the matrix $UY$ is also a valid representation of $\mathcal{U}$, since $\mathrm{range}(UY) = \mathrm{range}(U)$. The set of matrices with orthonormal columns is defined as the Stiefel manifold, $\mathrm{St}(N, r) = \{U \in \mathbb{R}^{N\times r} : U^\top U = I_r\}$, and the orthogonal group is defined as $\mathrm{O}(r) = \{Y \in \mathbb{R}^{r\times r} : Y^\top Y = I_r\}$. The orthogonal group induces an equivalence relation on the Stiefel manifold, where any two matrices are equivalent if they have the same column space. In this regard, the Grassmann manifold is a quotient space
$$\mathrm{Grass}(N, r) = \mathrm{St}(N, r)/\mathrm{O}(r), \tag{4.10}$$
and each equivalence class consists of all matrices with the same span. The tangent space to the Stiefel manifold at $U \in \mathrm{St}(N, r)$ has the form
$$T_U\mathrm{St}(N, r) = \big\{UZ + U_\perp B : Z \in \mathrm{Skew}(r),\ B \in \mathbb{R}^{(N-r)\times r}\big\}, \tag{4.11}$$
where Skew(r) is the set of skew-symmetric matrices of size r [4, Section 7.3]. The equivalence class of $U \in \mathrm{St}(N, r)$ (seen as a submanifold of St(N, r)) has a tangent space at U, which is called the vertical space
$$\mathcal{V}_U\mathrm{St}(N, r) = \{UZ : Z \in \mathrm{Skew}(r)\} \subseteq T_U\mathrm{St}(N, r).$$
The orthogonal complement of $\mathcal{V}_U\mathrm{St}(N, r)$ in $T_U\mathrm{St}(N, r)$ is called the horizontal space and is given by
$$\mathcal{H}_U\mathrm{St}(N, r) = \big\{U_\perp B : B \in \mathbb{R}^{(N-r)\times r}\big\} = \mathrm{range}(U_\perp) \subseteq T_U\mathrm{St}(N, r). \tag{4.12}$$
As is common in differential geometry, the tangent space to Grass(N, r) at $\mathcal{U}$ is represented by the horizontal space $\mathcal{H}_U\mathrm{St}(N, r)$, that is, $\mathcal{H}_U\mathrm{St}(N, r) \simeq T_{\mathcal{U}}\mathrm{Grass}(N, r)$. Any tangent vector $\mathcal{H}_{\mathcal{U}} \in T_{\mathcal{U}}\mathrm{Grass}(N, r)$ is represented by a horizontal vector $H_U \in \mathcal{H}_U\mathrm{St}(N, r)$ called the horizontal lift of $\mathcal{H}_{\mathcal{U}}$ at U.
A thorough treatment of quotient manifolds, such as the Grassmann, and their usage in optimization can be found in [2,4]. The projection onto the horizontal space is given by
$$\mathrm{Proj}_{\mathcal{H}_U\mathrm{St}(N,r)} : T_U\mathrm{St}(N, r) \to \mathcal{H}_U\mathrm{St}(N, r) : \quad H \mapsto (\mathrm{Id} - UU^\top)H. \tag{4.13}$$
The horizontal space is equipped with the usual inner product
$$\langle H_1, H_2\rangle_U = \operatorname{trace}(H_1^\top H_2) \quad \text{for all } H_1, H_2 \in \mathcal{H}_U\mathrm{St}(N, r). \tag{4.14}$$
The norm of a tangent vector to the Grassmann manifold is given by the norm of its horizontal lift. Hence we understand the notation $\|\mathcal{H}_{\mathcal{U}}\|_F$ for $\mathcal{H}_{\mathcal{U}} \in T_{\mathcal{U}}\mathrm{Grass}(N, r)$ as $\|\mathcal{H}_{\mathcal{U}}\|_F = \|H_U\|_F$, where $H_U$ is the horizontal lift of $\mathcal{H}_{\mathcal{U}}$.
Let us call qf the mapping that sends a matrix to the Q factor of its (reduced) QR decomposition, with $Q \in \mathrm{St}(N, r)$ and R upper triangular with positive diagonal entries. To move away from $\mathcal{U} \in \mathrm{Grass}(N, r)$ in the direction $H \in \mathcal{H}_U\mathrm{St}(N, r)$, we use the following retraction
$$\mathrm{Retr}_{\mathcal{U}} : \mathcal{H}_U\mathrm{St}(N, r) \to \mathrm{Grass}(N, r) : \quad H \mapsto \mathrm{range}\big(\mathrm{qf}(U + H)\big). \tag{4.15}$$
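As a minimal sketch of the retraction (4.15), the following NumPy snippet implements qf and checks that retracting along a horizontal direction produces an orthonormal representative; the sizes are illustrative.

```python
import numpy as np

def qf(Y):
    """Q factor of the reduced QR decomposition, with positive diagonal R."""
    Q, R = np.linalg.qr(Y)
    signs = np.sign(np.diag(R))
    signs[signs == 0] = 1.0
    return Q * signs                      # flip columns so diag(R) > 0

def grassmann_retract(U, H):
    """Retract the horizontal vector H at the subspace spanned by U."""
    return qf(U + H)

rng = np.random.default_rng(2)
N, r = 10, 3
U = qf(rng.standard_normal((N, r)))
H = rng.standard_normal((N, r))
H = H - U @ (U.T @ H)                     # project onto the horizontal space (4.13)
V = grassmann_retract(U, 0.1 * H)
print(np.linalg.norm(V.T @ V - np.eye(r)))  # ~0: orthonormal representative
```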
We are now in a position to present the Riemannian trust-region algorithm [2,Ch.7]. This is an extension of the classical trust-region methods [10] to Riemannian manifolds.
Riemannian trust-region (RTR) At each iterate, RTR builds a local model of the function. The method sequentially minimizes this model under a ball constraint that prevents undesirably large steps where the model does not accurately represent the function. The trust-region subproblem takes the following form around z k ∈ M
$$\min_{\eta \in T_{z_k}\mathcal{M},\ \|\eta\|_{z_k} \le \Delta_k} \hat m_{z_k}(\eta) := f(z_k) + \langle \eta, \operatorname{grad} f(z_k)\rangle_{z_k} + \tfrac{1}{2}\langle \eta, H_k[\eta]\rangle_{z_k}, \tag{4.16}$$
where $H_k : T_{z_k}\mathcal{M} \to T_{z_k}\mathcal{M}$ is a symmetric operator on $T_{z_k}\mathcal{M}$, $\Delta_k$ is the trust-region radius and the model $\hat m_{z_k} : T_{z_k}\mathcal{M} \to \mathbb{R}$ is a quadratic approximation of the pullback $\hat f_{z_k} = f \circ \mathrm{Retr}_{z_k}$, defined on the tangent space at $z_k \in \mathcal{M}$.
First-order Riemannian trust-region When the Hessian of the cost function is expensive to compute or not available altogether, one can use a first-order model and set $H_k = 0$. This method is very similar to gradient descent, but the trust region is used to ensure global convergence, as opposed to a line search.
Second-order Riemannian trust-region When the true Hessian of the cost function is available, the classical second-order trust-region method is obtained with H k = Hessf (z k ). It is also possible to use an approximation of the true Hessian for H k .
Algorithm 1 Riemannian trust-region (RTR) [6]
1: Given: $z_0 \in \mathcal{M}$ and $0 < \Delta_0 < \bar\Delta$, $\varepsilon_g > 0$, $\varepsilon_H > 0$ and $0 < \rho' < 1/4$
2: Init: $k = 0$
3: while true do
4:  if $\|\operatorname{grad} f(z_k)\| > \varepsilon_g$ then
5:   Obtain $\eta_k \in T_{z_k}\mathcal{M}$ satisfying A5
6:  else if $\varepsilon_H < \infty$ then
7:   if $\lambda_{\min}(H_k) < -\varepsilon_H$ then
8:    Obtain $\eta_k \in T_{z_k}\mathcal{M}$ satisfying A6
9:   else
10:    return $z_k$
11:  end if
12:  else
13:   return $z_k$
14:  end if
15:  $z_k^+ = \mathrm{Retr}_{z_k}(\eta_k)$
16:  $\rho = \big(f(z_k) - f(z_k^+)\big)/\big(\hat m_{z_k}(0) - \hat m_{z_k}(\eta_k)\big)$
17:  if $\rho < 1/4$ then
18:   $\Delta_{k+1} = \Delta_k/4$
19:  else if $\rho > 3/4$ and $\|\eta_k\| = \Delta_k$ then
20:   $\Delta_{k+1} = \min(2\Delta_k, \bar\Delta)$
21:  else
22:   $\Delta_{k+1} = \Delta_k$
23:  end if
24:  if $\rho > \rho'$ then
25:   $z_{k+1} = z_k^+$
26:  end if
27:  $k = k + 1$
28: end while
We apply RTR, as described in Algorithm 1, to problem (3.1) or (3.5). If a first-order critical point is sought, set $\varepsilon_H = \infty$. The second-order version of RTR provably converges to second-order critical points for any initialization under a weak decrease condition in the subproblems and satisfies global worst-case complexity bounds matching their unconstrained counterparts, as was shown in [6]. The local convergence rate is quadratic for an appropriate choice of parameters [2, Chap. 7]. We note that there is no guarantee on the quality of the stationary point, due to the nonconvexity. Nonetheless, we see in Section 9 that the method performs very well in practice for nonlinear matrix recovery. We introduce the following assumptions.
A2. There exists f * > −∞ such that f (x) ≥ f * for all x ∈ M.
The cost functions of problems (3.1) and (3.5) are nonnegative, so A2 is satisfied throughout this paper. We also state a regularity assumption on the gradient and Hessian of the pullback which was introduced in [6]. In Appendix A, we detail how these conditions relate to the smoothness of the Riemannian derivatives and discuss the practicality of these assumptions for problem (3.5).
A3 (Lipschitz gradient of the pullback). There exists $L_g \ge 0$ such that for all $z = (X, \mathcal{U}) \in \mathcal{M}$, the pullback $\hat f_z = f \circ \mathrm{Retr}_z$ has Lipschitz continuous gradient with constant $L_g$, that is, for all $\eta \in T_z\mathcal{M}$, it holds that
$$\big|\hat f_z(\eta) - [f(z) + \langle \eta, \operatorname{grad} f(z)\rangle]\big| \le \frac{L_g}{2}\,\|\eta\|^2. \tag{4.17}$$
A4 (Lipschitz Hessian of the pullback). There exists $L_H \ge 0$ such that, for all $z = (X, \mathcal{U}) \in \mathcal{M}$, the pullback $\hat f_z = f \circ \mathrm{Retr}_z$ has Lipschitz continuous Hessian with constant $L_H$, that is, for all $\eta \in T_z\mathcal{M}$, it holds that
$$\Big|\hat f_z(\eta) - \big[f(z) + \langle \eta, \operatorname{grad} f(z)\rangle + \tfrac{1}{2}\langle \eta, \nabla^2 \hat f_z(0_z)[\eta]\rangle\big]\Big| \le \frac{L_H}{6}\,\|\eta\|^3. \tag{4.18}$$
Algorithm 1 is flexible in that it does not specify how the subproblems are solved. We discuss the implementation of RTR in Section 9. For the complexity results, the following decreases in the model for first- and second-order steps are required.
A5. There exists $c_2 > 0$ such that all first-order steps $\eta_k$ satisfy
$$\hat m_k(0_{z_k}) - \hat m_k(\eta_k) \ge c_2 \min\Big(\Delta_k, \frac{\varepsilon_g}{c_0}\Big)\,\varepsilon_g. \tag{4.19}$$
A6. There exists $c_3 > 0$ such that all second-order steps $\eta_k$ satisfy
$$\hat m_k(0_{z_k}) - \hat m_k(\eta_k) \ge c_3\,\Delta_k^2\,\varepsilon_H. \tag{4.20}$$
A7. There exists $c_0 \ge 0$ such that, for all first-order steps, $\|H_k\| \le c_0$ and $H_k$ is radially linear, that is, for all $\alpha \ge 0$ and $\eta \in T_{z_k}\mathcal{M}$, it holds that $H_k[\alpha\eta] = \alpha H_k[\eta]$.
A8. There exists $c_1 \ge 0$ such that, for all second-order steps,
$$\big\langle \eta_k, \big(\nabla^2 \hat f_{z_k}(0_{z_k}) - H_k\big)[\eta_k]\big\rangle \le \frac{c_1\,\Delta_k}{3}\,\|\eta_k\|^2.$$
In addition, for all second-order steps, H k is linear and symmetric.
Define the following constants
$$\lambda_g = \frac{1}{4}\min\Big(\frac{1}{c_0}, \frac{c_2}{L_g + c_0}\Big) \quad \text{and} \quad \lambda_H = \frac{3}{4}\,\frac{c_3}{L_H + c_1}. \tag{4.21}$$
The following (sharp) worst-case bound for Riemannian trust-region was recently established.
Theorem 4.1 (Global complexity of RTR [6]). Under A3, A5, A7 and assuming $\varepsilon_g \le \Delta_0\lambda_g$, Algorithm 1 produces an iterate $z_{N_1}$ satisfying $\|\operatorname{grad} f(z_{N_1})\| \le \varepsilon_g$ with
$$N_1 \le \mathcal{O}(1/\varepsilon_g^2). \tag{4.22}$$
Furthermore, if $\varepsilon_H < \infty$ then, under additionally A4, A6, A8 and assuming $\varepsilon_g \le \frac{c_2}{c_3}\lambda_H\lambda_g^2$ and $\varepsilon_H \le \frac{c_2}{c_3}\lambda_g$, Algorithm 1 also produces an iterate $z_{N_2}$ satisfying $\|\operatorname{grad} f(z_{N_2})\| \le \varepsilon_g$ and $\lambda_{\min}(H_{N_2}) \ge -\varepsilon_H$ with
$$N_1 \le N_2 \le \mathcal{O}\Big(\frac{1}{\varepsilon_g^2\,\varepsilon_H}\Big). \tag{4.23}$$
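Before turning to alternating minimization, the following is a minimal sketch of how (4.3) can be handed to an off-the-shelf Riemannian solver. The null-space parametrization of $\mathcal{L}_{A,b}$ and the toy feature map are our own illustrative choices, not necessarily the implementation used here; API names follow recent Pymanopt releases and may differ across versions.

```python
import autograd.numpy as anp
import numpy as np
import pymanopt
from pymanopt.manifolds import Euclidean, Grassmann, Product
from pymanopt.optimizers import TrustRegions
from scipy.linalg import null_space

# Problem sizes (illustrative) and random measurements of a ground truth M.
rng = np.random.default_rng(3)
n, s, m, r = 4, 30, 60, 5
M_true = rng.standard_normal((n, s))
A = rng.standard_normal((m, n * s))     # flat measurement matrix
b = A @ M_true.reshape(-1)              # one vectorization convention, used throughout

# Eliminate the affine constraint: X(y) = X0 + reshape(Z y), Z a basis of null(A).
X0 = np.linalg.lstsq(A, b, rcond=None)[0].reshape(n, s)
Z = null_space(A)                       # (ns) x (ns - m)
N = 2 * n                               # dimension of the toy feature map below

manifold = Product([Euclidean(Z.shape[1]), Grassmann(N, r)])

@pymanopt.function.autograd(manifold)
def cost(y, U):
    X = X0 + (Z @ y).reshape(n, s)
    PhiX = anp.vstack([X, X**2])        # toy feature map, for illustration only
    R = PhiX - U @ (U.T @ PhiX)         # Phi(X) - P_U Phi(X)
    return anp.sum(R**2)

problem = pymanopt.Problem(manifold, cost)
result = TrustRegions().run(problem)    # second-order RTR, autodiff Hessian
y_opt, U_opt = result.point
X_opt = X0 + (Z @ y_opt).reshape(n, s)
```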
An alternating minimization algorithm
In this section, we propose an alternating minimization algorithm to solve (3.1) or (3.5) (Algorithm 2). This comes from the natural separation of the variables into two blocks X and U, yielding two distinct minimization subproblems. Alternating minimization methods have been very popular in recent years to solve large-scale nonconvex problems [3,40]. This is due to their good practical performance and ease of implementation, as often one or both of the subproblems have a closed-form solution. Strictly speaking, this is still a Riemannian optimization approach, as all iterates will be feasible for the constraints. But this section describes a two-block coordinate minimization, whereas the previous section was considering full block variants.
We set the initial guess X 0 as any solution of the underdetermined linear system A(X) = b and U 0 as the span of the r leading singular vectors of Φ(X 0 ). The framework is as follows, for k ≥ 0:
With $\mathcal{U}_k$ fixed, solve
$$X_{k+1} = \operatorname*{argmin}_{X \in \mathbb{R}^{n\times s}} \|\Phi(X) - P_{\mathcal{U}_k}\Phi(X)\|_F^2 \quad \text{s.t.} \quad \mathcal{A}(X) = b. \tag{5.1}$$
With $X_{k+1}$ fixed, solve
$$\mathcal{U}_{k+1} = \operatorname*{argmin}_{\mathcal{U} \in \mathrm{Grass}(N, r)} \|\Phi(X_{k+1}) - P_{\mathcal{U}}\Phi(X_{k+1})\|_F^2. \tag{5.2}$$
This separation of the variables takes advantage of the fact that problem (5.2), even though nonconvex, is solved to global optimality by computing the r leading left singular vectors of the matrix $\Phi(X_{k+1})$. The result is a consequence of the celebrated Eckart-Young-Mirsky theorem, which gives the best rank-r approximation in Frobenius norm of a matrix by the r leading terms of the singular value decomposition [14,28]. In particular, let $\Phi(X_{k+1}) = \sum_{i=1}^{\min(N,s)} \sigma_i u_i v_i^\top$; then
$$\mathcal{U}_{k+1} = \mathrm{span}(u_1, \ldots, u_r) \tag{5.3}$$
is a global minimizer of (5.2). This truncated singular value decomposition (SVD) is denoted by truncate_svd in Algorithm 2. The singular vectors can be computed to high accuracy in polynomial time [38].
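As a minimal sketch of this step, the snippet below implements truncate_svd with NumPy and checks on a toy rank-8 matrix that the projection residual vanishes when r equals the rank; sizes are illustrative.

```python
import numpy as np

def truncate_svd(PhiX, r):
    """Return U in St(N, r) spanning the r leading left singular vectors."""
    U_full, _, _ = np.linalg.svd(PhiX, full_matrices=False)
    return U_full[:, :r]

rng = np.random.default_rng(4)
PhiX = rng.standard_normal((50, 8)) @ rng.standard_normal((8, 100))  # rank 8
U = truncate_svd(PhiX, r=8)
residual = PhiX - U @ (U.T @ PhiX)
print(np.linalg.norm(residual))   # ~0: r = rank, so the projection is exact
```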
Problem (5.1) is in general hard to solve to global optimality. The difficulty comes from the nonconvexity of the cost function, which is due to Φ. One can choose from a variety of first- or second-order methods to find an approximate first- or second-order critical point. We will present the merits of both possibilities in Section 9.
i) First-order version of alternating minimization When only gradient information is available, a first-order method will be used to minimize subproblem (5.1). For the sake of illustration, in Algorithm 2 we present a projected gradient descent with line search for (5.1). The gradient of the cost function is projected onto the null space of A. This ensures that the iterates remain in the feasible set L A,b . The line search is a classical backtracking with an Armijo condition for sufficient decrease. Variants in the line search or even constant step sizes are possible.
ii) Second-order version of alternating minimization In the subproblem (5.1), it is possible to use second-order methods to speed up the convergence and reach a higher accuracy. We apply RTR on the affine manifold L A,b with the Hessian of the cost function ∇ 2 XX f (X k , U k ) in the model. Algorithm 2 details the alternating minimization where gradient descent with an Armijo line search, a standard inexact procedure in nonconvex optimization, is applied in the subproblem (5.1). The Armijo line search is described in Algorithm 3.
iii) Accuracy of the subproblem solutions Algorithm 2 alternately solves the subproblems (5.1) and (5.2). For the solution of (5.1), there is no incentive to solve it to very high accuracy early on in the run of the algorithm, as we could still be far from convergence and the variable U might still change a lot. At iteration k, we use the following stopping criterion for some $\varepsilon_{x,k} > 0$,
$$\|\operatorname{grad}_X f(X_{k+1}, \mathcal{U}_k)\|_F \le \varepsilon_{x,k}. \tag{5.4}$$
We propose the two following strategies for the choice of $\varepsilon_{x,k}$:
$$\varepsilon_{x,k} = \varepsilon_x \ \text{ for all } k, \tag{5.5}$$
or
$$\varepsilon_{x,k} = \max\big(\varepsilon_x,\ \theta\,\|\operatorname{grad}_X f(X_k, \mathcal{U}_k)\|_F\big) \ \text{ for some user-chosen } 0 < \theta < 1. \tag{5.6}$$
To solve (5.2), it is possible to use a randomized SVD procedure. The randomized SVD is a stochastic algorithm that approximately computes the singular value decomposition of a matrix that exhibits a low-rank pattern [25]. The matrix must be of low rank or have a fast decay in its singular values for the randomized SVD to be accurate. As the iterates $X_k$ converge towards the solution M, the matrix $\Phi(X_k)$, for which we have to compute an SVD, becomes low-rank and therefore it is natural to use a randomized SVD in Algorithm 2. In the first iterations, for a random starting point of the algorithm, the feature matrix is not expected to be low-rank. In those cases, the randomized SVD should not be used.

Algorithm 2 Alternating minimization scheme for Problem (3.1) or (3.5)
1: Given: the sensing matrix $A \in \mathbb{R}^{m\times ns}$, measurements $b \in \mathbb{R}^m$, tolerances $\varepsilon_u \ge 0$, $\varepsilon_x \ge 0$, an estimate of $r = \mathrm{rank}(\Phi(M))$
2: Set $k = 0$
3: Find $X_0$ that satisfies $\mathcal{A}(X_0) = b$
4: $\mathcal{U}_0 = \mathrm{truncate\_svd}(\Phi(X_0))$    (Equation (5.3))
5: while $\|\operatorname{grad}_X f(X_k, \mathcal{U}_k)\|_F > \varepsilon_x$ or $\|\operatorname{grad}_U f(X_k, \mathcal{U}_k)\|_F > \varepsilon_u$ do
6:  Set $X_k^{(0)} = X_k$, $i = 0$
7:  Choose $\varepsilon_{x,k}$ using Equation (5.5) or (5.6)
8:  while $\|\operatorname{grad}_X f(X_k^{(i)}, \mathcal{U}_k)\| > \varepsilon_{x,k}$ do
9:   $\operatorname{grad}_X f(X_k^{(i)}, \mathcal{U}_k) = P_{T\mathcal{L}_{A,b}}\big(\nabla_X f(X_k^{(i)}, \mathcal{U}_k)\big)$    (Equation (4.8))
10:   $\alpha_k^{(i)} = \mathrm{Armijo}\big((X_k^{(i)}, \mathcal{U}_k), -\operatorname{grad}_X f(X_k^{(i)}, \mathcal{U}_k)\big)$    (Algorithm 3)
11:   $X_k^{(i+1)} = X_k^{(i)} - \alpha_k^{(i)}\operatorname{grad}_X f(X_k^{(i)}, \mathcal{U}_k)$
12:   $i = i + 1$
13:  end while
14:  $X_{k+1} = X_k^{(i)}$
15:  if $\|\operatorname{grad}_U f(X_{k+1}, \mathcal{U}_k)\| \le \varepsilon_u$ then
16:   $\mathcal{U}_{k+1} = \mathcal{U}_k$
17:  else
18:   $\mathcal{U}_{k+1} = \mathrm{truncate\_svd}(\Phi(X_{k+1}))$
19:  end if
20:  $k = k + 1$
21: end while
22: Output: $(X_k, \mathcal{U}_k)$ such that $\|\operatorname{grad} f(X_k, \mathcal{U}_k)\| \le \varepsilon_u + \varepsilon_x$
Algorithm 3 Armijo($z_k$, $d_k$): line search with Armijo condition
Input: function f and gradient $\operatorname{grad}_X f$, current iterate $(X_k^{(i)}, \mathcal{U}_k)$ and a descent direction $d_k$ such that $\langle \operatorname{grad}_X f(X_k^{(i)}, \mathcal{U}_k), d_k\rangle < 0$, a sufficient decrease coefficient $\beta \in (0, 1)$, an initial step $\alpha_0 > 0$ and $\tau \in (0, 1)$.
Output: step size $\alpha_k^{(i)}$.
1: Set $\alpha = \alpha_0$.
2: while $f(X_k^{(i)} + \alpha d_k, \mathcal{U}_k) > f(X_k^{(i)}, \mathcal{U}_k) + \beta\alpha\,\langle \operatorname{grad}_X f(X_k^{(i)}, \mathcal{U}_k), d_k\rangle$ do
3:  $\alpha = \tau\alpha$.
4: end while
5: Set $\alpha_k^{(i)} = \alpha$.
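The following is a minimal NumPy sketch of Algorithm 3, exercised on a toy quadratic; the signature and the default constants are illustrative choices.

```python
import numpy as np

def armijo(f, X, U, grad_X, d, beta=1e-4, alpha0=1.0, tau=0.5):
    """Backtracking line search; assumes <grad_X f(X, U), d> < 0."""
    slope = np.sum(grad_X * d)            # trace inner product <grad_X, d>
    alpha = alpha0
    while f(X + alpha * d, U) > f(X, U) + beta * alpha * slope:
        alpha *= tau                      # shrink until sufficient decrease
    return alpha

# Toy usage: f(X, U) = ||X||_F^2 (U unused), descend along -gradient.
f = lambda X, U: np.sum(X**2)
X = np.ones((3, 4))
g = 2 * X                                 # Euclidean gradient of f
alpha = armijo(f, X, None, g, -g)
print(alpha)                              # 0.5 for this quadratic
```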
When the matrix $\Phi(X_k)$ is only approximately low-rank, we can apply power iterations to make the singular values decrease faster. This will make the randomized decomposition more costly, but will improve the accuracy of the computed SVD.
Our strategy is as follows: choose two parameters $0 < \tau_1 \ll \tau_2 < 1$.
As long as f (X k+1 , U k ) > τ 2 , use an exact SVD algorithm, without randomization. When τ 1 < f (X k+1 , U k ) ≤ τ 2 we know that the energy of Φ(X k+1 ) is mostly contained in the span of U k which has dimension r. So we are justified in using a randomized SVD, which we start up with a step of the power method to improve the accuracy. When f (X k+1 , U k ) ≤ τ 1 , the matrix Φ(X k+1 ) is even closer to low-rank and we no longer need to apply a power iteration before computing the randomized SVD.
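As a sketch of this switching strategy, the snippet below selects between an exact SVD and scikit-learn's randomized_svd depending on the residual value; the thresholds and the use of sklearn are illustrative choices, not necessarily the paper's implementation.

```python
import numpy as np
from sklearn.utils.extmath import randomized_svd

def adaptive_truncate_svd(PhiX, r, f_val, tau1=1e-4, tau2=1e-1):
    if f_val > tau2:
        # Far from low-rank: exact (deterministic) SVD.
        U, _, _ = np.linalg.svd(PhiX, full_matrices=False)
        return U[:, :r]
    # Close to low-rank: randomized SVD, with one power iteration while
    # the residual is still moderate, and none once it is small.
    n_iter = 1 if f_val > tau1 else 0
    U, _, _ = randomized_svd(PhiX, n_components=r, n_iter=n_iter, random_state=0)
    return U
```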
Convergence of the alternating minimization algorithm
In this section we present convergence results for the alternating minimization Algorithm 2. We consider a first-order version where subproblem (5.1) is minimized with the gradient descent method and the Armijo backtracking line search (Algorithm 3). We will first show asymptotic convergence of the gradient norms to zero. We also give a worst-case global complexity bound on the number of iterations necessary to achieve a small gradient from an arbitrary initial starting point. Note that we chose the Armijo linesearch for the sake of example; minor adjustments of the proof below allow us to prove similar results for other minimization methods in subproblem (5.1).
Assumptions
A9. There exist constants $L_x$ and $L_u$ (which are both independent of X and U) such that for all $z = (X, \mathcal{U}) \in \mathcal{M}$, the pullback $\hat f_z = f \circ \mathrm{Retr}_z$ has a Lipschitz continuous gradient in X and U, with constants $L_x$ and $L_u$ respectively. That is, for all $(\eta_x, \eta_u) \in T_{(X,\mathcal{U})}\mathcal{M}$,
$$\big|f \circ \mathrm{Retr}(\eta_x, 0) - [f(X, \mathcal{U}) + \langle \operatorname{grad}_X f(X, \mathcal{U}), \eta_x\rangle]\big| \le \frac{L_x}{2}\,\|\eta_x\|^2 \tag{6.1}$$
and
$$\big|f \circ \mathrm{Retr}(0, \eta_u) - [f(X, \mathcal{U}) + \langle \operatorname{grad}_U f(X, \mathcal{U}), \eta_u\rangle]\big| \le \frac{L_u}{2}\,\|\eta_u\|^2. \tag{6.2}$$
In words, this means that, in each variable, the pullback is well approximated by its first-order Taylor approximation.
Remark 6.1. Note that if A3 holds, then A9 holds with L x = L u = L g .
Let us discuss under which conditions on the kernel one can ensure that A3 or A9 are satisfied for the cost function of (3.5). The following discussion requires the use of the exponential map as the retraction. The exponential map follows geodesics along the manifold in directions prescribed by tangent vectors. Using the exponential map on $\mathcal{M}$ is not a restriction, as the exponential map on the Grassmann manifold is computable [1] and the exponential map on $\mathcal{L}_{A,b}$ is the identity. Proposition 6.1. Consider the cost function of (3.5) and assume that the retraction being used is the exponential map. If $DK(X, X)$ is Lipschitz continuous over $\mathcal{L}_{A,b}$, then (6.1) holds where $L_x$ is the Lipschitz constant of $DK(X, X)$. If $\|K(X, X)\|_F \le M$ for all $X \in \mathcal{L}_{A,b}$, condition (6.2) holds with $L_u = 2M$.
Proof. See Appendix A.
The conditions listed in Proposition 6.1 on the kernel and its derivatives are not always satisfied or can be difficult to verify. For instance, the Gaussian kernel K G is bounded above on L A,b , but the monomial kernel K d is not for any degree d ≥ 1. For the Gaussian kernel, the map DK G (X) is always Lipschitz continuous on L A,b . Whereas for the monomial kernel, the map DK d (X) is Lipschitz continuous for d ≤ 2, and only locally Lipschitz continuous for d ≥ 3.
Fortunately, the picture is much simpler if the sequence of iterates $(X_k)_{k\in\mathbb{N}}$ generated by Algorithm 1 or 2 is contained in a bounded set. This ensures that we can find Lipschitz constants such that the bounds of A3, A4 and A9 hold at every iterate of the algorithm (and trial points if any), which is all that is needed in the convergence analysis. Proposition 6.2. Consider the cost function of either (3.1) or (3.5) and apply Algorithm 2 or Algorithm 1 with the exponential map as the retraction. If the convex hull of the sequence of iterates $(X_k)_{k\in\mathbb{N}}$ and the trial points of the algorithm is a bounded set, then (4.17), (4.18) and (6.1)-(6.2) hold at every iterate $(X_k, \mathcal{U}_k)_{k\in\mathbb{N}}$ and trial points of the algorithm.
Proof. See Appendix A.
Global convergence results
We now carry on with the convergence analysis of the alternating minimization algorithm. The next lemma is an adaptation of the classical descent lemma for the SVD step.
Lemma 6.3. Let A9 hold. Then, for any $k \ge 0$, the iterates generated by Algorithm 2 satisfy
$$f(X_k, \mathcal{U}_k) - f(X_{k+1}, \mathcal{U}_{k+1}) \ge \frac{1}{2L_u}\,\|\operatorname{grad}_U f(X_{k+1}, \mathcal{U}_k)\|_F^2, \tag{6.3}$$
where L u is the Lipschitz constant of the gradient of the pullback (A9).
Proof. See Appendix B.
Throughout this section we use the following notation. Let the number of gradient steps between X k and X k+1 be n k ≥ 0 and the intermediate iterates,
$$X_k = X_k^{(0)},\ X_k^{(1)},\ X_k^{(2)},\ \ldots,\ X_k^{(n_k)} = X_{k+1}.$$
The next lemma gives upper and lower bounds on the step sizes given by the Armijo linesearch. This is an adaptation of a standard argument for linesearch methods [30] where the constraint A(X) = b is added.
Lemma 6.4. Under A9, for the direction $-\operatorname{grad}_X f(X_k^{(i)}, \mathcal{U}_k) \in T_{X_k}\mathcal{L}_{A,b}$, the linesearch Algorithm 3 produces a step size $\alpha_k^{(i)}$ that satisfies
$$\alpha := \min\Big(\alpha_0,\ \frac{2\tau(1-\beta)}{L_x}\Big) \le \alpha_k^{(i)} \le \alpha_0 \tag{6.4}$$
and produces the following decrease
$$f(X_k^{(i)}, \mathcal{U}_k) - f(X_k^{(i+1)}, \mathcal{U}_k) \ge \beta\alpha\,\|\operatorname{grad}_X f(X_k^{(i)}, \mathcal{U}_k)\|_F^2, \tag{6.5}$$
where $X_k^{(i+1)} = X_k^{(i)} - \alpha_k^{(i)}\operatorname{grad}_X f(X_k^{(i)}, \mathcal{U}_k)$.
Proof. See Appendix B.
We are now ready to prove global convergence of the alternating minimization algorithm.
Theorem 6.5 (Global convergence for alternating minimization). Let A9 hold for $f : \mathcal{M} \to \mathbb{R}$ from (3.1) or (3.5). Let $\varepsilon_x = 0$, $\varepsilon_u = 0$ and use Equation (5.6) to set $\varepsilon_{x,k}$. For any starting point $(X_0, \mathcal{U}_0) \in \mathcal{M}$, Algorithm 2 produces a sequence $(X_k, \mathcal{U}_k)_{k\in\mathbb{N}}$ such that
$$\lim_{k\to\infty} \|\operatorname{grad} f(X_k, \mathcal{U}_k)\|_F = 0. \tag{6.6}$$
Proof. First note that f is bounded below by $f^* = 0$. For any $k \ge 0$,
$$\big\|\big(\operatorname{grad}_X f(X_k, \mathcal{U}_k), \operatorname{grad}_U f(X_k, \mathcal{U}_k)\big)\big\| \le \|\operatorname{grad}_X f(X_k, \mathcal{U}_k)\|_F + \|\operatorname{grad}_U f(X_k, \mathcal{U}_k)\|_F \tag{6.7}$$
$$\le \|\operatorname{grad}_X f(X_k, \mathcal{U}_k)\|_F \tag{6.8}$$
since $\varepsilon_u = 0$. Given that each step is non-increasing,
$$f(X_k, \mathcal{U}_k) - f(X_{k+1}, \mathcal{U}_{k+1}) \ge f(X_k, \mathcal{U}_k) - f(X_{k+1}, \mathcal{U}_k) \ge f(X_k, \mathcal{U}_k) - f(X_k^{(1)}, \mathcal{U}_k) \ge \beta\alpha_k^{(0)}\|\operatorname{grad}_X f(X_k, \mathcal{U}_k)\|_F^2 \ge \beta\alpha\,\|\operatorname{grad}_X f(X_k, \mathcal{U}_k)\|_F^2,$$
where we used Lemma 6.4 about Armijo steps. Summing over all iterations gives a telescopic sum on the left-hand side. For any $\bar k \ge 0$,
$$f(X_0, \mathcal{U}_0) - f^* \ge f(X_0, \mathcal{U}_0) - f(X_{\bar k}, \mathcal{U}_{\bar k}) \ge \beta\alpha \sum_{k=0}^{\bar k} \|\operatorname{grad}_X f(X_k, \mathcal{U}_k)\|_F^2. \tag{6.9}$$
The series is convergent since it is bounded independently of $\bar k$. Therefore
$$\lim_{k\to\infty} \|\operatorname{grad}_X f(X_k, \mathcal{U}_k)\|_F = 0. \tag{6.10}$$
We have $\operatorname{grad}_U f(X_k, \mathcal{U}_k) = 0$ for all $k \ge 0$ since $\varepsilon_u = 0$; this corresponds to taking exact SVDs. Taking $k \to \infty$ in equation (6.8) gives convergence of the gradient norms to zero (6.6).
Theorem 6.6 (Global complexity for alternating minimization). Let A9 hold for $f : \mathcal{M} \to \mathbb{R}$ from (3.1) or (3.5). Let $\varepsilon_x > 0$, $\varepsilon_u > 0$, and use Equation (5.5) or (5.6) to set $\varepsilon_{x,k}$. For any starting point $z_0 = (X_0, \mathcal{U}_0) \in \mathcal{M}$, Algorithm 2 produces a sequence $(X_k, \mathcal{U}_k)_{k\in\mathbb{N}}$ such that
$$\big\|\big(\operatorname{grad}_X f(X_k, \mathcal{U}_k), \operatorname{grad}_U f(X_k, \mathcal{U}_k)\big)\big\|_F \le \varepsilon_x + \varepsilon_u \tag{6.11}$$
is achieved using at most $N_{\mathrm{grad}}$ gradient steps and $N_{\mathrm{svd}}$ singular value decompositions, with
$$N_{\mathrm{grad}} \le \frac{f(z_0) - f^*}{\alpha\beta\varepsilon_x^2} \quad \text{and} \quad N_{\mathrm{svd}} \le \frac{2L_u\,(f(z_0) - f^*)}{\varepsilon_u^2}. \tag{6.12}$$
Proof. Note that f is bounded below by $f^* = 0$. Define $N_{\mathrm{iter}}$ as the number of iterations performed by Algorithm 2, the smallest k such that $\|\operatorname{grad}_X f(X_k, \mathcal{U}_k)\| \le \varepsilon_x$ and $\|\operatorname{grad}_U f(X_k, \mathcal{U}_k)\| \le \varepsilon_u$. Let $N_{\mathrm{svd}}$ be the number of SVDs that have to be performed to get $\|\operatorname{grad}_U f(X_{k+1}, \mathcal{U}_k)\| \le \varepsilon_u$, at which point the algorithm would return without performing another computation. For any $k \le N_{\mathrm{svd}}$, from Lemma 6.3 we have
$$f(X_k, \mathcal{U}_k) - f(X_{k+1}, \mathcal{U}_{k+1}) \ge \frac{1}{2L_u}\|\operatorname{grad}_U f(X_{k+1}, \mathcal{U}_k)\|_F^2 \ge \frac{\varepsilon_u^2}{2L_u}. \tag{6.13}$$
Summing from $k = 0$ to $N_{\mathrm{svd}}$ gives
$$f(z_0) - f^* \ge f(z_0) - f(z_{N_{\mathrm{svd}}}) \ge \sum_{k=0}^{N_{\mathrm{svd}}} \frac{\varepsilon_u^2}{2L_u} = \frac{\varepsilon_u^2\,N_{\mathrm{svd}}}{2L_u}. \tag{6.14}$$
Hence, this bounds the number of SVDs needed to ensure $\|\operatorname{grad}_U f(X_{k+1}, \mathcal{U}_k)\| \le \varepsilon_u$, as
$$N_{\mathrm{svd}} \le \frac{2L_u(f(z_0) - f^*)}{\varepsilon_u^2}. \tag{6.15}$$
For $0 \le i \le n_k - 1$, we have $\|\operatorname{grad}_X f(X_k^{(i)}, \mathcal{U}_k)\|_F^2 \ge \varepsilon_{x,k}^2$ by definition, since the stopping criterion is $\|\operatorname{grad}_X f(X_k^{(n_k)}, \mathcal{U}_k)\|_F \le \varepsilon_{x,k}$. Combined with the Armijo decrease this gives
$$f(X_k^{(i)}, \mathcal{U}_k) - f(X_k^{(i+1)}, \mathcal{U}_k) \ge \alpha_k^{(i)}\beta\,\|\operatorname{grad}_X f(X_k^{(i)}, \mathcal{U}_k)\|_F^2 \ge \alpha_k^{(i)}\beta\varepsilon_{x,k}^2. \tag{6.16}$$
We sum these bounds for the n k gradient steps from X k to X k+1 ,
$$\sum_{i=0}^{n_k-1} f(X_k^{(i)}, \mathcal{U}_k) - f(X_k^{(i+1)}, \mathcal{U}_k) \ge \sum_{i=0}^{n_k-1} \alpha_k^{(i)}\beta\varepsilon_{x,k}^2. \tag{6.17}$$
Using that the step sizes $\alpha_k^{(i)}$ are bounded below by α (Lemma 6.4), the left-hand side telescopes and
$$f(X_k, \mathcal{U}_k) - f(X_{k+1}, \mathcal{U}_k) \ge n_k\,\alpha\beta\varepsilon_{x,k}^2 \quad \text{for all } k \ge 0. \tag{6.18}$$
The SVD step is nonincreasing, meaning $f(X_k, \mathcal{U}_k) - f(X_{k+1}, \mathcal{U}_{k+1}) \ge f(X_k, \mathcal{U}_k) - f(X_{k+1}, \mathcal{U}_k)$. This yields
$$f(X_k, \mathcal{U}_k) - f(X_{k+1}, \mathcal{U}_{k+1}) \ge n_k\,\alpha\beta\varepsilon_{x,k}^2 \ge n_k\,\alpha\beta\varepsilon_x^2 \quad \text{for all } k \le N_{\mathrm{iter}}, \tag{6.19}$$
as both (5.5) and (5.6) satisfy ε x,k ≥ ε x . We sum once again over the iterations,
$$f(X_0, \mathcal{U}_0) - f^* \ge f(X_0, \mathcal{U}_0) - f(X_{N_{\mathrm{iter}}+1}, \mathcal{U}_{N_{\mathrm{iter}}+1}) \ge \sum_{k=0}^{N_{\mathrm{iter}}} n_k\,\alpha\beta\varepsilon_x^2. \tag{6.20}$$
We conclude that
$$\frac{f(z_0) - f^*}{\alpha\beta\varepsilon_x^2} \ge \sum_{k=0}^{N_{\mathrm{iter}}} n_k =: N_{\mathrm{grad}}. \tag{6.21}$$
A similar algorithm using fixed step sizes for the update of X will also converge, provided the step sizes are small enough. Corollary 6.7. If the Armijo linesearch in Algorithm 2 is replaced by a gradient descent with constant stepsize α satisfying $\alpha < 2/L_x$, Algorithm 2 also converges:
$$\lim_{k\to\infty} \big\|\big(\operatorname{grad}_X f(X_k, \mathcal{U}_k), \operatorname{grad}_U f(X_k, \mathcal{U}_k)\big)\big\|_F = 0. \tag{6.22}$$
We also have the worst-case bound
$$N_{\mathrm{grad}} \le \frac{L_x\,(f_0 - f^*)}{\alpha\varepsilon_x^2}. \tag{6.23}$$
Proof. We derive the usual descent lemma from Lipschitz continuity of the gradient. This gives
$$f\big(X_k - \alpha\operatorname{grad}_X f(X_k, \mathcal{U}_k),\ \mathcal{U}_k\big) \le f(X_k, \mathcal{U}_k) - \alpha\,\|\operatorname{grad}_X f(X_k, \mathcal{U}_k)\|_F^2 + \frac{\alpha^2 L_x}{2}\,\|\operatorname{grad}_X f(X_k, \mathcal{U}_k)\|_F^2, \tag{6.24}$$
which simplifies to
$$f(X_k, \mathcal{U}_k) - f(X_k^{(1)}, \mathcal{U}_k) \ge \Big(\alpha - \frac{\alpha^2 L_x}{2}\Big)\,\|\operatorname{grad}_X f(X_k, \mathcal{U}_k)\|_F^2. \tag{6.25}$$
This bound replaces the Armijo decrease of Equation (6.5). The rest of the proofs from Theorems 6.5 and 6.6 hold verbatim with stepsize α for every iteration. Note that for α > 0, the factor $(\alpha - \alpha^2 L_x/2)$ is positive for $\alpha < 2/L_x$ and is maximized at $\alpha = 1/L_x$.
Convergence of the iterates using the Kurdyka-Lojasiewicz property
This section proves convergence of the sequence of iterates to a unique stationary point for a simplified version of the alternating minimization scheme. This section considers an algorithm where only one gradient step is performed in between the truncated SVDs (Algorithm 4). This is analogous to the algorithm described in [20], which does not provide theoretical convergence guarantees. Our observations indicate that Algorithm 4 is expected to behave similarly to Algorithm 2 in the limit. Asymptotically, there is usually only one gradient step needed between two truncated SVDs. It is only in the early iterations that Algorithm 2 differs by making several gradient steps in between SVDs. For the purpose of this theoretical section, we will assume that the singular value decompositions in Algorithm 4 are exact and not approximated or randomized. This corresponds to setting $\varepsilon_u = 0$ in Algorithm 2. This section is written using the notation of a feature matrix Φ as in problem (3.1), but the results apply similarly to problem (3.5) if one assumes that the Lipschitz condition A11 applies to a kernel K instead of Φ.
Algorithm 4 A simple alternating minimization scheme for Problem (3.1) or (3.5)
1: Given: the sensing matrix $A \in \mathbb{R}^{m\times ns}$, measurements $b \in \mathbb{R}^m$, a tolerance $\varepsilon_x > 0$, an estimate of $r = \mathrm{rank}(\Phi(M))$
2: Set $k = 0$
3: Find $X_0$ that satisfies $\mathcal{A}(X_0) = b$
4: $\mathcal{U}_0 = \mathrm{truncate\_svd}(\Phi(X_0))$    (Equation (5.3))
5: while $\|\operatorname{grad}_X f(X_k, \mathcal{U}_k)\|_F > \varepsilon_x$ do
6:  $\operatorname{grad}_X f(X_k, \mathcal{U}_k) = P_{T\mathcal{L}_{A,b}}\big(\nabla_X f(X_k, \mathcal{U}_k)\big)$    (Equation (4.7))
7:  $\alpha_k = \mathrm{Armijo}\big((X_k, \mathcal{U}_k), -\operatorname{grad}_X f(X_k, \mathcal{U}_k)\big)$    (Algorithm 3)
8:  $X_{k+1} = X_k - \alpha_k\operatorname{grad}_X f(X_k, \mathcal{U}_k)$
9:  $\mathcal{U}_{k+1} = \mathrm{truncate\_svd}(\Phi(X_{k+1}))$    (exact SVD, not randomized)
10: end while
11: Output: $(X_k, \mathcal{U}_k)$ such that $\|\operatorname{grad} f(X_k, \mathcal{U}_k)\|_F \le \varepsilon_x$
Let us define a distance on the manifold $\mathcal{M}$ (Equation (4.2)):
$$\mathrm{dist}\big((X_1, \mathcal{U}_1), (X_2, \mathcal{U}_2)\big) := \sqrt{\|X_1 - X_2\|_F^2 + \sum_{i=1}^r \sin^2\theta_i} \tag{7.1}$$
for all $(X_1, \mathcal{U}_1), (X_2, \mathcal{U}_2)$ in $\mathcal{M}$, where the $\theta_i$ are the principal angles between $\mathcal{U}_1$ and $\mathcal{U}_2$. We will prove finite length of the sequence of iterates in this metric on $\mathcal{M}$. A more mainstream approach to define the distance between $\mathcal{U}_1$ and $\mathcal{U}_2$ on the Grassmann manifold would be to use $\|\Theta\|_F$ instead of $\|\sin\Theta\|_F$, where $\Theta = \mathrm{diag}(\theta_i)$ is the diagonal matrix containing the principal angles. We do so because the distance (7.1) makes it easier to derive perturbation bounds for the SVD, and it is equivalent to the usual distance.
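As a minimal sketch of the distance (7.1), the snippet below evaluates it using SciPy's principal-angle routine; the inputs are orthonormal representatives of the subspaces.

```python
import numpy as np
from scipy.linalg import subspace_angles

def manifold_dist(X1, U1, X2, U2):
    """Distance (7.1) on M; U1, U2 are N x r orthonormal representatives."""
    theta = subspace_angles(U1, U2)          # principal angles in [0, pi/2]
    return np.sqrt(np.linalg.norm(X1 - X2, "fro") ** 2
                   + np.sum(np.sin(theta) ** 2))
```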
The following assumption ensures a useful non-degeneracy of the spectrum of the feature matrix.
A10 (Gap between the singular values). There exists δ > 0 such that the accumulation points of the sequence generated by Algorithm 4 satisfy
$$\sigma_r(\Phi(X)) - \sigma_{r+1}(\Phi(X)) \ge \delta > 0. \tag{7.2}$$
This property ensures that the minimizer of the function $f(X, \cdot) : \mathrm{Grass}(N, r) \to \mathbb{R}$ is well defined, i.e., that its truncated SVD is unique. As $\sigma_{r+1}(\Phi(X)) \ge 0$, this assumption also implies that $\sigma_r(\Phi(X)) \ge \delta > 0$.
In particular, it means that we cannot overestimate the rank: if the true rank is r − 1, then $\sigma_r = 0$ and the assumption does not hold. Let us stress that this is an artefact of the convergence proof and does not imply poor practical performance of the algorithm when the rank is overestimated. We investigate this in the numerics in Section 9.4.5. We will need Assumption A10 to derive a Lipschitz continuity result on the SVD. We now show the two main lemmas (7.1 and 7.5), inspired by [3].
Lemma 7.1. Assume that the sequence of iterates $(X_k)_{k\in\mathbb{N}}$ generated by Algorithm 4 is bounded. Then
$$\|\operatorname{grad} f(X_{k+1}, \mathcal{U}_{k+1})\|_F \le \rho_2\,\mathrm{dist}\big((X_{k+1}, \mathcal{U}_{k+1}), (X_k, \mathcal{U}_k)\big) \tag{7.3}$$
with $\rho_2 := 2(L_g + 1/\alpha)$ for some $L_g \ge 0$.
Proof. The expression
$$X_{k+1} = X_k - \alpha_k\operatorname{grad}_X f(X_k, \mathcal{U}_k) \tag{7.4}$$
implies
$$\operatorname{grad}_X f(X_{k+1}, \mathcal{U}_{k+1}) = \frac{X_k - X_{k+1}}{\alpha_k} + \operatorname{grad}_X f(X_{k+1}, \mathcal{U}_{k+1}) - \operatorname{grad}_X f(X_k, \mathcal{U}_k). \tag{7.5}$$
Define the set $\bar S = \mathrm{cl}\big(\mathrm{conv}((X_k)_{k\in\mathbb{N}})\big)$, the closure of the convex hull of the sequence of iterates, and $S = \bar S \times \mathrm{Grass}(N, r)$. We show that the vector field $\operatorname{grad} f|_S : S \to T\mathcal{M}$ is $L_g$-Lipschitz continuous in the sense of Definition A.1 for some $L_g \ge 0$. Since S is bounded and the Hessian is continuous, there exists $L_g \ge 0$ such that $\|\operatorname{Hess} f(x)\| \le L_g$ for all $x \in S$. By Proposition A.2, $\operatorname{grad} f|_S$ is $L_g$-Lipschitz continuous. Using the triangle inequality and the fact that α is a lower bound on $\alpha_k$ for all k gives
$$\|\operatorname{grad}_X f(X_{k+1}, \mathcal{U}_{k+1})\|_F \le \frac{\|X_{k+1} - X_k\|_F}{\alpha} + \|\operatorname{grad}_X f(X_{k+1}, \mathcal{U}_{k+1}) - \operatorname{grad}_X f(X_k, \mathcal{U}_k)\|_F$$
$$\le \frac{1}{\alpha}\,\mathrm{dist}\big((X_{k+1}, \mathcal{U}_{k+1}), (X_k, \mathcal{U}_k)\big) + L_g\,\mathrm{dist}\big((X_{k+1}, \mathcal{U}_{k+1}), (X_k, \mathcal{U}_k)\big) = \Big(\frac{1}{\alpha} + L_g\Big)\,\mathrm{dist}\big((X_{k+1}, \mathcal{U}_{k+1}), (X_k, \mathcal{U}_k)\big).$$
This gives (7.3), recalling that, since $\operatorname{grad}_U f(X_{k+1}, \mathcal{U}_{k+1}) = 0$, we have $\|\operatorname{grad} f(X_{k+1}, \mathcal{U}_{k+1})\|_F = \|\operatorname{grad}_X f(X_{k+1}, \mathcal{U}_{k+1})\|_F$.
Further auxiliary results are needed. Lemma 7.2 (Wedin's theorem [35]). Let $Y, \check Y \in \mathbb{R}^{N\times s}$ with singular value decompositions
$$Y = \sum_{i=1}^{\min(N,s)} \sigma_i u_i v_i^\top \quad \text{and} \quad \check Y = \sum_{i=1}^{\min(N,s)} \check\sigma_i \check u_i \check v_i^\top,$$
with $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_{\min(N,s)}$ and similarly for $\check Y$. If there exists $\delta > 0$ such that
$$\min_{1\le i\le r,\ r+1\le j\le \min(N,s)} |\check\sigma_i - \sigma_j| \ge \delta \tag{7.6}$$
and $\check\sigma_r \ge \delta$, then
$$\|\sin\Theta\|_F^2 \le \frac{2\,\|\check Y - Y\|_F^2}{\delta^2} \tag{7.7}$$
with Θ the matrix of the principal angles between $[u_1\ u_2\ \cdots\ u_r]$ and $[\check u_1\ \check u_2\ \cdots\ \check u_r]$.
The following lemma is a direct consequence of Wedin's theorem. Lemma 7.3. Let $Y = \sum_{i=1}^{\min(N,s)} \sigma_i u_i v_i^\top$, with $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_{\min(N,s)}$, and write $U_r := [u_1\ u_2\ \cdots\ u_r]$, a matrix whose columns span the left principal subspace associated to the r largest singular values. Similarly, let $\check Y = \sum_{i=1}^{\min(N,s)} \check\sigma_i \check u_i \check v_i^\top$, with $\check\sigma_1 \ge \check\sigma_2 \ge \cdots \ge \check\sigma_{\min(N,s)}$, and write $\check U_r := [\check u_1\ \check u_2\ \cdots\ \check u_r]$. If there exists $\delta > 0$ such that $\sigma_r - \sigma_{r+1} \ge \delta$ and $\check\sigma_r - \check\sigma_{r+1} \ge \delta$, then
$$\mathrm{dist}(\check{\mathcal{U}}_r, \mathcal{U}_r)^2 \le \frac{2}{\delta^2}\,\|\check Y - Y\|_F^2,$$
where $\mathrm{dist}(\mathcal{U}_r, \check{\mathcal{U}}_r) = \sqrt{\sum_{i=1}^r \sin^2(\theta_i)}$ (with $\theta_i$ the principal angles between $\mathcal{U}_r$ and $\check{\mathcal{U}}_r$) is the distance between the subspaces $\mathcal{U}_r$ and $\check{\mathcal{U}}_r$.
Proof. The result follows from the sin Θ bound (7.7) in Wedin's theorem. Let us verify the assumptions. From the assumptions we know that $\sigma_r \ge \delta$ and $\check\sigma_r \ge \delta$. If Wedin's theorem does not apply, condition (7.6) is not satisfied and neither is it satisfied with the roles of Y and $\check Y$ reversed. In that case, since there exists no $\delta > 0$ such that (7.6) holds, one must have $\sigma_i = \check\sigma_j$ for some $i \le r$, $j \ge r+1$, and $\check\sigma_l = \sigma_m$ for some $l \le r$, $m \ge r+1$. However, since the singular values are ordered decreasingly, this gives
$$\sigma_m \le \sigma_i = \check\sigma_j \le \check\sigma_l = \sigma_m,$$
which implies that there exist $i \le r$ and $m \ge r+1$ such that
$$\sigma_m = \sigma_i = \check\sigma_j = \check\sigma_l.$$
This is a contradiction with $\sigma_r - \sigma_{r+1} \ge \delta$ and $\check\sigma_r - \check\sigma_{r+1} \ge \delta$. Therefore, these conditions guarantee that Wedin's theorem applies.
In the next lemma we combine the previous bound with the Lipschitz continuity of Φ.
A11 (Lipschitz continuity of the features). For Problem (3.1), there exists $L_\Phi \ge 0$ such that for any $X_k, X_{k+1}$ produced by Algorithm 4, $\|\Phi(X_{k+1}) - \Phi(X_k)\|_F \le L_\Phi\,\|X_{k+1} - X_k\|_F$. For Problem (3.5), there exists $L_K \ge 0$ such that $\|K(X_{k+1}, X_{k+1}) - K(X_k, X_k)\|_F \le L_K\,\|X_{k+1} - X_k\|_F$.
If we assume that the sequence (X k ) k∈N is bounded, which we do in the main result of this section (Theorem 7.8), then it is sufficient for the features and kernel to be locally Lipschitz in order for A11 to hold. We also note that if the sublevel set {(X, U) ∈ M : f (X, U) ≤ f (X 0 , U 0 )} is bounded, then the iterates are contained in a bounded set since Algorithm 4 is a descent method.
Lemma 7.4. Let A10 and A11 hold. It follows that
$$\mathrm{dist}(\mathcal{U}_k, \mathcal{U}_{k+1})^2 \le \frac{2L_\Phi^2}{\delta^2}\,\|X_{k+1} - X_k\|_F^2.$$
Proof. By definition of U k , Lemma 7.3 ensures that
$$\mathrm{dist}(\mathcal{U}_k, \mathcal{U}_{k+1})^2 \le \frac{2}{\delta^2}\,\|\Phi(X_{k+1}) - \Phi(X_k)\|_F^2. \tag{7.8}$$
Indeed, U k = truncate-svd(Φ(X k )) is composed of the r first left singular vectors of Φ(X k ).
They are uniquely defined due to A10. The result then follows from the Lipschitz continuity of Φ.
This lemma allows us to show the following crucial result.
Lemma 7.5 (Sufficient decrease property). Assume that A10 and A11 hold at X k and X k+1 . Then, there exists ρ 1 > 0, independent of k, such that the iterates of Algorithm 4 satisfy
$$f(X_k, \mathcal{U}_k) - f(X_{k+1}, \mathcal{U}_{k+1}) \ge \rho_1\,\mathrm{dist}\big((X_k, \mathcal{U}_k), (X_{k+1}, \mathcal{U}_{k+1})\big)^2. \tag{7.9}$$
Proof. From the Armijo decrease of Lemma 6.4,
$$f(X_k, \mathcal{U}_k) - f(X_{k+1}, \mathcal{U}_k) \ge \frac{\beta}{\alpha_0}\,\|X_k - X_{k+1}\|_F^2, \tag{7.10}$$
where $\alpha_0$ is the largest step allowed by the backtracking. Set $M_2 = 2L_\Phi^2/\delta^2$. Using that $f(X_{k+1}, \mathcal{U}_{k+1}) \le f(X_{k+1}, \mathcal{U}_k)$, we get
$$f(X_k, \mathcal{U}_k) - f(X_{k+1}, \mathcal{U}_{k+1}) \ge f(X_k, \mathcal{U}_k) - f(X_{k+1}, \mathcal{U}_k) \tag{7.11}$$
$$\ge \frac{\beta}{\alpha_0}\,\|X_{k+1} - X_k\|_F^2 \tag{7.12}$$
$$= \frac{\beta}{\alpha_0(1 + M_2)}\,(1 + M_2)\,\|X_{k+1} - X_k\|_F^2 \tag{7.13}$$
$$\ge \frac{\beta}{\alpha_0(1 + M_2)}\,\big(\|X_{k+1} - X_k\|_F^2 + \mathrm{dist}^2(\mathcal{U}_k, \mathcal{U}_{k+1})\big), \tag{7.14}$$
where the last inequality comes from Lemma 7.4. This establishes (7.9) with
$$\rho_1 := \frac{\beta}{\alpha_0(1 + M_2)}.$$
We now show convergence of the gradient norms to zero for Algorithm 4.
Proposition 7.6. Let A10 and A11 hold and assume that the sequence of iterates generated by Algorithm 4 is bounded. Then
$$\lim_{k\to\infty} \|\operatorname{grad} f(X_k, \mathcal{U}_k)\|_F = 0. \tag{7.15}$$
Proof. Using Lemmas 7.1 and 7.5 gives
$$f(X_k, \mathcal{U}_k) - f(X_{k+1}, \mathcal{U}_{k+1}) \ge \rho_1\,\mathrm{dist}\big((X_k, \mathcal{U}_k), (X_{k+1}, \mathcal{U}_{k+1})\big)^2 \tag{7.16}$$
$$\ge \frac{\rho_1}{\rho_2^2}\,\|\operatorname{grad} f(X_{k+1}, \mathcal{U}_{k+1})\|_F^2. \tag{7.17}$$
The bounds in Lemmas 7.1 and 7.5 are standard and hold for most descent methods. The values ρ 1 , ρ 2 depend on the specifics of the algorithm used [3].
We now define the Kurdyka-Lojasiewicz (KL) inequality on Riemannian manifolds, which was already introduced in [26].
Definition 7.2 (Kurdyka-Lojasiewicz inequality). A function $f : \mathcal{M} \to \mathbb{R}$ satisfies the KL inequality at $x \in \mathcal{M}$ if there exist $\eta > 0$, a neighbourhood V of x and a continuous concave function $\kappa : [0, \eta] \to [0, \infty)$ with $\kappa(0) = 0$ such that
• κ is continuously differentiable on $(0, \eta)$,
• $\kappa' > 0$ on $(0, \eta)$,
• for every $y \in V$ with $f(x) < f(y) < f(x) + \eta$, we have $\kappa'(f(y) - f(x))\,\|\operatorname{grad} f(y)\| \ge 1$.
If f satisfies the KL inequality at every point $x \in \mathcal{M}$, we call f a KL function.
Lemma 7.7. Let $(a_k)_{k\in\mathbb{N}}$ be a sequence of nonnegative numbers such that $\sum_{k\ge 1} a_k^2/a_{k-1} < \infty$. Then $\sum_{k\ge 1} a_k < \infty$.
Proof. This is a standard result. A proof can be found in [3].
Theorem 7.8. Consider case studies 2.1 or 2.3, let A10 and A11 hold and assume that the sequence $(X_k, \mathcal{U}_k)_{k\in\mathbb{N}}$ generated by Algorithm 4 is bounded. Then
$$\sum_{k=1}^\infty \mathrm{dist}\big((X_k, \mathcal{U}_k), (X_{k+1}, \mathcal{U}_{k+1})\big) < \infty. \tag{7.18}$$
Therefore $(X_k, \mathcal{U}_k)_{k\in\mathbb{N}}$ converges to a unique point $(X^*, \mathcal{U}^*)$, which is a critical point of f on $\mathcal{M}$.
Proof. For case studies 2.1 and 2.3, the feature map or kernel is an algebraic or exponential function. These functions are known to be KL functions [3], so the cost function f is a KL function (Definition 7.2). For convenience, we write $z_k = (X_k, \mathcal{U}_k)$. Since the sequence $(z_k)_{k\in\mathbb{N}}$ is bounded, there is a subsequence $(z_{k_q})_{q\in\mathbb{N}}$ which converges to some $\bar z \in \mathcal{M}$. Let $\omega(z_0)$ denote the set of limit points for the starting point $z_0$. The set $\omega(z_0)$ is bounded by assumption and clearly closed, therefore it is compact. We want to show that $\omega(z_0)$ is a singleton, i.e. $\omega(z_0) = \{\bar z\}$. The function f is continuous, which implies $\lim_{q\to\infty} f(X_{k_q}, \mathcal{U}_{k_q}) = f(\bar z)$. Since $(f(z_{k_q}))_{q\in\mathbb{N}}$ is non-increasing, the function f is also constant on $\omega(z_0)$. Since f is a KL function, for every point $\bar z \in \omega(z_0)$, there exists a neighbourhood $V_{\bar z}$ of $\bar z$ and a continuous concave function
$\kappa_{\bar z} : [0, \eta_{\bar z}] \to [0, \infty)$ of class $C^1$ on $(0, \eta_{\bar z})$ with $\kappa_{\bar z}(0) = 0$, $\kappa_{\bar z}' > 0$ on $(0, \eta_{\bar z})$ such that, for all $y \in V_{\bar z}$ with $f(\bar z) < f(y) < f(\bar z) + \eta_{\bar z}$, we have
$$\kappa_{\bar z}'\big(f(y) - f(\bar z)\big)\,\|\operatorname{grad} f(y)\|_F \ge 1. \tag{7.19}$$
By compactness of $\omega(z_0)$, we find a finite number of points $\bar z_1, \ldots, \bar z_p \in \omega(z_0)$ such that $\cup_{i=1}^p V_{\bar z_i}$ covers $\omega(z_0)$. We choose $\varepsilon > 0$ such that $V := \{z \in \mathcal{M} : \mathrm{dist}(z, \omega(z_0)) < \varepsilon\}$ is contained in $\cup_{i=1}^p V_{\bar z_i}$. Then, we set $\eta = \min_{i=1,\ldots,p} \eta_{\bar z_i}$, $\kappa'(t) = \max_{i=1,\ldots,p} \kappa_{\bar z_i}'(t)$ and $\kappa(t) = \int_0^t \kappa'(\tau)\,d\tau$. We claim that for every $\bar z \in \omega(z_0)$ and $y \in V$ with $f(\bar z) < f(y) < f(\bar z) + \eta$, we have
$$\kappa'\big(f(y) - f(\bar z)\big)\,\|\operatorname{grad} f(y)\|_F \ge 1. \tag{7.20}$$
Indeed, there exists some $\bar z_i$ such that $y \in V_{\bar z_i}$. Then, from the definition of κ and the fact that f is constant on $\omega(z_0)$,
$$\kappa'\big(f(y) - f(\bar z)\big)\,\|\operatorname{grad} f(y)\|_F \ge \kappa_{\bar z_i}'\big(f(y) - f(\bar z_i)\big)\,\|\operatorname{grad} f(y)\|_F \ge 1. \tag{7.21}$$
For η > 0 given above, there exists k 0 such that for all k > k 0 ,
$$f(z_k) < f(\bar z) + \eta. \tag{7.22}$$
By definition of the accumulation points, there exists $k_1$ such that for all $k > k_1$,
$$\mathrm{dist}(z_k, \omega(z_0)) < \varepsilon. \tag{7.23}$$
Since $\sigma_r(\Phi(X)) > \sigma_{r+1}(\Phi(X))$ for any X such that $(X, \mathcal{U}) \in \omega(z_0)$ (A10), by continuity of the singular values, there exists $\bar\delta > 0$ such that for all points $z_k$ satisfying $\mathrm{dist}(z_k, \omega(z_0)) < \bar\delta$, we have $\sigma_r(\Phi(X_k)) > \sigma_{r+1}(\Phi(X_k))$. Again by definition, there exists $k_2$ such that for all $k > k_2$,
$$\mathrm{dist}(z_k, \omega(z_0)) < \bar\delta. \tag{7.24}$$
For $k > l = \max\{k_0, k_1, k_2\}$, we have
$$\kappa'\big(f(z_k) - f(\bar z)\big)\,\|\operatorname{grad} f(z_k)\|_F \ge 1. \tag{7.25}$$
Using $\|\operatorname{grad} f(z_{k+1})\|_F \le \rho_2\,\mathrm{dist}(z_k, z_{k+1})$ (Equation (7.3)) gives
$$\kappa'\big(f(z_k) - f(\bar z)\big) \ge \frac{1}{\rho_2\,\mathrm{dist}(z_{k-1}, z_k)}. \tag{7.26}$$
Concavity of κ gives
$$\kappa\big(f(z_k) - f(\bar z)\big) - \kappa\big(f(z_{k+1}) - f(\bar z)\big) \ge \kappa'\big(f(z_k) - f(\bar z)\big)\,\big(f(z_k) - f(z_{k+1})\big). \tag{7.27}$$
Since $k > l \ge k_2$, we have $\rho_1\,\mathrm{dist}^2(z_k, z_{k+1}) \le f(z_k) - f(z_{k+1})$ by Equation (7.9), hence
$$\kappa\big(f(z_k) - f(\bar z)\big) - \kappa\big(f(z_{k+1}) - f(\bar z)\big) \ge \frac{\rho_1}{\rho_2}\,\frac{\mathrm{dist}^2(z_k, z_{k+1})}{\mathrm{dist}(z_{k-1}, z_k)}, \tag{7.28}$$
and so
$$\frac{\mathrm{dist}^2(z_k, z_{k+1})}{\mathrm{dist}(z_{k-1}, z_k)} \le \frac{\rho_2}{\rho_1}\Big[\kappa\big(f(z_k) - f(\bar z)\big) - \kappa\big(f(z_{k+1}) - f(\bar z)\big)\Big]. \tag{7.29}$$
For any $N > l$, we sum (7.29) for all $l \le k \le N$, using that the right-hand side is a telescopic sum,
$$\sum_{k=l}^N \frac{\mathrm{dist}^2(z_k, z_{k+1})}{\mathrm{dist}(z_{k-1}, z_k)} \le \frac{\rho_2}{\rho_1}\Big[\kappa\big(f(z_l) - f(\bar z)\big) - \kappa\big(f(z_N) - f(\bar z)\big)\Big] \le \frac{\rho_2}{\rho_1}\Big[\kappa\big(f(z_l) - f(\bar z)\big) - \kappa\big(f(\bar z) - f(\bar z)\big)\Big] = \frac{\rho_2}{\rho_1}\,\kappa\big(f(z_l) - f(\bar z)\big), \tag{7.30}$$
where we used that $f(\bar z) \le f(z_N)$, κ is increasing and $\kappa(0) = 0$. Letting $N \to \infty$ in (7.30), we deduce that the left-hand side of (7.30) converges. By Lemma 7.7, $\sum_{k\ge l} \mathrm{dist}(z_k, z_{k+1})$ also converges and therefore
$$\sum_{k=1}^\infty \mathrm{dist}(z_k, z_{k+1}) < \infty. \tag{7.31}$$
This concludes the proof and shows finite length of the sequence of iterates, which implies convergence of the Cauchy sequence $(X_k, \mathcal{U}_k)_{k\in\mathbb{N}}$ to a unique point $(X^*, \mathcal{U}^*)$.
Framework for nonlinear matrix recovery
We summarize the different components of the nonlinear matrix recovery problem. The matrix to be completed must be lifted to a higher dimensional space. This can be done through a kernel, in which case one solves problem (3.5), or a matrix of features, in which case one solves (3.1). When the matrix M to be recovered follows an algebraic variety model, one should use the monomial features or kernel (case study 2.1). When the data is scattered in clusters, the Gaussian kernel must be used as lifting (case study 2.3). In addition, one needs to choose an algorithm to solve the problem formulation (3.1) or (3.5). We propose two families of algorithms: alternating minimization (Algorithm 2) and Riemannian trust-region (Algorithm 1). Each of these algorithms has first- and second-order variants, depending on whether they use the Hessian of the cost function.
(Summary table: the data model determines the lifting, as described above; the algorithm is either the Riemannian trust-region method (Algorithm 1) or alternating minimization (Algorithm 2), each available in a first-order and a second-order variant.)
Numerical experiments
In this section we validate our approach with numerical results on randomly generated test problems. We also compare the performances of the different algorithms we propose.
Implementation of the algorithms
Let us describe the implementation of the different algorithms and variants that are considered.
Altmin1 is a first-order version of alternating minimization (Algorithm 2) which uses gradient descent with Armijo linesearch to solve subproblem (5.1). It uses the monomial kernel (Equation (2.4)). The degree of the kernel that gives the best results is almost always d = 2. We set the constant c = 1 in the monomial kernel. In Altmin2, a second-order trust-region method using the exact Hessian is applied to the minimization of (5.1); this is the only difference with Altmin1. Our code is available at https://github.com/flgoyens/nonlinear-matrix-recovery in both Matlab and Python. We use the Manopt [7] and Pymanopt [37] libraries for optimization-on-manifolds solvers. The Riemannian trust-region method RTR2, which implements Algorithm 1, is the corresponding Manopt solver for optimization on manifolds. We used a second-order version with the Hessian in the model and the default parameters of the solver. The maximum number of iterations is set at 500 for RTR2. In Manopt, the subproblems are solved with a truncated conjugate gradient method and the final termination criterion is only a first-order condition (the norm of the gradient), which we set at $10^{-6}$ for RTR2. Pymanopt uses automatic differentiation and does not require computing the derivatives by hand, while Manopt uses finite differences if the Hessian is not given as an input.
Test problems
We describe the set of parameters that we want to vary and test the dependence of each algorithm with respect to these parameters.
Union of subspaces Case study 2.2 depends on the following parameters: ambient dimension n, number of subspaces, dimension of each subspace, number of points on each subspace. To generate a random union of subspaces, we place the same number of points on each subspace and take subspaces of the same dimension. We calculate a basis for a random subspace and generate each point on that subspace by taking a random combination of the columns of that basis.
Clusters For case study 2.3, the parameters defining a point cloud divided in clusters in R n are: the number of clusters, the number of points in each cluster and the standard deviation σ c of each cluster. We first generate random centres in R n . We then add to each centre a cluster of points with multivariate Gaussian distribution with zero mean and covariance σ 2 c Id with σ c = 0.5.
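As a sketch of these two generators, the snippet below follows the descriptions above (random bases and random combinations for subspaces, Gaussian blobs around random centres for clusters); the sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def union_of_subspaces(n, n_sub, dim, pts_per_sub):
    cols = []
    for _ in range(n_sub):
        basis = rng.standard_normal((n, dim))             # random subspace basis
        coeffs = rng.standard_normal((dim, pts_per_sub))  # random combinations
        cols.append(basis @ coeffs)
    return np.hstack(cols)                                # n x (n_sub * pts_per_sub)

def clusters(n, n_clusters, pts_per_cluster, sigma_c=0.5):
    cols = []
    for _ in range(n_clusters):
        centre = rng.standard_normal((n, 1))
        cols.append(centre + sigma_c * rng.standard_normal((n, pts_per_cluster)))
    return np.hstack(cols)

M_sub = union_of_subspaces(n=15, n_sub=4, dim=2, pts_per_sub=50)
M_clu = clusters(n=10, n_clusters=5, pts_per_cluster=30)
```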
Testing methodology
Throughout we say that an algorithm successfully recovers the matrix M ∈ R n×s if it returns a matrix X output such that the root mean square error (RMSE) is below 10 −3 ,
$$\mathrm{RMSE}(M, X_{\mathrm{output}}) := \|X_{\mathrm{output}} - M\|_F/\sqrt{ns} \le 10^{-3}. \tag{9.1}$$
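The success criterion (9.1) amounts to the following one-line check:

```python
import numpy as np

def recovered(M, X_out, tol=1e-3):
    """Success criterion (9.1): normalized Frobenius error below tol."""
    n, s = M.shape
    return np.linalg.norm(X_out - M, "fro") / np.sqrt(n * s) <= tol
```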
Our goal is to test the ability of our methods to recover the original matrix M . We measure the performance against an increase in difficulty of the problem for several parameters. Parameters that increase the difficulty of the recovery include:
1. Reducing the number of measurements m;
2. Increasing the rank in the feature space.
In the case of unions of subspaces, for a fixed number of points, the rank of Φ d (M ) depends on the number and the dimension of the subspaces, as indicated by Proposition 2.1. For clusters, the rank increases with the number of clusters. The undersampling ratio is defined as
$$\delta = \frac{m}{ns}, \tag{9.2}$$
it is the number of measurements over the number of entries in M. We present phase transition results to numerically show which geometries can be recovered and which undersampling ratios are needed. Typical phase transition plots for matrix completion vary the undersampling ratio and the rank of the matrix [36]. For unions of subspaces, the rank of the feature space is difficult to control; therefore we vary the number and dimension of the subspaces. For each value of the varying parameter, we generate 10 random matrices M that follow the desired structure. We try to recover each with δ varying from 0.1 to 0.9 for a random initial guess. If the RMSE is below $10^{-3}$ within the maximum number of iterations allowed by the algorithm, we consider the recovery to be successful. The phase transition plots record which of the 10 random problems is solved for each configuration. In Figures 4 through 9 the grayscale indicates the proportion of problems solved, with white = 100% of instances solved and black = 0%.

Numerical results

Comparing the performance of RTR and Alternating minimization

We chose a problem of matrix completion over a union of subspaces. We find that RTR2 has a local quadratic rate of convergence, which makes it the method of choice if we want to recover M to high accuracy. Both Altmin1 and Altmin2 make faster progress during the early iterations; thus these methods should be considered if the required accuracy is low. We also observed that, in general, the distance to the solution M is of the same order of magnitude as the gradient norm. That is, using an algorithm, such as RTR2, which terminates with a smaller gradient norm yields greater accuracy for the recovery. We noticed that the first-order methods, as well as the second-order Altmin2, typically stall numerically when the gradient norm is around $10^{-7}$, but that is not the case for RTR2.

Recovery of unions of subspaces
We now illustrate how the parameters at play affect the recovery for data that follows a union of subspaces model.
Degree of the polynomial features Deciding which degree d to use in practice requires a careful choice. Previous works limit themselves to d = 2 and d = 3. This is understandable because the dimension of the features N(n, d) increases exponentially with d, and so the dimension of the Grassmannian variable in (3.1) becomes too large and the problem becomes practically intractable. Therefore, the other natural option is to solve the kernel-based problem (3.5), where the dimension of the feature space N(n, d) does not appear explicitly and the Grassmannian variable has size s × r. This is attractive because, a priori, s may not be as large as N(n, d). However, there are important requirements on the number of samples needed to allow recovery. We need to ensure that $s \ge N - q$, where q is the number of linearly independent vectors v such that $v^\top \Phi_d(M) = 0$. That is, s needs to be large enough so that $\mathrm{rank}(\Phi_d(M))$ is not limited by s but by the dimension of the variety. Some analysis in [32] shows that the number of points s needed to allow recovery increases exponentially with d. For that reason, if d is not small, it is not realistic to solve problems where s is large enough to enable recovery. The monomial basis is also known to be ill-conditioned for large degrees. This gives two obstacles to the performance of these algorithms when the degree increases.
In Figure 4, we solve the recovery problems using RTR2 with an increasing number of data points, and using monomial kernels of degree one, two and three to compare the recovery that is possible for each degree. In Figure 4(a), the degree used is d = 1. For n = 15, the dimension of the feature space is N(15, 1) = 16. For a large number of data points spread over 4 subspaces of dimension 2, the rank of the monomial kernel is 9. This explains why recovery is impossible when s ≤ 9, since the kernel is not rank deficient at the solution M. In Figure 4(b), the degree used is d = 2. For n = 15, the dimension of the feature space is N(15, 2) = 136. For a large number of data points spread over 4 subspaces of dimension 2, the rank of the monomial kernel is 21. This explains why recovery is impossible when s ≤ 21, since the kernel is not rank deficient at the solution M. In Figure 4(c), the degree used is d = 3. For n = 15, the dimension of the feature space is N(15, 3) = 816. For a large number of data points spread over 4 subspaces of dimension 2, the rank of the monomial kernel is 37. This explains why recovery is impossible when s ≤ 37, since the kernel is not rank deficient at the solution M. We notice that the recovery is still poor for s ≥ 37, which is likely induced by the worse conditioning of the monomial embedding. In general, d = 2 seems to give the best results for the majority of data sets.

(Figure 4 caption: recovery for an increasing number of data points spread across the subspaces. Each square gives the proportion of problems solved over 50 randomly generated problems, white = 100% of instances recovered and black = 0%.)
The dimension of the subspaces that we aim to recover plays a role in the possibility of recovery. In Figure 5, we increase the dimension of the subspaces while the other parameters of the data remain fixed. For a fixed number of data points s, increasing the dimension of the subspaces increases the rank of the monomial features (see Proposition 2.1), and therefore, if the dimension of the subspaces becomes too large, the recovery is compromised. For 2 subspaces of dimension smaller than 4 in $\mathbb{R}^{10}$, we can observe good recovery depending on the undersampling ratio.
The same phenomenon is observed when the number of subspaces is increased for a fixed number of data points, see Figure 6.
Clustering with missing data
In the case of clusters (case study 2.3), there is a noise inherent to the model because the kernel at the solution is only approximately low-rank. In fact, the matrix $K_G(M, M)$ has full numerical rank, but there is a big gap in the singular values. Such matrices are notoriously difficult to recover in low-rank matrix completion. For this reason the recovery error $\|M - X^*\|_F^2$ rarely converges to high accuracy, and recovering the matrix up to 2 digits of accuracy is typical. This completed matrix $X^*$ allows us to cluster the data points starting from missing entries. We are interested in determining when the matrix $X^*$ has the same clustering as M. We use the Rand index to measure the compatibility of two different clusterings of the same set [33]. We can see in Figure 7 that for 5 clusters or fewer, the original clustering can be recovered even though up to 40% of the entries in the original matrix are missing.
Robustness to measurement noise
In applications, it is common to assume some noise on the measurements, namely $\tilde b_i := \langle A_i, M\rangle + \xi_i = b_i + \xi_i$ where $\xi_i \in \mathbb{R}$ is some noise. In the following numerical test we generate white Gaussian noise, i.e. $\xi_i \sim \mathcal{N}(0, \sigma^2)$ for some variance $\sigma^2$. The problem formulation then becomes Problem (3.2) with λ > 0 as the penalty parameter that should be tuned based on the noise level.
Estimating an appropriate value for λ without knowledge of the noise variance $\sigma^2$ is an intricate task. The solution of (3.2) for $\lambda \in [0, \infty)$ represents the trade-off curve between minimization of the rank residual and minimization of the residual on the linear measurements. In practical settings, a user may be able to determine which trade-off is more meaningful for a particular application. As a general strategy, we use a scheme which increases λ over successive calls to the solver, while warm starting each solve with the previous solution to (3.2). We have found that starting with the value $\lambda = 10^{-6}$ is satisfactory and we multiply λ by a factor 10 at each iteration. Figure 8 shows, for three different noise levels, the evolution of the solution of (3.2), labelled $X^*$, as the penalty parameter λ increases. We see that where the blue and red lines cross, the green line is still near its lowest point, that is, the solution is still minimizing the true measurement residual as well as for any other value of λ. This allows us to recommend the simple strategy of choosing the value of λ where the values of the red and blue curves are the closest (which approximates the value for which they intersect). This choice gives equal weight
Standard deviation | $\|\mathcal{A}(X^*) - \tilde b\|$ | $\|\mathcal{A}(X^*) - b\|$ | $\|X^* - M\|$ | $\|\mathcal{A}(M) - \tilde b\|$
$\sigma = 10^{-2}$ | $2\cdot 10^{-1}$ | $7\cdot 10^{-2}$ | $8\cdot 10^{-2}$ | 0.2
$\sigma = 10^{-3}$ | $2\cdot 10^{-2}$ | $8\cdot 10^{-3}$ | $8\cdot 10^{-3}$ | 0.02
$\sigma = 10^{-4}$ | $2\cdot 10^{-3}$ | $8\cdot 10^{-4}$ | $9\cdot 10^{-4}$ | 0.002

Table 1: Quality of the solution $X^*$ for different levels of noise σ in the measurements.
to the rank minimization and satisfaction of the measurements. Table 1 shows the accuracy of the solution X * for that choice of λ. We see that both the infeasibility ( A(X * ) − b ) and the distance to the solution ( X * − M ) are proportional to the noise level and decreases with the later. This shows that the warm start scheme to find a good value for λ in conjunction with Problem (3.2) handles the presence of noise in the measurements very well.
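The continuation scheme itself is straightforward to express. In the sketch below, `solve(X_init, lam)` stands for one call to a solver for Problem (3.2) (e.g. RTR2 or Altmin1); this interface is assumed for illustration and is not part of the paper's code.

```python
def tune_lambda_by_continuation(solve, X0, lam0=1e-6, factor=10.0, n_steps=8):
    """Warm-started continuation over the penalty parameter lambda."""
    X, lam = X0, lam0
    solutions = []
    for _ in range(n_steps):
        X = solve(X, lam)      # warm start from the previous solution
        solutions.append((lam, X))
        lam *= factor          # increase the penalty and re-solve
    return solutions           # inspect residuals to pick lambda as in Figure 8
```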
Robustness to a bad estimate of the rank
In the case of a union of subspaces, the polynomial features are exactly low rank, so it is important to have an accurate upper bound on the rank. Recovery is sometimes possible if the upper bound is close to the correct value; if the estimated rank is smaller than the exact rank, or much too large, recovery will typically fail. This intuition is guided by the cost function that we use. If the variable U is constrained to span the leading singular vectors of Φ(X), that is, U = truncate-svd(Φ(X)), then it can be substituted and the cost function in (3.1) simplifies to
min_X  ∑_{i=r+1}^{min(N,s)} σ_i²(Φ(X)).    (9.3)
The cost function represents the energy in the tail of the singular value decomposition, where r is the estimate of the rank. In Figure 9(a), the data belong to a union of 2 subspaces of dimension 2 in R^15, with a total of s = 150 data points. With a kernel of degree 2, the dimension of the feature space is N(15, 2) = 136. Each square in Figure 9(b) gives the proportion of problems solved over 50 randomly generated problems (white = 100% of instances recovered, black = 0%), using the polynomial kernel of degree d = 2 for the embedding.
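Given the kernel trick K(X) = Φ(X)^⊤Φ(X), the tail energy (9.3) can be evaluated without forming Φ(X) explicitly, since the eigenvalues of K(X) are the squared singular values of Φ(X); a minimal sketch:

```python
import numpy as np

def tail_energy(K, r):
    """Cost (9.3): sum of the trailing squared singular values of Phi(X),
    computed from the kernel matrix K = Phi(X)^T Phi(X), whose eigenvalues
    are the squared singular values of Phi(X)."""
    eig = np.sort(np.linalg.eigvalsh(K))[::-1]
    return float(np.sum(eig[r:]))
```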
Comparison with other methods
We compare the proposed methods Altmin1 and RTR2 with VMC (variety matrix completion) from [32], described in the related work section on page 3. We compare the methods on the recovery of a union of subspaces from a subset of entries, as VMC is designed for matrix completion. RTR2 is a second-order method, while Altmin1 and VMC are both first-order methods that are quite similar in spirit: both alternate between truncated SVDs of the kernel matrix and gradient steps, performed on different cost functions. VMC minimizes a smooth approximation of the Schatten p-norm (Equation (1.6)), while Altmin1 minimizes Equation (5.1). Altmin1 may perform several gradient steps between two SVDs, while VMC performs a single gradient step between two SVDs.

Figure 10 shows the decrease in root mean square error (RMSE) over time for the three methods on the completion of a matrix M ∈ R^{15×s} whose columns are contained in a union of two subspaces of dimension two. The total number of points (divided equally across the subspaces) is taken as s = {100, 200, 400}. Figure 10(c) shows that RTR2 clearly outperforms VMC and Altmin1 in run-time for matrices with many columns (large s). For matrices with fewer columns (Figure 10(a)), VMC performs well in the early iterations in comparison with RTR2, but is consistently slower than Altmin1. Figure 10 also indicates that, for a comparable runtime, VMC performs many more iterations than Altmin1 does; each iteration of VMC is therefore faster to compute than an iteration of Altmin1, but yields a smaller decrease in RMSE.

Figure 11 shows the runtime of the methods VMC, Altmin1 and RTR2 for an increasing ambient dimension n. Again, Altmin1 is consistently faster than VMC, and both first-order methods perform better than the second-order RTR2 in the early iterations as n increases (Figure 11(c)).

Figure 12 compares the proportion of problems solved for a decreasing undersampling ratio (defined in Equation (9.2) as the ratio of observed entries over the size of the matrix to complete). The recovery rate indicates the proportion of problems solved over a set of 5 randomly generated problems of recovering a matrix M ∈ R^{15×100} whose columns belong to the union of two subspaces of dimension two. The methods VMC and RTR2 perform slightly better than Altmin1 at recovering the matrix M when the number of available entries decreases, though the difference is not significant.
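For reference, the two quantities underlying Figures 10-12 can be computed as follows; a small sketch in which the uniform mask-based sampling model is an assumption consistent with Equation (9.2):

```python
import numpy as np

def sample_mask(shape, undersampling_ratio, rng):
    """Boolean mask of observed entries; the undersampling ratio of
    Equation (9.2) is the fraction of observed entries over the matrix size."""
    return rng.random(shape) < undersampling_ratio

def rmse(X, M):
    """Root mean square error between recovered and ground-truth matrices."""
    return float(np.sqrt(np.mean((X - M) ** 2)))
```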
Conclusion
In this work, we study the problem of nonlinear matrix completion, where one tries to recover a high-rank matrix that exhibits low-rank structure in a feature space. In terms of the use cases considered, in addition to the union of subspaces and algebraic varieties, we propose the use of the Gaussian kernel for clustering problems with missing data, which we believe is novel in the context of nonlinear matrix completion. We propose a novel formulation of the nonlinear matrix completion problem using the Grassmann manifold, inspired by low-rank matrix completion techniques. We then show how Riemannian optimization and alternating minimization methods can be applied effectively to solve this optimization problem. The algorithms we propose come with strong global convergence results to critical points and worst-case complexity guarantees. In addition, we show that the alternating minimization algorithm converges to a unique limit point using the Kurdyka-Łojasiewicz property.
We provide extensive numerical results that attest to the efficiency of the approach at recovering high-rank matrices drawn from unions of subspaces or clustered data. We note that the second-order Riemannian trust-region method allows recovery with high accuracy. We expose the difficulty of using polynomials of high degree in the monomial kernel, as they require an exponentially increasing number of sample points to allow recovery. Our approach proves efficient at clustering a data set despite the presence of missing entries, and it also shows great robustness against the presence of noise in the measurements. Finally, we show that our algorithm greatly outperforms other code available online for nonlinear matrix completion.
A Derivatives of cost functions
In this section we compute by hand the first- and second-order derivatives of the cost function that appears in the optimization problem (3.5). To compute the derivative of a matrix-valued function, we write a Taylor expansion and identify the gradient from the first-order terms. Before computing derivatives for a specific kernel, we examine the Lipschitz continuity properties of the gradient for an arbitrary kernel.
A.1 Lipschitz properties
Let us consider the cost function of (3.5), defined over M = L_{A,b} × Grass(s, r). We work out conditions on the problem which ensure that the Riemannian gradient is Lipschitz continuous. Lipschitz continuity of a vector field on a smooth manifold is defined as follows using a vector transport.
‖PT_{γ}^{0←1} V(y) − V(x)‖ ≤ L dist(x, y),
where γ : [0, 1] → M is the unique minimizing geodesic connecting x to y and PT_{γ}^{0←1} denotes the parallel transport along γ.
Functions with Lipschitz continuous gradient exhibit the following regularity condition for the pullback f̃ = f ∘ Retr, provided the retraction used is the exponential map, Retr = Exp (Proposition A.1). In order to show Lipschitz continuity of the gradient, we use both the definition and the following proposition, which relies on an upper bound on the derivative of the gradient (Proposition A.2).

Proposition A.3. Consider the cost function of (3.5) and assume that the retraction being used is the exponential map. If DK(X) is Lipschitz continuous over L_{A,b}, then (6.1) holds, where L_x is the Lipschitz constant of DK(X). If ‖K(X)‖_F ≤ M for all X ∈ L_{A,b}, condition (6.2) holds with L_u = 2M.
Proof. For a given X ∈ L_{A,b}, consider the function f_W(X, ·) : Grass(s, r) → R. Its Riemannian Hessian is such that, for ∆ ∈ T_W Grass(s, r), Hess f_W(X, W)[∆] = −2P_{W⊥}K(X)∆. Hence ‖Hess f_W(X, W)‖ ≤ 2‖K(X)‖_F. By [4, Corollary 10.45], the vector field grad f_W(X, ·) is Lipschitz continuous with constant L_W = 2‖K(X)‖_F. If the kernel is upper bounded for all X ∈ L_{A,b}, the constant L_W is independent of X. This implies that (6.2) holds.
We also analyze the Lipschitz continuity of the vector field grad_X f. We have

‖grad_X f(X₁, W) − grad_X f(X₂, W)‖_F = ‖P_{T L_{A,b}}(∇_X f(X₁, W) − ∇_X f(X₂, W))‖_F    (A.4)
  ≤ ‖∇_X f(X₁, W) − ∇_X f(X₂, W)‖_F    (A.5)
  ≤ ‖(DK(X₁)* − DK(X₂)*) P_{W⊥}‖_F    (A.6)
  ≤ ‖DK(X₁)* − DK(X₂)*‖₂ ‖P_{W⊥}‖_F    (A.7)
  ≤ ‖DK(X₁) − DK(X₂)‖₂.    (A.8)

If DK(X) is L_x-Lipschitz over L_{A,b}, we can write

‖grad_X f(X₁, W) − grad_X f(X₂, W)‖_F ≤ L_x ‖X₁ − X₂‖_F    (A.9)

and grad_X f(·, W) is also L_x-Lipschitz, where the constant L_x is independent of W ∈ Grass(s, r). This implies that (6.1) holds.
Proposition A.4. Consider the cost function of either (3.1) or (3.5), and apply Algorithm 2 or Algorithm 1 with the exponential map as the retraction. If the convex hull of the sequence of iterates (X_k)_{k∈N} and of the trial points is a bounded set, then (4.17), (4.18), (6.1) and (6.2) hold at every iterate (X_k, U_k)_{k∈N} and every trial point of the algorithm.
Proof. Provided the kernel is a smooth function, the derivatives of the cost function are continuous. As a consequence of the Weierstrass theorem, the derivatives are bounded on the closure of the convex hull of the iterates, which is compact (Grass(s, r) is compact). Since the Hessian is bounded on the closure of the convex hull of the iterates, the gradient is Lipschitz continuous on that set (Proposition A.2), and therefore A3 and A9 hold with the exponential map as the retraction (Proposition A.1). The continuity of the third-order derivatives implies A4 in a similar way.
A.2 Monomial kernel
We wish to find the Euclidean derivatives of the cost function in (3.5) for the monomial kernel defined in (2.4). Let us write f(X, W) = tr(K_d(X) − P_W K_d(X)). Up to first order in ∆_X,

K_d(X + ∆_X) = K_d(X) + d K_{d−1}(X) ⊙ (X^⊤∆_X + ∆_X^⊤X) + O(∆_X²),    (A.10)

and, for ∆_W = W_⊥B,

P_{W+∆_W} = (W + ∆_W)(W + ∆_W)^⊤ = P_W + W∆_W^⊤ + ∆_W W^⊤ + O(∆_W²).    (A.11)
We find the gradient in X by direct computation,

∇_X f(X, W) = ∇_X tr(P_{W⊥} K_d(X)) = 2d X (K_{d−1}(X) ⊙ P_{W⊥}),    (A.12)

since P_{W⊥} is symmetric. Quite naturally, we find ∇_W f(X, W) with the expansion

f(X, W + ∆_W) = tr(K_d(X) − P_{W+∆_W} K_d(X))
  = tr(K_d(X) − (P_W + W∆_W^⊤ + ∆_W W^⊤) K_d(X))
  = tr(P_{W⊥} K_d(X)) − tr((W∆_W^⊤ + ∆_W W^⊤) K_d(X))
  = f(X, W) − tr(W∆_W^⊤ K_d(X)) − tr(∆_W W^⊤ K_d(X))
  = f(X, W) + ⟨∆_W, −2K_d(X)W⟩.    (A.13)
And so we observe ∇_W f(X, W) = −2K_d(X)W. Immediately, ∇²_W f(X, W)[E] = −2K_d(X)E. We now compute the second derivative in X:

∇_X f(X + ∆_X, W) = 2d(X + ∆_X)(K_{d−1}(X + ∆_X) ⊙ P_{W⊥})
  = 2d(X + ∆_X)((K_{d−1}(X) + (d−1)K_{d−2}(X) ⊙ (X^⊤∆_X + ∆_X^⊤X)) ⊙ P_{W⊥})
  = 2dX(K_{d−1}(X) ⊙ P_{W⊥}) + 2d(d−1)X((K_{d−2}(X) ⊙ (X^⊤∆_X + ∆_X^⊤X)) ⊙ P_{W⊥}) + 2d∆_X(K_{d−1}(X) ⊙ P_{W⊥}) + O(∆_X²)
  = ∇_X f(X, W) + 2d(d−1)X((K_{d−2}(X) ⊙ (X^⊤∆_X + ∆_X^⊤X)) ⊙ P_{W⊥}) + 2d∆_X(K_{d−1}(X) ⊙ P_{W⊥}) + O(∆_X²).    (A.14)

And so we identify

∇²_X f(X, W)[∆_X] = 2d(d−1)X((K_{d−2}(X) ⊙ (X^⊤∆_X + ∆_X^⊤X)) ⊙ P_{W⊥}) + 2d∆_X(K_{d−1}(X) ⊙ P_{W⊥}).    (A.15)

Now we need the cross derivatives ∇_W∇_X f(X, W)[∆_W] and ∇_X∇_W f(X, W)[∆_X]. We have

∇_W f(X + ∆_X, W) = −2K_d(X + ∆_X)W = −2(K_d(X) + dK_{d−1}(X) ⊙ (X^⊤∆_X + ∆_X^⊤X))W
  = −2K_d(X)W − 2d(K_{d−1}(X) ⊙ (X^⊤∆_X + ∆_X^⊤X))W
  = ∇_W f(X, W) − 2d(K_{d−1}(X) ⊙ (X^⊤∆_X + ∆_X^⊤X))W.    (A.16)

So ∇_X∇_W f(X, W)[∆_X] = −2d(K_{d−1}(X) ⊙ (X^⊤∆_X + ∆_X^⊤X))W ∈ R^{s×r}.
Similarly, we find

∇_X f(X, W + ∆_W) = 2dX(K_{d−1}(X) ⊙ P_{(W+∆_W)⊥})
  = 2dX(K_{d−1}(X) ⊙ (P_{W⊥} − W∆_W^⊤ − ∆_W W^⊤))
  = 2dX(K_{d−1}(X) ⊙ P_{W⊥}) − 2dX(K_{d−1}(X) ⊙ (W∆_W^⊤ + ∆_W W^⊤))
  = ∇_X f(X, W) − 2dX(K_{d−1}(X) ⊙ (W∆_W^⊤ + ∆_W W^⊤)).    (A.17)

And so

∇_W∇_X f(X, W)[∆_W] = −2dX(K_{d−1}(X) ⊙ (W∆_W^⊤ + ∆_W W^⊤)) ∈ R^{n×s}.

In the end, the full Hessian acts on (∆_X, ∆_W) as

∇²f(X, W)[(∆_X, ∆_W)] = ( ∇²_X f(X, W)[∆_X] + ∇_W∇_X f(X, W)[∆_W],  ∇_X∇_W f(X, W)[∆_X] + ∇²_W f(X, W)[∆_W] ).    (A.18)
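The gradient formula (A.12) is easy to verify by finite differences. The sketch below uses k(x, y) = (1 + x^⊤y)^d as a stand-in for the monomial kernel of (2.4), which is not restated here; the check is insensitive to this choice, since the derivation only uses dK_d = d K_{d−1} ⊙ (dX^⊤X + X^⊤dX), which holds for both the homogeneous and inhomogeneous variants.

```python
import numpy as np

def K(X, d):
    # monomial kernel matrix, assumed form (1 + x_i^T x_j)^d
    return (1.0 + X.T @ X) ** d

def f(X, W, d):
    # f(X, W) = tr(P_{W_perp} K_d(X)), with P_{W_perp} = I - W W^T
    return np.trace((np.eye(W.shape[0]) - W @ W.T) @ K(X, d))

def grad_X(X, W, d):
    # formula (A.12): 2 d X (K_{d-1}(X) Hadamard P_{W_perp})
    P = np.eye(W.shape[0]) - W @ W.T
    return 2 * d * X @ (K(X, d - 1) * P)

rng = np.random.default_rng(0)
n, s, r, d = 5, 8, 3, 2
X = rng.standard_normal((n, s))
W = np.linalg.qr(rng.standard_normal((s, r)))[0]   # orthonormal s-by-r basis

E = rng.standard_normal((n, s))                    # random direction
t = 1e-6
fd = (f(X + t * E, W, d) - f(X - t * E, W, d)) / (2 * t)
print(fd, np.sum(grad_X(X, W, d) * E))             # the two numbers should match
```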
A.3 Gaussian kernel
Consider the Gaussian kernel defined in (2.10). A direct computation gives
∇_X f(X, W) = −(2/σ²) X (diag(sum(K_G ⊙ P_{W⊥}, 1)) − K_G ⊙ P_{W⊥}),    (A.19)

where K_G is the Gaussian kernel and sum(K_G ⊙ P_{W⊥}, 1) is the vector containing the sum of each column of the matrix K_G ⊙ P_{W⊥}. Similarly to the monomial kernel above, we have

∇_W f(X, W) = −2K_G(X)W.    (A.20)
We do not compute the Hessian by hand for the Gaussian kernel; instead, we use either automatic differentiation or finite differences of the gradient in the algorithm.
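A finite-difference check of (A.19), in the same spirit as for the monomial kernel; a minimal sketch:

```python
import numpy as np

def gaussian_kernel(X, sigma):
    # K_G(X)_{ij} = exp(-||x_i - x_j||^2 / (2 sigma^2))
    sq = np.sum(X ** 2, axis=0)
    return np.exp(-(sq[:, None] + sq[None, :] - 2 * X.T @ X) / (2 * sigma ** 2))

def grad_X_gaussian(X, W, sigma):
    # formula (A.19), with C = K_G(X) Hadamard P_{W_perp}
    C = gaussian_kernel(X, sigma) * (np.eye(W.shape[0]) - W @ W.T)
    return -(2 / sigma ** 2) * X @ (np.diag(C.sum(axis=0)) - C)

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 6))
W = np.linalg.qr(rng.standard_normal((6, 2)))[0]
f = lambda Z: np.trace((np.eye(6) - W @ W.T) @ gaussian_kernel(Z, 1.0))
E = rng.standard_normal(X.shape)
t = 1e-6
print((f(X + t * E) - f(X - t * E)) / (2 * t), np.sum(grad_X_gaussian(X, W, 1.0) * E))
```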
B Proofs for Section 6 (Convergence of the alternating minimization algorithm)
Lemma B.1 (Descent lemma, based on [6, Theorem 4]). Let A2 and A9 hold for f : M → R. Then, for any k ≥ 0, the iterates produced by Algorithm 2 satisfy

f(X_k, U_k) − f(X_{k+1}, U_{k+1}) ≥ (1/(2L_u)) ‖grad_U f(X_{k+1}, U_k)‖²,    (B.1)

where L_u is the Lipschitz constant of the gradient of the pullback (A9).
Proof. We follow the development of [6, Theorem 4]. By Lipschitz continuity of the gradient we have

|f(X_{k+1}, R_{U_k}(η)) − [f(X_{k+1}, U_k) + ⟨grad_U f(X_{k+1}, U_k), η⟩]| ≤ (L_u/2)‖η‖²    for all η ∈ T_{U_k}Grass.    (B.2)

Let η = −(1/L_u) grad_U f(X_{k+1}, U_k) and define U_+ = Retr_{U_k}(−(1/L_u) grad_U f(X_{k+1}, U_k)), which gives

f(X_{k+1}, U_+) ≤ f(X_{k+1}, U_k) + ⟨grad_U f(X_{k+1}, U_k), −(1/L_u) grad_U f(X_{k+1}, U_k)⟩ + (L_u/2)‖(1/L_u) grad_U f(X_{k+1}, U_k)‖²    (B.3)
  ≤ f(X_{k+1}, U_k) − (1/(2L_u)) ‖grad_U f(X_{k+1}, U_k)‖².    (B.4)
This gives

f(X_{k+1}, U_k) − f(X_{k+1}, U_+) ≥ (1/(2L_u)) ‖grad_U f(X_{k+1}, U_k)‖².    (B.5)
Using the fact that the singular value decomposition step of Algorithm 2 finds the minimum of f(X_{k+1}, ·) over Grass(N, r), we have f(X_{k+1}, U_{k+1}) ≤ f(X_{k+1}, U_+). Each update of the variable X is nonincreasing, that is, f(X_k, U_k) ≥ f(X_{k+1}, U_k). Hence, we can conclude
f(X_k, U_k) − f(X_{k+1}, U_{k+1}) ≥ f(X_{k+1}, U_k) − f(X_{k+1}, U_{k+1}) ≥ (1/(2L_u)) ‖grad_U f(X_{k+1}, U_k)‖².    (B.6)
Lemma B.2. Under A9, for the direction −grad_X f(X_k^{(i)}, U_k) ∈ T_{X_k}L_{A,b}, the line search of Algorithm 3 produces a step size α_k^{(i)} that satisfies

α := min{α_0, 2τ(1 − β)/L_x} ≤ α_k^{(i)} ≤ α_0    (B.7)

and produces the following decrease:

f(X_k^{(i)}, U_k) − f(X_k^{(i+1)}, U_k) ≥ βα ‖grad_X f(X_k^{(i)}, U_k)‖²_F,    (B.8)

where X_k^{(i+1)} = X_k^{(i)} − α_k^{(i)} grad_X f(X_k^{(i)}, U_k).
Proof. It is clear from the algorithm that α_k^{(i)} ≤ α_0, and the Armijo condition ensures (B.8). For any α > 0, Lipschitz continuity of the gradient gives

f(X_k^{(i)} − α grad_X f(X_k^{(i)}, U_k), U_k) ≤ f(X_k^{(i)}, U_k) − α ‖grad_X f(X_k^{(i)}, U_k)‖²_F + (α²L_x/2) ‖grad_X f(X_k^{(i)}, U_k)‖²_F,    (B.9)

so the Armijo condition is satisfied for every α ≤ α_max := 2(1 − β)/L_x. If α_0 satisfies the Armijo condition, then α_k^{(i)} = α_0. Otherwise, α_k^{(i)} = τα_l, where α_l > α_max is the last step size that does not satisfy the Armijo condition and α_{l+1} = τα_l satisfies it. In this case, α_k^{(i)} ≥ τα_max = 2τ(1 − β)/L_x.
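For concreteness, a minimal sketch of the Armijo backtracking line search analysed in Lemma B.2; the interface is assumed for illustration, since Algorithm 3 itself is not restated here.

```python
import numpy as np

def armijo_backtracking(f, x, g, alpha0=1.0, beta=1e-4, tau=0.5, max_iter=50):
    """Backtracking line search along the steepest-descent direction -g.

    Parameter names follow the text: initial step alpha0, sufficient-decrease
    parameter beta, shrink factor tau.
    """
    fx = f(x)
    g_sq = np.sum(g * g)
    alpha = alpha0
    for _ in range(max_iter):
        if f(x - alpha * g) <= fx - beta * alpha * g_sq:  # Armijo condition
            return alpha
        alpha *= tau                                       # shrink and retry
    return alpha
```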
Figure 1: The feature map φ is chosen to exploit the nonlinear structure.
Figure 2: Clustered data and the singular values of the Gaussian kernel in log-scale.
Lemma 6.3 (Descent lemma based on [6, Theorem 4]). Let A2 and A9 hold for f : M → R.
(6.12), where α = min{α_0, 2τ(1 − β)/L_x} is a constant depending on parameters of the line search (5.1),
bounded below by α = min{α_0, 2τ(1 − β)/L_x} (Lemma 6.4),
Definition 7.1 (Distance on M). Given two subspaces U₁, U₂ ∈ Grass(N, r), the canonical angles θ_i for i = 1, …, r are defined as θ_i = cos⁻¹(σ_i), where the σ_i are the r singular values of U₁^⊤U₂, with range(U₁) = U₁ and range(U₂) = U₂. For all U₁, U₂ in Grass(N, r), define dist(U₁, U₂) := (∑_{i=1}^r sin² θ_i)^{1/2}. This gives a distance on M,
Lemma 7.3. Let Y, Ỹ ∈ R^{N×s}. Consider the singular value decomposition of Y = ∑_{i=1}^{min(N,s)} σ_i u_i v_i^⊤.
Corollary 7.6 (Global convergence for Algorithm 4). Set ε_x = 0. For any starting point z₀ = (X₀, U₀) ∈ M, Algorithm 4 applied to (3.1) or (3.5) produces a sequence (X_k, U_k)_{k∈N} such that lim_{k→∞} ‖grad f(X_k, U_k)‖_F = 0.    (7.15)
Definition 7.2 (The Kurdyka-Łojasiewicz inequality). A locally Lipschitz function f : M → R satisfies the Kurdyka-Łojasiewicz inequality at x ∈ M iff there exist η ∈ (0, ∞), a neighbourhood V ⊂ M of x, and a continuous concave function κ : [0, η] → [0, ∞) such that
• κ(0) = 0,
Lemma 7.7. Let {a_k}_{k∈N} be a sequence of nonnegative numbers.
Altmin1. The default values for the parameters of Algorithm 2 and the Gaussian and monomial kernels are presented in the table below.
Figure 3 compares the performance of RTR2 (Algorithm 1 using a second-order Taylor model) with Altmin1 and Altmin2, which are first- and second-order alternating minimization (Algorithm 2).
Figure 3: Comparing alternating minimization (first-order Altmin1 and second-order Altmin2) with the Riemannian trust-region algorithm (RTR2) for a union-of-subspaces recovery.

…intractable for even moderate values of d and n. For example, N(n = 20, d = 5) = 53130, while N(n = 20, d = 2) = 231.
Figure 4: Phase transition for data belonging to a union of 4 subspaces of dimension 2 in R^15, for an increasing number of data points spread across the subspaces. Each square gives the proportion of problems solved over 50 randomly generated problems; white = 100% of instances recovered, black = 0%.
Figure 5: Phase transition for data belonging to a union of 2 subspaces of increasing dimension in R^10, with 20 points on each subspace. Each square gives the proportion of problems solved over 50 randomly generated problems (white = 100% of instances recovered, black = 0%), using the polynomial kernel of degree d = 2 for the embedding.
Figure 6: Phase transition for data belonging to a union of an increasing number of subspaces of dimension 2 in R^15, with 150 data points spread across the subspaces. Each square gives the proportion of problems solved over 10 randomly generated problems; white = 100% of instances recovered, black = 0%.
Figure 7: Percentage of problems correctly clustered for different sampling rates and an increasing number of clusters. 50 random instances are generated in each case; white = 100% of instances correctly clustered, black = 0%. Clusters belong to R^5 with 20 points in each cluster.
Figure 8: Solutions for noisy problems as a function of the parameter λ on the horizontal axis.
Figure 9: (a) Singular values of the monomial kernel k₂(M, M). (b) Impact of an incorrect estimate of the rank for the completion of a union of subspaces.
Figure 10: Comparison of the proposed Altmin1 and RTR2 methods with VMC from [32] on the recovery of a union of 2 subspaces of dimension 2 in R^15, with an under-sampling ratio of 0.9 and an increasing number of points s.
Figure 11: Comparison of the proposed Altmin1 and RTR2 methods with VMC from [32] on the recovery of 100 data points belonging to a union of 2 subspaces of dimension 2 in R^n, for dimensions n = {15, 30, 50}, with an under-sampling ratio of 0.9.

Figure 12: Proportion of problems solved for decreasing under-sampling ratio.
Definition A.1 ([4, Definition 10.42]). A vector field V on a connected manifold M is L-Lipschitz continuous if, for all x, y ∈ M with dist(x, y) < inj(x),
Proposition A.1 ([4, Corollary 10.52]). If f : M → R has L-Lipschitz continuous gradient, then |f(Exp_x(s)) − f(x) − ⟨s, grad f(x)⟩| ≤ (L/2)‖s‖² for all (x, s) in the domain of the exponential map.
Proposition A.2 ([4, Corollary 10.45]). If f : M → R is twice continuously differentiable on a manifold M, then grad f is L-Lipschitz continuous if and only if Hess f(x) has operator norm bounded by L for all x, that is, if for all x we have

‖Hess f(x)‖ = max_{s ∈ T_xM, ‖s‖=1} ‖Hess f(x)[s]‖ ≤ L.

We first compute the Euclidean gradient with respect to each variable,

∇_X f(X, W) = DK(X)* P_{W⊥}    (A.1)

and

∇_W f(X, W) = −2K(X)W.    (A.2)

This naturally gives

∇²_{WW} f(X, W)[∆] = −2K(X)∆.    (A.3)
Lemma 7.1 (Gradient lower bound on iterates gap). Assume that Algorithm 4 generates a bounded sequence of iterates. Then, there exists ρ₂ > 0 such that, for all k ∈ N,
Note that the solution need not be unique in the case where σ_r = σ_{r+1}.
Acknowledgement. The authors would like to thank Estelle Massart and Greg Ongie for interesting discussions and their helpful ideas.
References

[1] P.-A. Absil, R. Mahony, and R. Sepulchre. Riemannian geometry of Grassmann manifolds with a view on algorithmic computation. Acta Applicandae Mathematica, 80(2):199-220, 2004.
[2] P.-A. Absil, R. Mahony, and R. Sepulchre. Optimization Algorithms on Matrix Manifolds. Princeton University Press, 2008.
[3] J. Bolte, S. Sabach, and M. Teboulle. Proximal alternating linearized minimization for nonconvex and nonsmooth problems. Mathematical Programming, 146(1-2):459-494, 2013.
[4] N. Boumal. An Introduction to Optimization on Smooth Manifolds. To appear with Cambridge University Press, 2022.
[5] N. Boumal and P.-A. Absil. Low-rank matrix completion via preconditioned optimization on the Grassmann manifold. Linear Algebra and its Applications, 475:200-239, 2015.
[6] N. Boumal, P.-A. Absil, and C. Cartis. Global rates of convergence for nonconvex optimization on manifolds. IMA Journal of Numerical Analysis, 39(1):1-33, 2019.
[7] N. Boumal, B. Mishra, P.-A. Absil, and R. Sepulchre. Manopt, a Matlab toolbox for optimization on manifolds. Journal of Machine Learning Research, 15:1455-1459, 2014.
[8] P. Breiding, S. Kališnik, B. Sturmfels, and M. Weinstein. Learning algebraic varieties from samples. Revista Matemática Complutense, 31(3):545-593, 2018.
[9] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717-772, 2009.
[10] A. R. Conn, N. I. Gould, and P. L. Toint. Trust Region Methods, volume 1. SIAM, 2000.
[11] D. Cox, J. Little, D. O'Shea, and M. Sweedler. Ideals, varieties, and algorithms. American Mathematical Monthly, 101(6):582-586, 1994.
[12] M. A. Davenport and J. Romberg. An overview of low-rank matrix recovery from incomplete observations. arXiv preprint arXiv:1601.06422, 2016.
[13] G. de Carvalho Bento, J. X. da Cruz Neto, and P. R. Oliveira. A new approach to the proximal point method: convergence on general Riemannian manifolds. Journal of Optimization Theory and Applications, 168(3):743-755, 2016.
[14] C. Eckart and G. Young. The approximation of one matrix by another of lower rank. Psychometrika, 1(3):211-218, 1936.
[15] A. Eftekhari, G. Ongie, L. Balzano, and M. B. Wakin. Streaming principal component analysis from incomplete data. Journal of Machine Learning Research, 20(86):1-62, 2019.
[16] J. Fan and J. Cheng. Matrix completion by deep matrix factorization. Neural Networks, 98:34-41, 2018.
[17] J. Fan and T. W. Chow. Non-linear matrix completion. Pattern Recognition, 77:378-394, 2018.
[18] J. Fan and M. Udell. Online high rank matrix completion. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[19] J. Fan, C. Yang, and M. Udell. Robust non-linear matrix factorization for dictionary learning, denoising, and clustering, 2020.
[20] J. Fan, Y. Zhang, and M. Udell. Polynomial matrix completion for missing data imputation and transductive learning, 2019.
[21] J. Fan, M. Zhao, and T. W. S. Chow. Matrix completion via sparse factorization solved by accelerated proximal alternating linearized minimization. IEEE Transactions on Big Data, 2018.
[22] M. Fazel, H. Hindi, and S. Boyd. Rank minimization and applications in system theory. In Proceedings of the 2004 American Control Conference, volume 4, pages 3273-3278. IEEE, 2004.
[23] M. Fornasier, H. Rauhut, and R. Ward. Low-rank matrix recovery via iteratively reweighted least squares minimization. SIAM Journal on Optimization, 21(4):1614-1640, 2011.
[24] F. Goyens, S. Chretien, and C. Cartis. Smoothing of point clouds using Riemannian optimization. ICML Workshop on Beyond First Order Methods in Machine Learning, 2020.
[25] N. Halko, P.-G. Martinsson, and J. A. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53(2):217-288, 2011.
[26] S. Hosseini. Convergence of nonsmooth descent methods via Kurdyka-Łojasiewicz inequality on Riemannian manifolds. INS Preprint No. 1523, Hausdorff Center for Mathematics and Institute for Numerical Simulation, University of Bonn, 2015.
[27] P. Jain, R. Meka, and I. S. Dhillon. Guaranteed rank minimization via singular value projection. In Advances in Neural Information Processing Systems, pages 937-945, 2010.
[28] L. Mirsky. Symmetric gauge functions and unitarily invariant norms. The Quarterly Journal of Mathematics, 11(1):50-59, 1960.
[29] K. Mohan and M. Fazel. Iterative reweighted algorithms for matrix rank minimization. Journal of Machine Learning Research, 13:3441-3473, 2012.
[30] J. Nocedal and S. Wright. Numerical Optimization. Springer Science & Business Media, 2006.
[31] G. Ongie, D. Pimentel-Alarcón, L. Balzano, R. Willett, and R. D. Nowak. Tensor methods for nonlinear matrix completion. SIAM Journal on Mathematics of Data Science, 3(1):253-279, 2021.
[32] G. Ongie, R. Willett, R. D. Nowak, and L. Balzano. Algebraic variety models for high-rank matrix completion. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 2691-2700. PMLR, 2017.
[33] W. M. Rand. Objective criteria for the evaluation of clustering methods. Journal of the American Statistical Association, 66(336):846-850, 1971.
[34] A. Singer. From graph to manifold Laplacian: The convergence rate. Applied and Computational Harmonic Analysis, 21(1):128-134, 2006.
[35] G. W. Stewart. Perturbation theory for the singular value decomposition. Technical report, 1998.
[36] J. Tanner and K. Wei. Normalized iterative hard thresholding for matrix completion. SIAM Journal on Scientific Computing, 35(5):S104-S125, 2013.
[37] J. Townsend, N. Koep, and S. Weichwald. Pymanopt: A Python toolbox for optimization on manifolds using automatic differentiation. arXiv preprint arXiv:1603.03236, 2016.
[38] L. N. Trefethen and D. Bau III. Numerical Linear Algebra, volume 50. SIAM, 1997.
[39] B. Vandereycken. Low-rank matrix completion by Riemannian optimization. SIAM Journal on Optimization, 23(2):1214-1236, 2013.
[40] Z. Wen, W. Yin, and Y. Zhang. Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm. Mathematical Programming Computation, 4(4):333-361, 2012.
| [
"https://github.com/flgoyens/nonlinear-matrix-recovery"
]
|
[
"Cosmic Reionization and the First Nonlinear Structures in the Universe",
"Cosmic Reionization and the First Nonlinear Structures in the Universe"
]
| [
"Zoltán Haiman "
]
| []
| []
| In this Introduction, we outline expectations for when and how the hydrogen and helium atoms in the universe turned from neutral to ionized, focusing on the earliest, least well understood stages, and emphasize the most important open questions. We include a historical summary, and highlight the role of reionization as one of the few milestones in the evolution of the universe since the Big Bang, and its status as a unique probe of the beginning stages of structure formation. | 10.1007/978-3-319-21957-8_1 | [
"https://arxiv.org/pdf/1511.01125v1.pdf"
]
| 119,210,934 | 1511.01125 | a9c9c1062436f8d3487ce3353912e7b5c070499b |
Cosmic Reionization and the First Nonlinear Structures in the Universe
3 Nov 2015
Zoltán Haiman
In this Introduction, we outline expectations for when and how the hydrogen and helium atoms in the universe turned from neutral to ionized, focusing on the earliest, least well understood stages, and emphasize the most important open questions. We include a historical summary, and highlight the role of reionization as one of the few milestones in the evolution of the universe since the Big Bang, and its status as a unique probe of the beginning stages of structure formation.
Introduction
In the standard cosmological model, dominated by cold dark matter and dark energy, the universe expands and cools as dictated by the equations of general relativity and thermodynamics, going through a handful of global milestones. Many of these milestones are well understood, because their physics is within the reach of terrestrial experiments, and observations leave little doubt about their occurrence. These begin with nucleosynthesis, and include the epoch of radiation-matter equality, the recombination of hydrogen and helium, and the decoupling of radiation. In the more recent universe, dark energy has become dominant and begun to accelerate the global expansion. While the evolution of the universe preceding nucleosynthesis is less well understood, a generic prediction of inflation, describing the earliest epochs, is the production of primordial density perturbations. These perturbations obey Gaussian statistics, with a nearly scale-invariant initial power spectrum. The subsequent growth of perturbations over time is again well understood, and leads to remarkable agreement with many observations of the cosmic microwave background (CMB) and large-scale structures (LSS). The history of the universe is marked by additional milestones, related to the growth of inhomogeneities. The first marks the epoch when the first perturbations - on astrophysically important scales - reach non-linear amplitudes. Ab-initio theoretical predictions become more difficult at later epochs. The first collapsed and gravitationally bound structures form soon afterward, and serve as the natural sites where the first stars and black holes then "light up" the universe. The reionization of the bulk of hydrogen and helium atoms in the universe, several hundred Myr after the big bang, is the most recent of these "global" milestones - resembling a phase transition, and changing the character of the universe as a whole.
In addition to its fundamental place in our cosmic history, there are three practical reasons why reionization is of special interest. First, as will become clear below, and from later chapters in this book, the bulk of reionization is thought to take place between redshifts of 5 ≲ z ≲ 10. This range does not extend far beyond our present observational horizon, and is within tantalizing reach of experiments with forthcoming and planned instruments. This makes the study of reionization very timely.
Second, while there remains some room for more exotic scenarios, reionization can be attributed to photo-ionizing radiation from two different sources: an early generation of massive stars, or an early generation of black holes powering (mini-)quasars. The ultimate energy source in these two scenarios is very different -nuclear binding energy, in the case of stars, and gravitational binding energy, in the case of black holes. These sources have different efficiencies of producing radiation, and produce different spectra. The details of how reionization unfolded thus depends on the properties of these early stars and quasars (their luminosity and spectral distribution) as well as on their abundance and spatial distribution as a function of redshift.
Finally, the earliest light-sources are quite plausibly too dim to be detected directly, even with next-generation instruments. Studying reionization is therefore one of the very few ways to glean details about these first-generation objects. It is worth emphasizing that current observations only show the "tip of the iceberg": the luminosity functions in even the deepest surveys show no evidence of a faint-end turn-over, and we expect stars to form inside galaxies orders of magnitude fainter than detectable at current and even forthcoming flux limits.
In this article, we will first present a historical discussion of both observations and modeling of the reionization history (§ 2). Then, in § 3, we discuss two possible ways to directly observe the light of the first generation of ionizing sources. It is important to emphasize that this article contains a biased personal selection of some of the important historical milestones and topics, and is not intended to be a rigorous, complete review of the field.
Historical Overview
The Reionized IGM and its Observational Probes
Early History
The realization that the mass density of neutral hydrogen (HI) in the intergalactic medium (IGM) falls short by many orders of magnitude quickly followed the identification of the first quasars in the early 1960s. Here "falls short" is in comparison to the total mass density expected from cosmology, i.e. comparable to the critical density ρ_crit(z) = 3H²/8πG, with H = H(z) the redshift-dependent Hubble parameter. The quasar 3C 9 was among the first handful of quasars discovered and identified through their spectra. At the time of its discovery, its redshift of z = 2.01 was an outlier, and held the distance record (with the other several quasars at z ≲ 1) [1]. Its spectrum lacked any strong absorption on the blue side of the Lyman α emission line, showing only a modest ≈40% depression of the flux instead [2]. This implies that the optical depth to Lyman α scattering in the foreground IGM is τ_α ≈ 0.5. In their seminal paper, Gunn & Peterson (hereafter GP) in ref. [2] compared this to the optical depth, τ_α ≈ few × 10⁶, expected from Lyman α scattering by neutral hydrogen spread uniformly over the IGM, with a near-critical mean density ρ_crit(z) ∝ (1 + z)³, following the expansion of the universe.
It is worth quoting the result of this comparison: "We are thus led to the conclusion that either the present cosmological ideas about the density are grossly incorrect, and that space is very nearly empty, or that the matter exists in some other form." We now know that the mean density of baryons is indeed lower than the critical density, but "only" by a factor of ≈25. We also know that space can not be empty - while large voids exist, their densities are at most ∼10% below the mean. The most plausible explanation, by far, is that hydrogen is in ionized form. This was already the favored interpretation at the time; however, interestingly, GP dismissed stars and quasars as the primary ionizing sources. They instead considered free-free emission or collisional ionization in the IGM itself, both requiring that the IGM is hot (≳2 × 10⁵ K). Through the study of the Lyα forest, we now know that the IGM temperature at z ≳ 2 is T_IGM ≈ 10⁴ K, more than an order of magnitude lower than this lower limit.
Interestingly, GP already noted that a fully ionized IGM can produce a large electron scattering optical depth. Taking the electron (or proton) number density from ρ_crit(z)/m_p, this gives a value of τ_e = few × 10%, which would be relevant for observations of individual sources. This is reduced by a factor of 25 by the low cosmic baryon density, to τ_e = few × 1%.
Remarkably, the cosmic microwave background (CMB) was discovered in the same year, 1965 - precisely 50 years ago [3]. This stimulated work on the implications of the ionized IGM for the CMB. With a hot IGM and a large electron scattering optical depth, one would expect large distortions in the spectral shape of the CMB (e.g. ref. [4,5]). However, the estimates of the baryon density and temperature were both soon revised downward significantly. Once again, observations of quasar absorption spectra played important roles in these revisions. First, the discovery of the CMB also stimulated work on big bang nucleosynthesis, making detailed predictions for the abundances of the light elements. The most important of these was the D/H ratio, which placed a tight upper limit (Ω_b ≡ ρ_b/ρ_crit ≲ 0.1) on the baryon density. Beginning in the mid-1990s, the relative abundance [D/H] was measured in high-resolution quasar spectra and resulted in the value Ω_b ∼ 0.04 (although less robust non-cosmological D/H measurements pre-date these). Second, as many more quasars were discovered, and Lyα absorption statistics were collected over a large number of sight-lines, the modern view of the Lyα forest emerged. This revealed that the low-density IGM has a temperature of only ∼10⁴ K, consistent with being photoionized by the UV radiation of stars and quasars [6,7].
Further Development of Observational Diagnostics
In general, the highly ionized IGM can be studied either through measurements of the residual neutral HI, or by detecting the effects of the free electrons (and protons). Beginning in the mid-1990s, both of these possibilities were explored in great detail.
Effect of Free Electrons on the CMB.
On the "electron side", it was realized that even if the IGM is not dense and hot enough to change the spectrum of the CMB, elastic Thomson scattering by free electrons changes the patterns of both the temperature and polarization power spectra (see reviews by [8,9,10]). Scattering a fraction τ e of the CMB photons out of each sightline translates into a suppression of the primary CMB anisotropies (both temperature and polarization) by a factor exp(−τ e ), below angular scales corresponding to the size of the cosmological horizon at reionization (or < ∼ 10 • for reionization at z ∼ 10) [11]. This suppression can be difficult to distinguish from a "red" tilt or a reduced normalization of the primordial fluctuation spectrum. However, scattering of the CMB photons in the low-redshift ionized IGM also produces enhanced linear polarization fluctuations on large scales (the so-called "polarization bump", ref. [12,13]). This bump, on ∼ 10 degree scales, is characteristic of reionization and not present otherwise. The precise shape of this feature (polarization power as a function of angular scale) can be used to constrain the ionization history [14,15]. Finally, if reionization is spatially inhomogeneous (patchy), as generally expected unless the ionizing sources have unusually hard spectra, then this introduces additional power on small (∼ few arcmin) scales. Inhomogeneities in the ionization fraction, rather than in the IGM density, can dominate both the temperature and the polarization power spectra. This was first shown in toy models [16,17] and was later developed based on CDM structure formation models (e.g. [18]; see ref. [19] for a recent analysis of the kinetic Sunyaev-Zeldovich [kSZ] effect, which gives the largest contribution).
As will be discussed in a later chapter in this book, the first measurement of τ e was made by the WMAP satellite, from the temperature-polarization cross power spectrum, and yielded the anomalously high value of τ e ≈ 0.17 (translating to a sudden reionization redshift near z ∼ 17). The increased precision in subsequent WMAP measurements broke degeneracies between τ e and the spectral tilt n s , and lowered this value to τ e ≈ 0.08. The most recent determination from Planck's polarization power spectrum, τ e ≈ 0.066 ± 0.016 [20], remains consistent with this value, and requires instantaneous reionization to occur around z ∼ 10. More generally, the measured optical depth is twice the value τ e = 0.04 of the "guaranteed" contribution from the highly ionized IGM between redshifts 0 < z < 6. This requires that a tail of ionization extends beyond the current observational horizon. However, such a tail is naturally expected even in the simplest models of reionization, and leaves little room for additional, exotic ionizing sources [21,22].
Searching for Neutral Hydrogen.
Going back to history - on the "neutral hydrogen side" - work continued on quasar absorption spectra. An idea that dates back to at least the early 1960s [23] is to detect intergalactic neutral HI through its absorption in the 21cm hyperfine structure line. This "radio analog" of the GP trough, however, is much weaker, due to the low oscillator strength of the 21cm line. As a result, the corresponding upper limits on the neutral IGM density - obtained from the lack of any 21cm absorption in the spectrum of the z = 0.056 radio galaxy Cygnus A [23] - were ∼10⁶ times weaker than those obtained from the (lack of) Lyα GP troughs. Theoretical work on using the redshifted 21cm line, seen either in absorption or emission (depending on the spin temperature), in the context of an IGM being gradually ionized, and including spatial fluctuations, dates back to ref. [24]. The idea apparently lay dormant for nearly two decades, but received attention again from the mid-1990s, motivated by plans to build the Giant Metrewave Radio Telescope (GMRT), and by the consensus emerging about the modern CDM structure formation paradigm [25,26,27]. An excellent review of the many ways of using the statistics of the redshifted 21cm line to study reionization is given in ref. [28].
In parallel with using the 21cm line, work continued on the utility of the Lyα GP trough. On the observational side, as more and more distant quasars were discovered in the late 1990s, it became increasingly puzzling that none of these showed the strong resonant GP trough, expected even from a modestly neutral IGM. This was especially so, since deep optical observations began to show that the abundance of both quasars and galaxies decline beyond their peak at redshifts ∼ 1 − 3. The question arose whether the observed galaxies and quasars can provide the required ionizing radiation -it became necessary to extrapolate well below the faint end of the observed luminosity functions.
On the theoretical side, progress beyond the simple GP calculation of the resonant optical depth τ_α from a uniform IGM was slow to take off. However, beginning in the late 1990s, several studies began to explore the expected absorption features in more detail. For example, it was realized that the Lyα absorption from a near-neutral IGM is so strong that the damping wings should be detectable, and the red wings in particular should offer a useful diagnostic of a neutral IGM [29]. Also, bright quasars would be surrounded by a large (several Mpc) local ionized bubble [30], blue-shifting the observed location of the GP trough and the damping wings [31]. Another realization was that there should be distinct absorption troughs at Lyα, Lyβ, and possibly higher Lyman lines, offering another useful diagnostic [32], at least for the first sources, which would be detected not far beyond the redshift where the IGM turns predominantly neutral. In the context of CDM structure formation models, reionization must be gradual and inhomogeneous, resulting in large line-of-sight variations [33]. All of the above effects had important consequences once the first GP trough was discovered and had to be interpreted (e.g. [34]).
The discovery [35] of the first GP trough was indeed a watershed event in 2001. The Keck spectrum of a z = 6.28 quasar, one of the first several z ≳ 5 quasars identified in the Sloan Digital Sky Survey (SDSS), showed no detectable flux over a large wavelength range shortward of ∼(1 + z) × 1215 Å. This raised the tantalizing possibility that 35 years after the seminal GP paper, we have finally probed the era when the IGM was significantly neutral. This discovery also stimulated a large body of work on the limits that can be placed on reionization, given a "deep" and "long" dark region (or regions) in the spectrum (e.g. [36]). The issue is that "zero flux" can be consistent with resonant absorption from the residual HI in a highly ionized foreground IGM. Placing constraints on reionization therefore necessitated detailed modeling of the fluctuating IGM within a few Mpc of the quasar, including the quasar's own ionized bubble.
Quasars are of course not unique - a significantly neutral IGM would imprint GP absorption features on any background source at λ_obs = (1 + z)λ_α. It had long been expected that a strong Lyα emission line would be produced by the first "primeval" galaxies [37]. Numerous searches for high-redshift galaxies using their Lyα emission, however, did not yield any discovery for ≈ two decades; the failure was blamed on extinction of this line by dust internal to the galaxies. Immediately after the first high-redshift Lyα emitters were finally discovered in the late 1990s [38], it was realized that they can be used as a probe of reionization: the neutral IGM can strongly suppress these lines, thus also suppressing the observed luminosity function [39]. This field developed rapidly on the observational side, with the discovery of large samples of z ≳ 6 Lyα emitters (now in the hundreds), especially in surveys by the Subaru telescope (e.g. ref. [40]). Theoretical predictions were also refined, including improved estimates of the impact of absorption on the observed line profiles, in the presence of a local ionized bubble around the galaxy, galactic winds causing shifts in the emission line frequency, and a peculiar velocity of the host galaxy [41,42]. These then began to be incorporated into more realistic radiative transfer models through the inhomogeneous IGM [43], yielding better estimates of the (more modest) impact of reionization on the observed luminosity function [44].
Finally, as the epoch of reionization receded farther and farther in redshift, it became increasingly clear that the observed galaxies do not provide sufficient UV radiation to account for this ionization. The general search for high-redshift galaxies is therefore an important part of the history of reionization, although summarizing this history is beyond the scope of this article. However, it was not until deep fields taken with the Hubble Space Telescope revealed a sizable population of galaxies that the integrated emission of the observed objects even came close to providing enough ionizing radiation. At the present time, the observed galaxy population at redshift z ≳ 6 still falls short of reionizing the IGM by a factor of a "few", unless extreme assumptions are made about the UV spectrum and the escape fraction of ionizing radiation from these galaxies (see, e.g. [45]).
Reionization in Hierarchical Structure Formation Models
In parallel with developing observational probes of reionization, over the past several decades we have gained an understanding of how reionization was likely driven by an early generation of stars and quasars. As mentioned above, at the current horizon of observations at z ∼ 7, the observed population of galaxies falls short of reionizing the IGM by only a factor of ∼ a few. It is quite natural to attribute the missing ionizing emissivity to fainter galaxies, just below the current detection threshold. In support of such an extrapolation, there is a firm upper limit on the contribution from faint (individually undetectable) quasars to reionization at z ∼ 6−7.
A population of black holes at these redshifts (z ≈ 6−7) would be accompanied by the copious production of hard (≳10 keV) X-ray photons. The resulting hard X-ray background would redshift and would be observed as a present-day soft X-ray background (SXB). This severely limits the abundance of accreting quasar BHs at z ∼ 6−7: in order to avoid over-producing the unresolved component of the observed SXB in the 0.5-2 keV range, these BHs can not significantly contribute to reionization [46,47,48], or make up more than a few percent of the present-day total BH mass density [49,50]. It is important to emphasize, however, that these constraints still allow accreting BHs to be dominant over stellar UV radiation at the earliest stages of reionization, z ∼ 15, partially "pre-ionizing" the IGM (see below).
Because reionization at z ∼ 6 − 7 is an (almost) solved problem, the most interesting open questions concern the earlier stages of reionization. When did the first light sources turn on? When did the IGM first get significantly ionized (and heated)? What was the relative contribution of the first stars, of their accreting BH remnants, or of possibly more exotic sources of ionization, such as "direct collapse" supermassive stars or BHs, or decaying dark matter particles?
The Astro-chemistry of H₂ and the First Stars
It has long been recognized that the key physics governing the formation of the first stars (or black holes) is the abundance of H₂ molecules, which form via gas-phase reactions in the early universe (in 1967, ref. [51]). It is impossible to form an astrophysical object if gas contracts adiabatically, because even with the help of cold dark matter, it is not possible to reach high gas densities. The numerical upper limits on the gas density in halo cores are extremely tight, especially when including the entropy generated during adiabatic collapse (see the recent work in ref. [52]). In the primordial gas, H₂ is the only possible coolant, and determines whether gas can collapse to high densities. Following the pioneering paper in 1967 by Saslaw & Zipoy [51], several groups constructed complete gas-phase reaction networks, and identified the two possible ways of forming H₂ in primordial gas: via the H₂⁺ or H⁻ channels. These were applied to derive the H₂ abundance in the smooth background gas of the post-recombination universe [53], and also at the higher densities and temperatures expected in collapsing high-redshift objects [54,55].

The basic picture that emerged from these early papers is as follows. The H₂ fraction after recombination in the background universe is small (x_H2 = n_H2/n_H ∼ 10⁻⁶). At high redshifts (z ≳ 100), H₂ formation is inhibited even in overdense regions, because the required intermediaries H₂⁺ and H⁻ are dissociated by the CMB photons.¹ However, at lower redshifts, when the CMB temperature drops, a sufficiently large H₂ abundance builds up inside collapsed clouds (x_H2 ∼ 10⁻³) at redshifts z ≲ 100 to cause cooling on a timescale shorter than the dynamical time, leading to a runaway thermal instability and eventual star formation [57,58,59]. In summary, these early papers identified the most important reactions for H₂ chemistry, and established the key role of H₂ molecules in cooling the first, relatively metal-free clouds, and thus in the formation of population III stars.
The First Stars in Cosmological Structure Formation Models
The work on H₂ chemistry was soon connected with cosmological models for structure formation. Peebles & Dicke [60] speculated already in 1968 that globular clusters, with masses of ∼10⁵−10⁶ M_⊙ (somewhat above the cosmological Jeans mass, set by Compton heating of the protogalactic gas by the CMB [61]) and formed via H₂ cooling, constitute the first building blocks of subsequent larger structures. Early discussions of the formation of galaxies and clusters argued that the behavior of gas in a collapsed and virialized object is determined by its ability to cool radiatively on a dynamical time [62,63,64]. The same ideas apply on the smaller scales expected for the very first collapsed clouds [65,66]. Objects that are unable to cool and radiate away their thermal energy maintain their pressure support and identity, until they become part of a larger object via accretion or mergers. On the other hand, objects that can radiate efficiently will cool and continue collapsing.
In the late 1990s, these ideas were developed further, in the context of modern "bottom-up" hierarchical structure formation in a ΛCDM cosmology. In particular, the first DM halos in which gas can cool efficiently via H₂ molecules, and condense at the center, are "minihalos" with virial temperatures of T_vir ∼ few × 100 K [67,68]. This is essentially a gas temperature threshold, above which roto-vibrational levels of H₂ are collisionally excited, allowing efficient cooling. Because of the emergence of a concordance (ΛCDM) cosmology [69], we can securely predict the collapse redshifts of these minihalos: 2−3σ peaks of the primordial density field on the corresponding mass scales of 10⁵−10⁶ M_⊙ collapse at redshifts z = 15−20.²

The Abundance of Low-Mass Minihalos at High Redshift. The halo mass functions are now robustly determined, since three-dimensional cosmological simulations have reached the dynamical range required to directly resolve the low-mass end of the high-z halo mass function [71,72]. The predictions for the halo mass functions are now therefore limited mainly by the few % uncertainty in the normalization (σ₈ = 0.82 ± 0.02) and the power-law index (n_s = 0.972 ± 0.013) of the primordial power spectrum [69]. A possibly (much) larger source of uncertainty is that the primordial power spectrum on the relevant scales is not directly measured - it is extrapolated using the shape of the processed CDM power spectrum (P(k) ∝ k^α with α ≈ −3 on the relevant small scales). In principle, the small-scale power could deviate from this prediction significantly, reducing the minihalo abundance by a large factor. This could be caused by a generic "running" (dα/dk ≠ 0) of the primordial scalar index [73], or by free streaming due to the finite temperature of a low-mass (≲1 keV) warm dark matter (WDM) particle [74,71]. While these could have large effects on the expected halo abundance at z = 15−20, in practice there is no evidence of "running" on ≳ Mpc scales, and the mass of a putative WDM particle is limited to ≳1 keV by the detections of lensed z > 8 galaxies [75] and gamma-ray bursts [76].
Cosmological Simulations of the Formation of First Stars.
In addition to robustly predicting DM halo formation, high-resolution 3D numerical simulations, including hydrodynamics and H$_2$ chemistry, have become possible, with several groups simulating the cooling and collapse of gas into the first minihalos, located at the intersections of a "protogalactic" cosmic web [77,78,79]. These simulations showed convergence toward a gas temperature $T \sim 300$ K and density $n \sim 10^4\,{\rm cm^{-3}}$, dictated by the thermodynamic properties of H$_2$, which allows the collapse of a clump of mass $10^2-10^3\,M_\odot$ at the center of the high-redshift minihalos. These early works suggested that the first stars may have been unusually massive, a conclusion based on the high mass accretion rate in the cores of these halos. In a self-gravitating gas, the mass accretion rate depends only on the sound speed $c_s$, and is of order $\sim c_s^3/G \propto T^{3/2}/G$ (e.g. [80]). Three-dimensional simulations have confirmed this scaling (e.g. [81,82,83]), and in minihalos the corresponding mass accretion rates are $\sim 10^{-3}\,M_\odot\,{\rm yr^{-1}}$. At this accretion rate, the mass that will accumulate in the halo nucleus within a Kelvin-Helmholtz time ($\sim 10^5$ years; only weakly dependent on mass for massive protostars) is of order $10^2\,M_\odot$.
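The $\sim c_s^3/G$ estimate can be verified with a few lines of arithmetic. The sketch below works in cgs units and assumes a mean molecular weight $\mu \approx 1.22$ appropriate for neutral primordial gas; it reproduces the $\sim 10^{-3}\,M_\odot\,{\rm yr^{-1}}$ accretion rate and the $\sim 10^2\,M_\odot$ accumulated within a Kelvin-Helmholtz time quoted above.

import math

# cgs constants
k_B  = 1.38e-16      # erg/K
m_H  = 1.67e-24      # g
G    = 6.67e-8       # cm^3 g^-1 s^-2
Msun = 1.99e33       # g
yr   = 3.15e7        # s

T, mu = 300.0, 1.22                      # gas temperature (K), mean mol. weight
c_s   = math.sqrt(k_B * T / (mu * m_H))  # isothermal sound speed (cm/s)
Mdot  = c_s**3 / G                       # self-gravitating accretion rate (g/s)

print(f"c_s  ~ {c_s/1e5:.1f} km/s")
print(f"Mdot ~ {Mdot * yr / Msun:.1e} Msun/yr")       # ~ 7e-4, i.e. ~1e-3
print(f"M(t_KH) ~ {Mdot * 1e5 * yr / Msun:.0f} Msun") # ~ 70, i.e. ~1e2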
Simulations in the past few years have been pushed to higher spatial resolution, and, in some cases with the help of sink particles, were able to continue their runs beyond the point at which the first ultra-dense clump developed. The gas in the central regions of at least some of the early minihalos was found to fragment into two or more distinct clumps [84,85,86,87,88]. This raises the possibility that the first stars formed in multiple systems, and that some of these stars had masses $\lesssim 100\,M_\odot$, lower than previously thought (but see [89] for still higher resolution simulations that suggest less efficient fragmentation).
The First Stars and the Beginning of Reionization.
Even if star formation in minihalos was inefficient, these early minihalos should have begun ionizing the universe. With a usual Salpeter IMF, each proton in a population of stars would create $\approx 4{,}000$ ionizing photons (e.g. [90]). A population of massive, metal-free stars would increase the efficiency of ionizing photon production per unit mass by a factor of $\sim 20$, to $\sim 10^5$ [91,92,93]. Each proton accreted onto a BH could release $\sim 0.1\,m_p c^2 = 0.1$ GeV of energy, most of it in ionizing radiation, implying enough energy to cause up to $10^7$ ionizations. These numbers suggest that once a small fraction ($\lesssim 10^{-5}$) of the gas in the universe is converted into massive stars or black holes, a significant ionization of the rest of the IGM can occur.
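The photon budget in the preceding paragraph is simple arithmetic, and the snippet below just spells it out: the energy released per proton accreted onto a BH, divided by the 13.6 eV ionization potential of hydrogen, gives the quoted $\sim 10^7$ ionizations, and inverting the photons-per-proton yields of the stellar populations gives the minimum collapsed fraction. All input numbers are those quoted in the text.

E_ion  = 13.6        # eV, hydrogen ionization potential
m_p_c2 = 0.938e9     # eV, proton rest-mass energy

# BH accretion: ~10% of the rest-mass energy released as (mostly ionizing) radiation
print(f"ionizations per accreted proton ~ {0.1 * m_p_c2 / E_ion:.1e}")  # ~ 7e6

# Fraction of gas that must collapse into sources for ~1 ionizing photon per H atom
for label, n_phot in [("Salpeter IMF stars", 4e3), ("massive PopIII stars", 1e5)]:
    print(f"{label}: f_min ~ {1.0 / n_phot:.1e}")
# -> ~2.5e-4 and ~1e-5: a tiny collapsed fraction can begin reionizing the IGM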
The simple argument above ignores recombinations (in a fully ionized IGM, each hydrogen atom would recombine several times at $z \gtrsim 10$) and the details of the ionizing spectrum and the photoionization process (which, in the case of hard-spectrum sources, needs to account for secondary ionizations by photoelectrons). Nevertheless, the main conclusion, namely that early stars or black holes should have "kickstarted" reionization, is hard to avoid. In particular, if each minihalo is allowed to form PopIII stars, it would result in a significant $\tau_e$, in tension with the electron scattering optical depth measured by WMAP and Planck [21,22]. Indeed, in the wake of the "false alarm" from WMAP's first measurement of a large $\tau_e$, several authors investigated the even more efficient "pre-ionization" of the IGM at $z \sim 20$ by accreting BHs [94,95]. While those models with a large X-ray emissivity are now ruled out, a contribution from early accreting BHs still remains a natural possibility, especially if fragmentation in early halos (mentioned above) leads to the frequent formation of high-mass X-ray binaries [96,97,98].
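The recombination caveat is also easy to quantify. The sketch below compares the recombination time of a fully ionized IGM at the mean density, $t_{\rm rec} = [C\,\alpha_B\,n_{\rm H}(z)]^{-1}$, to the age of a matter-dominated universe; the case-B coefficient at $10^4$ K, the mean hydrogen density, and the small clumping factor $C$ are all standard assumed values, assembled here only as an illustration.

import math

alpha_B = 2.6e-13          # cm^3/s, case-B recombination coefficient at 1e4 K
n_H0    = 1.9e-7           # cm^-3, mean cosmic hydrogen density today
Gyr     = 3.15e16          # s

def t_rec_gyr(z, clumping=3.0):
    """Recombination time of a fully ionized IGM at mean density, in Gyr."""
    return 1.0 / (clumping * alpha_B * n_H0 * (1 + z) ** 3) / Gyr

def t_age_gyr(z, h=0.7, Om=0.3):
    """Age of a matter-dominated universe at redshift z, in Gyr."""
    H0 = h * 3.24e-18      # s^-1 (h * 100 km/s/Mpc)
    return (2.0 / 3.0) / (H0 * math.sqrt(Om) * (1 + z) ** 1.5) / Gyr

print(f"z=10: t_rec ~ {t_rec_gyr(10):.2f} Gyr vs age ~ {t_age_gyr(10):.2f} Gyr")
# -> t_rec is ~3x shorter than the age of the universe at z = 10:
#    each atom indeed recombines several times if kept fully ionized.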
Global Reionization Models in a Hierarchical Cosmology
Beginning in the late 1990s, detailed models were put together, in which the well-understood cosmological dark matter halos were populated by stars or black holes (early examples include [99,90,100]). These models allowed physically motivated calculations of the entire reionization history, between $6 \lesssim z \lesssim 30$, to be confronted with data.
An important physical ingredient in reionization models, especially at the earliest stages, is global radiative feedback. Soon after the first stars appear, early radiation backgrounds begin to build up, resulting in feedback on subsequent star formation. In particular, the UV radiation in the Lyman-Werner (LW) bands of H$_2$ can photodissociate these molecules and suppress gas cooling, slowing down the global star-formation rate [101,102,103,104,105,106,107,108,109,110,111,112,113,114,115].
If the metal-free stars forming in the early minihalos were indeed very massive ($\sim 100\,M_\odot$), then these stars would leave behind remnant BHs with similar masses [116], and could produce significant X-rays, either by direct accretion or by forming high-mass X-ray binaries. A soft X-ray background at photon energies of $\gtrsim 1$ keV, at which the early intergalactic medium (IGM) is optically thin, then provides further global feedback: both by heating the IGM, and by catalyzing H$_2$ formation in collapsing halos [117,118,119,120,94,121,122,123].
On the other hand, if fragmentation was very efficient, and the typical PopIII stars had low masses, they would not leave BH remnants and they would have softer spectra, with copious infrared (IR) radiation at photon energies $\sim 1$ eV. Similar to LW and X-ray photons, these photons have a mean free path comparable to the Hubble distance, building up an early IR background. If soft-spectrum stars, with masses of a few $M_\odot$, contributed $\gtrsim 0.3\%$ of the UV background (or their mass fraction exceeded $\sim 80\%$), then their IR radiation would have dominated the global (negative) radiative feedback in the early Universe [124]. This feedback is different from the LW feedback from high-mass stars, and occurs through the photo-detachment of H$^-$ ions, necessary for efficient H$_2$ formation. Nevertheless, the baryon fraction which must be incorporated into low-mass stars in order to suppress H$_2$ cooling is comparable to the case of high-mass stars.
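For orientation, the $\sim 1$ eV energy scale of this channel is simply the binding energy of the H$^-$ ion (0.754 eV); converting it to a wavelength, as below, shows why a near-infrared background is what matters here. The only inputs are physical constants.

# H- photodetachment threshold
hc_eV_um = 1.2398          # h*c in eV * micron
E_bind   = 0.754           # eV, binding energy of the H- ion
print(f"threshold wavelength ~ {hc_eV_um / E_bind:.2f} micron")
# -> ~1.64 micron: near-IR photons, copious in the spectra of few-Msun
#    stars, can photo-detach H- and thereby suppress H2 formation.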
The net effect of the above long-range "global" feedback effects remains poorly understood. This is a significant outstanding question, as these feedback effects likely determined the earliest stages of the global reionization history. The difficulties with a self-consistent reionization model are two-fold. First, one needs a detailed ab-initio understanding of the feedback on individual protogalaxies with different masses and redshifts. Second, the feedback processes (such as photo-ionization heating, H$_2$-dissociation [125,126], and also metal enrichment) are all affected by the strong clustering of the earliest sources. Semi-analytical models have included either various feedback effects (e.g. [100,127,128,129,130,131]) or the effect of source clustering on the HII bubble-size distribution (e.g. [132]), but not yet both self-consistently. Only the first steps were taken towards such a self-consistent treatment, incorporating photo-ionization feedback, in a simplified way, into a model that partially captures the source clustering (only in the radial direction away from sources) [133].
Numerical simulations do not have the dynamical range for an ab-initio treatment of this issue. The minihalos hosting the first stars arise from primordial perturbations on the scale of $\sim 10$ (comoving) kpc. On the other hand, the global feedback effects operate over a distance comparable to the Hubble length, $\sim 1$ Gpc. Even if one were to resolve a minihalo with only $10^3$ particles, 3D simulations would need to cover a factor of $\sim 10^6$ in spatial scales (or contain $10^{18}$ particles). Clearly, this can not be achieved by N-body simulations, let alone hydrodynamical simulations that include the basic physics, such as cooling, chemistry, and radiative transfer.$^3$ Semi-numerical treatments [134] can offer an order of magnitude higher dynamical range, and have incorporated radiative feedback [135], but are still short of covering the required range of scales (i.e. they still need to prescribe small-scale non-linear processes with sub-grid prescriptions).
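The particle count quoted above follows from simple scaling, reproduced below: $10^3$ particles per minihalo means $\sim 10$ particles per linear dimension, i.e. a $\sim 1$ kpc resolution element, and a $\sim 1$ Gpc box at that resolution requires $(10^6)^3$ elements.

minihalo_kpc = 10.0        # comoving size of a minihalo perturbation
box_kpc      = 1.0e6       # ~1 Gpc, the scale of the global backgrounds
n_per_halo   = 1e3         # particles resolving one minihalo

dx_kpc  = minihalo_kpc / n_per_halo ** (1.0 / 3.0)   # ~1 kpc resolution element
n_range = box_kpc / dx_kpc                           # linear dynamical range
print(f"linear dynamical range ~ {n_range:.0e}")     # ~ 1e6
print(f"particles needed       ~ {n_range**3:.0e}")  # ~ 1e18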
Stars vs. Black Holes as Sources of Reionization
As is clear from above, whether the first stars were formed as single stars, or in binaries, matters for the early stages of reionization. If the majority of the first stars formed high-mass X-ray binaries, they could have produced sufficient X-rays to significantly change the expected "Swiss-cheese" morphology of reionization [118,119,120,121]. The thickness of the edges of the cosmological ionized regions would be of order the mean free path of the typical ionizing photon. For the UV photons from stars, this mean free path is small, resulting in sharply defined ionization fronts. But for the hard spectra of X-ray binaries (or more generally, accreting black holes), the mean free path can be long, comparable to the Hubble distance for photon energies above $E > [(1+z)/11]^{1/2}\,x_{\rm HI}^{1/3}$ keV (where $x_{\rm HI}$ is the mean neutral H fraction in the IGM). The diffuse nature of the boundaries of individual ionized regions could be detectable, in principle, through 21cm or Lyα observations [136,137].
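The threshold energy above is trivial to evaluate; the helper below simply implements the scaling $E > [(1+z)/11]^{1/2}\,x_{\rm HI}^{1/3}$ keV as written in the text.

def e_thresh_kev(z, x_HI):
    """Photon energy (keV) above which the mean free path through the IGM
    is comparable to the Hubble distance (scaling quoted in the text)."""
    return ((1 + z) / 11.0) ** 0.5 * x_HI ** (1.0 / 3.0)

print(f"fully neutral IGM, z = 10: E ~ {e_thresh_kev(10, 1.0):.2f} keV")  # 1.00
print(f"half-ionized IGM,  z = 8 : E ~ {e_thresh_kev(8, 0.5):.2f} keV")   # 0.72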
Since X-rays in the early Universe can travel across the Hubble distances, they can also change the global reionization topology. The X-rays would ionize and heat the plasma much more uniformly than stars (although they could increase the ionized fraction only to $\sim 20\%$; nearly all of the energy of the fast photo-electron from the first ionization will subsequently go into heating the IGM). If X-rays are sufficiently prevalent, a range of other interesting effects will occur: the extra heating will raise the pressure of the plasma everywhere, making it resistant to clumping, and more difficult to compress to form new galaxies [118,49]. On the other hand, X-rays can penetrate the successfully collapsing protogalaxies and can ionize hydrogen and helium atoms in their interior. This will catalyze the formation of molecular hydrogen, and help the gas to cool and form new stars [103]. These effects will leave behind their signatures in the spatial distribution of neutral and ionized hydrogen and helium in the Universe. Distinguishing these different global morphologies could be possible in 21cm experiments [138], or in the CMB through the kSZ effect [19].
There are other possible sources of X-rays, in addition to binaries, connected to the formation of the first stars. One example is gas accretion onto the black-hole remnants left behind by the collapse of single (super)massive stars [116,94,95]. Another possible source is supernovae (SNe): if the first stars exploded as SNe, then similar X-rays would be produced by thermal emission from the gas heated by these SNe, and by the collisions between the energetic electrons produced in the SN explosion and the CMB photons [118]. Thermal emission from a hot ISM has indeed been found to dominate the soft X-ray emission in a sample of local star-forming galaxies [139].
We emphasize that X-ray sources can not contribute significantly to reionization at lower redshifts, as they would then have overproduced the unresolved X-ray background [46,47,48], nor could they have elevated the ionized fraction to $\gtrsim 20\%$ at early times. However, a smooth partial "pre-ionization" by sources whose spectrum peaks near $\sim 1$ keV remains a plausible and interesting scenario.
In summary, the simplest possibility is that the first stars and black holes started reionizing the universe by redshift $z \approx 15-25$; the process then was completed predominantly by small galaxies, in the redshift range $6 \lesssim z \lesssim 10$.$^4$ The relative contribution of these two types of sources is yet to be understood, especially at the earliest epochs, as is the net effect of the global radiation backgrounds that should build up early on. These are fundamental outstanding questions. The relative abundance of the two types of sources determined the global ionization topology, and their feedback processes likely drove the global time-evolution of reionization.
Finally, for completeness, it is useful to note that there are several other, more exotic sources that may have contributed to reionization in principle. These include decay products of various different dark matter particles [141,142,143,144,145], high-energy cosmic rays [118,146,147], or excess small-scale structure formation arising from primordial non-Gaussianities [148], a running of the spectral index [73], or a red spectral tilt [149,15]. Many of these alternatives were proposed in the wake of the anomalously high $\tau_e$ in the WMAP3 data, and, at the present time, there is no longer a need for these additional contributions.
Can We Detect the First Stars Directly?
As mentioned in the Introduction, reionization is a probe of the earliest light sources. The redshift and duration of reionization, inferred from quasar absorption spectra, 21cm signatures, and the CMB, will place a constraint on the host halos and the ionizing efficiencies. The observed level of "patchiness" will constrain the spectral hardness of the typical source and the relative contribution of stars and black holes, and shed light on the birth and death of the first galaxies.
One may, however, ask: is this the best we can do, or is there a hope to directly detect the light from the first stars or black holes? It is simple to obtain a rough estimate of the stellar mass in a proto-galaxy, or the mass of a bright (near-Eddington) black hole, which could be detected at the $\sim 1$ nJy detection threshold in a deep exposure with the James Webb Space Telescope. At $z = 10$, this requires a mass of about $10^5\,M_\odot$, either in stars [90] or in a BH [100]. (The former is consistent with a recent detailed estimate [150].) It is quite plausible (or even likely) that the very first galaxies and quasars were below this threshold.
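The quoted mass threshold can be motivated by converting the flux limit into a luminosity. The sketch below uses astropy's Planck15 cosmology and ignores K-corrections; it shows that 1 nJy at $z = 10$ corresponds to a specific luminosity of $\sim 10^{28}\,{\rm erg\,s^{-1}\,Hz^{-1}}$. Translating this into the $\sim 10^5\,M_\odot$ in stars (or in an accreting BH) requires a model-dependent light-to-mass ratio, which is not attempted here.

import math
import astropy.units as u
from astropy.cosmology import Planck15

z    = 10
F_nu = 1e-32 * u.erg / u.s / u.cm**2 / u.Hz        # 1 nJy

d_L  = Planck15.luminosity_distance(z).to(u.cm)    # ~ 3e29 cm at z = 10
L_nu = 4 * math.pi * d_L**2 * F_nu                 # K-corrections ignored

print(f"d_L  ~ {d_L:.2e}")
print(f"L_nu ~ {L_nu:.1e}")                        # ~ 1e28 erg/s/Hz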
So what hopes do we have of directly seeing the light of these first sources? I believe there are three possibilities.
First, observations can be made about an order of magnitude more sensitive, using a foreground cluster to gravitationally lens and magnify the $z \sim 10$ background sources. Indeed, there are two examples of detecting $z = 8-10$ galaxies [151,152] using this technique on the 28 foreground clusters in the CLASH survey [153]. The ongoing Hubble Frontier Fields program goes an order of magnitude deeper, using 4-6 clusters. This technique gives a chance of discovering $10^4\,M_\odot$ mini-galaxies or miniquasars at $z \sim 10$.
Second, and most promising, would be to detect the individual supernovae (SNe) from the first stellar populations. Even "normal" core-collapse SNe are bright enough to be visible well beyond $z = 10$, and the pair-instability SNe expected from massive PopIII stars with $\sim 130-250\,M_\odot$ would be even brighter. It has been shown that JWST could detect many hundreds of these SNe; the challenge will be that repeated observations will be required on many JWST fields, separated by years, to identify the slowly evolving light curves of these ultra-distant SNe [154].
Third, even if we cannot directly detect individual stars, black holes, or SNe, we can still directly detect their cumulative faint emission, through the technique known as "intensity mapping". In general this technique consists of "tomographic" observations of the fluctuating intensity in the emission lines from faint, individually undetectable sources [155,156]. In practice, at least two emission lines are required, so that their spatial fluctuations (in sky position and in redshift space) can be cross-correlated, eliminating contaminating signals from a foreground line. The same technique can be applied, in principle, to the strong HeII 1640Å emission lines expected from the first generation PopIII stars, cross-correlated with CO emission from the same galaxies, or with 21cm emission from the IGM [157]. This would require a next-generation UV instrument (the example considered in [157] is a space-borne 2m dish, with 100 individual detector pixels with spectral resolution R=1000).
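A toy version of the cross-correlation idea can be written in a few lines of numpy, as below: two maps that share a common cosmological signal but have independent foregrounds retain the signal in their cross-power, while the foreground power drops out on average. This is purely illustrative and does not model the HeII, CO, or 21cm signals.

import numpy as np

rng = np.random.default_rng(0)
n = 256

signal = rng.normal(size=(n, n))                 # common large-scale structure
map_a  = signal + 3.0 * rng.normal(size=(n, n))  # line A + its own foreground
map_b  = signal + 3.0 * rng.normal(size=(n, n))  # line B + its own foreground

def xpower(a, b):
    """Crude (bin-free) mean cross-power of two maps."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    return np.mean((fa * np.conj(fb)).real) / a.size

print(f"auto  (A x A): {xpower(map_a, map_a):.2f}")  # signal + foreground power (~10)
print(f"cross (A x B): {xpower(map_a, map_b):.2f}")  # ~ signal power only (~1)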
The Future
As the rest of this book will make clear, the future is bright, with JWST, ALMA, and several new 21cm experiments coming on line, allowing us to peer farther back in redshift. The main challenge will likely become to constrain parametric models, since it is unlikely that we will have full, ab-initio calculations of the reionization process incorporating all the relevant physics, on scales ranging from star-formation inside minihalos, to the global radiative feedback processes operating on the Hubble scale. With a combination of multiple observational probes, this will nevertheless give us a chance to understand the cosmic history of structure formation from its very beginning.
1. This topic was recently revisited [56] in a more rigorous analysis, following the time-dependent, non-equilibrium H$_2$ population levels. This yielded the same conclusion, i.e. that the post-recombination "intergalactic" H$_2$ abundance is negligibly low.
2. As an amusing aside: the highest redshift in our Hubble volume where we may find a star in a collapsed minihalo is $z = 65$, corresponding to an $\approx 8\sigma$ fluctuation on the mass scale $10^5\,M_\odot$ [70].
3. For reference, the largest existing N-body simulation is the Millennium-XXL project, with $3 \times 10^{11}$ particles.
4. Reionization must end by $z \sim 6$, as shown recently using the fraction of dark Lyα and Lyβ pixels in a sample of 22 quasars [140].
Acknowledgements. I thank my students and collaborators, who taught me a lot about reionization; the US federal agencies NASA and NSF, for funding much of my research; and Andrei Mesinger, for the initiative to put together this volume, and for his patience and dedication during the production process.
References
1. M. Schmidt. Large Redshifts of Five Quasi-Stellar Sources. ApJ, 141:1295, April 1965.
2. J. E. Gunn and B. A. Peterson. On the Density of Neutral Hydrogen in Intergalactic Space. ApJ, 142:1633-1641, November 1965.
3. A. A. Penzias and R. W. Wilson. A Measurement of Excess Antenna Temperature at 4080 Mc/s. ApJ, 142:419-421, July 1965.
4. R. A. Sunyaev and I. B. Zeldovich. Microwave background radiation as a probe of the contemporary structure and history of the universe. ARAA, 18:537-560, 1980.
5. C. J. Hogan, N. Kaiser, and M. J. Rees. Interpretation of anisotropy in the cosmic background radiation. Royal Society of London Philosophical Transactions Series A, 307:97-109, October 1982.
6. L. Hernquist, N. Katz, D. H. Weinberg, and J. Miralda-Escudé. The Lyman-Alpha Forest in the Cold Dark Matter Model. ApJ, 457:L51, February 1996.
7. F. Haardt and P. Madau. Radiative Transfer in a Clumpy Universe. II. The Ultraviolet Extragalactic Background. ApJ, 461:20, April 1996.
8. Z. Haiman and L. Knox. Reionization of the Intergalactic Medium and its Effect on the CMB. In A. de Oliveira-Costa and M. Tegmark, editors, Microwave Foregrounds, volume 181 of Astronomical Society of the Pacific Conference Series, page 227, 1999.
9. W. Hu and S. Dodelson. Cosmic Microwave Background Anisotropies. ARAA, 40:171-216, 2002.
10. M. Zaldarriaga, L. Colombo, E. Komatsu, A. Lidz, M. Mortonson, S. P. Oh, E. Pierpaoli, L. Verde, and O. Zahn. CMBPol Mission Concept Study: Reionization Science with the Cosmic Microwave Background. CMBPol White Paper, e-print arXiv:0811.3918, November 2008.
11. W. Hu and M. White. The Damping Tail of Cosmic Microwave Background Anisotropies. ApJ, 479:568-579, April 1997.
12. C. J. Hogan, N. Kaiser, and M. J. Rees. Interpretation of anisotropy in the cosmic background radiation. Royal Society of London Philosophical Transactions Series A, 307:97-109, October 1982.
13. M. Zaldarriaga, D. N. Spergel, and U. Seljak. Microwave Background Constraints on Cosmological Parameters. ApJ, 488:1-13, October 1997.
14. M. Kaplinghat, M. Chu, Z. Haiman, G. P. Holder, L. Knox, and C. Skordis. Probing the Reionization History of the Universe using the Cosmic Microwave Background Polarization. ApJ, 583:24-32, January 2003.
15. M. J. Mortonson and W. Hu. Model-Independent Constraints on Reionization from Large-Scale Cosmic Microwave Background Polarization. ApJ, 672:737-751, January 2008.
16. A. Gruzinov and W. Hu. Secondary Cosmic Microwave Background Anisotropies in a Universe Reionized in Patches. ApJ, 508:435-439, December 1998.
17. L. Knox, R. Scoccimarro, and S. Dodelson. Impact of Inhomogeneous Reionization on Cosmic Microwave Background Anisotropy. Physical Review Letters, 81:2004-2007, September 1998.
18. M. G. Santos, A. Cooray, Z. Haiman, L. Knox, and C.-P. Ma. Small-Scale Cosmic Microwave Background Temperature and Polarization Anisotropies Due to Patchy Reionization. ApJ, 598:756-766, December 2003.
19. A. Mesinger, M. McQuinn, and D. N. Spergel. The kinetic Sunyaev-Zel'dovich signal from inhomogeneous reionization: a parameter space study. MNRAS, 422:1403-1417, May 2012.
20. P. A. R. Ade, N. Aghanim, M. Arnaud, M. Ashdown, J. Aumont, C. Baccigalupi, A. J. Banday, R. B. Barreiro, J. G. Bartlett, et al. Planck 2015 results. XIII. Cosmological parameters. A&A, submitted; e-print arXiv:1502.01589, February 2015.
21. Z. Haiman and G. L. Bryan. Was Star Formation Suppressed in High-Redshift Minihalos? ApJ, 650:7-11, October 2006.
22. E. Visbal, Z. Haiman, and G. L. Bryan. Limits on Population III star formation in minihaloes implied by Planck. MNRAS, 453:4456-4466, November 2015.
23. G. B. Field. Absorption by Intergalactic Hydrogen. ApJ, 135:684-693, May 1962.
24. C. J. Hogan and M. J. Rees. Spectral appearance of non-uniform gas at high Z. MNRAS, 188:791-798, September 1979.
25. K. Subramanian and T. Padmanabhan. Neutral Hydrogen at High Redshifts as a Probe of Structure Formation - Part One - Post-Cobe Analysis of CDM and HDM Models. MNRAS, 265:101, November 1993.
26. P. Madau, A. Meiksin, and M. J. Rees. 21 Centimeter Tomography of the Intergalactic Medium at High Redshift. ApJ, 475:429-444, February 1997.
27. P. Tozzi, P. Madau, A. Meiksin, and M. J. Rees. Radio Signatures of H I at High Redshift: Mapping the End of the "Dark Ages". ApJ, 528:597-606, January 2000.
28. S. R. Furlanetto, S. P. Oh, and F. H. Briggs. Cosmology at low frequencies: The 21 cm transition and the high-redshift Universe. Physics Reports, 433:181-301, October 2006.
29. J. Miralda-Escudé. Reionization of the Intergalactic Medium and the Damping Wing of the Gunn-Peterson Trough. ApJ, 501:15-22, July 1998.
30. P. R. Shapiro and M. L. Giroux. Cosmological H II regions and the photoionization of the intergalactic medium. ApJ, 321:L107-L112, October 1987.
31. R. Cen and Z. Haiman. Quasar Strömgren Spheres Before Cosmological Reionization. ApJ, 542:L75-L78, October 2000.
32. Z. Haiman and A. Loeb. Determining the Redshift of Reionization from the Spectra of High-Redshift Sources. ApJ, 519:479-485, July 1999.
33. J. Miralda-Escudé, M. Haehnelt, and M. J. Rees. Reionization of the Inhomogeneous Universe. ApJ, 530:1-16, February 2000.
34. A. Mesinger and Z. Haiman. Evidence of a Cosmological Strömgren Surface and of Significant Neutral Hydrogen Surrounding the Quasar SDSS J1030+0524. ApJ, 611:L69-L72, August 2004.
35. R. H. Becker, X. Fan, R. L. White, M. A. Strauss, V. K. Narayanan, R. H. Lupton, J. E. Gunn, J. Annis, N. A. Bahcall, J. Brinkmann, A. J. Connolly, I. Csabai, P. C. Czarapata, M. Doi, T. M. Heckman, G. S. Hennessy, Ž. Ivezić, G. R. Knapp, D. Q. Lamb, T. A. McKay, J. A. Munn, T. Nash, R. Nichol, J. R. Pier, G. T. Richards, D. P. Schneider, C. Stoughton, A. S. Szalay, A. R. Thakar, and D. G. York. Evidence for Reionization at z ~ 6: Detection of a Gunn-Peterson Trough in a z=6.28 Quasar. AJ, 122:2850-2857, December 2001.
36. R. Barkana. Did the universe reionize at redshift six? New Ast., 7:85-100, March 2002.
37. R. B. Partridge and P. J. E. Peebles. Are Young Galaxies Visible? ApJ, 147:868, March 1967.
38. E. M. Hu, L. L. Cowie, and R. G. McMahon. The Density of Lyα Emitters at Very High Redshift. ApJ, 502:L99-L103, August 1998.
39. Z. Haiman and M. Spaans. Models for Dusty Lyα Emitters at High Redshift. ApJ, 518:138-144, June 1999.
40. M. Ouchi, K. Shimasaku, H. Furusawa, T. Saito, M. Yoshida, M. Akiyama, Y. Ono, T. Yamada, K. Ota, N. Kashikawa, M. Iye, T. Kodama, S. Okamura, C. Simpson, and M. Yoshida. Statistics of 207 Lyα Emitters at a Redshift Near 7: Constraints on Reionization and Galaxy Formation Models. ApJ, 723:869-894, November 2010.
41. Z. Haiman. The Detectability of High-Redshift Lyα Emission Lines prior to the Reionization of the Universe. ApJ, 576:L1-L4, September 2002.
42. M. R. Santos. Probing reionization with Lyman α emission lines. MNRAS, 349:1137-1152, April 2004.
43. M. Dijkstra, A. Lidz, and J. S. B. Wyithe. The impact of The IGM on high-redshift Lyα emission lines. MNRAS, 377:1175-1186, May 2007.
44. M. Dijkstra, J. S. B. Wyithe, and Z. Haiman. Luminosity functions of Lyα emitting galaxies and cosmic reionization of hydrogen. MNRAS, 379:253-259, July 2007.
45. M. Kuhlen and C.-A. Faucher-Giguère. Concordance models of reionization: implications for faint galaxies and escape fraction evolution. MNRAS, 423:862-876, June 2012.
46. M. Dijkstra, Z. Haiman, and A. Loeb. A Limit from the X-Ray Background on the Contribution of Quasars to Reionization. ApJ, 613:646-654, October 2004.
47. R. Salvaterra, F. Haardt, and A. Ferrara. Cosmic backgrounds from miniquasars. MNRAS, 362:L50-L54, September 2005.
48. M. McQuinn. Constraints on X-ray emissions from the reionization era. MNRAS, 426:1349-1360, October 2012.
49. T. Tanaka and Z. Haiman. The Assembly of Supermassive Black Holes at High Redshifts. ApJ, 696:1798-1822, May 2009.
50. R. Salvaterra, F. Haardt, M. Volonteri, and A. Moretti. Limits on the high redshift growth of massive black holes. A&A, 545:L6, September 2012.
51. W. C. Saslaw and D. Zipoy. Molecular Hydrogen in Pre-galactic Gas Clouds. Nature, 216:976-978, December 1967.
52. E. Visbal, Z. Haiman, and G. L. Bryan. A no-go theorem for direct collapse black holes without a strong ultraviolet background. MNRAS, 442:L100-L104, July 2014.
53. S. Lepp and J. M. Shull. Molecules in the early universe. ApJ, 280:465-469, May 1984.
54. T. Hirasawa. Formation of Protogalaxies and Molecular Processes in Hydrogen Gas. Progress of Theoretical Physics, 42:523-543, September 1969.
55. T. Matsuda, H. Satō, and H. Takeda. Cooling of Pre-Galactic Gas Clouds by Hydrogen Molecule. Progress of Theoretical Physics, 42:219-233, August 1969.
56. E. Alizadeh and C. M. Hirata. Molecular hydrogen in the cosmic recombination epoch. PRD, 84(8):083011, October 2011.
57. J. B. Hutchins. The thermal effects of H2 molecules in rotating and collapsing spheroidal gas clouds. ApJ, 205:103-121, April 1976.
58. J. Silk. The first stars. MNRAS, 205:705-718, November 1983.
59. F. Palla, E. E. Salpeter, and S. W. Stahler. Primordial star formation - The role of molecular hydrogen. ApJ, 271:632-641, August 1983.
60. P. J. E. Peebles and R. H. Dicke. Origin of the Globular Star Clusters. ApJ, 154:891, December 1968.
61. P. J. E. Peebles. The Black-Body Radiation Content of the Universe and the Formation of Galaxies. ApJ, 142:1317, November 1965.
62. M. J. Rees and J. P. Ostriker. Cooling, dynamics and fragmentation of massive gas clouds - Clues to the masses and radii of galaxies and clusters. MNRAS, 179:541-559, June 1977.
63. S. D. M. White and M. J. Rees. Core condensation in heavy halos - A two-stage theory for galaxy formation and clustering. MNRAS, 183:341-358, May 1978.
64. A. Dekel and J. Silk. The origin of dwarf galaxies, cold dark matter, and biased galaxy formation. ApJ, 303:39-55, April 1986.
65. J. Silk. On the fragmentation of cosmic gas clouds. I - The formation of galaxies and the first generation of stars. ApJ, 211:638-648, February 1977.
66. A. Kashlinsky and M. J. Rees. Formation of population III stars and pregalactic evolution. MNRAS, 205:955-971, December 1983.
67. Z. Haiman, A. A. Thoul, and A. Loeb. Cosmological Formation of Low-Mass Objects. ApJ, 464:523, June 1996.
68. M. Tegmark, J. Silk, M. J. Rees, A. Blanchard, T. Abel, and F. Palla. How Small Were the First Cosmological Objects? ApJ, 474:1, January 1997.
69. G. Hinshaw, D. Larson, E. Komatsu, D. N. Spergel, C. L. Bennett, J. Dunkley, M. R. Nolta, M. Halpern, R. S. Hill, N. Odegard, L. Page, K. M. Smith, J. L. Weiland, B. Gold, N. Jarosik, A. Kogut, M. Limon, S. S. Meyer, G. S. Tucker, E. Wollack, and E. L. Wright. Nine-year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Cosmological Parameter Results. ApJS, 208:19, October 2013.
70. S. Naoz, S. Noter, and R. Barkana. The first stars in the Universe. MNRAS, 373:L98-L102, November 2006.
71. N. Yoshida, A. Sokasian, L. Hernquist, and V. Springel. Early Structure Formation and Reionization in a Warm Dark Matter Cosmology. ApJ, 591:L1-L4, July 2003.
72. V. Springel, S. D. M. White, A. Jenkins, C. S. Frenk, N. Yoshida, L. Gao, J. Navarro, R. Thacker, D. Croton, J. Helly, J. A. Peacock, S. Cole, P. Thomas, H. Couchman, A. Evrard, J. Colberg, and F. Pearce. Simulations of the formation, evolution and clustering of galaxies and quasars. Nature, 435:629-636, June 2005.
73. N. Yoshida, A. Sokasian, L. Hernquist, and V. Springel. Early Structure Formation and Reionization in a Cosmological Model with a Running Primordial Power Spectrum. ApJ, 598:73-85, November 2003.
74. R. Barkana, Z. Haiman, and J. P. Ostriker. Constraints on Warm Dark Matter from Cosmological Reionization. ApJ, 558:482-496, September 2001.
75. F. Pacucci, A. Mesinger, and Z. Haiman. Focusing on warm dark matter with lensed high-redshift galaxies. MNRAS, 435:L53-L57, August 2013.
76. R. S. de Souza, A. Mesinger, A. Ferrara, Z. Haiman, R. Perna, and N. Yoshida. Constraints on warm dark matter models from high-redshift long gamma-ray bursts. MNRAS, 432:3218-3227, July 2013.
77. T. Abel, G. L. Bryan, and M. L. Norman. The Formation and Fragmentation of Primordial Molecular Clouds. ApJ, 540:39-44, September 2000.
78. V. Bromm, P. S. Coppi, and R. B. Larson. The Formation of the First Stars. I. The Primordial Star-forming Cloud. ApJ, 564:23-51, January 2002.
79. N. Yoshida, T. Abel, L. Hernquist, and N. Sugiyama. Simulations of Early Structure Formation: Primordial Gas Clouds. ApJ, 592:645-663, August 2003.
80. F. H. Shu. Self-similar collapse of isothermal spheres and star formation. ApJ, 214:488-497, June 1977.
81. B. W. O'Shea and M. L. Norman. Population III Star Formation in a ΛCDM Universe. I. The Effect of Formation Redshift and Environment on Protostellar Accretion Rate. ApJ, 654:66-92, January 2007.
82. J. H. Wise, M. J. Turk, and T. Abel. Resolving the Formation of Protogalaxies. II. Central Gravitational Collapse. ApJ, 682:745-757, August 2008.
83. C. Shang, G. L. Bryan, and Z. Haiman. Supermassive black hole formation by direct collapse: keeping protogalactic gas H2 free in dark matter haloes with virial temperatures T_vir ≳ 10^4 K. MNRAS, 402:1249-1262, February 2010.
84. M. J. Turk, T. Abel, and B. O'Shea. The Formation of Population III Binaries from Cosmological Initial Conditions. Science, 325:601, July 2009.
85. A. Stacy, T. H. Greif, and V. Bromm. The first stars: formation of binaries and small multiple systems. MNRAS, 403:45-60, March 2010.
86. T. H. Greif, V. Springel, S. D. M. White, S. C. O. Glover, P. C. Clark, R. J. Smith, R. S. Klessen, and V. Bromm. Simulations on a Moving Mesh: The Clustered Formation of Population III Protostars. ApJ, 737:75, August 2011.
87. P. C. Clark, S. C. O. Glover, R. S. Klessen, and V. Bromm. Gravitational Fragmentation in Turbulent Primordial Gas and the Initial Mass Function of Population III Stars. ApJ, 727:110, February 2011.
88. J. Prieto, P. Padoan, R. Jimenez, and L. Infante. Population III Stars from Turbulent Fragmentation at Redshift ~11. ApJ, 731:L38, April 2011.
89. M. J. Turk, J. S. Oishi, T. Abel, and G. L. Bryan. Magnetic Fields in Population III Star Formation. ApJ, 745:154, February 2012.
90. Z. Haiman and A. Loeb. Signatures of Stellar Reionization of the Universe. ApJ, 483:21-37, July 1997.
91. J. Tumlinson and J. M. Shull. Zero-Metallicity Stars and the Effects of the First Stars on Reionization. ApJ, 528:L65-L68, January 2000.
92. V. Bromm, R. P. Kudritzki, and A. Loeb. Generic Spectrum and Ionization Efficiency of a Heavy Initial Mass Function for the First Stars. ApJ, 552:464-472, May 2001.
93. D. Schaerer. On the properties of massive Population III stars and metal-free stellar populations. A&A, 382:28-42, January 2002.
94. P. Madau, M. J. Rees, M. Volonteri, F. Haardt, and S. P. Oh. Early Reionization by Miniquasars. ApJ, 604:484-494, April 2004.
95. M. Ricotti and J. P. Ostriker. X-ray pre-ionization powered by accretion on the first black holes - I. A model for the WMAP polarization measurement. MNRAS, 352:547-562, August 2004.
96. A. Mesinger, A. Ferrara, and D. S. Spiegel. Signatures of X-rays in the early Universe. MNRAS, 431:621-637, May 2013.
97. M. Jeon, A. H. Pawlik, V. Bromm, and M. Milosavljević. Radiative feedback from high-mass X-ray binaries on the formation of the first galaxies and early reionization. MNRAS, 440:3778-3796, June 2014.
98. T. Tanaka, R. Perna, and Z. Haiman. X-ray emission from high-redshift miniquasars: self-regulating the population of massive black holes through global warming. MNRAS, 425:2974-2987, October 2012.
99. P. R. Shapiro, M. L. Giroux, and A. Babul. Reionization in a cold dark matter universe: The feedback of galaxy formation on the intergalactic medium. ApJ, 427:25-50, May 1994.
100. Z. Haiman and A. Loeb. Observational Signatures of the First Quasars. ApJ, 503:505-517, August 1998.
101. Z. Haiman, M. J. Rees, and A. Loeb. Destruction of Molecular Hydrogen during Cosmological Reionization. ApJ, 476:458, February 1997.
102. K. Omukai and R. Nishi. Photodissociative Regulation of Star Formation in Metal-free Pregalactic Clouds. ApJ, 518:64-68, June 1999.
103. Z. Haiman, T. Abel, and M. J. Rees. The Radiative Feedback of the First Cosmological Objects. ApJ, 534:11-24, May 2000.
104. B. Ciardi, A. Ferrara, and T. Abel. Intergalactic H2 Photodissociation and the Soft Ultraviolet Background Produced by Population III Objects. ApJ, 533:594-600, April 2000.
105. M. E. Machacek, G. L. Bryan, and T. Abel. Simulations of Pregalactic Structure Formation with Radiative Feedback. ApJ, 548:509-521, February 2001.
106. M. Ricotti, N. Y. Gnedin, and J. M. Shull. Feedback from Galaxy Formation: Production and Photodissociation of Primordial H2. ApJ, 560:580-591, October 2001.
107. M. Ricotti, N. Y. Gnedin, and J. M. Shull. The Fate of the First Galaxies. I. Self-consistent Cosmological Simulations with Radiative Transfer. ApJ, 575:33-48, August 2002.
108. A. Mesinger, G. L. Bryan, and Z. Haiman. Ultraviolet Radiative Feedback on High-Redshift Protogalaxies. ApJ, 648:835-851, September 2006.
109. J. H. Wise and T. Abel. Suppression of H2 Cooling in the Ultraviolet Background. ApJ, 671:1559-1567, December 2007.
110. B. W. O'Shea and M. L. Norman. Population III Star Formation in a ΛCDM Universe. II. Effects of a Photodissociating Background. ApJ, 673:14-33, January 2008.
111. J. L. Johnson, T. H. Greif, and V. Bromm. Occurrence of metal-free galaxies in the early Universe. MNRAS, 388:26-38, July 2008.
112. J. H. Wise and T. Abel. How Very Massive Metal-Free Stars Start Cosmological Reionization. ApJ, 684:1-17, September 2008.
113. J. H. Wise and T. Abel. Resolving the Formation of Protogalaxies. III. Feedback from the First Stars. ApJ, 685:40-56, September 2008.
114. D. Whalen, B. W. O'Shea, J. Smidt, and M. L. Norman. How the First Stars Regulated Local Star Formation. I. Radiative Feedback. ApJ, 679:925-941, June 2008.
115. A. Mesinger, G. L. Bryan, and Z. Haiman. Relic HII regions and radiative feedback at high redshifts. MNRAS, 399:1650-1662, November 2009.
116. A. Heger, C. L. Fryer, S. E. Woosley, N. Langer, and D. H. Hartmann. How Massive Single Stars End Their Life. ApJ, 591:288-300, July 2003.
117. Z. Haiman, M. J. Rees, and A. Loeb. H2 Cooling of Primordial Gas Triggered by UV Irradiation. ApJ, 467:522, August 1996.
118. S. P. Oh. Reionization by Hard Photons. I. X-Rays from the First Star Clusters. ApJ, 553:499-512, June 2001.
119. A. Venkatesan, M. L. Giroux, and J. M. Shull. Heating and Ionization of the Intergalactic Medium by an Early X-Ray Background. ApJ, 563:1-8, December 2001.
120. S. C. O. Glover and P. W. J. L. Brand. Radiative feedback from an early X-ray background. MNRAS, 340:210-226, March 2003.
121. X. Chen and J. Miralda-Escudé. The Spin-Kinetic Temperature Coupling and the Heating Rate due to Lyα Scattering before Reionization: Predictions for 21 Centimeter Emission and Absorption. ApJ, 602:1-11, February 2004.
122. M. Ricotti, J. P. Ostriker, and N. Y. Gnedin. X-ray pre-ionization powered by accretion on the first black holes - II. Cosmological simulations and observational signatures. MNRAS, 357:207-219, February 2005.
123. I. F. Mirabel, M. Dijkstra, P. Laurent, A. Loeb, and J. R. Pritchard. Stellar black holes at the dawn of the universe. A&A, 528:A149, April 2011.
124. J. Wolcott-Green and Z. Haiman. Feedback from the infrared background in the early Universe. MNRAS, 425:L51-L55, September 2012.
125. M. Dijkstra, Z. Haiman, A. Mesinger, and J. S. B. Wyithe. Fluctuations in the high-redshift Lyman-Werner background: close halo pairs as the origin of supermassive black holes. MNRAS, 391:1961-1972, December 2008.
126. K. Ahn, P. R. Shapiro, I. T. Iliev, G. Mellema, and U.-L. Pen. The Inhomogeneous Background Of H2-Dissociating Radiation During Cosmic Reionization. ApJ, 695:1430-1445, April 2009.
127. Z. Haiman and G. P. Holder. The Reionization History at High Redshifts. I. Physical Models and New Constraints from Cosmic Microwave Background Polarization. ApJ, 595:1-12, September 2003.
128. J. S. B. Wyithe and A. Loeb. Reionization of Hydrogen and Helium by Early Stars and Quasars. ApJ, 586:693-708, April 2003.
129. R. Cen. The Implications of Wilkinson Microwave Anisotropy Probe Observations for Population III Star Formation Processes. ApJ, 591:L5-L8, July 2003.
130. I. T. Iliev, G. Mellema, P. R. Shapiro, and U.-L. Pen. Self-regulated reionization. MNRAS, 376:534-548, April 2007.
131. J. L. Johnson, T. H. Greif, and V. Bromm. Local Radiative Feedback in the Formation of the First Protogalaxies. ApJ, 665:85-95, August 2007.
132. S. R. Furlanetto, M. Zaldarriaga, and L. Hernquist. The Growth of H II Regions During Reionization. ApJ, 613:1-15, September 2004.
133. R. H. Kramer, Z. Haiman, and S. P. Oh. Feedback from Clustered Sources during Reionization. ApJ, 649:570-578, October 2006.
134. A. Mesinger and S. Furlanetto. Efficient Simulations of Early Structure Formation and Reionization. ApJ, 669:663-675, November 2007.
135. O. Zahn, A. Mesinger, M. McQuinn, H. Trac, R. Cen, and L. E. Hernquist. Comparison of reionization models: radiative transfer simulations and approximate, seminumeric models. MNRAS, 414:727-738, June 2011.
136. R. H. Kramer and Z. Haiman. The thickness of high-redshift quasar ionization fronts as a constraint on the ionizing spectral energy distribution. MNRAS, 385:1561-1575, April 2008.
137. R. M. Thomas and S. Zaroubi. Time-evolution of ionization and heating around first stars and miniqsos. MNRAS, 384:1080-1096, March 2008.
138. S. R. Furlanetto, S. P. Oh, and F. H. Briggs. Cosmology at low frequencies: The 21 cm transition and the high-redshift Universe. Physics Reports, 433:181-301, October 2006.
139. S. Mineo, M. Gilfanov, and R. Sunyaev. X-ray emission from star-forming galaxies - II. Hot interstellar medium. MNRAS, 426:1870-1883, November 2012.
140. I. D. McGreer, A. Mesinger, and V. D'Odorico. Model-independent evidence in favour of an end to reionization by z ≳ 6. MNRAS, 447:499-505, February 2015.
141. X. Chen and M. Kamionkowski. Particle decays during the cosmic dark ages. PRD, 70(4):043502, August 2004.
142. S. H. Hansen and Z. Haiman. Do We Need Stars to Reionize the Universe at High Redshifts? Early Reionization by Decaying Heavy Sterile Neutrinos. ApJ, 600:26-31, January 2004.
143. S. Kasuya, M. Kawasaki, and N. Sugiyama. Partially ionizing the universe by decaying particles. PRD, 69(2):023512, January 2004.
144. P. L. Biermann and A. Kusenko. Relic keV Sterile Neutrinos and Reionization. Physical Review Letters, 96(9):091301, March 2006.
145. E. Ripamonti, M. Mapelli, and A. Ferrara. The impact of dark matter decays and annihilations on the formation of the first structures. MNRAS, 375:1399-1408, March 2007.
146. Y. A. Shchekinov and E. O. Vasiliev. Primordial star formation triggered by UV photons from UHECR. A&A, 419:19-23, May 2004.
147. A. Stacy and V. Bromm. Impact of cosmic rays on Population III star formation. MNRAS, 382:229-238, November 2007.
148. X. Chen, A. Cooray, N. Yoshida, and N. Sugiyama. Can non-Gaussian cosmological models explain the WMAP high optical depth for reionization? MNRAS, 346:L31-L35, December 2003.
149. M. A. Alvarez, P. R. Shapiro, K. Ahn, and I. T. Iliev. Implications of WMAP 3 Year Data for the Sources of Reionization. ApJ, 644:L101-L104, June 2006.
150. E. Zackrisson, C.-E. Rydberg, D. Schaerer, G. Östlin, and M. Tuli. The Spectral Evolution of the First Galaxies. I. James Webb Space Telescope Detection Limits and Color Criteria for Population III Galaxies. ApJ, 740:13, October 2011.
151. W. Zheng, M. Postman, A. Zitrin, J. Moustakas, X. Shu, S. Jouvel, O. Høst, A. Molino, L. Bradley, D. Coe, L. A. Moustakas, M. Carrasco, H. Ford, N. Benítez, T. R. Lauer, S. Seitz, R. Bouwens, A. Koekemoer, E. Medezinski, M. Bartelmann, T. Broadhurst, M. Donahue, C. Grillo, L. Infante, S. W. Jha, D. D. Kelson, O. Lahav, D. Lemze, P. Melchior, M. Meneghetti, J. Merten, M. Nonino, S. Ogaz, P. Rosati, K. Umetsu, and A. van der Wel. A magnified young galaxy from about 500 million years after the Big Bang. Nature, 489:406-408, September 2012.
152. D. Coe, A. Zitrin, M. Carrasco, X. Shu, W. Zheng, M. Postman, L. Bradley, A. Koekemoer, R. Bouwens, T. Broadhurst, A. Monna, O. Host, L. A. Moustakas, H. Ford, J. Moustakas, A. van der Wel, M. Donahue, S. A. Rodney, N. Benítez, S. Jouvel, S. Seitz, D. D. Kelson, and P. Rosati. CLASH: Three Strongly Lensed Images of a Candidate z ≈ 11 Galaxy. ApJ, 762:32, January 2013.
. M Postman, D Coe, N Benítez, L Bradley, T Broadhurst, M Donahue, H Ford, O Graur, G Graves, S Jouvel, A Koekemoer, D Lemze, E Medezinski, A Molino, L Moustakas, S Ogaz, A Riess, S Rodney, P Rosati, K Umetsu, W Zheng, A Zitrin, M Bartelmann, R Bouwens, N Czakon, S Golwala, O Host, L Infante, S Jha, Y Jimenez-Teja, D Kelson, O Lahav, R Lazkoz, D Maoz, C Mccully, P Melchior, M Meneghetti, J Merten, J Moustakas, M Nonino, B Patel, E Regös, J Sayers, S Seitz, A Van Der Wel, The Cluster Lensing and Supernova Survey with Hubble: An Overview. ApJS. 25M. Postman, D. Coe, N. Benítez, L. Bradley, T. Broadhurst, M. Donahue, H. Ford, O. Graur, G. Graves, S. Jouvel, A. Koekemoer, D. Lemze, E. Medezinski, A. Molino, L. Moustakas, S. Ogaz, A. Riess, S. Rodney, P. Rosati, K. Umetsu, W. Zheng, A. Zitrin, M. Bartelmann, R. Bouwens, N. Czakon, S. Golwala, O. Host, L. Infante, S. Jha, Y. Jimenez-Teja, D. Kelson, O. Lahav, R. Lazkoz, D. Maoz, C. McCully, P. Melchior, M. Meneghetti, J. Merten, J. Mous- takas, M. Nonino, B. Patel, E. Regös, J. Sayers, S. Seitz, and A. Van der Wel. The Cluster Lensing and Supernova Survey with Hubble: An Overview. ApJS, 199:25, April 2012.
The Redshift Distribution of Distant Supernovae and Its Use in Probing Reionization. A Mesinger, B D Johnson, Z Haiman, ApJ. 637A. Mesinger, B. D. Johnson, and Z. Haiman. The Redshift Distribution of Distant Supernovae and Its Use in Probing Reionization. ApJ, 637:80-90, January 2006.
Carbon monoxide line emission as a CMB foreground: tomography of the star-forming universe with different spectral resolutions. M Righi, C Hernández-Monteagudo, R A Sunyaev, A&A. 489M. Righi, C. Hernández-Monteagudo, and R. A. Sunyaev. Carbon monoxide line emission as a CMB foreground: tomography of the star-forming universe with different spectral reso- lutions. A&A, 489:489-504, October 2008.
Measuring the 3D clustering of undetected galaxies through cross correlation of their cumulative flux fluctuations from multiple spectral lines. E Visbal, A Loeb, JCAP. 1116E. Visbal and A. Loeb. Measuring the 3D clustering of undetected galaxies through cross correlation of their cumulative flux fluctuations from multiple spectral lines. JCAP, 11:16, November 2010.
Looking for Population III stars with He II line intensity mapping. E Visbal, Z Haiman, G L Bryan, MNRAS. 450E. Visbal, Z. Haiman, and G. L. Bryan. Looking for Population III stars with He II line intensity mapping. MNRAS, 450:2506-2513, July 2015.
| []
|
[
"Learning to Compare for Better Training and Evaluation of Open Domain Natural Language Generation Models",
"Learning to Compare for Better Training and Evaluation of Open Domain Natural Language Generation Models"
]
| [
"Wangchunshu Zhou [email protected] \nState Key Lab of Software Development Environment\nBeihang University\nBeijingChina\n",
"Ke Xu [email protected] \nState Key Lab of Software Development Environment\nBeihang University\nBeijingChina\n"
]
| [
"State Key Lab of Software Development Environment\nBeihang University\nBeijingChina",
"State Key Lab of Software Development Environment\nBeihang University\nBeijingChina"
]
| []
| Automated evaluation of open domain natural language generation (NLG) models remains a challenge and widely used metrics such as BLEU and Perplexity can be misleading in some cases. In our paper, we propose to evaluate natural language generation models by learning to compare a pair of generated sentences by fine-tuning BERT, which has been shown to have good natural language understanding ability. We also propose to evaluate the model-level quality of NLG models with sample-level comparison results with skill rating system. While able to be trained in a fully self-supervised fashion, our model can be further fine-tuned with a little amount of human preference annotation to better imitate human judgment. In addition to evaluating trained models, we propose to apply our model as a performance indicator during training for better hyperparameter tuning and early-stopping. We evaluate our approach on both story generation and chitchat dialogue response generation. Experimental results show that our model correlates better with human preference compared with previous automated evaluation approaches. Training with the proposed metric yields better performance in human evaluation, which further demonstrates the effectiveness of the proposed model. | 10.1609/aaai.v34i05.6521 | [
"https://ojs.aaai.org/index.php/AAAI/article/download/6521/6377"
]
| 211,082,630 | 2002.05058 | 143ca4ff7a0a5a8834632006eebf20100ba6bf6c |
Learning to Compare for Better Training and Evaluation of Open Domain Natural Language Generation Models
Wangchunshu Zhou [email protected]
State Key Lab of Software Development Environment
Beihang University
BeijingChina
Ke Xu [email protected]
State Key Lab of Software Development Environment
Beihang University
BeijingChina
Learning to Compare for Better Training and Evaluation of Open Domain Natural Language Generation Models
Automated evaluation of open domain natural language generation (NLG) models remains a challenge, and widely used metrics such as BLEU and Perplexity can be misleading in some cases. In this paper, we propose to evaluate natural language generation models by learning to compare a pair of generated sentences by fine-tuning BERT, which has been shown to have good natural language understanding ability. We also propose to evaluate the model-level quality of NLG models with sample-level comparison results via a skill rating system. While able to be trained in a fully self-supervised fashion, our model can be further fine-tuned with a small amount of human preference annotation to better imitate human judgment. In addition to evaluating trained models, we propose to apply our model as a performance indicator during training for better hyperparameter tuning and early-stopping. We evaluate our approach on both story generation and chitchat dialogue response generation. Experimental results show that our model correlates better with human preference compared with previous automated evaluation approaches. Training with the proposed metric yields better performance in human evaluation, which further demonstrates the effectiveness of the proposed model.
Introduction
Recent advances in sequence-to-sequence learning architectures (Sutskever, Vinyals, and Le 2014) and the transformer model (Vaswani et al. 2017) have raised increasing interest in natural language generation (NLG) tasks, including story generation (Fan, Lewis, and Dauphin 2018), open-domain dialogue response generation (Sordoni et al. 2015) and abstractive summarization (See, Liu, and Manning 2017). Despite the fast advances of models, there remains a huge gap in the evaluation of NLG models, and it is hard to measure progress due to the lack of good evaluation metrics. While perplexity is a good measure of how well a model fits some data, it does not measure performance at the desired task. Word-overlap based metrics such as BLEU (Papineni et al. 2002), METEOR (Banerjee and Lavie 2005) and ROUGE (Lin 2004) capture quality better than perplexity and are useful in translation and summarization. However, they still correlate poorly with human evaluation (Liu et al. 2016) in open domain text generation tasks, including story generation and dialogue response generation, because two equally good generated texts may have no n-gram overlap. Human evaluation is generally considered to be the gold standard; however, it does not scale well, as it is generally expensive and time-consuming to conduct.
Apart from measuring relative progress between different models, automated evaluation metrics also play an important role in the training stage of NLG models. It is a common practice to tune model hyperparameters, detect convergence, perform early-stopping, and select the best checkpoints based on the model's performance on automated evaluation metrics. While acceptable for tasks where automated metrics correlate well with human evaluation, including machine translation and text summarization, this can be erroneous and result in sub-optimal training in open domain NLG tasks, because the available automated metrics correlate poorly with human evaluation, as demonstrated in the experimental section of this paper.
To tackle the aforementioned problems, in this paper we propose a self-supervised approach with transfer learning that learns to compare the quality of two samples, serving as an automated comparative Turing test. The motivation of our approach is that we can better assess the quality of a generated sample or a trained NLG model by comparing it with another one. Our model is a text pair classification model trained to compare the task-specific quality of two samples, which is then used to evaluate the quality of trained NLG models. As human preference annotation is generally expensive, our model is designed to perform self-supervised training using only generated samples and gold reference samples, without human preference annotation. When human preference annotation is available, our model can be further fine-tuned to better imitate human judgment. To evaluate the model-level quality of NLG models based on pairwise comparison at the sample level, we adopt a skill rating system similar to Elo (Elo 1978) and TrueSkill (Herbrich, Minka, and Graepel 2007), which is a method for assigning a numerical skill to players in a player-vs-player game, given a win-loss record of games played. In our scenario, the players are the NLG models to be evaluated, and a higher rating indicates a better model. The skill rating system makes it possible to evaluate all n models without needing to run $n^2$ matches and is able to take into account the amount of new information each comparison provides.
The contribution of this paper is threefold:
• We propose a "learning to compare" model to better assess the quality of text generated by NLG models based on pairwise comparison. Our model is able to transfer natural language understanding knowledge from BERT by fine-tuning in a self-supervised way, while also being able to be further fine-tuned with human preference annotation. Once trained, our model is able to perform inter-model comparison without the need for gold references, which greatly enlarges the potentially available test set and reduces the potential risk of overfitting the references in the test set.
• We propose to use the skill rating system to perform model-level evaluation based on the sample-level evaluation information provided by our pairwise comparison model. The skill rating system is more efficient and accurate than several baseline approaches.
• We conduct experiments on both a story generation task and an open domain dialogue response generation task. Experimental results show that our approach correlates better with human evaluation on both datasets. Moreover, we show that using automated metrics such as BLEU to perform hyperparameter tuning and early-stopping results in sub-optimal models, and that our approach helps alleviate this problem.
Related Work
Evaluation of NLG models has been a long-standing open problem. While human evaluation may be ideal, it is generally expensive to conduct and does not scale well. Various automated evaluation approaches have been proposed to facilitate the development and evaluation of NLG models. We summarize these evaluation approaches below.
Text Overlap Metrics, including BLEU (Papineni et al. 2002), METEOR (Banerjee and Lavie 2005) and ROUGE (Lin 2004), are the most popular metrics employed in the evaluation of NLG models. They evaluate generated text by comparing its similarity to human written references. While this works well in tasks where the diversity of acceptable outputs is limited, such as machine translation and text summarization, text overlap metrics have been shown to have weak or no correlation with human judgments in open domain natural language generation tasks (Liu et al. 2016). There are two major drawbacks to these metrics. First, text overlap metrics cannot distinguish minor variations in a generated text which may make the sentence not equally grammatically correct or semantically meaningful. Second, there may exist multiple equally good outputs for a given input, and comparing against one gold reference can be erroneous.
Perplexity is commonly used to evaluate the quality of a language model. It measures how well a probability distribution predicts a sample and captures the degree of uncertainty in the model. It is used to evaluate models in open-domain NLG tasks such as story generation (Fan, Lewis, and Dauphin 2018) and open domain dialogue systems. However, "how likely a sentence is generated by a given model" may not be comparable across different models and does not indicate the quality of the sentence.
Parameterized Metrics learn a parameterized model to evaluate generated text. Adversarial evaluation models (Kannan and Vinyals 2017; Li et al. 2017a) assign a score based on how easy it is to distinguish the dialogue model responses from human responses. However, training such a discriminator can be difficult, as the binary classification task can be easily over-fitted and leads to poor generalizability (Kannan and Vinyals 2017). Moreover, the information we get from the discriminator accuracy is limited, as we cannot compare the quality of two generated sentences when they both succeed or both fail in fooling the discriminator. A recent study shows that the discriminator accuracy does not correlate well with human preference (Garbacea et al. 2019). The Automated Dialogue Evaluation Model (ADEM) (Lowe et al. 2017) is another parameterized metric proposed for dialogue system evaluation. It learns to score a generated dialogue response based on the context and the human written reference. However, it requires human-annotated scores for generated sentences. It is generally hard to design appropriate questions for crowdsourcing these scores, which makes the annotation very expensive to obtain, and the inter-annotator agreement score is only moderate (Lowe et al. 2017). As a result, the training data is limited and noisy, which makes the scoring task even harder. It can be problematic when comparing models with similar quality. In addition, this model is designed only for evaluating dialogue response generation models. More recently, embedding-similarity based metrics such as RUSE (Shimanaka, Kajiwara, and Komachi 2018) and BERTScore (Zhang et al. 2019) have been proposed. These metrics alleviate the first problem of text overlap metrics by modeling semantic similarity better. However, they cannot address the response diversity problem and thus are only suitable for machine translation and text summarization.
Another line of research on NLG evaluation is to unify human evaluation with statistical evaluation (Hashimoto, Zhang, and Liang 2019; Chaganty, Mussman, and Liang 2018). These works are orthogonal to our paper, as they mainly focus on the combination of human evaluation and automated evaluation.
Another related work to our research is the skill rating system, which evaluates players by observing a record of wins and losses of multiple players and inferring the value of a latent, unobserved skill variable for each player that explains the records of wins and losses. It was first adopted to evaluate GANs (Goodfellow et al. 2014) for synthesizing images (Olsson et al. 2018) by competing generators against discriminators. Their approach is an approximation of skill rating, as the original skill rating system requires games played by two symmetric players, while in their system the players are asymmetric. Their approach does not include the "tie" option and thus cannot distinguish cases where the discriminator is or is not confident enough. More importantly, their approach is only designed for evaluating GANs, while our approach can be used for any NLG model.
Methodology
We present the proposed approach in this section. We begin with the sample-level pairwise comparison model. Afterwards, we introduce how to adopt the skill rating system to perform model-level evaluation of NLG models.
Learning to Compare
The proposed comparative evaluator is a text pair relation classifier trained to compare the task-specific quality of two samples. The motivation for evaluating one sample by comparing it with another is drawn from insights gained while conducting human evaluation for NLG models. We find that when comparing two NLG models, instead of asking human annotators to assign scores separately for samples generated by different models, which resembles the case in the ADEM model (Lowe et al. 2017), it is much easier for human annotators to directly compare one sample generated by the first model against another sample from the second model pairwise and compute the win/loss rate. The comparison-based evaluation may also be more accurate, which is demonstrated by a higher inter-annotator agreement score in our preliminary experiments.
The comparative evaluator learns a total order of sample quality by classifying whether the first compared sample is better (>), worse (<), or indistinguishable (≈) in terms of its quality compared with the other sample. In this way, our model encodes the inductive bias that sometimes two samples have similar quality and it is hard and unreliable to choose the better one. By giving our model the third "tie" option, it can explicitly express its uncertainty and state its preference only when it is confident enough. This design choice is motivated by the practice that adding a "tie" option for human annotators when performing pairwise human evaluation can often make the comparison easier and more reliable. For a text sample, our comparative evaluator can provide a more informative assessment than the binary discriminative evaluator, because one evaluated sample can receive multiple feedback signals from the comparative evaluator by being compared with multiple other samples. In contrast, the discriminative evaluator can only evaluate a sample once, and is therefore more likely to suffer from the inherent uncertainty of the evaluator.
We propose two approaches to construct pairwise training examples for the comparative evaluator. The first approach generates strong supervision examples. It is based on the intuition that human written references are generally of better quality than machine-generated samples, and that it is hard to tell the difference in quality when the two compared samples are both machine-generated or both human written references. We denote by $S^+$/$S^-$ the set of real/generated samples. For a real sample $s^+ \in S^+$ and a generated sample $s^- \in S^-$, we assign the label "better (>)" to the pair $(s^+, s^-)$ and "worse (<)" to $(s^-, s^+)$. For two samples both from the real data or both from the generated samples, we assign the label "indistinguishable (≈)" to such pairs (i.e., $(s^+_i, s^+_j)$ and $(s^-_i, s^-_j)$). For a training set with n real samples and n generated samples, we can construct $2n^2$ pairwise training examples for the comparative evaluator, allowing us to enhance the generalization ability and introduce more informative learning signals than the standard real/fake binary discriminative evaluator. Note that when constructing a sample pair $(s^-_i, s^-_j)$, $s^-_i$ and $s^-_j$ are sampled from the same checkpoint of the same model in order to ensure that they are of similar quality in expectation.
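To make the construction concrete, the sketch below is our own illustration of this step; the label ids, the function name, and the inclusion of tie pairs alongside the $2n^2$ cross pairs are assumptions, not details fixed by the paper.

```python
import itertools
import random

BETTER, WORSE, TIE = 0, 1, 2  # ">", "<", "≈" as illustrative label ids

def build_strong_pairs(real, generated):
    """Construct strong-supervision pairs from real and generated samples.

    Cross pairs (real vs. generated) yield the 2n^2 examples counted in
    the text; same-source pairs are labelled indistinguishable.
    """
    pairs = []
    for s_pos, s_neg in itertools.product(real, generated):
        pairs.append((s_pos, s_neg, BETTER))  # reference assumed better
        pairs.append((s_neg, s_pos, WORSE))
    for pool in (real, generated):
        # samples in `generated` should come from one checkpoint of one
        # model, so that same-source pairs have similar quality.
        for s_i, s_j in itertools.combinations(pool, 2):
            pairs.append((s_i, s_j, TIE))
    random.shuffle(pairs)
    return pairs
```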
One problem with the strong supervision approach is that it always labels two generated samples as indistinguishable. However, during inference, the input to the comparative evaluator is a pair of generated samples from different models. It thus requires the model to capture the quality relation in the training examples and generalize well in order to successfully compare two samples, rather than simply classifying them as indistinguishable, which provides relatively little information for evaluating NLG models.
To tackle this problem, we propose an approach to construct weak supervision examples for training the comparative evaluator. The intuition behind our weak supervision approach is that, during training, the quality of the NLG model keeps improving until convergence. Given two checkpoints of the same model, we can thus consider samples generated by the more recent checkpoint to be of better quality than samples generated by the earlier version of the same model. This approach is considered weak supervision because the model quality may not improve monotonically, and sometimes it is hard to decide whether the model has begun to overfit the training data and its quality has started to decline. To minimize the noise introduced by these problems, we empirically set the minimal margin between two selected checkpoints to 10% of the total training iterations and do not select two "almost converged" checkpoints. The construction of training pairs is similar to the first approach. In addition, motivated by the fact that the larger the quality margin between the two selected versions of the model, the easier it is for the comparative evaluator to learn to distinguish the training examples, we propose to use curriculum learning (Bengio et al. 2009) by feeding the comparative evaluator sample pairs with a larger margin (i.e., more training iterations between the two selected checkpoints) during the initial training stage and gradually decreasing the margin, letting the model learn to capture smaller quality differences. Moreover, when human preference annotation is available, we can additionally fine-tune the comparative evaluator with human annotations.
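A minimal sketch of how the weak-supervision pairing and the shrinking curriculum margin could be scheduled follows; the linear margin schedule, the 10% floor computation, and the helper names are our reading of the text, not the authors' code.

```python
def weak_pairs(samples_by_ckpt, margin):
    """Pair samples from checkpoints `margin` apart; the later
    checkpoint's sample is weakly labelled as better (label 0 = ">")."""
    pairs = []
    for i in range(len(samples_by_ckpt) - margin):
        for early, late in zip(samples_by_ckpt[i], samples_by_ckpt[i + margin]):
            pairs.append((late, early, 0))   # ">"
            pairs.append((early, late, 1))   # "<"
    return pairs

def curriculum_margins(num_ckpts, total_iters, iters_per_ckpt):
    """Yield margins from large (easy pairs) down to the empirical
    minimum of 10% of the total training iterations."""
    min_margin = max(1, int(0.1 * total_iters / iters_per_ckpt))
    for margin in range(num_ckpts - 1, min_margin - 1, -1):
        yield margin
```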
The comparative evaluator is trained with a maximum likelihood estimation (MLE) objective, as described in Eq. (1):
$$\mathcal{L} = -\mathbb{E}_{(x_1, x_2)\sim X}\left[\log D^{Q(x_1, x_2)}_{\phi}(x_1, x_2)\right] \tag{1}$$
where X is the set of pairwise training examples constructed as described above, $Q(x_1, x_2) \in \{>, <, \approx\}$ is the true label for the pair $(x_1, x_2)$, and $D^{q}_{\phi}(x_1, x_2)$ is the probability of the comparative discriminator's prediction being $q$ ($q \in \{>, <, \approx\}$) for the pair $(x_1, x_2)$.
As comparing the quality of generated text requires good natural language understanding ability and our comparative evaluator is formulated as a sentence pair classification model, we propose to fine-tune BERT (Devlin et al. 2018) as the comparative evaluator; the architecture of the resulting comparative evaluator is illustrated in Figure 1. Note that the compared samples A and B are based on the same context, which ensures that they are comparable.
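One possible implementation fine-tunes a pretrained BERT as a three-way sentence-pair classifier with the HuggingFace transformers library; this is a hedged sketch rather than the authors' code, and the model name, learning rate, and the way the context is prepended to each sample are our assumptions.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)  # classes: ">", "<", "≈"
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

def encode(context, sample_a, sample_b):
    # Each sample is concatenated with the shared context (cf. Figure 1),
    # and the two segments form a BERT sentence pair.
    return tokenizer(context + " " + sample_a,
                     context + " " + sample_b,
                     truncation=True, return_tensors="pt")

def train_step(context, sample_a, sample_b, label):
    """One optimisation step on a labelled pair; `label` in {0, 1, 2}."""
    model.train()
    out = model(**encode(context, sample_a, sample_b),
                labels=torch.tensor([label]))
    out.loss.backward()  # cross-entropy, matching the MLE objective (1)
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()
```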
Skill Rating
In player-vs-player games such as chess or tennis, skill rating systems such as Elo (Elo 1978) or Glicko2 (Glickman 2012) evaluate players by observing a record of wins and losses of multiple players and inferring the value of a latent, unobserved skill variable for each player that explains the records of wins and losses. We adopt a skill rating system for the model-level evaluation of NLG models. By taking the trained comparative evaluator as the "playground" and the NLG models as "players", a "player-vs-player" game is played by sampling one output from each NLG model conditioned on the same input, and the game outcome is decided by the comparative evaluator.
Following previous work (Olsson et al. 2018), we use the Glicko2 system (Glickman 2012). The employed system can be summarized as follows: each player's skill rating is represented as a Gaussian distribution, with a mean and standard deviation representing the current state of the evidence about their "true" skill rating. As we evaluate frozen snapshots of NLG models, we disable an irrelevant feature of Glicko2 that increases uncertainty about a human player's skill when they have not participated in a match for some time. Another difference is that conventional skill rating systems do not support the "tie" option, which is important for the system to be stable and reliable in our case, because the evaluator is not perfect. To incorporate this feature, we follow the intuition that a player's skill rating should increase when it draws with a player with a higher skill rating, and vice versa. We use a simple rule: when a player draws with another player with a higher/lower skill rating, we increase/decrease its skill rating by a ratio (e.g., 0.1) of the change it would receive for a win/loss. In our experiments, skill rating is performed by randomly sampling two compared models, simulating a "game" between the two selected models by sampling one output from each model and comparing them with the comparative evaluator, and then updating the skill ratings of the selected models according to the outcome. This procedure is performed iteratively until convergence, which is defined as the order of the skill ratings of the compared models remaining the same after each model has been selected at least 50 times. While the sampling procedure could be optimized by Bayesian optimization (Snoek, Larochelle, and Adams 2012) or multi-armed bandit algorithms (Vermorel and Mohri 2005), we choose to keep the method as simple as possible and use random sampling.
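Since the full Glicko2 update is fairly involved, the sketch below approximates the procedure with a simple Elo-style update plus the tie rule described above; the K-factor, the 0.1 tie ratio, and the initial rating of 1500 are illustrative constants, not values from the paper.

```python
import random

K, TIE_RATIO, INIT = 32.0, 0.1, 1500.0  # illustrative constants

def expected(r_a, r_b):
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(r_a, r_b, outcome):
    """outcome: 1.0 if A wins, 0.0 if A loses, None for a tie."""
    e_a = expected(r_a, r_b)
    e_b = 1.0 - e_a
    if outcome is None:
        # tie rule: the lower-rated player gains a fraction of its
        # would-be win delta; the higher-rated player loses likewise.
        if r_a < r_b:
            return r_a + TIE_RATIO * K * (1.0 - e_a), r_b - TIE_RATIO * K * e_b
        return r_a - TIE_RATIO * K * e_a, r_b + TIE_RATIO * K * (1.0 - e_b)
    return r_a + K * (outcome - e_a), r_b + K * ((1.0 - outcome) - e_b)

def skill_rate(models, play_game, n_games=10000):
    """models: dict name -> sampler; play_game(a, b) returns 1.0/0.0/None
    by sampling one output from each model and asking the evaluator."""
    ratings = {name: INIT for name in models}
    for _ in range(n_games):
        a, b = random.sample(list(models), 2)
        ratings[a], ratings[b] = update(ratings[a], ratings[b],
                                        play_game(models[a], models[b]))
    return ratings
```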
Experiments
We set up experiments in order to answer the following research questions:
• RQ1: Can the comparative evaluator correlate better with human preference at the sample level than previous automated metrics when evaluating open domain NLG models?
• RQ2: Can the comparative evaluator correlate better with human preference at the model level, so that our approach can measure progress on open domain NLG better?
• RQ3: As existing approaches fail to correlate well with human preference, whether and to what extent does this problem affect the quality of the final NLG model when performing hyperparameter search and early-stopping?
• RQ4: If the previous problem exists, can the proposed comparative evaluator reduce it?
4.1 Experimental Settings
Datasets We evaluate the effectiveness of the proposed approach on two open domain natural language generation tasks: story generation and open domain dialogue response generation. For story generation, we use the WritingPrompts dataset released by Fan, Lewis, and Dauphin. The WritingPrompts dataset is a large dataset of 303,358 human-generated stories paired with writing prompts from an online forum. NLG models are trained by taking writing prompts as input and generating the whole story. The average length of prompts is 28.4 words and the average length of stories is 734.5 words, which makes human evaluation very expensive and better automated metrics thus critical. For the open domain dialogue response generation task, we use the Dailydialog dataset (Li et al. 2017b), which consists of dialogues that resemble daily conversations across multiple topics. It comprises 13k dialogues with an average of 7.9 turns per dialog.
Compared Models and Metrics As our objective is to evaluate the evaluators rather than to compare state-of-the-art models, we choose three representative sequence-to-sequence architectures: LSTM (Hochreiter and Schmidhuber 1997) seq2seq, Convolutional seq2seq (Gehring et al. 2017), and the transformer (Vaswani et al. 2017) model. We compare models with different architectures, hyperparameter choices, and early-stopping criteria using different automated metrics, as well as human evaluation.
Regarding the evaluation metrics (and the criteria for choosing hyperparameters and early-stopping), we compare the proposed approach with the discriminative evaluator, BLEU score (average of 2-, 3-, and 4-grams), perplexity, and ADEM. When evaluating generated stories, we cut off the story at the nearest sentence for stories longer than 250 words.
The proposed comparative evaluator is employed for choosing hyperparameters by performing skill rating among all models trained with different hyperparameter choices [1]. For early-stopping, as incrementally performing skill rating is computationally expensive, we propose to perform n (e.g., 1000) pairwise comparisons between samples generated by the latest checkpoint and the previous k (e.g., 2) checkpoints, and to stop training when the winning rate of the latest checkpoint keeps being smaller than its losing rate for 5 iterations.
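The early-stopping test could be sketched as follows (our own rendering; the function signature and the way comparisons are split across the previous checkpoints are assumptions):

```python
import random

def latest_is_losing(evaluator, latest, previous_ckpts, n=1000):
    """Compare n sample pairs between the latest checkpoint and the
    previous k checkpoints; evaluator(a, b) returns ">", "<" or "tie"."""
    wins = losses = 0
    for prev in previous_ckpts:          # samples from each of k checkpoints
        for _ in range(n // len(previous_ckpts)):
            verdict = evaluator(random.choice(latest), random.choice(prev))
            wins += verdict == ">"
            losses += verdict == "<"
    return wins < losses

# Training would stop once latest_is_losing(...) holds for 5 checks in a row.
```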
Detail of Parameterized Evaluators
The proposed comparative evaluator is trained by fine-tuning BERT-large as a sentence-pair classifier. To ensure fair evaluation, we also train the discriminative evaluator by fine-tuning BERT. For ADEM, we adopt its original implementation, as its architecture is relatively complicated. In addition, we perform an ablation study by evaluating variants of the comparative evaluator trained without strong supervision examples, without weak supervision examples, without fine-tuning on human preference annotations, and without transferring from BERT.
Human Evaluation Procedure As human evaluation is expensive, sample-level evaluation is performed jointly with model-level evaluation, which is also used to evaluate the ability of different metrics to guide hyperparameter search and early-stopping. Concretely, we perform 10 groups of evaluations each for hyperparameter selection and early-stopping with the five compared automated metrics. In each evaluation, each of the five compared metrics is used to select the best hyperparameter combination or early-stopping checkpoint with the other variants fixed.
We choose to perform score-based human evaluation for four reasons: 1) the ADEM baseline requires human-annotated scores as training examples, 2) we can construct up to $2n^2$ training examples for our comparative evaluator from n human-annotated scores, 3) score-based human evaluation facilitates the evaluation of correlation scores, and 4) as all other metrics do not perform pairwise comparison, using pairwise human evaluation would likely be biased toward our approach.
We sample 20 generated samples from each model (out of 5) in each of the 20 evaluation groups. We invite 20 human annotators, all graduate students with good English language proficiency, to score these samples. Each annotator scores one sample from each model, such that each model is uniformly evaluated. The scores range from 1 to 5; a higher score indicates better overall sample quality. Following the experimental results of Lowe et al., we do not ask annotators to provide specific scores for fluency or informativeness. To test the inter-annotator agreement, we additionally ask them to evaluate another 40 generated samples, of which 20 samples are scored from 1 to 5 directly and another 20 are evaluated by pairwise comparison with 4 other generated samples and scored from 1 to 5 based on how many times they are considered better than a reference sample. We obtain an inter-annotator agreement score of κ = 0.53 for direct scoring and κ = 0.76 for pairwise comparison, which validates our intuition that evaluation by comparison may be more accurate. These additional human annotations are used as training data for ADEM and the comparative evaluator.
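The paper does not state which κ statistic is used; for 20 raters, Fleiss' kappa is one natural choice, and a computation with statsmodels could look like this (the ratings array here is random placeholder data):

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

ratings = np.random.randint(1, 6, size=(40, 20))  # 40 samples x 20 raters, placeholder
table, _ = aggregate_raters(ratings)  # per-sample counts of each 1-5 score
print(f"kappa = {fleiss_kappa(table):.2f}")
```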
Experimental Designs & Results
RQ1: Sample-Level Correlation
To test the correlation of different automated metrics with human preference, we employ the different metrics to score the collected 2000 samples and calculate their Pearson and Spearman correlation with the human scores. For the comparative evaluator, as the evaluation is performed pairwise and no absolute score is available, we use two different approaches to obtain an absolute score for each sample: 1) we sample 50 common references from the machine-generated samples for each task and compare each sample with all references using the comparative evaluator; a sample gets 3 points when it beats a reference, 1 point when it draws with the reference, and 0 points when it loses; 2) we adopt the skill rating system by regarding each sample as an NLG model which always outputs the same sample and use the skill rating of each sample as its score. To ensure that the computational budget is roughly the same, we fix the number of plays in skill rating to 10,000.
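Approach 1 can be written down directly; the code below is a sketch with a hypothetical `evaluator` callable returning ">", "<" or "tie".

```python
import random

def reference_score(sample, references, evaluator):
    """Score one sample against common references: 3 points per win,
    1 per tie, 0 per loss; evaluator(a, b) returns ">", "<" or "tie"."""
    score = 0
    for ref in references:
        verdict = evaluator(sample, ref)
        score += 3 if verdict == ">" else (1 if verdict == "tie" else 0)
    return score

# The 50 common references would be sampled once per task, e.g.:
# refs = random.sample(generated_pool, 50)
```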
The experimental results are summarized in Table 1. We can see that the proposed comparative evaluator correlates far better with human judgment than BLEU and perplexity. When compared with recently proposed parameterized metrics, including the adversarial evaluator and ADEM, our model consistently outperforms them by a large margin, which demonstrates that our comparison-based evaluation metric is able to evaluate sample quality more accurately. In addition, we find that evaluating generated samples by comparing them with a set of randomly selected samples or using sample-level skill rating performs almost equally well. This is not surprising, as the employed skill rating is able to handle the inherent variance of players (i.e., NLG models), and this variance does not exist when we regard a sample as a model which always generates the same sample.
RQ2: Model-Level Correlation As for model-level evaluation, we employ the average score of the 100 evaluated samples as each model's score and calculate their correlation with human scores. For the comparative evaluator, we propose three different approaches to obtain an absolute score for each model: 1) we calculate the average reference-based score (method 1 for sample-level comparison) of each sample as the model-level score; 2) we calculate the average skill rating of each sample obtained in the experiments for RQ1 as the model-level score; 3) we use the proposed skill rating system to obtain a model-level skill rating for each compared model. Results are shown in Table 2. We can see that the proposed comparative evaluator with skill rating significantly outperforms all compared baselines, including the comparative evaluator with averaged sample-level scores. This demonstrates the effectiveness of the skill rating system for performing model-level comparison based on pairwise sample-level evaluation. In addition, the poor correlation of conventional evaluation metrics, including BLEU and perplexity, demonstrates the necessity of better automated evaluation metrics for open domain NLG evaluation.
RQ3&4: Automated Metrics for Model Training
We further investigate the impact of imperfect metrics on training NLG models. As described in the human evaluation procedure, we perform 10 runs to test the reliability of each metric when used for hyperparameter tuning and early-stopping, respectively. In each run, we select the best hyperparameter combination or early-stopping checkpoint based on each of the five compared metrics. Human evaluation is then employed to identify the best choice. We evaluate the performance of each metric by how many times (out of 10) it succeeds in selecting the best hyperparameter combination or early-stopping checkpoint (out of 4) and by the average human-annotated score of its selected models.
The results are shown in Table 3. We can see that conventional automated metrics perform poorly and lead to sub-optimal results when performing hyperparameter search and selecting the best-performing checkpoints. Switching the evaluation metric from BLEU or perplexity to the proposed comparative evaluator can yield non-negligible improvements without changing the model architecture or training objective. While previous work on NLG evaluation mostly focuses on the evaluation stage and does not explore the influence of imperfect metrics during model training, our experiments demonstrate the existence of this problem and show that the proposed method can, to some extent, alleviate it.
Qualitative Analysis
We present several comparison examples from the Dailydialog dataset (Table 4) for a qualitative analysis of the proposed comparative evaluator. From the first example, we can see that the comparative evaluator is capable of identifying that generic and dull responses (e.g., "I do not know") should be considered of worse quality. The second example suggests that our approach handles the diversity of possible responses well, as it regards both the positive and the negative response as valid. Hopefully, these examples provide some insight into why the proposed metric correlates better with human preference.
Ablation Study
To better understand the proposed comparative evaluator and analyze the relative importance of its different components, we conduct an ablation study with several variants of the proposed model:
• w/o comparison: Evaluating generated samples without comparison, which degrades to the adversarial evaluation method.
• w/o strong supervision: Training the comparative evaluator without "strong supervision", which models the inductive bias that human written reference samples are generally of better quality than samples generated by NLG models.
• w/o weak supervision: Training without "weak supervision", which models the inductive bias that the quality of NLG models generally improves during training.
• w/o human preference annotation: Training without human-annotated preference data (i.e., only with strong and weak supervision).
• w/o tie option: The variant of the comparative evaluator where the model must select the better sample rather than being able to admit its uncertainty.
• w/o BERT: The variant where the model is trained from scratch instead of fine-tuning BERT.
We evaluate these model variants on the Dailydialog dataset. Results are presented in Table 5. We can see that comparison-based evaluation is very effective, as our model correlates much better than the adversarial evaluator. The tie option is also very important, as it prevents the comparative evaluator from making uncertain decisions and models the inductive bias that samples generated by the same model are generally of similar quality, which may help our model generalize better. As for the different sources of training examples, we find that human preference annotation is the most important, which is not surprising. In addition, we find that the proposed weak supervision also helps, but it is of smaller relative importance compared with strong supervision. This may be due to the fact that examples constructed by the weak supervision approach may contain a lot of noise. We can also see that our model correlates well with human preference even without training on human preference annotations; this is very important in practice, as human annotations are not always available. Finally, we find that transferring natural language understanding ability from BERT is very important for the final performance.
Discussion and Conclusion
In this paper, we present a novel comparison-based parameterized automated evaluation metric for evaluating open domain NLG models. The proposed model is based on the intuition that we can better evaluate the quality of a sample by comparing it with other samples. Our formulation allows the model to admit its uncertainty through the "tie" option. We adopt a skill rating system to perform model-level evaluation based on sample-level pairwise comparison.
By transferring pretrained natural language understanding knowledge from BERT and fine-tuning with strong and weak supervision examples as well as human preference annotations, our model correlates better with human judgment than the other compared metrics. In addition, we find that, when used as evaluation metrics, conventional metrics such as BLEU and perplexity may harm the training stage of NLG models, as they may lead to sub-optimal hyperparameter choices and checkpoint selection. Our model, in contrast, is much more reliable for making these choices.
Figure 1: Model architecture of the comparative evaluator; the context is concatenated with the generated samples.
Table 2: Model-level correlation between metrics and human judgments, with p-values shown in brackets.
Table 3: Performance of different metrics in hyperparameter tuning and early-stopping checkpoint selection.

Table 4: Examples of comparison results between two generated samples given the context.

Context | Sample A | Sample B | Result
Jim, how about going for a few beers after dinner? | I do not know about it. | No, it is not good. | A < B
I suggest a walk over to the gym where we can meet some friends. | That's a good idea, ok. | No, I do not like to. | Tie
What shall we do? I don't feel like sitting at home. | We can go for a walk. | I suggest staying at home. | A > B
Metric | Spearman | Pearson
full model | 0.764 | 0.783
w/o comparison | 0.491 | 0.502
w/o tie option | 0.557 | 0.561
w/o strong supervision | 0.697 | 0.703
w/o weak supervision | 0.728 | 0.737
w/o human annotation | 0.602 | 0.609
w/o BERT | 0.644 | 0.662

Table 5: Model-level correlation between ablated variants and human judgments on the Dailydialog dataset.
[1] For each model, we randomly sample 5 hyperparameter choices in a predefined range.
Banerjee, S., and Lavie, A. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, 65-72.
Bengio, Y.; Louradour, J.; Collobert, R.; and Weston, J. 2009. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, 41-48. ACM.
Chaganty, A. T.; Mussman, S.; and Liang, P. 2018. The price of debiasing automatic metrics in natural language evaluation. arXiv preprint arXiv:1807.02202.
Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Elo, A. E. 1978. The Rating of Chessplayers, Past and Present. Arco Pub.
Fan, A.; Lewis, M.; and Dauphin, Y. 2018. Hierarchical neural story generation. arXiv preprint arXiv:1805.04833.
Garbacea, C.; Carton, S.; Yan, S.; and Mei, Q. 2019. Judge the judges: A large-scale evaluation study of neural language models for online review generation. arXiv preprint arXiv:1901.00398.
Gehring, J.; Auli, M.; Grangier, D.; Yarats, D.; and Dauphin, Y. N. 2017. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, 1243-1252. JMLR.org.
Glickman, M. E. 2012. Example of the Glicko-2 system. Boston University, 1-6.
Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems, 2672-2680.
Hashimoto, T. B.; Zhang, H.; and Liang, P. 2019. Unifying human and statistical evaluation for natural language generation. arXiv preprint arXiv:1904.02792.
Herbrich, R.; Minka, T.; and Graepel, T. 2007. TrueSkill: A Bayesian skill rating system. In Advances in Neural Information Processing Systems, 569-576.
Hochreiter, S., and Schmidhuber, J. 1997. Long short-term memory. Neural Computation 9(8):1735-1780.
Kannan, A., and Vinyals, O. 2017. Adversarial evaluation of dialogue models. arXiv preprint arXiv:1701.08198.
Li, J.; Monroe, W.; Shi, T.; Jean, S.; Ritter, A.; and Jurafsky, D. 2017a. Adversarial learning for neural dialogue generation. arXiv preprint arXiv:1701.06547.
Li, Y.; Su, H.; Shen, X.; Li, W.; Cao, Z.; and Niu, S. 2017b. DailyDialog: A manually labelled multi-turn dialogue dataset. arXiv preprint arXiv:1710.03957.
Lin, C.-Y. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, 74-81.
Liu, C.-W.; Lowe, R.; Serban, I. V.; Noseworthy, M.; Charlin, L.; and Pineau, J. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. arXiv preprint arXiv:1603.08023.
Lowe, R.; Noseworthy, M.; Serban, I. V.; Angelard-Gontier, N.; Bengio, Y.; and Pineau, J. 2017. Towards an automatic Turing test: Learning to evaluate dialogue responses. arXiv preprint arXiv:1708.07149.
Olsson, C.; Bhupatiraju, S.; Brown, T.; Odena, A.; and Goodfellow, I. 2018. Skill rating for generative models. arXiv preprint arXiv:1808.04888.
Papineni, K.; Roukos, S.; Ward, T.; and Zhu, W.-J. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, 311-318.
See, A.; Liu, P. J.; and Manning, C. D. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
Shimanaka, H.; Kajiwara, T.; and Komachi, M. 2018. RUSE: Regressor using sentence embeddings for automatic machine translation evaluation. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, 751-758.
Snoek, J.; Larochelle, H.; and Adams, R. P. 2012. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, 2951-2959.
Sordoni, A.; Galley, M.; Auli, M.; Brockett, C.; Ji, Y.; Mitchell, M.; Nie, J.-Y.; Gao, J.; and Dolan, B. 2015. A neural network approach to context-sensitive generation of conversational responses. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
Sutskever, I.; Vinyals, O.; and Le, Q. V. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, 3104-3112.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, 5998-6008.
Vermorel, J., and Mohri, M. 2005. Multi-armed bandit algorithms and empirical evaluation. In European Conference on Machine Learning, 437-448. Springer.
Zhang, T.; Kishore, V.; Wu, F.; Weinberger, K. Q.; and Artzi, Y. 2019. BERTScore: Evaluating text generation with BERT. arXiv preprint arXiv:1904.09675.
| []
|
[
"HEAT-TYPE EQUATIONS ON MANIFOLDS WITH FIBERED BOUNDARIES II: PARAMETRIX CONSTRUCTION",
"HEAT-TYPE EQUATIONS ON MANIFOLDS WITH FIBERED BOUNDARIES II: PARAMETRIX CONSTRUCTION"
]
| [
"Bruno Caldeira ",
"Giuseppe Gentile "
]
| []
| []
| This is the second part of a two-part work on the analysis of heat-type equations on manifolds with fibered boundary equipped with a Φ-metric. This setting generalizes the asymptotically conical (scattering) spaces and includes special cases of magnetic and gravitational monopoles. The core of this second part consists of the construction of a parametrix for heat-type equations. Consequently, we use the constructed parametrix to infer results regarding existence and regularity of certain homogeneous and non-homogeneous second order linear parabolic equations with non-constant coefficients. This work represents the first step towards the analysis of geometric flows such as the Ricci, Yamabe and mean curvature flows on some families of non-compact manifolds. | null | [
"https://export.arxiv.org/pdf/2302.13111v1.pdf"
]
| 257,219,292 | 2302.13111 | cd980730cdd4f5069bc7325a21dd261b4b294400 |
HEAT-TYPE EQUATIONS ON MANIFOLDS WITH FIBERED BOUNDARIES II: PARAMETRIX CONSTRUCTION
25 Feb 2023
Bruno Caldeira
Giuseppe Gentile
HEAT-TYPE EQUATIONS ON MANIFOLDS WITH FIBERED BOUNDARIES II: PARAMETRIX CONSTRUCTION
25 Feb 2023
This is the second part of a two parts work on the analysis of heattype equations on manifolds with fibered boundary equipped with a Φ-metric. This setting generalizes the asymptotically conical (scattering) spaces and includes special cases of magnetic and gravitational monopoles. The core of this second part consists on the construction of parametrix for heat-type equations. Consequently we use the constructed parametrix to infer results regarding existence and regularity of certain homogeneous and non homogeneous second order linear parabolic equations with non constant coefficients. This work represents the first step towards the analysis of geometric flows such as Ricci-, Yamabe and Mean Curvature flow on some families of non compact manifolds.
Introduction and statement of the main results
In the first part ([CaGe22]) of this two-part work, the authors presented mapping properties for the heat-kernel operator H and derived existence and uniqueness of solutions to the heat equation on a Φ-manifold. The aim of the present work is to extend the analysis carried out in [CaGe22] to a slightly more general family of equations. Namely, we consider certain linear parabolic equations with variable coefficients on Φ-manifolds, which we refer to as heat-type equations.
Manifolds with fibered boundary are a class of compact manifolds M whose boundary ∂M is the total space of a fibration φ : ∂M → Y over a closed (i.e. compact without boundary) Riemannian manifold Y. Moreover, the fibers of the fibration φ are copies of a fixed closed Riemannian manifold Z. An open manifold M, which is the interior of a manifold with fibered boundary M, is a Φ-manifold if it is equipped with a specific Riemannian metric known as a Φ-metric. Near the boundary ∂M, such a metric has the asymptotic behavior described by
g Φ = dx 2 /x 4 + φ * g Y /x 2 + g Z + h, (1.1)
where h is the collection of cross-terms, each of which contains extra powers of x. In the above, g Y is a Riemannian metric on the base Y, while g Z is a symmetric bilinear form on ∂M which restricts to a Riemannian metric on each fiber.
The simplest example of a Φ-manifold is R m equipped with the Euclidean metric expressed in polar coordinates, g = dr 2 + r 2 dθ 2 .
In fact, to obtain an expression as in (1.1) from the above, one can simply perform the change of coordinates x = r −1 far from the origin. In this case, note that Y = S m−1 and Z = {pt}. Other examples of Φ-manifolds include several complete Ricci-flat metrics, products of locally Euclidean spaces with a compact manifold, and some classes of gravitational instantons.
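To spell out this change of coordinates (a short computation for illustration, consistent with (1.1) but not written out in this form in the text): with x = r −1 one has dr = −dx/x², hence

\[
g \;=\; \mathrm{d}r^{2} + r^{2}\, g_{S^{m-1}} \;=\; \frac{\mathrm{d}x^{2}}{x^{4}} + \frac{g_{S^{m-1}}}{x^{2}},
\]

which is precisely of the form (1.1) with g Y = g_{S^{m−1}}, g Z = 0 and h = 0.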
Despite the fact that Φ-manifolds were first introduced in the 1990s, they remain relatively new in the field of geometric analysis and, in particular, in the analysis of geometric flows such as the Yamabe, Ricci and mean curvature flows, among others. This paper can be thought of as a preparation for the analysis of the above-mentioned flows. Indeed, we prove short-time existence for Cauchy problems of the form
(∂ t + a∆)u = ℓ, u| t=0 = u 0 , (1.2)
for suitable functions ℓ and a and initial datum u 0 . It is well known that (most) geometric flows give rise to quasilinear parabolic PDEs, but the arguments treated here can be tweaked (e.g. by linearizing the quasilinear equation) to guarantee short-time existence for such geometric flows, as has been done by the first named author for the Yamabe flow in [CHV21] and by the second named author for the mean curvature flow in [GeVe22].
1.1. Main results and structure of the paper. Our aim is to extend the results in [CaGe22] to Cauchy problems of the form (1.2). This is achieved by making use of the mapping properties proved by the authors in [CaGe22]. Therefore, in §2 we give an overview of Φ-manifolds and their properties. Moreover, we recall the definition of the "geometry-adapted" Hölder spaces and the mapping properties of the heat kernel between these Hölder spaces. §3 is devoted to the discussion of a parabolic maximum principle, based on the Omori-Yau maximum principle for stochastically complete manifolds:

Theorem 1.1. Let (M, g) be a stochastically complete manifold and let a be a function on M which is bounded and bounded from below away from zero. If u ∈ C 2,α (M × [0, T ]) is a solution of the Cauchy problem

(∂ t + a∆)u = 0; u| t=0 = 0, (1.3)

then u = 0.

Based on [BaVe19], we employ the maximum principle in Theorem 1.1 to construct a parametrix for heat-type operators in §4. In particular we prove:

Theorem 1.2. Consider a function a ∈ C β Φ (M × [0, T ]) which is positive and bounded away from zero. Then for any α < β and for any γ ∈ R there exist two bounded operators

Q : x γ C α Φ (M × [0, T ]) → x γ C 2,α Φ (M × [0, T ]),
E : x γ C α Φ (M) → x γ C 2,α Φ (M × [0, T ]),

so that the inhomogeneous and homogeneous Cauchy problems

(∂ t + a∆)u = ℓ; u| t=0 = 0, (1.4)
(∂ t + a∆)u = 0; u| t=0 = u 0 , (1.5)

have solutions Qℓ and Eu 0 respectively.
Finally, in §5 we generalize the short-time existence and regularity result previously obtained by the authors in [CaGe22]. In particular, we prove short-time existence and regularity of solutions to a class of linear parabolic equations with variable coefficients:

Corollary 1.3. Let α, β ∈ (0, 1) with α < β. Consider the Cauchy problem

(∂ t + a∆)u = F(u), u| t=0 = 0, (1.6)

with coefficient a ∈ C β Φ (M × [0, T ]) positive and bounded from below away from zero. Furthermore, assume the map F : x γ C 2,α Φ (M × [0, T ]) → C α Φ (M × [0, T ]) to satisfy the following conditions: one can write F = F 1 + F 2 , with

(1) F 1 : x γ C 2,α Φ → x γ C 1,α Φ (M × [0, T ]),
(2) F 2 : x γ C 2,α Φ → x γ C α Φ (M × [0, T ]),

and, for u, u ′ ∈ x γ C 2,α Φ (M × [0, T ]) satisfying u 2,α,γ , u ′ 2,α,γ ≤ µ, there exists some C µ > 0 such that

(1) F 1 (u) − F 1 (u ′ ) 1,α,γ ≤ C µ u − u ′ 2,α,γ and F 1 (u) 1,α,γ ≤ C µ ,
(2) F 2 (u) − F 2 (u ′ ) α,γ ≤ C µ max{ u 2,α,γ , u ′ 2,α,γ } u − u ′ 2,α,γ and F 2 (u) α,γ ≤ C µ u 2 2,α,γ .

Then there exists a unique solution u * ∈ x γ C 2,α Φ (M × [0, T ′ ]) of (1.6) for some T ′ > 0 sufficiently small.
Acknowledgements. The authors wish to thank Boris Vertman for his supervision as advisor of their Ph.D. theses, and the University of Oldenburg for its financial support and hospitality. The first author also thanks the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES-Brasil-Finance Code 001) for financial support (Process 88881.199666/2018-01).
2. Review of Part I

As mentioned in §1, this section is dedicated to recalling the main points of [CaGe22].
2.1. Geometry of Φ-manifolds. We say that a compact manifold with boundary M has fibered boundary if its boundary ∂M is the total space of a fibration
Z ↪ ∂M −φ→ Y, (2.1)
where both Y and Z are closed manifolds of dimensions b and f respectively. Moreover, consider a Riemannian metric g Y on Y and a symmetric bilinear form g Z on ∂M which restricts to a Riemannian metric on each fiber. Assume, furthermore, that φ : (∂M, φ * g Y + g Z ) → (Y, g Y ) is a Riemannian submersion. Finally, we denote by x ∈ C ∞ (M) the total boundary defining function of ∂M; that is, ∂M = {x = 0} and the differential dx never vanishes on ∂M.
Definition 2.1. A Φ-metric on M, the open interior of M, is a Riemannian metric g Φ that, on a collar neighborhood U ≃ (0, 1) × ∂M, can be expressed as
g Φ = dx 2 /x 4 + φ * g Y /x 2 + g Z + h =: g + h, (2.2)

where |h| g = O(x). A pair (M, g Φ ) is called a Φ-manifold.
Note that, due to the fibration assumption at the boundary, U can be covered by open coordinate charts U i on which every point p ∈ U i can be written as a triple (x, y, z), where y = (y 1 , · · · , y b ) and z = (z 1 , · · · , z f ) are lifts of base and fiber coordinates, respectively.
Following [MaMe98], the most reasonable family of vector fields to consider for the analysis is that of Φ-vector fields. The Lie algebra of Φ-vector fields is denoted by V Φ (M), and Φ-vector fields are locally spanned by
x 2 ∂ x , x∂ y 1 , · · · , x∂ y b , ∂ z 1 , · · · , ∂ z f . (2.3)
Remark 2.2. Note that Φ-vector fields have bounded norm with respect to the Φ metric g Φ .
One can now recursively define Φ-k-differentiable functions as follows:
C 1 Φ (M) = { u ∈ C 0 (M) | Vu ∈ C 0 (M) for every V ∈ V Φ (M) },
C k Φ (M) = { u ∈ C k−1 Φ (M) | Vu ∈ C k−1 Φ (M) for every V ∈ V Φ (M) }, (2.4)

where k ∈ Z ≥2 . Since V Φ (M)
is a Lie algebra and a C ∞ (M) module, we can consider the algebra Diff * Φ (M) of Φ-differential operators. In particular a Φ-k-
differential operator P ∈ Diff k Φ (M) is a map P : C ∞ Φ (M) → C ∞ Φ (M) that can locally be expressed as

P = Σ |α|+|β|+q≤k P α,β,q (x, y, z) x 2q+|β| ∂ q x ∂ β y ∂ α z , (2.5)
where α and β are multi-indices, each P α,β,q is a smooth function, ∂ y = ∂ y 1 · · · ∂ y b and ∂ z = ∂ z 1 · · · ∂ z f . For simplicity, we often denote Diff k Φ (M) as V k Φ .
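As an illustration of (2.5) (a sketch on our part, under the simplifying assumption that g Φ equals the exact product model g of (2.2), i.e. ignoring the perturbation h, and using the positive-Laplacian convention), a direct computation with the volume form of g shows that the Laplace-Beltrami operator is a Φ-differential operator of order 2:

\[
\Delta_{\Phi} \;=\; -(x^{2}\partial_{x})^{2} \;+\; b\,x^{3}\partial_{x} \;+\; x^{2}\Delta_{Y} \;+\; \Delta_{Z},
\]

where Δ Y and Δ Z denote the (positive) Laplacians of (Y, g Y ) and of the fibers respectively; the perturbation h contributes further terms of higher order in x.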
2.2. Stochastic completeness of Φ-manifolds. A crucial property of Φ-manifolds, as highlighted in [CaGe22, §3], is that they are stochastically complete. In our previous work, stochastic completeness was used to deduce mapping properties of the heat kernel. In the current work, we will employ stochastic completeness to make use of the Omori-Yau maximum principle.
A Riemannian manifold (M, g) is said to be stochastically complete if the heat kernel H(t, p, p̃) of the (positive) Laplace-Beltrami operator ∆ satisfies

∫ M H(t, p, p̃) dvol g (p̃) = 1, (2.6)

for every t ≥ 0 and p ∈ M. In particular, as shown in [CaGe22, §3], Φ-manifolds are stochastically complete because the function

f(r) := r / log(vol(B(p, r))) does not lie in L 1 (1, +∞). (2.7)
We remind the reader that, for complete manifolds, condition (2.7) is enough to conclude stochastic completeness, as stated in [AMR16, Theorem 2.11] (see also [Gri86]).
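As a quick sanity check (an illustration on our part, not an argument from [CaGe22]), on Euclidean space R m one has vol(B(p, r)) = c m r m , so that

\[
f(r) \;=\; \frac{r}{\log\big(\mathrm{vol}(B(p,r))\big)} \;=\; \frac{r}{\log c_{m} + m\log r},
\]

which clearly fails to be integrable on (1, +∞); condition (2.7) thus recovers the classical fact that R m is stochastically complete.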
2.3. Hölder continuity on Φ-manifolds. Next we present Hölder spaces suitable for our analysis. As mentioned in the introduction, these spaces are "geometry-adapted", meaning that the distance function as well as the vector fields employed in the definitions encode the singularities arising from the Φ-metric. More precisely, let 0 < α < 1 and u ∈ C 0 (M × [0, T ]), for some T > 0. We define

u α = u ∞ + sup |u(p, t) − u(p ′ , t ′ )| / ( d Φ (p, p ′ ) α + |t − t ′ | α/2 ) =: u ∞ + [u] α , (2.8)
where the distance function d Φ between p = (x, y, z) and p ′ = (x ′ , y ′ , z ′ ) is expressed locally as

d Φ (p, p ′ ) = ( |x − x ′ | 2 + (x + x ′ ) 2 y − y ′ 2 + (x + x ′ ) 4 z − z ′ 2 ) 1/2 . (2.9)
Thus we define the space of α-Hölder continuous functions by
C α Φ (M × [0, T ]) = {u ∈ C 0 (M × [0, T ]) | u α < +∞}. (2.10)
As it is natural, we define α-Hölder spaces with higher regularity by
C k,α Φ (M × [0, T ]) := { u ∈ C 0 (M × [0, T ]) | V l 1 Φ ∂ l 2 t u ∈ C α Φ (M × [0, T ]) for l 1 + 2l 2 ≤ k }, (2.11)

where k ∈ Z ≥0 . For each pair (k, α) as above, C k,α Φ (M × [0, T ]) is a Banach space endowed with the norm

u k,α := Σ l 1 +2l 2 ≤k Σ V∈V l 1 Φ (V • ∂ l 2 t )u α . (2.12)
It follows directly from the definition that C k 2 ,α Φ (M × [0, T ]) ⊂ C k 1 ,α Φ (M × [0, T ]) whenever 0 ≤ k 1 ≤ k 2 .
We can generalize to weighted-Hölder spaces as follows: for γ ∈ R, define
x γ C k,α Φ (M × [0, T ]) := {x γ u | u ∈ C k,α Φ (M × [0, T ])}, x γ u k,α,γ := u k,α .
(2.13)
The pair (x γ C k,α Φ (M × [0, T ]), · k,α,γ ) is a Banach space as well. One can conclude this simply by noticing that the operator "multiplication by x γ ", M(x γ ), is an isometry between C k,α Φ and x γ C k,α Φ .
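To illustrate how the weight interacts with Φ-derivatives (a short computation for illustration, not taken verbatim from the source), note that for the local generators in (2.3) one has

\[
x^{2}\partial_{x}(x^{\gamma}w) = x^{\gamma}\big(x^{2}\partial_{x}w + \gamma\, x\, w\big), \qquad
x\partial_{y_{j}}(x^{\gamma}w) = x^{\gamma}\, x\partial_{y_{j}}w, \qquad
\partial_{z_{l}}(x^{\gamma}w) = x^{\gamma}\, \partial_{z_{l}}w,
\]

so that Φ-vector fields map x γ C k,α Φ into x γ C k−1,α Φ , in complete analogy with the unweighted case.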
2.4. Mapping properties on Φ-manifolds. The mapping properties of the heat-kernel operator H proved in [CaGe22] will play a key role in the construction of the parametrix for heat-type operators. Therefore, for the sake of completeness, we present them here. We refer the interested reader to our previous work for a very detailed analysis.
For a function u : M × [0, T ] → R, T > 0, define the function Hu by convolution with the heat kernel associated to the unique self-adjoint extension of the positive Laplace-Beltrami operator ∆ Φ . That is,

Hu(p, t) := ∫ 0 t ∫ M H(t − t̃, p, p̃) u(p̃, t̃) dvol Φ (p̃) dt̃. (2.14)
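For orientation, let us record the (formal) Duhamel computation showing that Hu solves the inhomogeneous heat equation with vanishing initial condition; this is standard and only sketched here:

\[
(\partial_{t} + \Delta_{\Phi})\, Hu(p,t)
= \lim_{\tilde t \to t^{-}} \int_{M} H(t-\tilde t, p, \tilde p)\, u(\tilde p, \tilde t)\, \mathrm{dvol}_{\Phi}
 + \int_{0}^{t}\!\!\int_{M} (\partial_{t} + \Delta_{\Phi,p})\, H(t-\tilde t, p, \tilde p)\, u(\tilde p, \tilde t)\, \mathrm{dvol}_{\Phi}\, \mathrm{d}\tilde t
= u(p,t),
\]

since (∂ t + ∆ Φ )H = 0 for t > t̃ and H(s, ·, ·) converges to the Dirac delta as s → 0 + ; moreover Hu| t=0 = 0 because the time integral is over an interval of length zero.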
By making use of the asymptotic behavior of the heat-kernel H provided in [TaVe21, Theorem 7.2], we proved:
Theorem 2.3. [CaGe22, Theorem 1.1] Let (M, g Φ ) be a Φ-manifold.
Then, for any 0 < α < 1, k ∈ Z ≥0 , γ ∈ R and T > 0, the heat-kernel operator acts continuously as follows:
H : x γ C k,α Φ (M × [0, T ]) → x γ C k+2,α Φ (M × [0, T ]), H : x γ C k,α Φ (M × [0, T ]) → √ t x γ C k+1,α Φ (M × [0, T ]), H : x γ C k,α Φ (M × [0, T ]) → t α/2 x γ C 2 Φ (M × [0, T ]).
(2.15)
Consequently, we proved the following result regarding short-time existence and regularity of solutions for the heat equation on Φ-manifolds.

Theorem 2.4. [CaGe22, Corollary 1.2] Let α, k, γ and T be as in Theorem 2.3 and consider the nonlinear Cauchy problem
(∂ t + ∆ Φ )u = F(u), u| t=0 = 0.
(2.16)
Assume F to satisfy the following conditions:
(1) F : x γ C k+2,α Φ (M × [0, T ]) → C k,α Φ (M × [0, T ]);
(2) F can be written as a sum F = F 1 + F 2 with
   i) F 1 : x γ C k+2,α Φ → x γ C k+1,α Φ (M × [0, T ]),
   ii) F 2 : x γ C k+2,α Φ → x γ C k,α Φ (M × [0, T ]);
(3) For u, u ′ ∈ x γ C k+2,α Φ (M × [0, T ]) with · k+2,α,γ -norm bounded from above by some η > 0, i.e. u k+2,α,γ , u ′ k+2,α,γ ≤ η, there exists some C η > 0 such that
   i) F 1 (u) − F 1 (u ′ ) k+1,α,γ ≤ C η u − u ′ k+2,α,γ and F 1 (u) k+1,α,γ ≤ C η u k+2,α,γ ,
   ii) F 2 (u) − F 2 (u ′ ) k,α,γ ≤ C η max{ u k+2,α,γ , u ′ k+2,α,γ } u − u ′ k+2,α,γ and F 2 (u) k,α,γ ≤ C η u 2 k+2,α,γ .

Then there exists a unique solution u * ∈ x γ C k,α Φ (M × [0, T 0 ]) of the Cauchy problem (2.16), for some T 0 > 0 sufficiently small.
A generalization of Theorem 2.4 to certain linear parabolic equations with non-constant coefficients will be presented in §5. This will be achieved by extending the mapping properties of H to the parametrix constructed for heat-type operators.
3. Maximum principle for stochastically complete manifolds
In order to construct a parametrix for the heat-type operator ∂ t + a∆ Φ , we will employ a maximum principle. We have seen in §2.2 that Φ-manifolds are stochastically complete. A very neat property of stochastically complete manifolds, which is in fact equivalent to stochastic completeness, is that they satisfy the Omori-Yau maximum principle. We begin this section by recalling the (strong) Omori-Yau maximum principle. Afterwards, we employ Omori-Yau to prove a parabolic maximum principle based on the first named author's previous work [CHV21].
3.1. Omori-Yau maximum principle. The Omori-Yau maximum principle for the Laplacian, defined e.g. in [AMR16, Definition 2.1], means that for any function u ∈ C 2 (M) with bounded supremum there is a sequence {p k } k ⊂ M satisfying

u(p k ) > sup M u − 1/k, |∇u(p k )| ≤ 1/k, −∆ g u(p k ) < 1/k. (3.1)

Similarly, provided u has bounded infimum, there exists a sequence {p ′ k } k ⊂ M such that

u(p ′ k ) < inf M u + 1/k, |∇u(p ′ k )| ≤ 1/k, −∆ g u(p ′ k ) > −1/k. (3.2)
As an example, by [Yau75], see also [AMR16, Theorem 2.3], the Omori-Yau maximum principle for the Laplacian holds for every complete Riemannian manifold (M, g) with Ricci curvature bounded from below. We shall refer to this principle as the strong Omori-Yau maximum principle in order to distinguish it from another version of the principle on stochastically complete manifolds.
Remark 3.1. We want to point out that our sign convention for the Laplace-Beltrami operator differs from the one in [AMR16].
According to Pigola, Rigoli and Setti [PRS03, Theorem 1.1] (see also [AMR16, Theorem 2.8 (i) and (iii)]), a similar version of the Omori-Yau maximum principle holds for stochastically complete manifolds. More precisely, for any (M, g) satisfying e.g. the volume growth condition in (2.7), and any function u ∈ C 2 (M) bounded from above, there is a sequence {p k } k ⊂ M such that

u(p k ) > sup M u − 1/k and −∆ g u(p k ) < 1/k. (3.3)

Similarly, if u is bounded from below, there exists a sequence {p ′ k } k ⊂ M such that

u(p ′ k ) < inf M u + 1/k and −∆ g u(p ′ k ) > −1/k. (3.4)
3.2. Classical Hölder spaces. As mentioned above, if a Riemannian manifold (M, g) is e.g. stochastically complete, then the Omori-Yau maximum principle in either of the formulations (3.3) and (3.4) holds for bounded functions. On general non-compact manifolds, one cannot expect to be dealing with bounded functions. Now, Φ-manifolds are stochastically complete, as discussed in §2.2 (see also [CaGe22, §3]). Also, Φ-manifolds can be thought of as non-compact manifolds which are asymptotically conical (this can be seen by performing the change of coordinates r = 1/x, thereby "pushing" the boundary to infinity). This means that one cannot use Omori-Yau for arbitrary functions. But in §2.3 we introduced geometry-adapted Hölder spaces, and in view of the Hölder norm defined in (2.8) one sees that Φ-(k, α) Hölder functions are indeed bounded; as a bonus, the heat kernel is very well behaved as an operator between those spaces. This leads to the following observation: on a stochastically complete Riemannian manifold, the Omori-Yau maximum principle holds for functions living in an appropriate Hölder space. Therefore, here we give the classical definition of Hölder spaces, and later (§3.3) we prove a parabolic Omori-Yau maximum principle for functions lying in such Hölder spaces. As a remark, one can see that, in the setting of Φ-manifolds, the geometry-adapted Hölder spaces (defined in §2.3) are a subspace of the ones defined here, thus implying that the maximum principle presented in Theorem 1.1 also holds for Φ-(k, α) Hölder functions.
Definition 3.2. Let α ∈ (0, 1). We define the semi-norm

[u] α := sup M 2 T |u(p, t) − u(p ′ , t ′ )| / ( d(p, p ′ ) α + |t − t ′ | α/2 ), (3.5)

where the supremum is taken over M 2 T , with M T := M × [0, T ].
The distance d is induced by the metric g. The Hölder space C α (M × [0, T ]) is then defined, as usual, as the space of continuous functions u ∈ C 0 (M × [0, T ]) with bounded α-norm, that is
u ∞ + [u] α =: u α < ∞.
(3.6)
Once equipped with the α-norm (3.6), the resulting normed vector space
C α (M × [0, T ])
is a Banach space. Similarly one defines higher order Hölder spaces.
Definition 3.3. Let (M, g) be a Riemannian manifold and consider k, l 1 and l 2 to be non negative integers. We say that a function u lies in
C k,α (M × [0, T ]) if (P • ∂ l 2 t )u lies in C α (M × [0, T ]), for P ∈ Diff l 1 (M), 0 ≤ l 1 + 2l 2 ≤ k.
Here Diff l 1 (M) denotes the space of differential operators of order l 1 over M. In particular, this is equivalent to requiring that the (k, α)-norm, defined by
u k,α = u α + Σ l 1 +2l 2 ≤k Σ P∈Diff l 1 (M) (P • ∂ l 2 t )u α , (3.7)
is bounded.
Remark 3.4. By definition, we have the chain of inclusions
C l,α (M × [0, T ]) ⊂ C k,α (M × [0, T ]) for every 0 ≤ k ≤ l.
3.3. Maximum principle. Based on the Omori-Yau maximum principle in §3.1, the first named author, jointly with Hartmann and Vertman, proved the following enveloping theorem (cf. [CHV21]). For the convenience of the reader, we present the proof here as well.

Proposition 3.5. [CHV21, Proposition 3.1] Let (M, g) be a stochastically complete manifold and consider u ∈ C 2,α (M × [0, T ]). Then the functions
u sup (t) := sup M u(·, t), u inf (t) := inf M u(·, t)

are locally Lipschitz, hence differentiable almost everywhere in (0, T ). Moreover, at those differentiable times t ∈ (0, T ) we find

∂/∂t u sup (t) ≤ lim ǫ→0 + lim sup k→∞ ∂u/∂t (p k (t + ǫ), t + ǫ),
∂/∂t u inf (t) ≥ lim ǫ→0 + lim inf k→∞ ∂u/∂t (p ′ k (t + ǫ), t + ǫ), (3.8)

where (p k (t + ǫ)) k and (p ′ k (t + ǫ)) k are maximizing and minimizing sequences for the function u(·, t + ǫ) respectively, as in (3.3) and (3.4).
Proof. We begin by applying (3.3) to u(t + ǫ). Moreover, an application of the Mean Value Theorem leads to

u sup (t + ǫ) ≤ u(p k (t + ǫ), t) + ǫ · ∂u/∂t (p k (t + ǫ), ξ) + 1/k,

for some ξ ∈ (t, t + ǫ). Next we want to estimate u sup (t + ǫ) from below. By recalling that u sup (t) ≥ u(p k (t + ǫ), t) we get

u sup (t + ǫ) ≥ u(p k (t + ǫ), t) + ǫ · ( u sup (t + ǫ) − u sup (t) )/ǫ.

Combining the inequalities above, canceling the term u(p k (t + ǫ), t) on each side and taking the limit superior as k → ∞ on the right hand side, we obtain

ǫ · ( u sup (t + ǫ) − u sup (t) )/ǫ ≤ ǫ · lim sup k→∞ ∂u/∂t (p k (t + ǫ), ξ).

Canceling ǫ on both sides, we find

( u sup (t + ǫ) − u sup (t) )/ǫ ≤ lim sup k→∞ ( ∂u/∂t (p k (t + ǫ), ξ) − ∂u/∂t (p k (t + ǫ), t + ǫ) ) + lim sup k→∞ ∂u/∂t (p k (t + ǫ), t + ǫ). (3.9)
Since u ∈ C 2,α (M × [0, T ]), we can estimate

• lim sup k→∞ | ∂u/∂t (p k (t + ǫ), ξ) − ∂u/∂t (p k (t + ǫ), t + ǫ) | ≤ u 2,α ǫ α/2 ,
• lim sup k→∞ | ∂u/∂t (p k (t + ǫ), t + ǫ) | ≤ u 2,α . (3.10)
Hence, the two terms on the right-hand side of (3.9) are bounded uniformly in ǫ. Now, after repeating the argument with the roles of u sup (t) and u sup (t + ǫ) interchanged, we conclude that u sup is locally Lipschitz. Consequently, Rademacher's theorem implies that u sup is differentiable almost everywhere. Let now t ∈ (0, T ) be one of the points at which u sup is differentiable. From (3.9) and the first line in (3.10), by taking ǫ → 0 we conclude that

∂/∂t u sup (t) ≤ lim ǫ→0 lim sup k→∞ ∂u/∂t (p k (t + ǫ), t + ǫ), (3.11)
showing that the first inequality in (3.8) holds. The second inequality follows from the first, using (3.4), with u replaced by (−u).
We are now in the position to prove the claimed maximum principle.
Theorem 3.6. (Theorem 1.1) Let (M, g) be an m-dimensional stochastically complete manifold. Furthermore, let a ≥ δ > 0 be a bounded function on M. If u ∈ C 2,α (M × [0, T ]) is a solution of the Cauchy problem

(∂ t + a∆ g )u = 0, u| t=0 = 0, (3.12)

then u = 0.

Proof. Since u ∈ C 2,α (M × [0, T ]), it is in particular bounded for every t, meaning that u(·, t) is bounded. Therefore we can find Omori-Yau maximizing and minimizing sequences (p k (t)) and (p ′ k (t)) satisfying (3.3) and (3.4). Combining the first inequality in Proposition 3.5 and (3.3), it follows that
∂/∂t u sup (t) ≤ lim ǫ→0 lim sup k→∞ a(p k (t + ǫ), t + ǫ)/k ≤ 0.

Analogously, by combining the second inequality in Proposition 3.5 and (3.4), we get

∂/∂t u inf (t) ≥ lim ǫ→0 lim inf k→∞ −a(p ′ k (t + ǫ), t + ǫ)/k ≥ 0.
This means that the infimum of the function u over M is non-decreasing in time, while the supremum of the function u over M is non-increasing in time; since u = 0 at time t = 0, it follows directly that u = 0 on M × [0, T ].
The above result allows us to prove uniqueness of solutions to homogeneous and non-homogeneous linear heat-type Cauchy problems with variable coefficients.
Corollary 3.7. Denote by P the heat-type operator P = ∂ t + a∆ g . If u, v ∈ C 2,α (M × [0, T ]) are such that Pu = Pv with u| t=0 = v| t=0 , then u = v.

Proof. Note that P is linear; therefore, by setting h = u − v, we see that h satisfies the Cauchy problem (∂ t + a∆ g )h = 0 with h| t=0 = u| t=0 − v| t=0 = 0. The above result implies h = 0, resulting in u = v.
4. Parametrix construction for heat-type equations
We will now leave the more general setting of stochastically complete manifolds and move to the manifolds we are interested in, that is, Φ-manifolds.
For a given Φ-manifold (M, g Φ ), the heat-kernel operator H represents an inverse of the heat operator (∂ t + ∆). Recall that here ∆ denotes the unique self-adjoint extension of the Laplace-Beltrami operator associated to the Φ-metric g Φ . This means that, given some function ℓ ∈ x γ C k,α Φ (M × [0, T ]), u = H(ℓ) is a solution of the Cauchy problem

(∂ t + ∆ Φ )u = ℓ, u| t=0 = 0.
(4.1)
The aim of this section is to get a similar result for heat-type operators
P := ∂ t + a∆ Φ , (4.2)
where a is a function on M × [0, T ]. Although not explicitly expressed here, the function a will be subject to some restrictions (see Theorem 1.2). This will be accomplished by first constructing an approximate inverse, i.e. a parametrix, for the operator P. By looking at heat-type operators as in (4.2), it is clear that the parametrix will be constructed by means of the standard heat-kernel operator H. Hence, in view of Theorem 2.3, one might expect to find a "well behaved" parametrix for heat-type operators between the weighted Hölder spaces introduced in §2.3.
A parametrix for heat-type operators allows us to prove short-time existence of solutions to the following Cauchy problems
Pu = (∂ t + a∆) u = ℓ, u| t=0 = 0, (4.3) Pu = (∂ t + a∆) u = 0, u| t=0 = u 0 , (4.4)
for some functions ℓ : M × [0, T ] → R and u 0 : M → R respectively. These last two statements are the core of Theorem 1.2, which we recall here for the convenience of the reader.
Theorem 4.1. Let β ∈ (0, 1) and consider a positive function a ∈ C k,β Φ (M × [0, T ]) which is bounded from below away from zero. There exist two operators Q and E so that, for every α ∈ (0, 1) with α < β and for every γ ∈ R,

Q : x γ C k,α Φ (M × [0, T ]) → x γ C k+2,α Φ (M × [0, T ]),
E : x γ C k,α Φ (M) → x γ C k+2,α Φ (M × [0, T ])

are both bounded. Furthermore, for ℓ ∈ x γ C k,α (M × [0, T ]) and u 0 ∈ C k,α (M), Qℓ and Eu 0 are solutions of the Cauchy problems

(i) Pu = ℓ; u| t=0 = 0 and (ii) Pu = 0; u| t=0 = u 0 , (4.5)

respectively.
The construction of a parametrix will be split into two steps: a boundary parametrix and an interior parametrix. A combination of the two will then give rise to a parametrix for heat-type operators. The boundary parametrix will be constructed in §4.1. Our construction follows along the same steps as the boundary parametrix in [BaVe19]. It is a technical construction, since it requires a careful analysis near the boundary. The construction of an interior parametrix, along with a parametrix for heat-type operators, will instead take place in §4.2. The interior parametrix will follow as a consequence of the standard analysis of parabolic PDEs on compact manifolds. Proposition 4.11 will finally give us the parametrix of heat-type operators P = ∂ t + a∆ Φ . We will conclude this section with the proof of Theorem 1.2.

4.1. Boundary parametrix. As in [BaVe19], the boundary parametrix will be constructed by localizing the problem in appropriate coordinate patches, by making use of two partitions of unity. Thus, we will first construct a localized parametrix; then, by summing over the partition of unity, we get an approximate inverse of P near the boundary. The next lemma explains why the choice of partitions of unity localized near the boundary is useful for the purposes described at the beginning of this section.

Lemma 4.2. Let (M, g Φ ) be a Φ-manifold and consider two compactly supported functions ϕ, ψ ∈ C ∞ (M). Assume, furthermore, that ϕ and ψ lie in C α Φ (M) (cf. §2.3) and that ψ is supported away from the boundary ∂M of M. Let H be the heat-kernel operator described in (2.14). Denote by R 0 the operator defined by

R 0 = M(ψ) • H • M(ϕ), i.e. R 0 u = ψH(ϕu).
Here M(ψ) stands for the operator "multiplication by ψ". For every non-negative integer k, every α ∈ (0, 1) and every γ ∈ R, the operator R 0 , acting between the weighted Hölder spaces

R 0 : x γ C k,α Φ (M × [0, T ]) → √t x γ C k+1,α Φ (M × [0, T ]),

has operator norm R 0 op satisfying R 0 op → 0 as T → 0.
Proof. With the same argument employed in the proof of [CaGe22, Theorem 1], it is enough to prove the result for k = 0. It is important to point out that the operator R 0 acts as a convolution, i.e. for u ∈ x γ C α (M × [0, T ]),

R 0 u(p, t) = ∫ 0 t ∫ M ψ(p) H(t − t̃, p, p̃) ϕ(p̃) u(p̃, t̃) dvol Φ (p̃) dt̃,

with H being the heat kernel whose asymptotics have been discussed in [CaGe22, §5]. For simplicity, we will denote the kernel of the operator R 0 just by ψHϕ.
Since ψ is supported away from the boundary ∂M of M, the lift of ψHϕ to the heat space M 2 h is (compactly) supported away from ff, fd, lf and rf (see [CaGe22, §4]). Therefore, according to [CaGe22, §5], we conclude that the asymptotic behavior of ψHϕ is given by the asymptotics of the operator H near td, that is,
β * (ψHϕ) ∼ τ −m G 1 ,
where G 1 is a bounded function vanishing to infinite order as |(S, U, Z)| → ∞.
In [CaGe22, Theorem 6.1 and Theorem 6.2] we have proven similar estimates for the heat-kernel operator H. In that case we made use of the fact that the heat kernel H is "stochastically complete", meaning that it integrates to 1. Unfortunately, this is not the case here, due to the presence of the functions ψ and ϕ. But estimating in projective coordinates, together with the above observation, allows us to prove the claimed mapping properties. In conclusion,

R 0 op = sup u α =1 R 0 u 1,α = sup u α =1 R 0 u α + sup u α =1 Σ X∈V Φ X(R 0 u) α ≤ c √T.

The above estimate implies the result, since √T → 0 as T → 0.
We can now construct the specific partitions of unity.

4.1.1. Partitions of unity. Let us fix some R > 0 and consider the collar neighborhood U R = {p ∈ M | x(p) ≤ R} of ∂M in M. Furthermore, for d > 0, let us define the family of half-cubes

B(d) = [0, d) × (−d, d) b × (−d, d) f ⊂ R ≥0 × R b × R f ,

where b and f denote the dimensions of the closed manifolds Y and Z respectively. Since M is a compact manifold with boundary, every point p ∈ ∂M admits some coordinate chart φ : B(1) → A, where p ∈ A. Moreover, due to compactness of ∂M, we can consider finitely many charts {p i , φ i : B(1) → A i }, where the p i 's are points on the boundary ∂M. By choosing R sufficiently small, the finite family (A i ) i covers the whole collar neighborhood U R . Such a covering can be extended to a covering of the whole manifold M by considering the additional open set A 0 = {p ∈ M | x(p) > R/2}.

We will now define bump functions supported on the finite family of open neighborhoods of the points p i ∈ ∂M. We begin by setting σ : R ≥0 → R to be a compactly supported function so that σ(x) ≤ 1, with σ(x) = 1 for x ∈ [0, 1/2] and σ(x) = 0 for x ≥ 1. Employing the Mean Value Theorem, it is easy to see that σ lies in C k,α (R ≥0 ) for every k ≥ 0 and for every α ∈ (0, 1].
Remark 4.3. The Hölder space C k,α (R ≥0 ) above denotes the classical Hölder space, for which the Hölder bracket is defined by
[σ] α = sup x,x ′ ∈R ≥0 |σ(x) − σ(x ′ )| / |x − x ′ | α .
Since it will play an important role later, we stress here what happens to the function σ after rescaling. That is, if we consider some fixed number ε ∈ (0, 1), we want to determine the α-Hölder semi-norm of the function σ(x/ε). Since σ lies in C α (R ≥0 ) by definition, one readily sees that for every x, x ′ ∈ R ≥0 ,

|σ(x/ε) − σ(x ′ /ε)| ≤ C |x/ε − x ′ /ε| α = Cε −α |x − x ′ | α ,

thus implying that σ(·/ε) α ≤ Cε −α . Before proceeding with the definition of the bump functions, we need an intermediate result. As has already been done in §2.1, we will use the shorthand notation y and z for (y 1 , . . . , y b ) and (z 1 , . . . , z f ) respectively.

Lemma 4.4. Let (M, g Φ ) be a Φ-manifold. For every q ∈ [1, ∞), the following distances on M are equivalent:
d q,Φ (p, p ′ ) = ( |x − x ′ | q + (x + x ′ ) q y − y ′ q + (x + x ′ ) 2q z − z ′ q ) 1/q ,
d ∞,Φ (p, p ′ ) = max{ |x − x ′ |, (x + x ′ ) y − y ′ , (x + x ′ ) 2 z − z ′ }.
Here, by equivalent, we mean that for every q, q ′ ∈ [1, ∞] there exist constants c, C > 0 so that for every p, p ′ ∈ M,
c d q ′ ,Φ (p, p ′ ) ≤ d q,Φ (p, p ′ ) ≤ C d q ′ ,Φ (p, p ′ ).
Proof. Notice that it is enough to prove that for a given q ∈ [1, ∞), there exist constants c, C > 0 so that
c d ∞,Φ (p, p ′ ) ≤ d q,Φ (p, p ′ ) ≤ C d ∞,Φ (p, p ′ )
for every p, p ′ ∈ M. Indeed, one can use the transitive property to obtain the other inequalities. Thus, let us consider q ∈ [1, ∞). For given p, p ′ ∈ M, it is straightforward that d ∞,Φ (p, p ′ ) ≤ d q,Φ (p, p ′ ). The other inequality follows by arguing as follows:
d q,Φ (p, p ′ ) ≤ ( d ∞,Φ (p, p ′ ) q + d ∞,Φ (p, p ′ ) q + d ∞,Φ (p, p ′ ) q ) 1/q = 3 1/q d ∞,Φ (p, p ′ ).
We are now in the position to define the appropriate bump functions. Let p̄ ∈ ∂M be fixed. From the definition of the open covering above, there exists some φ i : B(1) → A i so that φ i −1 (p̄) = (0, ȳ, z̄) for some ȳ ∈ (−1, 1) b and z̄ ∈ (−1, 1) f .

Proposition 4.5. Let ε ∈ (0, 1) be fixed. For p ∈ A i , with φ i −1 (p) = (x, y, z), consider the functions ψ̃ i,p̄ , ϕ̃ i,p̄ : A i → R defined by

ϕ̃ i,p̄ (p) = σ(x/ε) σ(x y − ȳ ) σ(εx 2 z − z̄ ),
ψ̃ i,p̄ (p) = σ(x/2ε) σ(x y − ȳ /2) σ(εx 2 z − z̄ /2).
Then ϕ̃ i,p̄ and ψ̃ i,p̄ satisfy:

I. ψ̃ i,p̄ ≡ 1 on the support of ϕ̃ i,p̄ .
II. There exist constants (all of which will be denoted by C) so that [ψ̃ i,p̄ ] α ≤ Cε −α and [ϕ̃ i,p̄ ] α ≤ Cε −α .
III. ϕ̃ i,p̄ , ψ̃ i,p̄ ∈ C 2,α Φ (M) (see §2.3 for the definition of Hölder spaces on Φ-manifolds).
IV. There exists some constant C > 0 (depending solely on the dimensions of Y and Z) so that diam(supp(ϕ̃ i,p̄ )) ≤ Cε. Here by diam we mean the diameter, that is, diam(supp(ϕ̃ i,p̄ )) = max p,p ′ ∈supp(ϕ̃ i,p̄ ) d Φ (p, p ′ ).

Note that in the above we did not specify with respect to which of the distances on M the diameter is considered since, by Lemma 4.4, they are all equivalent.
Proof. Property I follows directly from the definition of σ. Clearly, the fact that ϕ̃ i,p̄ and ψ̃ i,p̄ lie in C α Φ (M) is a direct consequence of II, due to ψ̃ i,p̄ and ϕ̃ i,p̄ being bounded. Let us therefore prove II. Since ψ̃ i,p̄ is just a rescaling of ϕ̃ i,p̄ , it is enough to prove II for the function ϕ̃ i,p̄ .
Let p, p ′ ∈ A i and assume φ i −1 (p) = (x, y, z) while φ i −1 (p ′ ) = (x ′ , y ′ , z ′ ). We have the following chain of inequalities:

|ϕ̃ i,p̄ (p) − ϕ̃ i,p̄ (p ′ )| = |σ(x/ε) σ(x y − ȳ ) σ(εx 2 z − z̄ ) − σ(x ′ /ε) σ(x ′ y ′ − ȳ ) σ(εx ′2 z ′ − z̄ )|
≤ Cε −α |x − x ′ | α + C |σ(x y − ȳ ) − σ(x ′ y ′ − ȳ )| + C |σ(εx 2 z − z̄ ) − σ(εx ′2 z ′ − z̄ )|
≤ Cε −α |x − x ′ | α + C ( (x − x ′ ) y − ȳ + x ′ y − ȳ − x ′ y ′ − ȳ ) α + Cε α ( x 2 z − z̄ − x ′2 z − z̄ + x ′2 z − z̄ − x ′2 z ′ − z̄ ) α
≤ Cε −α |x − x ′ | α + C ( |x − x ′ | y − ȳ ) α + C ( x ′ y − y ′ ) α + Cε α |x 2 − x ′2 | α z − z̄ α + Cε α ( x ′2 z − z ′ ) α
≤ Cε −α |x − x ′ | α + Cε −α ( |x − x ′ | y − ȳ ) α + Cε −α ( x ′ y − y ′ ) α + Cε −α ( |x − x ′ |(x + x ′ ) z − z̄ ) α + Cε −α ( x ′2 z − z ′ ) α
≤ Cε −α |x − x ′ | α + Cε −α ( x ′ y − y ′ ) α + Cε −α ( x ′2 z − z ′ ) α
≤ Cε −α |x − x ′ | α + Cε −α ( x ′ y − y ′ + x y − y ′ ) α + Cε −α ( x ′2 z − z ′ + (2xx ′ + x 2 ) z − z ′ ) α
≤ Cε −α ( |x − x ′ | α + (x + x ′ ) α y − y ′ α + (x + x ′ ) 2α z − z ′ α )
≤ Cε −α d ∞,Φ (p, p ′ ) α ≤ Cε −α d 2,Φ (p, p ′ ) α .
It is important to mention that the C's in the above estimate represent (possibly different) uniform constants. Note that the third inequality is obtained by making use of the reverse triangle inequality and the sublinearity of x ↦ x α (with α ∈ (0, 1)). The fourth inequality follows from the inequalities ε α ≤ 1 ≤ ε −α , while the fifth inequality is a direct consequence of y − ȳ , as well as |x + x ′ | and z − z̄ , being bounded. So far, we have seen that ϕ̃ i,p̄ and ψ̃ i,p̄ lie in C α Φ (M). This result can be extended to C 2,α Φ (M) simply by noticing that σ is constant near p̄. Finally, let us prove IV. Consider p, p ′ ∈ supp(ϕ̃ i,p̄ ) with φ i −1 (p) = (x, y, z) and φ i −1 (p ′ ) = (x ′ , y ′ , z ′ ). From the definition of σ, it is known that x, x ′ ∈ (0, ε]. Thus, computing d 1,Φ (p, p ′ ), we get
d 1,Φ (p, p ′ ) = |x − x ′ | + (x + x ′ ) y − y ′ + (x + x ′ ) 2 z − z ′ ≤ ε + 4√b ε + 4√f ε 2 ≤ Cε,

with C = max{1, 4√b, 4√f}.
Notice that the values 2 √ b and 2 √ f come from the Euclidean length of the diagonal of the cubes (−1, 1) b and (−1, 1) f respectively.
Remark 4.6. We want to point out that the functions ϕ̃ i,p̄ and ψ̃ i,p̄ from Proposition 4.5 are defined on the open sets A i . Due to the nature of the function σ, it is possible to extend each of them to the entire manifold M by letting them vanish outside their respective supports. With a slight abuse of notation, we shall not distinguish ϕ̃ i,p̄ and ψ̃ i,p̄ from their extensions. Moreover, it is worth pointing out that the functions ϕ̃ i,p̄ and ψ̃ i,p̄ are, in fact, far more regular than simply C 2 . Nonetheless, property III is stated only for C 2 , due to the extra negative powers of ε appearing when estimating the α-seminorm of the derivatives.
The functions ϕ̃ i,p̄ and ψ̃ i,p̄ will allow us to construct the claimed partitions of unity. Recall that, for a partition of unity, only a finite number of functions may be non-vanishing in a neighborhood. Although we have a finite family of open sets (A i ) i , the functions ϕ̃ i,p̄ , ψ̃ i,p̄ are defined for every point p̄ on the boundary ∂M of M. This makes it virtually impossible to have only finitely many non-vanishing functions in neighborhoods of points in a collar neighborhood of the boundary. Hence, the final step in the construction of the partitions of unity is to "reduce" the amount of points p̄ ∈ ∂M by means of which we defined the bump functions ϕ̃ i,p̄ and ψ̃ i,p̄ . To this end, for a fixed ϑ ∈ (0, 1), consider the following set:

E i,ϑ = A i ∩ { φ i (0, ϑΛ) | Λ ∈ Z b+f }.

Recall that φ i : B(1) → A i is a diffeomorphism; thus the set E i,ϑ consists of finitely many boundary points in A i . This especially means that the family of functions (ϕ̃ i,p̄ ) i,p̄∈E i,ϑ , as well as the family (ψ̃ i,p̄ ) i,p̄∈E i,ϑ , is finite.
Remark 4.7. By definition of σ, we can conclude that there exists an open neighborhood of the boundary ∂M of M, contained in the collar neighborhood U R , so that every point q in such a neighborhood lies in the support of at most finitely many of the functions (ϕ̃ i,p̄ ) i,p̄∈E i,ϑ and (ψ̃ i,p̄ ) i,p̄∈E i,ϑ .
The only thing left to obtain partitions of unity (on an open neighborhood of the boundary) is to let the families (ϕ̃ i,p̄ ) and (ψ̃ i,p̄ ) sum up to 1. This is achieved by a trivial "normalization": for some ϑ ∈ (0, 1) and for p̄ ∈ E i,ϑ , we define the functions ϕ i,p̄ and ψ i,p̄ as follows:

ϕ i,p̄ (p) := ϕ̃ i,p̄ (p) / Σ j Σ p̄ ′ ∈E j,ϑ ϕ̃ j,p̄ ′ (p) and ψ i,p̄ (p) := ψ̃ i,p̄ (p) / Σ j Σ p̄ ′ ∈E j,ϑ ψ̃ j,p̄ ′ (p). (4.6)
It is now clear that both families (ϕ i,p̄ ) i,p̄∈E i,ϑ and (ψ i,p̄ ) i,p̄∈E i,ϑ are partitions of unity on open neighborhoods of ∂M. Furthermore, since (4.6) involves only points contained in the support of some of the functions ϕ̃ i,p̄ and ψ̃ i,p̄ , it follows that properties I to IV in Proposition 4.5 hold for the families (ϕ i,p̄ ) i,p̄ and (ψ i,p̄ ) i,p̄ as well.
Remark 4.8. Notice that the functions ϕ i,p and ψ i,p are defined in terms of some ε ∈ (0, 1). Thus the families (ϕ i,p ) i,p and (ψ i,p ) i,p are partitions of unity for any choice of ε.
Finally, the function

φ := Σ i Σ p̄∈E i,ϑ ϕ i,p̄ (4.7)

is constantly equal to 1 on an open neighborhood of ∂M and satisfies properties I to IV in Proposition 4.5 as well.
4.1.2. Boundary parametrix. The partitions of unity presented in §4.1.1 allow us to construct a boundary parametrix for heat-type operators P (cf. (4.2)).
Let γ ∈ R and α ∈ (0, 1) be fixed and consider ℓ ∈ x γ C α Φ (M × [0, T ]). A parametrix for a heat-type operator P is a map Q : x γ C α Φ (M × [0, T ]) → x γ C 2,α Φ (M × [0, T ]) so that u = Qℓ is a solution of the parabolic Cauchy problem

Pu = (∂ t + a∆)u = ℓ; u| t=0 = 0.
(4.8)
Our first step towards the construction of Q is to establish an operator Q B : x γ C α Φ (M × [0, T ]) → x γ C 2,α Φ (M × [0, T ]) giving rise to approximate solutions of (4.8) near the boundary (the notion of approximate solution is in the spirit of Lemma 4.9 below). Hence, in order to do this, we localize (4.8) near the boundary. Let us therefore fix some p̄ ∈ ∂M. As pointed out in Remark 4.7, every point on the boundary lies in the support of at most finitely many of the functions defined in (4.6). Thus, without loss of generality, we can assume p̄ to lie in some E i,ϑ for some i and some ϑ ∈ (0, 1), which, from now on, will be considered fixed. Next we freeze the coefficient a of the Laplace-Beltrami operator at t = 0. In particular, we focus our attention on the parabolic Cauchy problem with constant coefficient

P(p̄, 0)u p̄ := (∂ t + a(p̄, 0)∆)u p̄ = ϕ i,p̄ ℓ, u p̄ | t=0 = 0.
(4.9)
Note that the Cauchy problem (4.9) is formally different from the Cauchy problem in (4.8), not only due to the localization, but especially because the coefficient a of the Laplace-Beltrami operator is now constant.

By assuming a to be positive and bounded from below away from zero, it is clear that, upon rescaling, the heat-kernel operator of a(p̄, 0)∆, denoted by H γ,p̄ , is the same as the one from §2.4. It follows that a solution for (4.9) is given by H γ,p̄ (ϕ i,p̄ ℓ). In particular, by defining

u p̄ = Q γ,i,p̄ (ℓ) := ψ i,p̄ H γ,p̄ (ϕ i,p̄ ℓ), (4.10)
we have the following:
Lemma 4.9. Let α < β ≤ 1 and assume a ∈ C β Φ (M × [0, T ]) to be positive and bounded from below away from zero. Then, for every ℓ ∈ C α Φ (M × [0, T ]), the function u p̄ = Q γ,i,p̄ (ℓ), defined in (4.10), satisfies

Pu p̄ := (∂ t + a∆)u p̄ = ϕ i,p̄ ℓ + R 1 i,p̄ ℓ + R 2 i,p̄ ℓ, (4.11)

where

a) R 1 i,p̄ : x γ C α Φ (M × [0, T ]) → x γ C α Φ (M × [0, T ]) is a bounded operator. Moreover, if T < 1, there exists some constant C > 0 so that

R 1 i,p̄ ℓ α ≤ C ( T α/2 + ε β ) ε −α ℓ α .

b) R 2 i,p̄ : x γ C α Φ (M × [0, T ]) → x γ C α Φ (M × [0, T ]) is a bounded operator and its operator norm goes to 0 as T → 0 + , i.e.

lim T→0 + R 2 i,p̄ op = 0.
Proof. In order to avoid a plethora of indices, we will suppress all the indices on ϕ, ψ and the error terms R 1 and R 2 . Following the same computations as in [BaVe14, Lemma 4.3], one gets

Pu p̄ = ψ∂ t H p̄ (ϕℓ) + [a∆, ψ](H p̄ (ϕℓ)) + ψa∆(H p̄ (ϕℓ))
= ψ(∂ t + a∆)(H p̄ (ϕℓ)) + [a∆, ψ](H p̄ (ϕℓ))
= ψ(∂ t + a(p̄, 0)∆)(H p̄ (ϕℓ)) + ψ(a − a(p̄, 0))∆(H p̄ (ϕℓ)) + [a∆, ψ](H p̄ (ϕℓ)) (4.12)
= ψϕℓ + ψ(a − a(p̄, 0))∆(H p̄ (ϕℓ)) + [a∆, ψ](H p̄ (ϕℓ))
=: ψϕℓ + R 1 ℓ + R 2 ℓ = ϕℓ + R 1 ℓ + R 2 ℓ;
where [a∆, ψ] denotes the commutator between the differential operator a∆ and the "multiplication by ψ" operator. Note that the fourth equality in (4.12) follows from H p̄ (ϕℓ) being a solution of the localized Cauchy problem. Moreover, the last equality is a consequence of property I in Proposition 4.5.

We will estimate the norms of R 1 and R 2 with γ = 0; the case of generic γ is slightly more involved, but it follows along the same lines. Furthermore, the estimates will be performed on supp(ψ), since the α-norm is not affected by such a restriction.
Let us begin by estimating the α-norm of the operator R 1 applied to the function ℓ:

R 1 ℓ α = R 1 ℓ ∞ + [R 1 ℓ] α
≤ ψ ∞ a − a(p̄, 0) ∞ ∆H(ϕℓ) ∞ + [ψ] α a − a(p̄, 0) ∞ ∆H(ϕℓ) ∞
+ ψ ∞ [a − a(p̄, 0)] α ∆H(ϕℓ) ∞ + ψ ∞ a − a(p̄, 0) ∞ [∆H(ϕℓ)] α . (4.13)
We will estimate each term in (4.13) separately. In what follows, unless otherwise specified, we will denote all the uniform constants by C.
We begin by estimating the first term in (4.13). By assumption, a ∈ C β Φ (M × [0, T ]) with β > α. Thus one deduces

a − a(p̄, 0) ∞ ≤ C(ε β + T β/2 ), (4.14)
for some constant C > 0, due to property IV in Proposition 4.5. From Theorem 2.3 one has boundedness of the operator ∆H : C α Φ (M × [0, T ]) → t α/2 C 0 Φ (M × [0, T ]), thus resulting in the estimate

∆(H(ϕℓ)) ∞ ≤ Ct α/2 ϕℓ ∞ ≤ CT α/2 ℓ ∞ ≤ CT α/2 ℓ α . (4.15)

Hence the first term in (4.13) can be estimated by

ψ ∞ a − a(p̄, 0) ∞ ∆(H(ϕℓ)) ∞ ≤ C(ε β + T β/2 ) T α/2 ℓ α . (4.16)
For the second term in (4.13) we use property II in Proposition 4.5 paired with (4.14) and (4.15), resulting in

[ψ] α a − a(p̄, 0) ∞ ∆(H(ϕℓ)) ∞ ≤ Cε −α T α/2 (ε β + T β/2 ). (4.17)
The third term in (4.13) can be estimated by noticing the following. Recall that we are estimating on supp(ψ); thus, from property IV in Proposition 4.5, for every p, p ′ lying in the support of ψ, d Φ (p, p ′ ) ≤ Cε. By choosing ε small enough, e.g. ε ≤ 1/C, and T < 1, which will be consistent with our future applications (cf. Proposition 4.11), we have

d Φ (p, p ′ ) β + |t − t ′ | β/2 ≤ d Φ (p, p ′ ) α + |t − t ′ | α/2 .

This implies, due to the assumption a ∈ C β Φ (M × [0, T ]) and thus [a] β ≤ C, that [a − a(p̄, 0)] α = [a] α ≤ C. Therefore we find

ψ ∞ [a − a(p̄, 0)] α ∆(H(ϕℓ)) ∞ ≤ CT α/2 ℓ α . (4.18)
Finally, in order to estimate the fourth, and last, term in (4.13), we use the mapping property discussed in Theorem 2.3 to deduce that ∆H : C α Φ (M × [0, T ]) → C α Φ (M × [0, T ]) is bounded. Thus

[∆(H(ϕℓ))] α ≤ ∆(H(ϕℓ)) α ≤ C ϕℓ α ≤ Cε −α ℓ α ,

which, in turn, implies

ψ ∞ a − a(p̄, 0) ∞ [∆(H(ϕℓ))] α ≤ C(ε β + T β/2 ) ε −α ℓ α . (4.19)
Joining (4.16)-(4.19) together, in view of (4.13) we conclude

R 1 ℓ α ≤ C ( T α/2 (ε β + T β/2 ) + ε −α T α/2 (ε β + T β/2 ) + T α/2 + ε −α (ε β + T β/2 ) ) ℓ α ≤ C ( T α/2 + ε β ) ε −α ℓ α ,

where the C's denote different uniform constants. We want to point out that the estimate above holds due to ε ≤ 1/C < 1 and 0 < α < β ≤ 1, concluding the first part of the statement.
For the second part we argue as follows. By making use of the product rule, one sees that for every twice-differentiable function w,

[a∆, ψ]w = a∆(ψ) · w − 2a g Φ (∇ψ, ∇w),

where ∇ denotes the gradient. Note that our choice of ψ implies that all of its derivatives vanish near the boundary ∂M. Thus, by choosing w = H p̄ (ϕℓ), we see that the assumptions of Lemma 4.2 are satisfied. Hence,
R 2 : C α Φ (M × [0, T ]) → C α Φ (M × [0, T ])
is a bounded operator with operator norm converging to 0 as T → 0 + .

Remark 4.10. We want to point out the main difference between the result presented here and the analogous result for edge manifolds [BaVe14, Lemma 4.3]. In [BaVe14] the authors use the Mean Value Theorem to estimate the supremum norm of the coefficient a of the Laplace-Beltrami operator. This leads to terms which can be estimated against the incomplete edge distance. In particular, they reach an estimate of the form a − a(p̄, 0) ∞ ≤ C(ε + T α/2 ) for some positive constant C (cf. [BaVe14, page 21]). In our case, an application of the Mean Value Theorem does not lead to something comparable with the Φ-distance d Φ . Therefore, we could assume less regularity from a differentiability point of view. But the assumption a ∈ C α Φ (M × [0, T ]) alone is not enough to guarantee the existence of a boundary parametrix (see Proposition 4.11). Indeed, one can see that, by assuming only a ∈ C α Φ (M × [0, T ]), the estimates performed in the proof of Lemma 4.9 lead to

R 1 ℓ α ≤ C(T α/2 ε −α + 1) ℓ α ,

which, in turn, cannot be made less than one, thus making it impossible for R 1 to have small operator norm.
By means of the operators Q γ,i,p̄ we define

Q B = Σ i Σ p̄∈E i,ϑ Q γ,i,p̄ ,

so that, for a given function ℓ in x γ C α Φ (M × [0, T ]), one has

Q B ℓ = Σ i Σ p̄∈E i,ϑ ψ i,p̄ H γ,p̄ (ϕ i,p̄ ℓ). (4.20)
Proposition 4.11. For every 0 < δ < 1 there exist ε and T positive and sufficiently small so that

Q B : x γ C α Φ (M × [0, T ]) → x γ C 2,α Φ (M × [0, T ]),
Q B : x γ C α Φ (M × [0, T ]) → x γ √t C 1,α Φ (M × [0, T ]) (4.21)

are bounded operators. Moreover, in terms of the function φ defined in (4.7), one has, for every ℓ ∈ x γ C α Φ (M × [0, T ]),

(∂ t + a∆)(Q B ℓ) = φℓ + R 1 ℓ + R 2 ℓ,
with R 1 op ≤ δ and R 2 op converging to 0 as T goes to 0.

Proof. The mapping properties in (4.21) are a straightforward consequence of the mapping properties of the heat-kernel operator H (cf. Theorem 2.3), noticing that multiplication by either ψ i,p̄ or ϕ i,p̄ is a bounded operator, thus preserving the regularity.
For the second part of the statement, we begin by explicitly computing (∂ t + a∆)(Q B ℓ). Since the sum defining Q B in (4.20) is locally finite, by Lemma 4.9 we conclude

(∂ t + a∆)Q B ℓ = Σ i Σ p̄∈E i,ϑ (∂ t + a∆)(ψ i,p̄ H γ,p̄ (ϕ i,p̄ ℓ)) = φℓ + Σ i Σ p̄∈E i,ϑ R 1 i,p̄ ℓ + Σ i Σ p̄∈E i,ϑ R 2 i,p̄ ℓ.

For simplicity, let us denote R j ℓ = Σ i Σ p̄∈E i,ϑ R j i,p̄ ℓ for j = 1, 2. Lemma 4.9 gives R 1 i,p̄ ℓ α ≤ C ( T α/2 + ε β ) ε −α ℓ α . Hence, by letting ℓ α ≤ 1, we find that the operator norm of R 1 is bounded by

R 1 op ≤ C ( T α/2 + ε β ) ε −α .
Again, the C's denote different uniform constants. For a given 0 < δ < 1 and C as in the above estimate, it is possible to choose 0 < T < 1 and ε < min{1, 1/C} sufficiently small so that

T α/2 ε −α + ε β−α < δ/C,

and so that x = ε is a smooth hypersurface. This might be accomplished, for instance, by choosing

T α/2 < (δ/2C) ε α and ε β < (δ/2C) ε α .
Concerning the operator norm of R 2 , the estimate follows directly by employing Lemma 4.9.
4.2. Construction of the parametrix. In §4.1.2 we constructed an approximate boundary parametrix for a heat-type operator P. Here, we will first construct an approximate parametrix Q I for P in the interior M of M. After obtaining Q I , we will see that a combination of Q B , as in (4.20), and Q I , defined below in (4.23), leads to an approximate parametrix Q̃ for P on the whole of M. As is usual in operator theory, we will then get rid of the error arising from Q̃ being only an approximate parametrix via a von Neumann series, resulting in the claimed parametrix Q for P.
Let 0 < δ < 1 be fixed and consider ε and T as in Proposition 4.11. Since ε is fixed, an ε-neighborhood of ∂M is also fixed, and the function φ (defined in (4.7)) is identically 1 on this neighborhood. The idea now is to cut off a neighborhood of ∂M from M. Let M ε := {p ∈ M | x(p) ≥ ε/2}. Clearly M ε is a compact manifold with boundary, meaning that we can consider its double space M̃. Recall that the double space consists of two copies of M ε glued along the boundary and, for compact manifolds with boundary, it is a compact manifold without boundary. Note that the double space construction does not lead to a smooth metric on M̃. In order to smooth it up, we consider a smoothing of such a metric so that the metric on M and the one on M̃ coincide on M 2ε . Moreover, in dealing with M̃, we are working away from the boundary ∂M of M; thus the α-Hölder spaces are exactly the classical ones.

The function (1 − φ) is defined on M ε , but by setting it to be zero on the second copy of M ε , we can extend it to a function, still denoted by (1 − φ), on the double space M̃. Hence (1 − φ) defines, in particular, a smooth cut-off function over M ε in M̃. Similarly, let P̃ denote the uniformly parabolic extension of P| M ε to M̃. From classical parabolic PDE theory, it is well known that there exists a parametrix Q̃ I for the heat operator P̃ so that the maps
Q̃ I : C k,α (M̃ × [0, T ]) → C k+2,α (M̃ × [0, T ]),
Q̃ I : C k,α (M̃ × [0, T ]) → √t C k+1,α (M̃ × [0, T ]) (4.22)
are bounded. The idea is to use such a parametrix Q̃ I and the boundary parametrix constructed above to construct a parametrix Q for the Cauchy problem (4.1).

Note that, for a given function u ∈ C k,α (M̃ × [0, T ]), the second mapping property in (4.22) implies Q̃ I u ∈ √t C k+1,α (M̃ × [0, T ]). In order to turn Q̃ I u into a function in C k,α Φ (M × [0, T ]), let us consider a cut-off function Ψ̃ on M̃ so that Ψ̃ = 1 on supp(1 − φ). We can now define the operator
Q I := M(Ψ̃) • Q̃ I • M(1 − φ). (4.23)

As pointed out in the proof of Proposition 4.11, multiplication by Ψ̃ and by (1 − φ) preserves the regularity and gives bounded operators. Therefore the operator Q I , as the composition

x γ C k,α Φ (M × [0, T ]) −M(1−φ)→ C k,α (M̃ × [0, T ]) −Q̃ I → √t C k+1,α (M̃ × [0, T ]) −M(Ψ̃)→ √t C k+1,α (M ε × [0, T ]),
acts continuously. Moreover, since we are working away from the boundary of M, the space C k+1,α (M ε × [0, T ]) can be identified with the space x γ C k+1,α Φ (M ε × [0, T ]). We can hence conclude that the operator Q I , mapping

Q I : x γ C k,α Φ (M × [0, T ]) → x γ √t C k+1,α Φ (M × [0, T ]),
is bounded. We can therefore construct an approximate parametrix Q̃ for the operator P by setting Q̃ℓ = Q B ℓ + Q I ℓ. In particular, in view of the construction above and Proposition 4.11, one sees that

Q̃ : x γ C α Φ (M × [0, T ]) → x γ C 2,α Φ (M × [0, T ]),
Q̃ : x γ C α Φ (M × [0, T ]) → x γ √t C 1,α Φ (M × [0, T ]) (4.24)

are bounded.
Proposition 4.12. Let 0 < α < β ≤ 1 and assume a ∈ C k,β Φ (M × [0, T ]) to be positive and bounded from below away from zero. There exists T 0 > 0 sufficiently small so that the operator Q acts continuously when mapping

Q : x γ C α Φ (M × [0, T 0 ]) → x γ C 2,α Φ (M × [0, T 0 ]),
Q : x γ C α Φ (M × [0, T 0 ]) → x γ √t C 1,α Φ (M × [0, T 0 ]).

Moreover, for every function ℓ in x γ C α Φ (M × [0, T ]),
Qℓ is a solution of the inhomogeneous Cauchy problem
(∂ t + a∆)u = ℓ, u| t=0 = 0.
(4.25)
Proof. Let ℓ be a function in x γ C α Φ (M × [0, T ]). By Proposition 4.11 and the construction above, one computes

(∂ t + a∆)(Q̃ℓ) = φℓ + R 1 ℓ + R 2 ℓ + (1 − φ)ℓ + R 3 ℓ,

where R 1 and R 2 are the operators arising from Proposition 4.11, while R 3 is given by

R 3 ℓ = [a∆, Ψ̃](Q̃ I ((1 − φ)ℓ)).

Clearly R 3 : x γ C α Φ (M × [0, T ]) → x γ C α Φ (M × [0, T ]) is bounded. Furthermore, the operator norm of R 3 can be estimated in the same way as was done for R 2 in Lemma 4.9. In particular, it follows that both R 2 op and R 3 op converge to 0 as T goes to 0, while R 1 op < δ. We can now find T 0 sufficiently small so that, for every t ≤ min{T 0 , T}, by denoting R := R 1 + R 2 + R 3 ,

R op ≤ R 1 op + R 2 op + R 3 op < 1.
It is now clear that id + R is invertible, with inverse obtained via the Neumann series of R. The claimed right parametrix of P is then

Q = Q̃ (id + R) −1 .
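To make the von Neumann step explicit (a routine verification, spelled out here for the reader's convenience), the computation above says that (∂ t + a∆) ∘ Q̃ = id + R on x γ C α Φ (M × [0, T 0 ]); since R op < 1, the series converges in operator norm, and

\[
(\partial_{t} + a\Delta)\, Q
= (\partial_{t} + a\Delta)\,\widetilde{Q}\,(\mathrm{id} + R)^{-1}
= (\mathrm{id} + R)(\mathrm{id} + R)^{-1}
= \mathrm{id},
\qquad
(\mathrm{id} + R)^{-1} = \sum_{j\geq 0} (-R)^{j},
\]

so Qℓ indeed solves (4.25); the mapping properties of Q follow from those of Q̃ in (4.24), since (id + R) −1 is bounded on x γ C α Φ (M × [0, T 0 ]).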
Remark 4.13. In the above statement, T 0 arises from R 2 op and R 3 op converging to 0 for T → 0 + . So, since R 1 op ≤ δ we can fix 1 − δ and find T 0 so that R 2 op + R 3 op < 1 − δ for every t ≤ T 0 .
Corollary 4.14. Let a ∈ C β Φ (M × [0, T ]) be positive and bounded from below away from zero. Then there exist T 0 sufficiently small (depending on β − α) and a bounded operator E : x γ C 2,α Φ (M) → x γ C 2,α Φ (M × [0, T 0 ]), so that, for every u 0 in x γ C 2,α Φ (M), u = Eu 0 is a solution of the homogeneous Cauchy problem

(∂ t + a∆)u = 0, u| t=0 = u 0 .
(4.26)
Proof. Since u 0 ∈ C 2,α Φ (M), a∆u 0 lies in C α Φ (M×[0, T ]). Using the right parametrix for the inhomogeneous Cauchy problem constructed in Proposition 4.12, set
Eu 0 = u 0 − Q(a∆u 0 ).
An easy computation shows that Eu 0 solves the homogeneous Cauchy problem.
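For completeness, the "easy computation" reads as follows (a one-line verification on our part): since u 0 does not depend on t and Q is a right inverse of (∂ t + a∆) with Qℓ| t=0 = 0, one has

\[
(\partial_{t} + a\Delta)(Eu_{0})
= (\partial_{t} + a\Delta)u_{0} - (\partial_{t} + a\Delta)\,Q(a\Delta u_{0})
= a\Delta u_{0} - a\Delta u_{0} = 0,
\qquad
Eu_{0}\big|_{t=0} = u_{0} - Q(a\Delta u_{0})\big|_{t=0} = u_{0}.
\]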
Note that, unlike the statement of Theorem 1.2, the last two results give us a solution only on an interval [0, T 0 ], which is possibly different from the initial interval [0, T ].
Proof of Theorem 1.2. Consider a function ℓ ∈ x γ C α Φ (M × [0, T ]) and the Cauchy problem

(∂ t + a∆)u = ℓ; u| t=0 = 0. (4.27)

From Proposition 4.12, we know that the Cauchy problem above admits a solution u lying in x γ C 2,α Φ (M × [0, T 0 ]). Clearly, if T 0 ≥ T, then the statement is true and there is nothing to prove. Suppose, otherwise, that T 0 < T. We claim that the solution u can be extended past T 0 , meaning that we can find a C 2,α Φ (M × [0, T ]) solution to (4.27) which agrees with u up to time T 0 , therefore allowing us to find solutions defined on the whole interval of definition of the function ℓ. Let λ ∈ (0, T 0 ) and consider the Cauchy problem
(∂ t + a∆)v 1 = 0; v 1 | t=0 = u| t=T 0 −λ , (4.28)
that is, the homogeneous Cauchy problem with initial condition u| t=T 0 −λ . From Corollary 4.14, we know that (4.28) admits a solution, say v 1 ∈ C 2,α Φ (M × [0, T 0 ]) (T 0 is independent of the initial condition). By performing the change of coordinates t → t + T 0 − λ, we can consider the function v 1 ∈ C 2,α Φ (M × [T 0 − λ, 2T 0 − λ]).
Similarly, we can consider the "shifted" problem for (4.27), that is,

(∂ t + a∆)u 1 = ℓ(·, t + T 0 − λ), u 1 | T 0 −λ = 0. (4.29)
Again, by Proposition 4.12, a solution u 1 ∈ x γ C 2,α Φ (M × [T 0 , 2T 0 − λ]) to (4.29) exists.
Denote by w the function u 1 + v 1 . Since P is a linear operator, we see that w ∈ x γ C 2,α (M × [T 0 , 2T 0 − λ]) satisfies

(∂ t + a∆)w = ℓ(·, t + T 0 − λ), w| t=0 = u| T 0 −λ .
(4.30)
It is now time to point out that the function u satisfies (4.30) on [T 0 − λ, T 0 ] as well. Therefore, from Corollary 3.7 we conclude that u(·, t) = w(·, t) for every t ∈ [T 0 − λ, T 0 ]. This means that we can C 2 -glue u and w, giving rise to

ū(p, t) = u(p, t) if 0 ≤ t ≤ T 0 , ū(p, t) = w(p, t) if T 0 < t ≤ 2T 0 − λ.

Now, if 2T 0 − λ ≥ T, the result is proved. If not, we repeat the process with ū until nT 0 − nλ ≥ T (which is possible in a finite number of repetitions, since [0, T ] is compact). Thus we have an extension of u defined on M × [0, T ]. Note that this extension was obtained employing the parametrix construction, i.e. the maps Q and E. Such maps are bounded; thus the extended map Q̄ sending ℓ to ū is also bounded. The proof of Corollary 4.14 implies that the operator E can be extended as well, thus completing the proof.
5. Generalization of short-time existence

In §4 we proved the existence of solutions for non-homogeneous Cauchy problems with vanishing initial condition (cf. Theorem 1.2). In the analysis of geometric flows, such as the Yamabe flow or the mean curvature flow, one deals with quasi-linear heat-type Cauchy problems. It is therefore useful to introduce some non-linearity into the heat-type Cauchy problems in the setting of Φ-manifolds.

Let 0 < α < β ≤ 1 and let a ∈ C k,β Φ (M × [0, T ]) be as in the assumptions of Theorem 1.2. We are interested in Cauchy problems of the form

(∂ t + a∆)u = F(u), u| t=0 = 0, (5.1)

with the operator F subject to some restrictions. We have already seen something like this, namely Theorem 2.4; indeed, under the assumption a = 1 and F satisfying
(1) F : x γ C k+2,α Φ (M × [0, T ]) → C k,α Φ (M × [0, T ]);
(2) F can be written as a sum F = F 1 + F 2 with
   (i) F 1 : x γ C k+2,α Φ → x γ C k+1,α Φ (M × [0, T ]),
   (ii) F 2 : x γ C k+2,α Φ → x γ C k,α Φ (M × [0, T ]);
(3) For u, u ′ ∈ x γ C k+2,α Φ (M × [0, T ]) with · k+2,α,γ -norm bounded from above by some η > 0, i.e. u k+2,α,γ , u ′ k+2,α,γ ≤ η, there exists some C η > 0 such that
   (i) F 1 (u) − F 1 (u ′ ) k+1,α,γ ≤ C η u − u ′ k+2,α,γ and F 1 (u) k+1,α,γ ≤ C η u k+2,α,γ ,
   (ii) F 2 (u) − F 2 (u ′ ) k,α,γ ≤ C η max{ u k+2,α,γ , u ′ k+2,α,γ } u − u ′ k+2,α,γ and F 2 (u) k,α,γ ≤ C η u 2 k+2,α,γ ,

Theorem 2.4 guarantees existence and uniqueness of a solution to the aforementioned Cauchy problem. It should be noted, on the other hand, that the proof of that result (cf. [CaGe22]) uses only those mapping properties of the heat-kernel operator H that also hold for the parametrix Q. Therefore, one can naturally extend the result to the parametrix constructed in §4, providing a proof of our last main result, Corollary 1.3.
Remark 5.1. Contrary to the analogous statement for the nonlinear heat equation with constant coefficient, we cannot provide higher regularity, that is, a solution u * existing in C k+2,α Φ (M × [0, T ′ ]) for some T ′ small enough. This is fairly reasonable and should be attainable. Unfortunately, the estimates on the error term R 1 in Lemma 4.9 do not seem to extend easily to higher regularity, due to some problems arising in the estimate of the sup-norm of the coefficient a in case a ∈ C k,β Φ (M × [0, T ]).

As mentioned at the beginning of this section, the operator F allows us to deal with some non-linear heat-type Cauchy problems. We want to conclude this work by explaining in a bit more detail why this is the case.
A generic quasi-linear second-order parabolic Cauchy problem on M is of the form

∂ t u = Lu, u| t=0 = u 0 , (5.2)

for some suitable function u 0 , where Lu = a ij (p, t, u, ∇u)D ij u + b(p, t, u, ∇u), with D ij being a second-order partial differential operator. (Note that, in order to have parabolicity, one needs the Fréchet derivative of L to be an elliptic operator with eigenvalues bounded away from zero.) In order to conclude short-time existence of solutions to (5.2), one usually argues by means of perturbations; that is, if we stay "close" to the initial condition u 0 , we may find some evolution of u 0 in terms of the equation in (5.2) for a short time. This is equivalent to considering u = u 0 + v and deriving an equation for v from ∂ t u = Lu. This leads to a new Cauchy problem of the form

∂ t v = L 0 v, v| t=0 = 0. (5.3)

Now the operator L 0 is some sort of linearization of the operator L. As one can expect, the operator L 0 might not be of the form L 0 = a∆ − F with F satisfying conditions (1), (2) and (3) in the hypotheses of Corollary 1.3; that really depends on the quasi-linear operator L at hand. Therefore, a uniform treatment of all quasi-linear parabolic operators is impossible. Finally, we want to point out that a linearization of the form a∆ + F, with F satisfying the three conditions in Corollary 1.3, is expected for most geometric flows. Indeed, in such cases one deals with quasi-linear evolution operators containing, as highest-order derivative term, a "time-dependent" Laplacian (see e.g. the mean curvature flow) or some power of u multiplying a (fixed-in-time) Laplacian (e.g. the Yamabe flow).
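To make this concrete, consider the following toy model (our illustration; it is not any specific geometric flow): the quasi-linear problem ∂ t u + f(u)∆u = 0, with f smooth, positive and bounded away from zero, and with initial condition u 0 regular enough that ∆u 0 ∈ x γ C α Φ . Writing u = u 0 + v, one finds

\[
\partial_{t} v + f(u_{0})\,\Delta v
\;=\; -\big[f(u_{0}+v) - f(u_{0})\big]\Delta v \;-\; f(u_{0}+v)\,\Delta u_{0}
\;=:\; F_{2}(v) + F_{1}(v),
\]

which is of the form (1.6) with a := f(u 0 ). Here F 1 is Lipschitz in v and of lower order (it only involves the fixed function ∆u 0 ), while F 2 satisfies the quadratic-type estimates required of F 2 in Corollary 1.3, since f(u 0 + v) − f(u 0 ) is controlled by v.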
Theorem 1.1. Let $(M, g)$ be a stochastically complete manifold and let $a$ be a function on $M$ which is bounded and bounded from below away from zero. If $u \in C^{2,\alpha}(M \times [0,T])$ is a solution of the Cauchy problem
$$(\partial_t + a\Delta)u = 0, \qquad u|_{t=0} = 0, \qquad (1.3)$$
then $u = 0$.
Corollary 1.3. Let $\alpha, \beta \in (0,1)$ with $\alpha < \beta$. Consider the Cauchy problem
$$(\partial_t + a\Delta)u = F(u), \qquad u|_{t=0} = 0, \qquad (1.6)$$
with coefficient $a \in C^\beta_\Phi(M \times [0,T])$ positive and bounded from below away from zero. Furthermore, assume the map $F \colon x^\gamma C^{2,\alpha}_\Phi(M \times [0,T]) \to C^\alpha_\Phi(M \times [0,T])$ to satisfy the following conditions: one can write $F = F_1 + F_2$, with
Theorem 2.4 ([CaGe22, Corollary 1.2]). Let $\alpha$, $k$, $\gamma$ and $T$ be as in Theorem 2.3 and consider the nonlinear Cauchy problem
Proposition 3.5 ([CHV21, Proposition 3.1]). Let $(M, g)$ be a stochastically complete manifold and consider $u \in C^{2,\alpha}(M \times [0,T])$. Then the functions
Lemma 4.2. Let $(M, g_\Phi)$ be a $\Phi$-manifold and consider two compactly supported functions $\varphi, \psi \in C^\infty(M)$. Assume, furthermore, that $\varphi$ and $\psi$ lie in $C^\alpha_\Phi(M)$ (cf. §2.3).
4.1.1. Partitions of unity. Let us fix some $R > 0$ and consider the collar neighborhood $U_R = \{p \in M \mid x(p) \le R\}$ of $\partial M$ in $M$. Furthermore, for $d > 0$ let us define the family of half-cubes.
Due to compactness of $\partial M$, we can consider finitely many charts $\{p_i, \phi_i \colon B(1) \to A_i\}$, where the $p_i$'s are points on the boundary $\partial M$. By choosing $R$ sufficiently small, the finite family $(A_i)_i$ covers the whole collar neighborhood $U_R$. Such a covering can be extended to a covering of the whole manifold $M$ by considering an additional open set $A_0 = \{p \in M \mid x(p) > R/2\}$.
Lemma 4.4. Let $(M, g_\Phi)$ be a $\Phi$-manifold. For every $q \in [1, \infty)$, the following distances on $M$ are equivalent:
Remark 4.7. By definition of $\sigma$ we can conclude that there exists an open neighborhood of the boundary $\partial M$ of $M$, contained in the collar neighborhood $U_R$, so that every point $q$ in such a neighborhood lies in the support of at most finitely many of the functions.
Now, if $2T_0 - \lambda \ge T$, the result is proved. If not, repeat the process with $u$ until $nT_0 - n\lambda \ge T$ (which is possible in a finite number of repetitions since $[0,T]$ is compact). Thus we have an extension of $u$ defined on $M \times [0,T]$.
L. J. Alías, P. Mastrolia, and M. Rigoli, Maximum principles and geometric applications, vol. 700, Springer, (2016).
E. Bahuaud and B. Vertman, Long-time existence of the edge Yamabe flow, Journal of the Mathematical Society of Japan 71.2, 651-688, (2019).
E. Bahuaud and B. Vertman, Yamabe flow on manifolds with edges, Mathematische Nachrichten 287.2-3, 127-159, (2014).
B. Caldeira, L. Hartmann, and B. Vertman, Normalized Yamabe flow on some complete manifolds of infinite volume, (2021).
B. Caldeira and G. Gentile, Heat-type equations on manifolds with fibered boundaries I: Schauder estimates, (2021).
G. Gentile and B. Vertman, Prescribed mean curvature flow of non-compact spacelike Cauchy hypersurfaces, arXiv preprint arXiv:2202.02424, (2022).
D. Grieser, Basics of the b-calculus, Approaches to singular analysis, Birkhäuser, Basel, 30-84, (2001).
A. A. Grigor'yan, Stochastically complete manifolds, Doklady Akademii Nauk, vol. 290, Russian Academy of Sciences, 534-537, (1986).
R. S. Hamilton et al., Four-manifolds with positive curvature operator, Journal of Differential Geometry 24, no. 2, 153-179, (1986).
R. Mazzeo and R. B. Melrose, Pseudodifferential operators on manifolds with fibred boundaries, Asian Journal of Mathematics 2.4, 833-866, (1998).
S. Pigola, M. Rigoli, and A. Setti, A remark on the maximum principle and stochastic completeness, Proceedings of the American Mathematical Society 131.4, 1283-1288, (2003).
M. Talebi and B. Vertman, Spectral geometry on manifolds with fibred boundary metrics II: heat kernel asymptotics, arXiv preprint arXiv:2101.08844, (2021).
S. T. Yau, Harmonic functions on complete Riemannian manifolds, Communications on Pure and Applied Mathematics 28(2), 201-228, (1975).
Leibniz Universität Hannover, Germany. Email address: [email protected]
| []
|
[
"Contrastive Learning with Large Memory Bank and Negative Embedding Subtraction for Accurate Copy Detection",
"Contrastive Learning with Large Memory Bank and Negative Embedding Subtraction for Accurate Copy Detection"
]
| [
"Shuhei Yokoo [email protected] \nDeNA Co., Ltd\n\n"
]
| [
"DeNA Co., Ltd\n"
]
| []
| Copy detection, which is a task to determine whether an image is a modified copy of any image in a database, is an unsolved problem. Thus, we addressed copy detection by training convolutional neural networks (CNNs) with contrastive learning. Training with a large memory-bank and hard data augmentation enables the CNNs to obtain more discriminative representation. Our proposed negative embedding subtraction further boosts the copy detection accuracy. Using our methods, we achieved 1st place in the Facebook AI Image Similarity Challenge: Descriptor Track. Our code is publicly available here: https://github.com/lyakaap/ISC21-Descriptor-Track-1st | null | [
"https://arxiv.org/pdf/2112.04323v1.pdf"
]
| 244,954,456 | 2112.04323 | bccda2f709927583022533b496ab4f800ba2304e |
Contrastive Learning with Large Memory Bank and Negative Embedding Subtraction for Accurate Copy Detection
Shuhei Yokoo [email protected]
DeNA Co., Ltd
Contrastive Learning with Large Memory Bank and Negative Embedding Subtraction for Accurate Copy Detection
Copy detection, which is a task to determine whether an image is a modified copy of any image in a database, is an unsolved problem. Thus, we addressed copy detection by training convolutional neural networks (CNNs) with contrastive learning. Training with a large memory-bank and hard data augmentation enables the CNNs to obtain more discriminative representation. Our proposed negative embedding subtraction further boosts the copy detection accuracy. Using our methods, we achieved 1st place in the Facebook AI Image Similarity Challenge: Descriptor Track. Our code is publicly available here: https://github.com/lyakaap/ISC21-Descriptor-Track-1st
Introduction
In recent years, with the development of social media, the problems of plagiarism and unauthorized copying have become more serious. Accordingly, attention to research on copy detection, the task of determining whether an image is a modified copy of any image in a database, has increased [9,12,5].
In this study, we propose a strong copy detection pipeline that consists of EfficientNetV2 [18] trained with contrastive learning and a post-process that effectively utilizes negative embeddings. Our training pipeline has multiple steps inspired by progressive learning [18], a model training technique that increases the input image resolution and the regularization as training proceeds. In each step, models are trained by contrastive learning with hard data augmentation to obtain discriminative representations. Embeddings extracted by our model are processed by our proposed negative embedding subtraction, which improves copy detection performance by isolating a target sample from similar negative samples. Using our methods, accurate copy detection is achieved even on difficult samples, as shown in Figure 1.
Our contributions are summarized as follows: (1) training with contrastive learning and a large memory bank, (2) a carefully designed data augmentation strategy that closely reproduces the manipulated images in the dataset, (3) a novel post-process method that enhances embeddings by utilizing negative samples, and (4) results that significantly outperform baseline methods on a challenging dataset, enabling us to win the Facebook AI Image Similarity Challenge: Descriptor Track at NeurIPS'21.
Dataset
During the Facebook AI Image Similarity Challenge (ISC21), a new dataset for copy detection, DISC21 [9], was released. DISC21 is composed of 1 million reference images, 1 million training images, and 50,000 query images. A subset of the query images was derived from the reference images, while the rest were not. Both the query and reference sets contain a majority of "distractor" images that do not match. The goal of ISC21 was to reject the distractor images and identify which reference images were used to produce the query images.
Ground truth pairs between query images and the corresponding reference images were also provided. However, augmenting the query and reference images for training was prohibited by the competition rules. Therefore, participants were encouraged to use the training images for model training, as augmenting the training images was permitted for any usage.
Method
Data Augmentation
Our augmentation pipeline consists of crop, rotation, horizontal flip, vertical flip, padding, aspect ratio change, perspective transform, overlay onto a background image, overlay of text and emoji, changes in brightness, saturation, and grayscale, random erasing [24], blur, color palette with dithering, JPEG encoding, edge enhancement, pixelization, and pixel shuffling. The order in which the augmentations are applied is randomly shuffled on every call to generate more diversified examples. These augmentations were selected based on how the dataset for ISC21 was created, and their parameters were manually set, by eye-checking, to ranges as close as possible to the actual copied examples. Examples of images processed by our data augmentation pipeline are shown in Figure 2.
As observed, the processed examples are drastically changed from the original image and barely recognizable. The final evaluation set (the query set of phase 2) contained more difficult examples than the evaluation set provided during the competition period, so we believe such drastic data augmentations contributed to the final result.
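As a rough illustration of the shuffled-order composition described above, the following sketch uses generic torchvision transforms as stand-ins. The actual pipeline is built with AugLy [2] and hand-tuned parameter ranges, so the operations and values below are illustrative assumptions only.

import random
from torchvision import transforms

# Stand-in candidate transforms; the real pipeline covers the full
# list of Section 3.1 with manually tuned parameter ranges.
CANDIDATES = [
    transforms.RandomHorizontalFlip(p=1.0),
    transforms.RandomVerticalFlip(p=1.0),
    transforms.RandomRotation(degrees=30),
    transforms.RandomPerspective(distortion_scale=0.4, p=1.0),
    transforms.ColorJitter(brightness=0.6, saturation=0.6),
    transforms.RandomGrayscale(p=1.0),
    transforms.GaussianBlur(kernel_size=5),
    transforms.RandomResizedCrop(size=256, scale=(0.3, 1.0)),
]

def shuffled_augment(img, k=4):
    """Apply k randomly chosen transforms in a random order, so every
    call yields a differently composed distortion of the input image."""
    ops = random.sample(CANDIDATES, k)
    random.shuffle(ops)
    for op in ops:
        img = op(img)
    return img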
Training
For model training, we employ contrastive loss [4] with cross-batch memory [21]. Our initial experiments showed that this combination outperforms other metric learning losses, such as triplet loss [17,11,20] and AP loss [16]. By using the cross-batch memory, our model can train with more beneficial and diversified negative pairs.

Positive pairs are created from a single training image, as in self-supervised learning methods [10,3]. Specifically, the first sample is generated by applying our data augmentation pipeline described in Section 3.1, and the other is generated by applying standard data augmentations including resize, crop, and flip. For the negative pairs, all possible combinations are used except the positive pairs. The model training is conducted in multiple steps inspired by progressive learning [18]. As the training steps proceed, the input resolution and the magnitude of the data augmentation are increased. We increase the magnitude of the data augmentation by expanding the range of each transformation and increasing the probability with which transformations are applied.
The entire training procedure is as follows. In the first step, we start training with a small input resolution and weak data augmentation. In the next step, the model trained in the first step is fine-tuned with increased input resolution and stronger data augmentation. Next, we use both the training images and the reference images as negative pairs for training. Finally, positive pairs formed by the query images and the corresponding reference images are also used for training. In the last two steps, we do not perform data augmentation on the reference and query images, because this is prohibited by the rules. For more information on each step, refer to Table 1.
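As a minimal PyTorch sketch of this training signal, the following implements a margin-based contrastive loss against a FIFO cross-batch memory. The margins and memory size follow Section 4.1, but the class and function names, and the queue implementation, are our simplification rather than the authors' code.

import torch

class CrossBatchMemory:
    """FIFO queue of past embeddings with per-image pseudo-labels:
    two augmented views of the same source image share a label and
    form the only positive pair; everything else is negative."""
    def __init__(self, dim, size=20000):
        self.feats = torch.zeros(0, dim)
        self.labels = torch.zeros(0, dtype=torch.long)
        self.size = size

    def enqueue(self, feats, labels):
        self.feats = torch.cat([self.feats, feats.detach()])[-self.size:]
        self.labels = torch.cat([self.labels, labels])[-self.size:]

def contrastive_loss(feats, labels, memory, pos_margin=0.0, neg_margin=1.0):
    # Euclidean distances between the current batch and the memory bank.
    d = torch.cdist(feats, memory.feats)              # (B, M)
    pos = labels[:, None] == memory.labels[None, :]   # positive-pair mask
    pos_loss = (d - pos_margin).clamp(min=0)[pos].pow(2).sum()
    neg_loss = (neg_margin - d).clamp(min=0)[~pos].pow(2).sum()
    return (pos_loss + neg_loss) / feats.size(0)

# Per step: embed two augmented views of each image, compute the loss
# against the memory, then enqueue the fresh embeddings.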
Post-process
In the Descriptor Track of ISC21, we were required to submit image descriptors directly. Thus, some optimization approaches, such as similarity normalization [14], are not applicable. Moreover, since interaction between query images is prohibited by the competition rules, well-known re-ranking methods such as query expansion and diffusion [6,1,19,8] are also unavailable.
However, the use of the training set is allowed. The training set was designed as a statistical twin of the reference set and was carefully collected so that there is no overlap between the two. This means that samples from the training set are always negative (not copied) with respect to the query set, and training samples that are similar to a query sample can be regarded as hard negative examples. Exploiting this property, we propose a post-process in which the query sample is isolated from such examples by vector subtraction, named negative embedding subtraction.
Our negative embedding subtraction is simple but effective for obtaining discriminative representations in the feature space. Algorithm 1 illustrates the procedure, which has three hyperparameters: the number of iterations n, the top-k of the k-NN search k, and the subtraction factor β. We set n to 1, k to 10, and β to 0.35, respectively.
Algorithm 1: Negative embedding subtraction.
Input: target image descriptor x, negative image descriptor set X_neg, number of iterations n, top-k of k-NN search k, subtraction factor β
Output: processed image descriptor x
1: for i = 0 to n − 1 do
2:     search NN_k(x) ⊂ X_neg
3:     for x_neg ∈ NN_k(x) do
4:         x ← x − (β / k) · x_neg
5:     end for
6:     x ← x / ‖x‖_2
7: end for
8: return x
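A direct NumPy transcription of Algorithm 1 might look as follows; brute-force k-NN is used here for clarity, whereas the paper performs the search with faiss [13]:

import numpy as np

def negative_embedding_subtraction(x, X_neg, n=1, k=10, beta=0.35):
    """Algorithm 1: isolate descriptor x from its k nearest negative
    (training-set) descriptors by subtraction, then re-normalize.
    x: (d,) L2-normalized descriptor; X_neg: (N, d) negative descriptors."""
    for _ in range(n):
        # Brute-force k-NN by Euclidean distance (faiss in practice).
        idx = np.argsort(((X_neg - x) ** 2).sum(axis=1))[:k]
        x = x - (beta / k) * X_neg[idx].sum(axis=0)
        x = x / np.linalg.norm(x)
    return x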
Experiments and Results
Implementation Details
Details for model architecture and training settings basically follow [15,23].
We employ "tf_efficientnetv2_m_in21ft1k" from the timm [22] library as a backbone. The weights of the backbone were pre-trained on ImageNet-21K [7]. Model training is conducted using stochastic gradient descent with momentum set to 0.9. The dimension of the descriptor is set to 256, as specified by the competition rules. For the contrastive loss, we set 0.0 for the positive margin, 1.0 for the negative margin, and 20,000 for the memory size. For data augmentation, we use the AugLy [2] library. For nearest-neighbor search, we use the faiss [13] library.
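For reference, loading this backbone as a 256-d descriptor model could look as follows; the single linear projection head is our assumption, as the paper does not spell out the exact head design:

import timm
import torch.nn as nn

backbone = timm.create_model(
    "tf_efficientnetv2_m_in21ft1k",  # ImageNet-21K pre-trained weights
    pretrained=True,
    num_classes=0,  # drop the classifier; output pooled features
)
# Project pooled features to the 256-d descriptor required by the track.
model = nn.Sequential(backbone, nn.Linear(backbone.num_features, 256))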
Comparison of Each Step
We evaluate the performance of each training step and of the post-process using the private set of phase 1. Table 1 shows the evaluation results. Every step considerably improves the performance, particularly the step "Trained w/ GT". Training with the ground-truth pairs is likely highly beneficial, as these pairs contain manually edited images, which are difficult to replicate with automated data augmentation. Our proposed post-process further improves the performance by a large margin, in spite of its simplicity.
Competition Results
The final results of the competition are shown in Table 2. As the team with the top score did not share their entire solution, we ranked 1st among the participants holding prize eligibility.
In addition, we experiment with replacing the negative reference images with the training images in the last two training steps. This yields a score of 0.6297 with our post-process, only 0.0057 lower than the original score, suggesting that our copy detection solution does not overfit to the reference set.
Conclusion
We presented a strong pipeline for copy detection based on contrastive learning with a carefully designed data augmentation pipeline. Our work leverages recent approaches, and we propose a novel post-process method that brings significant improvements. Using our methods, we achieved 1st place in the Facebook AI Image Similarity Challenge: Descriptor Track.
Figure 1: Example of manipulation from the dataset for the Image Similarity Challenge 2021. The original image (top) is overlaid in a small part of the manipulated image (bottom), which hinders its identification in a large database. Our model successfully detected the copy and identified the original image in this example. Credit of images (Flickr user names): CrusinOn2Wheels (original image), bortescristian (background image used for manipulation).
Figure 2: Examples of images processed by our data augmentation pipeline. The image at the top left is the original image, and the others are processed images. An exhaustive list of the data augmentations is given in Section 3.1.
Table 1: Comparison of each step, including the post-processing step. We report µAP (micro-average precision) and Recall@P90 (recall at precision 90) on the private set of phase 1; higher is better. "Augmentation" is the magnitude of the data augmentation; an empty entry means no data augmentation was applied. "Trained w/ Reference" means using reference images as negative pairs for contrastive learning, and "Trained w/ GT" means using the ground-truth pairs of the public set of phase 1. "Post-process" is our proposed negative embedding subtraction. Each row builds on the previous row (except the first).

Input Resolution | Augmentation | Trained w/ Reference | Trained w/ GT | Post-process | µAP    | Recall@P90
256×256          | weak         |                      |               |              | 0.5831 | 0.4644
384×384          | intermediate |                      |               |              | 0.6231 | 0.5237
512×512          | strong       |                      |               |              | 0.6435 | 0.5662
512×512          |              | ✓                    | ✓             |              | 0.7557 | 0.6404
512×512          |              | ✓                    | ✓             | ✓            | 0.7743 | 0.6892
Table 2: Leaderboard with final results (only the top 5 teams are listed). Our results are in bold.

Team           | µAP    | Recall@P90
titanshield2   | 0.7418 | 0.7018
lyakaap (Ours) | 0.6354 | 0.5536
S-square       | 0.5905 | 0.5086
visionForce    | 0.5788 | 0.4886
forthedream2   | 0.5736 | 0.4980
[1] Relja Arandjelovic and Andrew Zisserman. Three things everyone should know to improve object retrieval. In CVPR, pages 2911-2918, 2012.
[2] Joanna Bitton and Zoë Papakipos. AugLy: A data augmentations library for audio, image, text, and video. https://github.com/facebookresearch/AugLy, 2021.
[3] Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. In CVPR, pages 15750-15758, 2021.
[4] Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with application to face verification. In CVPR, pages 539-546, 2005.
[5] Vincent Christlein, Christian Riess, Johannes Jordan, Corinna Riess, and Elli Angelopoulou. An evaluation of popular copy-move forgery detection approaches. IEEE Transactions on Information Forensics and Security, 7:1841-1854, 2012.
[6] Ondřej Chum, James Philbin, Josef Sivic, Michael Isard, and Andrew Zisserman. Total recall: Automatic query expansion with a generative feature model for object retrieval. In ICCV, pages 1-8, 2007.
[7] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, pages 248-255, 2009.
[8] Michael Donoser and Horst Bischof. Diffusion processes for retrieval revisited. In CVPR, pages 1320-1327, 2013.
[9] Matthijs Douze, Giorgos Tolias, Zoë Papakipos, Ed Pizzi, Lowik Chanussot, Filip Radenovic, Tomas Jenicek, Maxim Maximov, Laura Leal-Taixé, Ismail Elezi, Ondřej Chum, and Cristian Canton Ferrer. The 2021 image similarity dataset and challenge. arXiv e-prints, 2021.
[10] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross B. Girshick. Momentum contrast for unsupervised visual representation learning. In CVPR, pages 9726-9735, 2020.
[11] Elad Hoffer and Nir Ailon. Deep metric learning using triplet network. In ICLRW, 2015.
[12] Yugang Jiang, Yudong Jiang, and Jiajun Wang. VCDB: A large-scale database for partial copy detection in videos. In ECCV, 2014.
[13] Jeff Johnson, Matthijs Douze, and Hervé Jégou. Billion-scale similarity search with GPUs. arXiv preprint arXiv:1702.08734, 2017.
[14] Hervé Jégou, Matthijs Douze, and Cordelia Schmid. Exploiting descriptor distances for precise image search. 2011.
[15] Kohei Ozaki and Shuhei Yokoo. Large-scale landmark retrieval/recognition under a noisy and diverse dataset. arXiv preprint arXiv:1906.04087, 2019.
[16] Jérome Revaud, Jon Almazan, Rafael Sampaio de Rezende, and Cesar Roberto De Souza. Learning with average precision: Training image retrieval with a listwise loss. In ICCV, 2019.
[17] Florian Schroff, Dmitry Kalenichenko, and James Philbin. FaceNet: A unified embedding for face recognition and clustering. In CVPR, pages 926-935, 2015.
[18] Mingxing Tan and Quoc V. Le. EfficientNetV2: Smaller models and faster training. In ICML, pages 10096-10106, 2021.
[19] Giorgos Tolias and Hervé Jégou. Visual query expansion with or without geometry: Refining local descriptors by feature aggregation. Pattern Recognition, 47(10):3466-3476, 2014.
[20] Jiang Wang, Yang Song, Thomas Leung, Chuck Rosenberg, Jingbin Wang, James Philbin, Bo Chen, and Ying Wu. Learning fine-grained image similarity with deep ranking. In CVPR, pages 1386-1393, 2014.
[21] Xun Wang, H. Zhang, Weilin Huang, and Matthew R. Scott. Cross-batch memory for embedding learning. In CVPR, pages 6387-6396, 2020.
[22] Ross Wightman. PyTorch image models. https://github.com/rwightman/pytorch-image-models, 2019.
[23] Shuhei Yokoo, Kohei Ozaki, Edgar Simo-Serra, and Satoshi Iizuka. Two-stage discriminative re-ranking for large-scale landmark retrieval. In CVPR Workshops, pages 4363-4370, 2020.
[24] Zhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, and Yi Yang. Random erasing data augmentation. arXiv preprint arXiv:1708.04896, 2017.
| [
"https://github.com/rwightman/"
]
|
[
"Large-scale Distance Metric Learning with Uncertainty",
"Large-scale Distance Metric Learning with Uncertainty"
]
| [
"Qi Qian [email protected] \nAlibaba Group\n98004BellevueWAUSA\n",
"Jiasheng Tang [email protected] \nAlibaba Group\n98004BellevueWAUSA\n",
"Hao Li \nAlibaba Group\n98004BellevueWAUSA\n",
"Shenghuo Zhu [email protected] \nAlibaba Group\n98004BellevueWAUSA\n",
"Rong Jin [email protected] \nAlibaba Group\n98004BellevueWAUSA\n"
]
| [
"Alibaba Group\n98004BellevueWAUSA",
"Alibaba Group\n98004BellevueWAUSA",
"Alibaba Group\n98004BellevueWAUSA",
"Alibaba Group\n98004BellevueWAUSA",
"Alibaba Group\n98004BellevueWAUSA"
]
| []
| Distance metric learning (DML) has been studied extensively in the past decades for its superior performance with distance-based algorithms. Most of the existing methods propose to learn a distance metric with pairwise or triplet constraints. However, the number of constraints is quadratic or even cubic in the number of the original examples, which makes it challenging for DML to handle the large-scale data set. Besides, the real-world data may contain various uncertainty, especially for the image data. The uncertainty can mislead the learning procedure and cause the performance degradation. By investigating the image data, we find that the original data can be observed from a small set of clean latent examples with different distortions. In this work, we propose the margin preserving metric learning framework to learn the distance metric and latent examples simultaneously. By leveraging the ideal properties of latent examples, the training efficiency can be improved significantly while the learned metric also becomes robust to the uncertainty in the original data. Furthermore, we can show that the metric is learned from latent examples only, but it can preserve the large margin property even for the original data. The empirical study on the benchmark image data sets demonstrates the efficacy and efficiency of the proposed method. | 10.1109/cvpr.2018.00891 | [
"https://arxiv.org/pdf/1805.10384v1.pdf"
]
| 44,121,346 | 1805.10384 | 1468b0a8be83ba784f375f87fa3e4a7185e057ed |
Large-scale Distance Metric Learning with Uncertainty
Qi Qian [email protected]
Alibaba Group
98004BellevueWAUSA
Jiasheng Tang [email protected]
Alibaba Group
98004BellevueWAUSA
Hao Li
Alibaba Group
98004BellevueWAUSA
Shenghuo Zhu [email protected]
Alibaba Group
98004BellevueWAUSA
Rong Jin [email protected]
Alibaba Group
98004BellevueWAUSA
Large-scale Distance Metric Learning with Uncertainty
Distance metric learning (DML) has been studied extensively in the past decades for its superior performance with distance-based algorithms. Most of the existing methods propose to learn a distance metric with pairwise or triplet constraints. However, the number of constraints is quadratic or even cubic in the number of the original examples, which makes it challenging for DML to handle the large-scale data set. Besides, the real-world data may contain various uncertainty, especially for the image data. The uncertainty can mislead the learning procedure and cause the performance degradation. By investigating the image data, we find that the original data can be observed from a small set of clean latent examples with different distortions. In this work, we propose the margin preserving metric learning framework to learn the distance metric and latent examples simultaneously. By leveraging the ideal properties of latent examples, the training efficiency can be improved significantly while the learned metric also becomes robust to the uncertainty in the original data. Furthermore, we can show that the metric is learned from latent examples only, but it can preserve the large margin property even for the original data. The empirical study on the benchmark image data sets demonstrates the efficacy and efficiency of the proposed method.
Introduction
Distance metric learning (DML) aims to learn a distance metric where examples from the same class are well separated from examples of different classes. It is an essential task for distance-based algorithms, such as k-means clustering [18], k-nearest neighbor classification [17] and information retrieval [2]. Given a distance metric M , the squared Mahalanobis distance between examples x i and x j can be computed as
$$D^2_M(x_i, x_j) = (x_i - x_j)^\top M (x_i - x_j)$$
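As a quick numerical illustration (ours, not from the paper), any PSD metric factors as $M = LL^\top$, so the squared Mahalanobis distance is the squared Euclidean distance after the linear map $x \mapsto L^\top x$:

import numpy as np

rng = np.random.default_rng(0)
L = rng.normal(size=(3, 3))
M = L @ L.T                      # any PSD metric factors as M = L L^T

xi, xj = rng.normal(size=3), rng.normal(size=3)
diff = xi - xj
d2 = diff @ M @ diff                   # squared Mahalanobis distance
d2_equiv = np.sum((L.T @ diff) ** 2)   # same value via the linear map
assert np.isclose(d2, d2_equiv)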
Most existing DML methods propose to learn the metric by minimizing the number of violations in a set of pairwise or triplet constraints. Given a set of pairwise constraints, DML tries to learn a metric such that the distances between examples from the same class are sufficiently small (e.g., smaller than a predefined threshold) while those between different ones are large enough [3,18]. Different from pairwise constraints, each triplet constraint consists of three examples (x_i, x_j, x_k), where x_i and x_j have the same label and x_k is from a different class. An ideal metric pushes x_k away from x_i and x_j by a large margin [17]. Learning with triplet constraints optimizes the local positions of examples and is more flexible for real-world applications, where defining appropriate thresholds for pairwise constraints is hard. In this work, we focus on DML with triplet constraints.
Optimizing the metric with a set of triplet constraints is challenging since the number of triplet constraints can be up to O(n³), where n is the number of original training examples. This makes DML computationally intractable for large-scale problems. Many strategies have been developed to deal with this challenge, and most of them fall into two categories: learning by stochastic gradient descent (SGD) and learning with an active set. With the SGD strategy, DML methods can sample just one constraint or a mini-batch of constraints at each iteration to obtain an unbiased estimate of the full gradient, avoiding the computation of the gradient over the whole set [2,10]. Other methods learn the metric with a set of active constraints (i.e., those violated by the current metric), whose size can be significantly smaller than that of the original set [17]; this is a conventional strategy applied by cutting plane methods [1]. Both strategies can alleviate the large-scale challenge but have inherent drawbacks. Approaches based on SGD have to search through the whole set of triplet constraints, which results in slow convergence, especially when the number of active constraints is small. Methods relying on the active set, on the other hand, have to identify the set at each iteration; unfortunately, this operation requires computing pairwise distances with the current metric, whose O(n²) cost is too expensive for large-scale problems.
Besides the challenge posed by the size of the data set, uncertainty in the data is also an issue, especially for image data, where the uncertainty can come from differences between individual examples and from distortions, e.g., pose, illumination, and noise. Directly learning with the original data leads to poor generalization, since the metric tends to overfit the uncertainty in the data. By further investigating the image data, we find that most original images can be viewed as observations of a much smaller set of clean latent examples under different distortions. The phenomenon is illustrated in Fig. 5. This observation inspires us to learn the metric with latent examples in lieu of the original data. The challenge is that the latent examples are unknown, and only images with uncertainties are available.
In this work, we propose a framework to learn the distance metric and the latent examples simultaneously. It sufficiently explores the properties of latent examples to address the aforementioned challenges. First, due to the small number of latent examples, the strategy of identifying the active set becomes affordable when learning the metric; we adopt it to accelerate the learning procedure by avoiding attempts on inactive constraints. Additionally, compared with the original data, the uncertainty in latent examples decreases significantly. Consequently, the metric learned directly from latent examples can focus on the nature of the data rather than the uncertainty in it. To further improve the robustness, we adopt the large margin property that latent examples from different classes should be pushed away with a data-dependent margin. Fig. 1 illustrates that an appropriate margin for latent examples can also preserve a large margin for the original data. We conduct an empirical study on benchmark image data sets, including the challenging ImageNet data set, to demonstrate the efficacy and efficiency of the proposed method.
The rest of the paper is organized as follows: Section 2 summarizes the related work of DML. Section 3 describes the details of the proposed method and Section 4 summarizes the theoretical analysis. Section 5 compares the proposed method to the conventional DML methods on the benchmark image data sets. Finally, Section 6 concludes this work with future directions.
Related Work
Many DML methods have been proposed in the past decades [3,17,18] and comprehensive surveys can be found in [7,19]. The representative methods include Xing's method [18], ITML [3] and LMNN [17]. ITML learns a metric according to pairwise constraints, where the distances between pairs from the same class should be smaller than a predefined threshold and the distances between pairs from different classes should be larger than another predefined threshold. LMNN is developed with triplet constraints and a metric is learned to make sure that pairs from the same class are separated from the examples of different classes with a large margin. Compared with pairwise constraints, triplet constraints are more flexible to depict the local geometry.
To handle the large number of constraints, some methods adopt SGD or online learning to sample one constraint or a mini-batch of constraints at each iteration [2,10]. OASIS [2] randomly samples one triplet constraint at each iteration and computes the unbiased gradient accordingly. When the size of the active set is small, these methods require an extremely large number of iterations to improve the model. Other methods try to explore the concept of the active set. LMNN [17] learns the metric effectively at each iteration by collecting an active set consisting of constraints violated by the current metric within the k-nearest neighbors of each example. However, it requires O(n²) operations to obtain the appropriate active set.
Besides research on conventional DML, deep metric learning has attracted much attention recently [9,13,15,16]. These studies also indicate that sampling active triplets is essential for accelerating convergence. FaceNet [15] keeps a large mini-batch and searches for hard constraints within it. LiftedStruct [16] generates the mini-batch with randomly selected positive examples and the corresponding hard negative examples. Proxy-NCA [9] adopts proxy examples to reduce the number of triplet constraints: once an anchor example is given, the similar and dissimilar examples are searched within the set of proxies. In this work, we propose to learn the metric only with latent examples, which dramatically reduces the computational cost of obtaining the active set. Besides, the triangle inequality does not hold for the squared distance, which makes our analysis significantly different from the existing work.
Margin Preserving Metric Learning
Given a training set $\{(x_i, y_i)\,|\,i = 1, \cdots, n\}$, where $x_i \in \mathbb{R}^d$ is an example and $y_i$ is the corresponding label, DML aims to learn a good distance metric such that
$$\forall x_i, x_j, x_k \qquad D^2_M(x_i, x_k) - D^2_M(x_i, x_j) \ge 1$$
where $x_i$ and $x_j$ are from the same class and $x_k$ is from a different one. Given the distance metric $M \in S^{d\times d}_+$, the squared distance is defined as
$$D^2_M(x_i, x_j) = (x_i - x_j)^\top M (x_i - x_j)$$
where $S^{d\times d}_+$ denotes the set of $d \times d$ positive semi-definite (PSD) matrices.
For a large-scale image data set, we assume that each observed example comes from a latent example with a certain zero-mean distortion, i.e.,
$$\forall i, \qquad \mathbb{E}[x_i] = z_{o:\,f(i)=o}$$
where $f(\cdot)$ maps each original example to its corresponding latent example.
Then, we consider the expected distance [20] between observed data, and the objective is to learn a metric such that
$$\forall x_i, x_j, x_k \qquad \mathbb{E}[D^2_M(x_i, x_k)] - \mathbb{E}[D^2_M(x_i, x_j)] \ge 1 \qquad (1)$$
Let $z_o$, $z_p$ and $z_q$ denote the latent examples of $x_i$, $x_j$ and $x_k$, respectively. For the distance between examples from the same class, we have
$$\begin{aligned}
\mathbb{E}[D^2_M(x_i, x_j)] &= \mathbb{E}[(x_i - z_o + z_o)^\top M (x_i - z_o + z_o)] + \mathbb{E}[(x_j - z_p + z_p)^\top M (x_j - z_p + z_p)] - \mathbb{E}[2 x_i^\top M x_j]\\
&= D^2_M(z_o, z_p) + \mathbb{E}[D^2_M(x_i, z_o)] + \mathbb{E}[D^2_M(x_j, z_p)]\\
&= D^2_M(z_o, z_p) + 2\,\mathbb{E}[D^2_M(x_i, z_o)]
\end{aligned} \qquad (2)$$
The last equality is due to the fact that $x_i$ and $x_j$ are i.i.d., since they are from the same class. Applying the same analysis to the dissimilar pair, we have
$$\begin{aligned}
\mathbb{E}[D^2_M(x_i, x_k)] &= D^2_M(z_o, z_q) + \mathbb{E}[D^2_M(x_i, z_o)] + \mathbb{E}[D^2_M(x_k, z_q)]\\
&\ge D^2_M(z_o, z_q) + \mathbb{E}[D^2_M(x_i, z_o)]
\end{aligned} \qquad (3)$$
The inequality holds because $M$ is a PSD matrix. Combining Eqns. 2 and 3, we find that the difference between the distances in the original triplet can be lower bounded by those in the triplet consisting of latent examples:
$$\mathbb{E}[D^2_M(x_i, x_k)] - \mathbb{E}[D^2_M(x_i, x_j)] \ge D^2_M(z_o, z_q) - D^2_M(z_o, z_p) - \mathbb{E}[D^2_M(x_i, z_o)]$$
Therefore, the metric can be learned with constraints defined on the latent examples such that
$$\forall z_o, z_p, z_q \qquad D^2_M(z_o, z_q) - D^2_M(z_o, z_p) \ge 1 + \mathbb{E}[D^2_M(x_i, z_o)]$$
Once the metric is obtained, the margin for the expected distances between original data (i.e., as in Eqn. 1) is also guaranteed. Compared with the original constraints, the margin between latent examples is increased by the factor $\mathbb{E}[D^2_M(x_i, z_o)]$. This term is the expected distance between the original data and the corresponding latent example; it means that the tighter a local cluster is, the less the margin needs to be increased. Furthermore, each class takes a different margin, which depends on the distribution of the original data and makes this formulation more flexible than a global margin.
With the set of triplets $\{z^t_o, z^t_p, z^t_q\}$, the optimization problem can be written as
$$\min_{M \in S^{d\times d}_+,\ \|M\|_F \le \delta,\ z \in \mathbb{R}^{d\times m}} \ \mathcal{L}(M, z) = \sum_t \ell(z^t_o, z^t_p, z^t_q; M)$$
where $m \ll n$ is the number of latent examples. We add a constraint on the Frobenius norm of the learned metric to prevent overfitting. $\ell(\cdot)$ is the loss function, and the hinge loss is applied in this work:
$$\ell(z^t_o, z^t_p, z^t_q; M) = \bigl[\,1 + \mathbb{E}[D^2_M(x^t_i, z^t_o)] - \bigl(D^2_M(z^t_o, z^t_q) - D^2_M(z^t_o, z^t_p)\bigr)\,\bigr]_+$$
This problem is hard to solve since both the metric and the latent examples are variables to be optimized. Therefore, we solve it in an alternating manner; the detailed steps are given below.
Update z with Upper Bound
When fixing $M^{k-1}$, the subproblem at the $k$-th iteration becomes
$$\min_z \mathcal{L}(M^{k-1}, z) = \sum_t \Bigl[\,1 + \underbrace{\mathbb{E}[D^2_{M^{k-1}}(x^t_i, z^t_o)]}_{a} - \underbrace{\bigl(D^2_{M^{k-1}}(z^t_o, z^t_q) - D^2_{M^{k-1}}(z^t_o, z^t_p)\bigr)}_{b}\,\Bigr]_+ \qquad (4)$$
The variable $z$ appears both in the margin term $a$ and in the triplet-difference term $b$, which makes the problem hard to optimize directly. Our strategy is to find an appropriate upper bound of the original problem and solve the simpler problem instead.
Theorem 1. The function $\mathcal{L}(M^{k-1}, z)$ can be upper bounded by the series of functions $\sum_r F_r(z)$. For the $r$-th class, we have
$$F_r(z) = c_1\, \mathbb{E}[D^2_{M^{k-1}}(x_i, z_o)] + c_2 + c_3 \sum_o D^2_{M^{k-1}}(z_o, z^{k-1}_o)$$
where $c_1$, $c_2$ and $c_3$ are constants and $\sum_r F_r(z^{k-1}) = \mathcal{L}(M^{k-1}, z^{k-1})$.
The detailed proof can be found in Section 4. After removing the constant terms and rearranging the coefficients, optimizing $F_r(z)$ is equivalent to optimizing the following problem:
$$\min_{z \in \mathbb{R}^{d\times m_r},\ \mu:\,\mu_{i,o}\in\{0,1\},\ \sum_o \mu_{i,o}=1} \tilde{F}_r(z) = \sum_{i:\,y(i)=r} \sum_o \mu_{i,o}\, D^2_{M^{k-1}}(x_i, z_o) + \gamma \sum_o D^2_{M^{k-1}}(z_o, z^{k-1}_o) \qquad (5)$$
where $\mu$ denotes the membership that assigns a latent example to each original example. So far, we have shown that the original objective $\mathcal{L}(M^{k-1}, z)$ can be upper bounded by $\sum_r F_r(z)$. Minimizing the upper bound is similar to k-means but with the distance defined by the metric $M^{k-1}$, so we can solve it with the standard EM algorithm.
When fixing $\mu$, the latent examples can be updated with the closed-form solution
$$\forall o, \qquad z_o = \frac{1}{\sum_i \mu_{i,o} + \gamma}\Bigl(\sum_i \mu_{i,o}\, x_i + \gamma\, z^{k-1}_o\Bigr) \qquad (6)$$
When fixing $z$, $\mu$ simply assigns each original example to its nearest latent example, with the distance defined by the metric $M^{k-1}$:
$$\forall i, \qquad \mu_{i,o} = \begin{cases} 1 & o = \arg\min_{o'} D^2_{M^{k-1}}(x_i, z_{o'}) \\ 0 & \text{otherwise} \end{cases} \qquad (7)$$
Alg. 1 summarizes the method for solving $\tilde F_r(z)$.
Algorithm 1: Algorithm of Updating z
Input: data set {X, Y}, z^{k−1}, M^{k−1}, γ and S
Initialize z = z^{k−1}
for s = 1 to S do
    Fix z and obtain the assignment µ as in Eqn. 7
    Fix µ and update z as in Eqn. 6
end for
return z^k = z
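The following NumPy sketch implements one run of Alg. 1 for a single class (Eqns. 6 and 7); the function and variable names are ours:

import numpy as np

def update_latent_examples(X, z, M, gamma=1.0, S=10):
    """Metric-aware k-means-style updates for one class.
    X: (n, d) original examples; z: (m, d) latent examples from the
    previous outer iteration (z^{k-1}); M: (d, d) PSD metric M^{k-1}."""
    z_prev = z.copy()
    for _ in range(S):
        # Eqn. 7: assign each example to its nearest latent example
        # under the distance d(x, z)^2 = (x - z)^T M (x - z).
        diff = X[:, None, :] - z[None, :, :]             # (n, m, d)
        d2 = np.einsum('nmd,de,nme->nm', diff, M, diff)  # (n, m)
        assign = d2.argmin(axis=1)
        # Eqn. 6: closed-form update, shrunk toward z^{k-1} by gamma.
        for o in range(z.shape[0]):
            mask = assign == o
            z[o] = (X[mask].sum(axis=0) + gamma * z_prev[o]) / (mask.sum() + gamma)
    return z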
Update M with Upper Bound
When fixing $z^k$ at the $k$-th iteration, the subproblem becomes
$$\min_{M \in S^{d\times d}_+} \mathcal{L}(M, z^k) = \sum_t \Bigl[\,1 + \underbrace{\mathbb{E}[D^2_M(x^t_i, z^t_o)]}_{a} - \underbrace{\bigl(D^2_M(z^t_o, z^t_q) - D^2_M(z^t_o, z^t_p)\bigr)}_{b}\,\Bigr]_+ \qquad (8)$$
where $M$ also appears in multiple terms. With a similar procedure, an upper bound can be found to make the optimization simpler:
$$H(M) = \frac{\lambda}{2}\|M - M^{k-1}\|_F^2 + \sum_t \Bigl[\,1 + \mathbb{E}[D^2_{M^{k-1}}(x^t_i, z^t_o)] - \bigl(D^2_M(z^t_o, z^t_q) - D^2_M(z^t_o, z^t_p)\bigr)\,\Bigr]_+$$
where $\lambda$ is a constant and $H(M^{k-1}) = \mathcal{L}(M^{k-1}, z^k)$.
Minimizing $H(M)$ is a standard DML problem. Since the number of latent examples $z^k$ is small, many existing DML methods can handle it well. In this work, we solve the problem by SGD but sample one epoch of active constraints at each stage. The active constraints are the triplets of $z^k$ that incur a hinge loss under the distance defined by $M^{k-1}$. This strategy enjoys the efficiency of SGD and the efficacy of learning with the active set. To further improve efficiency, the one-projection paradigm is adopted to avoid the expensive PSD projection, which costs $O(d^3)$: the PSD projection is performed once at the end of the learning algorithm, which has been shown to be effective in many applications [2,11]. Finally, since the problem is strongly convex, we apply the $\alpha$-suffix averaging strategy, which averages the solutions over the last iterations, to obtain the optimal convergence rate [12]. The complete approach for obtaining $M^k$ is shown in Alg. 2.
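For concreteness, a rough NumPy sketch of one run of Alg. 2 follows. The hinge margins, Frobenius-ball check, suffix averaging, and single final PSD projection follow the text, while the data structures (triplets as index tuples, a precomputed margins array) and the explicit gradient expression are our own illustrative choices:

import numpy as np

def update_metric(z, triplets, margins, M_prev, delta, lam, S):
    """SGD on H(M). triplets: list of (o, p, q) index tuples into the
    latent examples z; margins[t] = 1 + E[D^2_{M^{k-1}}(x_i, z_o)]."""
    M = M_prev.copy()
    avg, cnt = np.zeros_like(M), 0
    for s in range(1, S + 1):
        t = np.random.randint(len(triplets))
        o, p, q = triplets[t]
        dp, dq = z[o] - z[p], z[o] - z[q]
        g = lam * (M - M_prev)                 # gradient of the prox term
        # Hinge is active iff margin_t + d(z_o,z_p) - d(z_o,z_q) > 0.
        if margins[t] + dp @ M @ dp - dq @ M @ dq > 0:
            g = g + np.outer(dp, dp) - np.outer(dq, dq)
        M -= g / (lam * s)
        fro = np.linalg.norm(M)                # Frobenius-ball projection
        if fro > delta:
            M *= delta / fro
        if s > S // 2:                         # alpha-suffix averaging
            avg += M
            cnt += 1
    M = avg / cnt
    w, V = np.linalg.eigh(M)                   # one PSD projection at the end
    return (V * np.clip(w, 0, None)) @ V.T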
Theoretical Analysis
Proof of Theorem 1
Proof. First, for the distance of the dissimilar pair in term $b$ of Eqn. 4, we have
$$\begin{aligned}
D^2_M(z_o, z_q) &= D^2_M(z^{k-1}_o, z^{k-1}_q) + D^2_M(z_o, z^{k-1}_o) + 2(z_o - z^{k-1}_o)^\top M (z^{k-1}_o - z^{k-1}_q)\\
&\quad + D^2_M(z_q, z^{k-1}_q) - 2(z_q - z^{k-1}_q)^\top M (z^{k-1}_o - z^{k-1}_q) - 2(z_o - z^{k-1}_o)^\top M (z_q - z^{k-1}_q)\\
&\ge D^2_M(z^{k-1}_o, z^{k-1}_q) - 2 D_M(z_o, z^{k-1}_o)\, D_M(z^{k-1}_o, z^{k-1}_q) - 2 D_M(z_q, z^{k-1}_q)\, D_M(z^{k-1}_o, z^{k-1}_q)
\end{aligned}$$
where $z^{k-1}$ are the latent examples from the last iteration, and we let $M$ denote $M^{k-1}$ in this proof for simplicity. The inequality holds because $M$ is a PSD matrix and can be decomposed as $M = LL^\top$; the bound is then obtained by applying the Cauchy-Schwarz inequality. With the assumption that, for all $o$ and $q$, $D_M(z^{k-1}_o, z^{k-1}_q)$ is bounded by a constant $\frac{c}{2}$, the inequality can be simplified as
$$D^2_M(z_o, z_q) \ge D^2_M(z^{k-1}_o, z^{k-1}_q) - c\,D^2_M(z_o, z^{k-1}_o) - c\,D^2_M(z_q, z^{k-1}_q) \qquad (9)$$
The assumption is easy to verify since
$$D^2_M(z^{k-1}_o, z^{k-1}_q) \le \|z^{k-1}_o - z^{k-1}_q\|_2^2\, \|M^{k-1}\|_2$$
Note that $\|M^{k-1}\|_2 \le \|M^{k-1}\|_F \le \delta$ and $z$ lies in the convex hull of the original data, so the constant $c$ can be set as $c = 8\delta \max_i \|x_i\|_2^2$. With a similar procedure, we have the bound for the distance of the similar pair:
$$D^2_M(z_o, z_p) \le D^2_M(z^{k-1}_o, z^{k-1}_p) + (c+2)\,D^2_M(z_o, z^{k-1}_o) + (c+2)\,D^2_M(z_p, z^{k-1}_p) \qquad (10)$$
Taking Eqns. 9 and 10 back into the original function $\mathcal{L}(M^{k-1}, z)$ and using the properties of the hinge loss, the original function can be upper bounded by
$$G(z) = \sum_t \bigl[\,1 + \mathbb{E}[D^2_M(x^t_i, z^t_o)] - \bigl(D^2_M(z^{t:k-1}_o, z^{t:k-1}_q) - D^2_M(z^{t:k-1}_o, z^{t:k-1}_p)\bigr)\,\bigr]_+ + c_3 \sum_{o=1}^m D^2_M(z_o, z^{k-1}_o)$$
where $c_3 = O(Tc)$ is a constant.
By investigating the structure of this problem, we find that each class is independent in the optimization, and the subproblem for the $r$-th class can be written as
$$\min_{z \in \mathbb{R}^{d\times m_r}} G_r(z) = \sum_{t:\,y(z^t_o)=r} \bigl[\,\mathbb{E}[D^2_M(x_i, z_o)] + c_t\,\bigr]_+ + c_3 \sum_{o:\,y(z_o)=r} D^2_M(z_o, z^{k-1}_o)$$
where $m_r$ is the number of latent examples for the $r$-th class and $c_t$ is a constant:
$$c_t = 1 - \bigl(D^2_M(z^{t:k-1}_o, z^{t:k-1}_q) - D^2_M(z^{t:k-1}_o, z^{t:k-1}_p)\bigr)$$
Next, we try to upper bound the hinge loss in $G_r(z)$ with a linear function on the interval
$$\bigl[\,c_t,\ \mathbb{E}[D^2_M(x_i, z^{k-1}_o)] + c_t\,\bigr],$$
in which the hinge loss incurred by the optimal solution $z^k$ is guaranteed to lie. Let $\alpha = \mathbb{E}[D^2_M(x_i, z^{k-1}_o)]$, which is the expected distance between the original data of the $r$-th class and the corresponding latent examples from the last iteration, and let $\beta$ be a constant sufficiently large that
$$\beta \ge -\min_t c_t$$
Then, for each active hinge loss (i.e., $\alpha + c_t > 0$), if
$$\mathbb{E}[D^2_M(x_i, z_o)] \le \alpha \qquad (11)$$
we have
$$\bigl[\,\mathbb{E}[D^2_M(x_i, z_o)] + c_t\,\bigr]_+ \le \frac{\alpha + c_t}{\alpha + c_t + \beta}\,\bigl(\mathbb{E}[D^2_M(x_i, z_o)] + c_t + \beta\bigr)$$
Fig. 2 illustrates the linear function that bounds the hinge loss; the proof is straightforward. We will show later that the condition in Eqn. 11 is satisfied throughout the algorithm.
With the upper bound on the hinge loss, $G_r(z)$ can be bounded by
$$F_r(z) = c_1\, \mathbb{E}[D^2_M(x_i, z_o)] + c_2 + c_3 \sum_o D^2_M(z_o, z^{k-1}_o)$$
where
$$c_1 = \sum_{t:\,y(z^t_o)=r} \frac{\alpha + c_t}{\alpha + c_t + \beta}\, \mathbb{I}(\alpha + c_t) \qquad \text{and} \qquad c_2 = \sum_{t:\,y(z^t_o)=r} \frac{\alpha + c_t}{\alpha + c_t + \beta}\,(c_t + \beta)\, \mathbb{I}(\alpha + c_t)$$
and $\mathbb{I}(\cdot)$ is the indicator function: $\mathbb{I}(\nu) = 1$ if $\nu > 0$ and $0$ otherwise.
Finally, we check the condition in Eqn. 11. Let $z^k$ denote the latent examples obtained by optimizing $\tilde F(z)$ with Alg. 1. Since we use $z^{k-1}$ as the starting point to optimize $\tilde F_r(z)$, it is obvious that $\tilde F_r(z^k) \le \tilde F_r(z^{k-1})$. At the same time, we have
$$\sum_o D^2_M(z^k_o, z^{k-1}_o) \ge \sum_o D^2_M(z^{k-1}_o, z^{k-1}_o) = 0$$
Combining these inequalities, we observe that Eqn. 11 is satisfied.
Proof of Theorem 2
Proof. For the term $a$ in Eqn. 8, we have
$$\begin{aligned}
\mathbb{E}[D^2_M(x_i, z_o)] &= \mathbb{E}\bigl[D^2_{M^{k-1}}(x_i, z_o) + (x_i - z_o)^\top (M - M^{k-1})(x_i - z_o)\bigr]\\
&\le \mathbb{E}[D^2_{M^{k-1}}(x_i, z_o)] + \max_i \|x_i - z_o\|_2^2\, \|M - M^{k-1}\|_F\\
&\le \mathbb{E}[D^2_{M^{k-1}}(x_i, z_o)] + \tilde c\, \|M - M^{k-1}\|_F^2
\end{aligned}$$
where we assume that $\|M - M^{k-1}\|_F$ is sufficiently large, and $\tilde c$ is a constant with $\max_i \|x_i - z_o\|_2^2 \le \tilde c$, which can be set as $\tilde c = 4 \max_i \|x_i\|_2^2$. Therefore, the original function $\mathcal{L}(M, z^k)$ can be upper bounded by
$$H(M) = \frac{\lambda}{2}\|M - M^{k-1}\|_F^2 + \sum_t \Bigl[\,1 + \mathbb{E}[D^2_{M^{k-1}}(x^t_i, z^t_o)] - \bigl(D^2_M(z^t_o, z^t_q) - D^2_M(z^t_o, z^t_p)\bigr)\,\Bigr]_+$$
where $\lambda = O(T\tilde c)$.
Proof of Theorem 3
Proof. When fixing $M^{k-1}$ at the $k$-th iteration, we have
$$\mathcal{L}(M^{k-1}, z^k) \le \sum_r G_r(z^k) \le \sum_r F_r(z^k) \le \sum_r F_r(z^{k-1}) = \mathcal{L}(M^{k-1}, z^{k-1})$$
When fixing $z^k$, we have
$$\mathcal{L}(M^k, z^k) \le H(M^k) \le H(M^{k-1}) = \mathcal{L}(M^{k-1}, z^k)$$
Therefore, after each iteration, we have
$$\mathcal{L}(M^k, z^k) \le \mathcal{L}(M^{k-1}, z^{k-1})$$
Since the value of $\mathcal{L}(\cdot)$ is bounded, the sequence converges after a finite number of iterations.
Experiments
We conduct an empirical study on four benchmark image data sets. A 3-nearest-neighbor (3-NN) classifier is applied to verify the efficacy of the metrics learned by the different methods. The methods in the comparison are summarized as follows.
• Euclid: 3-NN with Euclidean distance.
• LMNN [17]: the state-of-the-art DML method that identifies a set of active triplets with the current metric at each iteration. The active triplets are searched within 3-nearest neighbors for each example.
• OASIS [2]: an online DML method that receives one random triplet at each iteration. It only updates the metric when the triplet constraint is active.
• HR-SGD [10]: one of the most efficient DML methods with SGD. We adopt the version that randomly samples a mini-batch of triplets at each iteration. After sampling, a Bernoulli random variable is generated to decide whether to update the current metric. With the PSD projection, it guarantees that the learned metric stays in the PSD cone at each iteration. The parameters of OASIS, HR-SGD and MaPML are searched in {10^i : i = −3, · · · , 3}. The mini-batch size of HR-SGD is set to 10 as suggested [10]. To train the models sufficiently, the number of iterations for LMNN is set to 10^3, while the number of randomly sampled triplets is 10^5 for OASIS and HR-SGD. The number of iterations for MaPML is set to K = 10, while the maximal number of iterations for solving M^k in the subproblem is set to S = 10^4, which yields roughly the same number of triplets as for OASIS and HR-SGD. All experiments are run on a server with 96 GB memory and 2 Intel Xeon E5-2630 CPUs. Average results with standard deviations over 5 trials are reported.
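Concretely, evaluating a learned PSD metric with a 3-NN classifier reduces to Euclidean 3-NN in a linearly transformed space; a minimal scikit-learn sketch (our illustration) follows:

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def knn_with_metric(M, X_train, y_train, X_test):
    """3-NN under d_M(x, y)^2 = (x - y)^T M (x - y): factor M = L L^T
    and run Euclidean k-NN on the mapped data x -> L^T x."""
    w, V = np.linalg.eigh(M)                  # M is PSD
    L = V * np.sqrt(np.clip(w, 0, None))      # M = L @ L.T
    clf = KNeighborsClassifier(n_neighbors=3)
    clf.fit(X_train @ L, y_train)
    return clf.predict(X_test @ L)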
MNIST
First, we evaluate the performance of the different algorithms on MNIST [8]. It consists of 60,000 handwritten digit images for training and 10,000 images for testing. There are 10 classes in the data set, corresponding to the digits 0-9. Each example is a 28×28 grayscale image, which leads to 784-dimensional features that are normalized to the range [0, 1]. Fig. 3 (a) compares the performance of the different metrics on the test set. For MaPML, we vary the ratio of latent examples from 5% to 25%. First of all, it is obvious that the metrics learned with the active set outperform those learned from random triplets. This confirms that randomly sampling triplets cannot explore the data set sufficiently, due to the extremely large number of triplets. Secondly, the performance of MaPML10-O is comparable to LMNN, which shows that the proposed method can learn a good metric with only a small fraction of latent examples (i.e., 10%). Finally, both MaPML and MaPML-O work well with the metric obtained by MaPML, which verifies that the learned metric preserves the large margin property for both the original and the latent data. Note that when the number of latent examples is small, the performance of k-NN with latent examples is slightly worse than that with the whole training set. However, k-NN with latent examples can be more robust in real-world applications.
To demonstrate the robustness, we conduct another experiment that randomly adds zero-mean Gaussian noise (i.e., N(0, σ²)) to each pixel of the original training images. The standard deviation of the Gaussian noise is varied in the range [50/255, 250/255] and τ is fixed at 10. Fig. 3 (b) summarizes the results. MaPML10 is comparable to MaPML10-O and LMNN when the noise level is low. However, as the noise increases, the performance of LMNN drops dramatically. This can be explained by the fact that the metric learned from the original data is misled by the noisy information. In contrast, the errors made by MaPML and MaPML-O increase only mildly, which demonstrates that the learned metric is more robust than the one learned from the original data. MaPML performs best among all methods, owing to the fact that the uncertainty in the latent examples is much lower than that in the original ones. This implies that k-NN with latent examples is more appropriate for real-world applications with large uncertainty. We then compare the CPU time taken by the different algorithms to evaluate their efficiency; the results can be found in Fig. 4 (a). First, as expected, all algorithms based on SGD are more efficient than LMNN, which has to compute the full gradient over the active set identified anew at each iteration. Moreover, the running time of MaPML10 is comparable to that of HR-SGD, which shows the efficiency of MaPML with a small set of latent examples. Note that OASIS has an extremely low cost, since it allows the internal metric to be out of the PSD cone. Fig. 4 (b) illustrates the convergence curve of MaPML and shows that the proposed method converges fast in practice.
Finally, since we apply the proposed method to the original pixel features directly, the learned latent examples can be recovered as images. Fig. 5 illustrates the learned latent examples and the corresponding examples in the original training set. It is obvious that the original examples are derived from latent examples with different distortions, as claimed.
CIFAR-10 & CIFAR-100
CIFAR-10 contains 10 classes with 50,000 color images of size 32×32 for training and 10,000 images for testing. CIFAR-100 has the same numbers of training and test images but for 100 classes [6]. Since deep learning algorithms show overwhelming performance on these data sets, we adopt ResNet18 [4] in Caffe [5], pre-trained on the ImageNet ILSVRC 2012 data set [14], as the feature extractor, and each image is represented by a 512-dimensional feature vector. Table 1 summarizes the error rates of the methods in the comparison. First, we make the same observation as on MNIST: the performance of methods using active triplets is much better than that of methods with randomly sampled triplets. Different from MNIST, MaPML10 outperforms LMNN on both data sets. This is because the images in these data sets depict natural objects, which contain much more uncertainty than the digits in MNIST. Finally, the performance of MaPML10-O is superior to OASIS and HR-SGD, which shows that the learned metric works well with the original data represented by deep features. It confirms that the large margin property is preserved even for the original data.
ImageNet
Finally, we demonstrate that the proposed method can handle a large-scale data set using ImageNet. ImageNet ILSVRC 2012 consists of 1,281,167 training images and 50,000 validation images. The same feature extraction procedure as above is applied to each image. Given the large number of training examples, we increase the number of triplets for OASIS and HR-SGD to 10^6. Correspondingly, the maximal number of iterations for solving the subproblem in MaPML is also raised to 10^5. LMNN does not finish training within 24 hours, so its result is not reported; in contrast, MaPML obtains the metric within about one hour. The performance of the available methods can be found in Table 2. Since ResNet18 is trained on ImageNet, the extracted features are already optimized for this data set, and it is hard to further improve the performance. Nevertheless, with latent examples, MaPML further reduces the error rate by 1.7%. This indicates that latent examples with low uncertainty are more appropriate as reference points for a large-scale data set. Note that the small number of reference points also accelerates the test phase: predicting the label of an image costs 0.15s with the original set but only 0.007s with the latent examples. This makes MaPML with latent examples a potential method for real-time applications.
Conclusion
In this work, we propose a framework to learn the distance metric and latent examples simultaneously. By learning from a small set of clean latent examples, MaPML can sample active triplets efficiently, and the learning procedure is robust to the uncertainty in real-world data. Moreover, MaPML preserves the large margin property for the original data while learning merely with latent examples. The empirical study confirms the efficacy and efficiency of MaPML. In the future, we plan to evaluate MaPML on different tasks (e.g., information retrieval) and different types of data. Besides, incorporating the proposed strategy into deep metric learning is also an attractive direction: it can accelerate the learning of deep embeddings, and the resulting latent examples may further improve the performance.
Figure 1. Illustration of the proposed method. Let round and square points denote the target data and impostors, respectively, and triangle points denote the corresponding latent examples. Data points with the same color are from the same class. The figure demonstrates that the metric learned with latent examples not only separates the dissimilar latent data with a large margin but also preserves the large margin for the original data.
Theorem 2. The function L(M, z_k) can be upper bounded by the function H(M).
Algorithm 2: Updating M
Input: data set {X, Y}, z_k, M_{k-1}, δ, λ and S
Initialize M^0 = M_{k-1}
Sample one epoch of active constraints A according to z_k and M_{k-1}
for s = 1 to S do
    Randomly sample one constraint from A
    Compute the stochastic gradient g = ∇H(M)
    Update the metric as M^s = M^{s-1} - (1/(λs)) g
    Check the Frobenius norm: M^s = Π_δ(M^s)
end for
Project the learned matrix onto the PSD cone: M_k = Π_PSD((2/S) Σ_{s=S/2+1}^{S} M^s)
return M_k

Alg. 3 summarizes the proposed margin preserving metric learning framework. Different from the standard alternating method, we only optimize the upper bound for each subproblem. However, the method converges, as shown in the following theorem.
Theorem 3. Let (z_{k-1}, M_{k-1}) and (z_k, M_k) denote the results obtained by applying the algorithm in Alg. 3 at the (k-1)-th and k-th iterations, respectively. Then we have L(z_k, M_k) ≤ L(z_{k-1}, M_{k-1}), which means the proposed method converges.

Algorithm 3: Margin Preserving Metric Learning (MaPML)
Input: data set {X, Y}, δ, m, γ, λ and K
Initialize M_0 = I
for k = 1 to K do
    Fix M_{k-1} and obtain latent examples z_k by Alg. 1
    Fix z_k and update the metric M_k by Alg. 2
end for
return M_K and z_K

Computational Complexity. The proposed method consists of two parts: obtaining latent examples and metric learning. For the former, the cost is linear in the numbers of latent and original examples, i.e., O(mn). For the latter, the cost of sampling an active set dominates the learning procedure. Since the number of iterations is fixed, the complexity of sampling becomes min{O(Sm), O(m^2)}. Therefore, the whole algorithm is linear in the number of latent examples. Note that the efficiency can be further improved with distributed computing, since many components of MaPML can be implemented in parallel. For example, when updating z, each class is independent and all subproblems can be solved simultaneously.
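For concreteness, the outer alternating loop of Alg. 3 can be sketched as follows; `update_latent_examples` and `update_metric` stand for the Alg. 1 and Alg. 2 subproblem solvers, whose exact implementations follow the derivations above. The function names are illustrative, not from the paper.

```python
import numpy as np

def mapml(X, Y, m, K, update_latent_examples, update_metric):
    """Alternate between the two subproblems for K iterations (Alg. 3)."""
    M = np.eye(X.shape[1])                      # M_0 = I
    z = None
    for _ in range(K):
        z = update_latent_examples(X, Y, M, m)  # fix M, solve for z_k (Alg. 1)
        M = update_metric(X, Y, z, M)           # fix z_k, update the metric (Alg. 2)
    return M, z                                 # M_K and z_K
```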
where z^{k-1} are latent examples from the last iteration. We let M denote M_{k-1} in this proof for simplicity. The inequality follows from the fact that M is a PSD matrix and can be decomposed as M = LL^T; it is then obtained by applying the Cauchy-Schwarz inequality, with the assumptions that ∀o, D_M(z_o, z…
Figure 2. Illustration of bounding the hinge loss. The hinge loss between [ct, α + ct] is upper bounded by the linear function denoted by the red line.
• MaPML_τ: the proposed method that learns the metric and latent examples simultaneously, where τ denotes the ratio between the number of latent examples and the number of original ones, i.e., τ% = m/n. Different from the other methods, 3-NN is implemented with latent examples as reference points. The method that applies 3-NN to the original data is referred to as MaPML_τ-O.
Figure 3. Comparisons on MNIST.
Figure 4. Illustration of the efficiency of the proposed method.
Figure 5. Illustration of the learned latent examples and corresponding original examples from MNIST. The left column shows latent examples, while five original images from each corresponding cluster are on the right.
Since the experiments on MNIST take the original pixel features directly, the learned latent examples can be recovered as images. Fig. 5 illustrates the learned latent examples and the corresponding examples in the original training set. It is obvious that the original examples are derived from the latent examples with different distortions, as claimed.
Table 1. Comparison of error rate (%) on CIFAR-10 and CIFAR-100.

Methods       CIFAR-10        CIFAR-100
Euclid        16.81           42.57
OASIS         15.22 ± 0.18    42.46 ± 0.21
HR-SGD        15.16 ± 0.22    42.53 ± 0.19
LMNN          13.62 ± 0.12    40.05 ± 0.13
MaPML_10-O    13.59 ± 0.14    40.49 ± 0.15
MaPML_10      12.64 ± 0.16    34.70 ± 0.16
Table 2. Comparison of error rate (%) on ImageNet.

Methods      Test error (%)
Euclid       35.65
OASIS        36.51 ± 0.08
HR-SGD       36.15 ± 0.08
MaPML_5-O    35.59 ± 0.03
MaPML_5      33.92 ± 0.09
S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, New York, NY, USA, 2004.

G. Chechik, V. Sharma, U. Shalit, and S. Bengio. Large scale online learning of image similarity through ranking. JMLR, 11:1109-1135, 2010.

J. V. Davis, B. Kulis, P. Jain, S. Sra, and I. S. Dhillon. Information-theoretic metric learning. In ICML, pages 209-216, 2007.

K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, pages 770-778, 2016.

Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. In ACM MM, pages 675-678, 2014.

A. Krizhevsky. Learning multiple layers of features from tiny images. 2009.

B. Kulis. Metric learning: A survey. Foundations and Trends in Machine Learning, 5(4):287-364, 2013.

Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Y. Movshovitz-Attias, A. Toshev, T. K. Leung, S. Ioffe, and S. Singh. No fuss distance metric learning using proxies. In ICCV, pages 360-368, 2017.

Q. Qian, R. Jin, J. Yi, L. Zhang, and S. Zhu. Efficient distance metric learning by adaptive sampling and mini-batch stochastic gradient descent (SGD). ML, 99(3):353-372, 2015.

Q. Qian, R. Jin, S. Zhu, and Y. Lin. Fine-grained visual categorization via multi-stage metric learning. In CVPR, pages 3716-3724, 2015.

A. Rakhlin, O. Shamir, and K. Sridharan. Making gradient descent optimal for strongly convex stochastic optimization. In ICML, 2012.

O. Rippel, M. Paluri, P. Dollár, and L. D. Bourdev. Metric learning with adaptive density discrimination. In ICLR, 2016.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. S. Bernstein, A. C. Berg, and F. Li. ImageNet large scale visual recognition challenge. IJCV, 115(3):211-252, 2015.

F. Schroff, D. Kalenichenko, and J. Philbin. FaceNet: A unified embedding for face recognition and clustering. In CVPR, pages 815-823, 2015.

H. O. Song, Y. Xiang, S. Jegelka, and S. Savarese. Deep metric learning via lifted structured feature embedding. In CVPR, pages 4004-4012, 2016.

K. Q. Weinberger and L. K. Saul. Distance metric learning for large margin nearest neighbor classification. JMLR, 10:207-244, 2009.

E. P. Xing, A. Y. Ng, M. I. Jordan, and S. J. Russell. Distance metric learning with application to clustering with side-information. In NIPS, pages 505-512, 2002.

L. Yang and R. Jin. Distance metric learning: a comprehensive survey. 2006.

H. Ye, D. Zhan, X. Si, and Y. Jiang. Learning Mahalanobis distance metric: Considering instance disturbance helps. In IJCAI, pages 3315-3321, 2017.
| []
|
[
"PAMI: partition input and aggregate outputs for model interpretation",
"PAMI: partition input and aggregate outputs for model interpretation"
]
| [
"Wei Shi \nSun Yat-sen University\nSun Yat-sen University\nSun Yat-sen University\nSun Yat-sen University\n\n",
"Wentao Zhang \nSun Yat-sen University\nSun Yat-sen University\nSun Yat-sen University\nSun Yat-sen University\n\n",
"Weishi Zheng [email protected] \nSun Yat-sen University\nSun Yat-sen University\nSun Yat-sen University\nSun Yat-sen University\n\n",
"Ruixuan Wang \nSun Yat-sen University\nSun Yat-sen University\nSun Yat-sen University\nSun Yat-sen University\n\n"
]
| [
"Sun Yat-sen University\nSun Yat-sen University\nSun Yat-sen University\nSun Yat-sen University\n",
"Sun Yat-sen University\nSun Yat-sen University\nSun Yat-sen University\nSun Yat-sen University\n",
"Sun Yat-sen University\nSun Yat-sen University\nSun Yat-sen University\nSun Yat-sen University\n",
"Sun Yat-sen University\nSun Yat-sen University\nSun Yat-sen University\nSun Yat-sen University\n"
]
| []
| There is an increasing demand for interpretation of model predictions especially in high-risk applications. Various visualization approaches have been proposed to estimate the part of input which is relevant to a specific model prediction. However, most approaches require model structure and parameter details in order to obtain the visualization results, and in general much effort is required to adapt each approach to multiple types of tasks particularly when model backbone and input format change over tasks. In this study, a simple yet effective visualization framework called PAMI is proposed based on the observation that deep learning models often aggregate features from local regions for model predictions. The basic idea is to mask majority of the input and use the corresponding model output as the relative contribution of the preserved input part to the original model prediction. For each input, since only a set of model outputs are collected and aggregated, PAMI does not require any model details and can be applied to various prediction tasks with different model backbones and input formats. Extensive experiments on multiple tasks confirm the proposed method performs better than existing visualization approaches in more precisely finding class-specific input regions, and when applied to different model backbones and input formats. The source code will be released publicly. | 10.48550/arxiv.2302.03318 | [
"https://export.arxiv.org/pdf/2302.03318v2.pdf"
]
| 256,627,527 | 2302.03318 | 208e861d55125a57c943b666b0bcaed879aa540a |
PAMI: partition input and aggregate outputs for model interpretation
Wei Shi
Wentao Zhang
Weishi Zheng [email protected]
Ruixuan Wang
Sun Yat-sen University
PAMI: partition input and aggregate outputs for model interpretation
There is an increasing demand for interpretation of model predictions especially in high-risk applications. Various visualization approaches have been proposed to estimate the part of input which is relevant to a specific model prediction. However, most approaches require model structure and parameter details in order to obtain the visualization results, and in general much effort is required to adapt each approach to multiple types of tasks particularly when model backbone and input format change over tasks. In this study, a simple yet effective visualization framework called PAMI is proposed based on the observation that deep learning models often aggregate features from local regions for model predictions. The basic idea is to mask majority of the input and use the corresponding model output as the relative contribution of the preserved input part to the original model prediction. For each input, since only a set of model outputs are collected and aggregated, PAMI does not require any model details and can be applied to various prediction tasks with different model backbones and input formats. Extensive experiments on multiple tasks confirm the proposed method performs better than existing visualization approaches in more precisely finding class-specific input regions, and when applied to different model backbones and input formats. The source code will be released publicly.
Introduction
Deep learning models have shown human-level performance in various machine learning tasks and started to be applied in real scenarios, such as face identification [26,33,77], medical image analysis [10,37,61], and language translation [70,75,76]. However, current deep learning models often lack interpretations for their decision making, which hinders the massive deployment of intelligent systems, particularly in high-risk applications like medical diagnosis and autonomous driving.
To improve interpretation of model predictions, multiple visualization approaches have been proposed to localize the input regions or components that are most relevant to the model prediction for any specific input. Taking the image classification task as an example, the class activation map (CAM) and its variants utilize the output (i.e., feature maps) of a certain convolutional layer in convolutional neural networks (CNNs), together with their contribution weights, to find the image regions responsible for a specific model prediction given an input image [8,41,62,73,82], and the back-propagation approaches propagate the CNN output layer-by-layer back to the input image space, either based on gradient information (or its modified versions) at each layer [8,62,66-68] or based on the relevance between input elements and output at each layer [2,6,30,48]. While the CAM-like approaches can only roughly localize relevant regions due to the lower resolution of feature maps, the back-propagation approaches often find only sparse and incomplete object regions relevant to the model prediction. Moreover, both types of approaches need either part of or the whole model structure and parameter details, which may be unavailable in some applications due to privacy or security concerns. When the model details are not available, the occlusion method may be utilized to roughly localize image regions relevant to the model prediction by occluding each local patch and checking the change in model output [54,79]. However, the occlusion method often localizes only the most discriminative object part and misses the other parts that actually also contribute to the model prediction. LIME [58] is another method that does not require model details, interpreting a prediction by locally approximating the model decision surface for any specific input; however, it often estimates the contributions of only a subset of input parts and requires a separate optimization process for each model prediction to be interpreted. Furthermore, most existing visualization approaches are developed for a specific type of task (e.g., just image classification), model backbone (e.g., CNNs), or input format (e.g., just image data). Substantial effort is often required to adapt one visualization approach to various tasks (e.g., image caption) with different model backbones (e.g., the Transformer backbone) or input formats (e.g., a sequence of items).
Different from existing visualization approaches, a simple yet effective visualization framework for interpretation of model predictions is proposed in this study. The proposed framework, called PAMI ('Partition input and Aggregate outputs for Model Interpretation'), is inspired by the observation that both humans and popular deep learning models extract and aggregate features of local regions for image understanding and decision making. Suppose a well-trained image classifier predicts an input image as a specific class. To find the relevant image regions and their contributions to the model prediction, the proposed framework first partitions the input into multiple parts, and then feeds only one part (with the remaining regions masked) to the model to obtain the corresponding output probability of the specific class. Aggregating the output probabilities over all the individual parts results in an importance map representing the contribution of each input part to the original model prediction. In contrast to existing visualization approaches, the proposed PAMI framework does not require model structure and parameter details, can more likely find all possible input parts which are relevant to the model prediction, can more precisely localize relevant parts, and works for various model backbones with different input formats. These merits of the proposed PAMI framework have been confirmed by extensive experiments on multiple tasks with different model backbones and input formats.
Related Work
In computer vision, post-hoc interpretation of deep learning models focuses on either understanding of model neurons (e.g., convolutional kernels, output elements), which is independent of input information [4,16,32,45,78], or understanding of a specific model prediction given an input image [8,15,39,54,62]. This study belongs to the latter, i.e., trying to understand what information in the input causes the specific model prediction.
Multiple approaches have been proposed for understanding of model predictions, including the activation map approach [8,31,62,82], the back-propagation approach [24,30,79], the perturbation approach [22,54], the local approximation approach [58,81] and the optimization-based approach [15,39]. The activation map approach often obtains a class-specific activation map as the weighted sum of all feature maps, often at the last convolutional layer, and considers the regions with stronger activation relevant to the specific model output [8,62,73,82]. Since the activation map is often much smaller than the input image, only approximate image regions corresponding to the stronger activation regions can be localized for interpretation of the specific model prediction. Different from the activation approach, which often works at higher layers of the deep learning model, the back-propagation approach tries to estimate the importance of each input pixel by propagating the specific model output layer-by-layer back to the input space. This can be obtained by calculating the gradient of the specific model output with respect to input elements at each layer [8,62,66-68], the input-relevant contribution of each kernel at each layer [34], or the relevance between output and each input element at each layer [2,6,30,48]. The back-propagation approach considers each input pixel as an independent component, and often only a subset of disconnected pixels in the relevant regions are estimated to be relevant to the model prediction.
To find local image regions rather than disconnected pixels relevant to the model prediction, the perturbation approach has been proposed, which perturbs local image regions in some way and checks the change in the model output of the predicted class [22,79]. If perturbing a certain local region causes a large drop in the output, the corresponding local region in the input image is considered crucial to the original model prediction. Perturbation can take the form of simply masking a local region with a constant pixel intensity [79], with neighboring image patches [83], or blurring the original region information with a constant value or smoothing operator [22,54]. This approach often finds only the most discriminative part of the relevant regions, because perturbing less discriminative parts of the relevant regions often does not cause much drop in the model output. Besides the perturbation approach, the local approximation approach provides another way to estimate the contribution of each meaningful image region (e.g., object parts, background regions) to the model prediction [22,58,73]. This approach assumes that the model decision surface is locally linear in the region-based feature space for any specific input, and therefore can be approximated with a linear model in the feature space. The weight parameters in the linear model directly indicate the contribution of each meaningful image region to the original model output, thus identifying the image regions most relevant to the model prediction.
While most of these visualization approaches to interpretation of model predictions were originally developed for image classification models, they have been extended or modified for other tasks [11,72] and other deep learning models [9,38]. Besides these approaches, prototype-based [49,60] and attention-based [50] approaches have also been proposed for model interpretation. Note that, except for the local approximation approach and part of the perturbation approach (e.g., the occlusion method [22]), most approaches require at least part of the model structure and parameter details in order to find the input parts that are relevant to the model prediction.
Method
In this study, we aim to provide interpretation for model prediction given any specific input to a well-trained and fixed deep learning model. The interpretation is demonstrated by estimating relative contribution of each input part to the specific model prediction and correspondingly localizing input regions or elements which are relevant to the model prediction. It is worth noting that no model structure and parameter details are assumed to be known during the model interpretation process.
Motivation
Although humans can often instantly recognize objects in images, a certain attention mechanism in the human brain is likely involved in the process of object recognition [29,53]. In other words, humans often need to implicitly or explicitly attend to local regions for image understanding and object recognition. While the detailed human attention mechanism is yet to be further explored, initial studies [13] suggest that most local parts of an object in an image help humans recognize the object, and the appearance of only an individual object part could help humans recall the corresponding class of the object. Consistent with the visual attention studies, recent exploration of convolutional neural networks (CNNs) shows that convolutional kernels even at higher convolutional layers (i.e., closer to the CNN output) often have smaller receptive fields than expected [44]. Considering that a global pooling is performed at the last convolutional layer in most CNN classifier models, it is widely accepted that CNN models largely depend on the collection of local image region features for image classification. For the other popular type of backbone, the Transformer and its variants (e.g., ViT [14], Swin Transformer [42]), most items in the input sequence at each model layer correspond to components (e.g., words for a sentence input, image patches for an image input) of the original input, so the final model prediction also largely depends on the collection of local features of the original input. With the above observations, we hypothesize that the model output response to each single component of the original input may directly imply the importance of that component for the specific model prediction.
The proposed PAMI framework
The proposed interpretation framework is demonstrated in Figure 1. Taking the image classification task as an example, given a well-trained classifier model f(·) and any input image x, denote by f_c(x) the output prediction probability of the input image belonging to the c-th class. Suppose the model predicts the input as the k-th class, i.e., f_k(x) is the maximum over all the output probabilities. To interpret the classifier's prediction, the proposed framework first partitions the input into multiple either overlapped or non-overlapped parts (see the following subsections). Then, with the j-th individual part preserved and all the remaining image regions masked, the output probability f_k(x_j) of the k-th class for the majority-masked input x_j is used to estimate the relative contribution of the preserved j-th image part to the original model prediction f_k(x). By collecting and aggregating the output responses f_k(x_j) over all the partitioned image parts, an importance map with the same spatial size as that of the input image x can be generated to represent the contribution of each input element (i.e., pixel here). Local image regions with higher response values in the importance map are supposed to contribute more to the model prediction f_k(x), thus providing visual evidence for the model prediction. It is worth noting that the interpretation framework can be applied to different tasks (e.g., image caption and sentiment analysis) with various input formats.
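A minimal sketch of this computation for an image classifier is given below, assuming `model` is a black-box function mapping a batch of images to class probabilities, `masks` is a list of binary (H, W) arrays marking the preserved part of each partition, and `baseline` is the filler used for masked regions (e.g., a heavily blurred copy of the input); all names are illustrative.

```python
import numpy as np

def pami_importance_map(model, image, masks, baseline, target_class):
    """Aggregate f_k(x_j) over all partitioned parts into an importance map."""
    acc = np.zeros(image.shape[:2], dtype=np.float64)  # summed responses
    cnt = np.zeros(image.shape[:2], dtype=np.float64)  # how many parts cover a pixel
    for m in masks:
        m3 = m[..., None]                              # broadcast over color channels
        x_j = np.where(m3, image, baseline)            # keep one part, mask the rest
        p = float(model(x_j[None])[0, target_class])   # f_k(x_j)
        acc += p * m
        cnt += m
    return acc / np.maximum(cnt, 1)                    # average over covering parts
```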
Input partition strategy I: sliding window
One simple way to partition the input is to apply a sliding window strategy with a pre-defined window size and sliding step size, where the window has a certain regular shape (e.g., circular or rectangular). In this way, the original input can be easily partitioned into multiple parts, and each part can be more or less overlapped by its neighboring parts, as determined by the window size and sliding step size. For each partitioned part, all the remaining image regions are masked in some way (e.g., by black pixels or a blurred version of the original regions), and the output probability of the originally predicted class can be directly obtained with the majority-masked image as input. Since each input element (e.g., a pixel of an input image) could be covered by multiple partitioned parts, the contribution of each input element in the final importance map is obtained by averaging the output probabilities of the originally predicted class over all the partitioned parts covering that element.
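One possible implementation of this strategy generates one binary mask per window position, to be scored with the aggregation sketch above; the circular shape, radius and step used below mirror the defaults stated later in the experimental setup.

```python
import numpy as np

def circular_window_masks(h, w, radius=40, step=6):
    """Circular windows sliding over an h-by-w image with a fixed step."""
    yy, xx = np.mgrid[0:h, 0:w]
    masks = []
    for cy in range(0, h, step):
        for cx in range(0, w, step):
            masks.append((yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2)
    return masks
```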
Note that the window size affects the resolution of the final importance map. Although a smaller window yields the desired higher resolution, an image part of much smaller size (i.e., from a too-small window) could contain too little semantic information, making it challenging for the framework to estimate the contribution of such a small image part to a specific model prediction. In practice, users can choose one appropriate window size for interpretation of a model prediction, or multiple window sizes for interpretation at multiple scales of image parts.
Input partition strategy II: pre-segmentation
Another way to partition the input is to pre-segment it into multiple parts with a certain segmentation strategy. When the input is an image, various unsupervised segmentation algorithms can be adopted for pre-segmentation. In this study, super-pixel segmentation algorithms are used for input image partition [5,19,46,57]. With a particular super-pixel segmentation method, an input image can be partitioned into multiple non-overlapped parts (i.e., super-pixels), with each part often having an irregular region boundary and likely containing homogeneous visual information. As introduced above, the contribution of each super-pixel to the model prediction can be obtained by preserving the single super-pixel and masking the other super-pixels in the input, and collecting the model output response of the predicted class to the majority-masked input.
In practice, due to the imperfect performance of any single super-pixel segmentation method, some super-pixels may contain parts of both object regions and background regions, resulting in an importance map where parts of background regions also have relatively higher responses. To alleviate this issue, multiple super-pixel segmentation methods are employed, and the multiple importance maps based on these segmentation methods are then averaged to estimate the contribution of each input element (e.g., pixel) to the original model prediction. The average importance map may be further improved by running the above process once more (i.e., a second run), in which the super-pixel segmentation methods are performed on the average importance map rather than on the original input image. In addition, when generating each majority-masked input, a highly smoothed version of the original input image is used to fill the corresponding masked regions.
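The following sketch illustrates this strategy with three of the scikit-image algorithms used in this study (SEEDS from OpenCV is omitted for brevity); every segment of every segmentation becomes one preserved part, and feeding all resulting masks to the aggregation sketch above performs the per-pixel averaging across segmentations. The specific hyper-parameter values below are illustrative.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import sobel
from skimage.segmentation import felzenszwalb, slic, watershed

def presegmentation_masks(image):
    """One binary mask per segment, pooled over several segmentation methods."""
    segmentations = [
        felzenszwalb(image, scale=150, sigma=0.8, min_size=784),
        slic(image, n_segments=40, compactness=20),
        watershed(sobel(rgb2gray(image)), markers=20, compactness=0.0001),
    ]
    masks = []
    for seg in segmentations:
        for label in np.unique(seg):
            masks.append(seg == label)
    return masks
```

The second run then simply repeats the same procedure with the segmentation algorithms applied to the first-run importance map instead of the original image.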
Compared to the sliding window strategy, the parts partitioned by the pre-segmentation strategy have more precise and reasonable region boundaries, particularly for image data. This in turn often leads to a final importance map with clear boundaries between object regions and background regions, thus more precisely locating the image regions that contribute to the model prediction. Note that both input partition strategies also work when the input is a sequence of items. For example, when the input is a sentence, as in the sentiment analysis task, it can be partitioned into words or phrases with either the sliding window strategy or an appropriate pre-segmentation strategy.
Comparison with relevant studies
The proposed PAMI framework can provide interpretation of model predictions without requiring knowledge of the model structure and parameters. In contrast, most existing interpretation methods require either part of or the whole model details. For example, CAM and its variants need the feature maps from certain convolutional layers and part of the model parameters in order to obtain the final class activation map [8,41,62,73,82], and the gradient-based methods need all the model details to calculate gradient information over model layers [8,62,66-68]. One exception is the occlusion method [54,79], which, like the proposed PAMI framework, does not require model details. PAMI can be considered an inverse of the occlusion method: it preserves only one input part, rather than removing or occluding only one input part, when estimating the contribution of that part to the model prediction. In image classification, occluding part of the foreground object in the image may not significantly affect the model prediction, because the model can use the other object parts in the image for a confident prediction. As a result, the occlusion method may neglect the contribution of certain object parts to the original model prediction, and often performs worse than the proposed PAMI.
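The contrast can be made explicit in code: on the same set of parts, occlusion scores a part by the prediction drop when only that part is removed, while PAMI scores it by the prediction obtained when only that part is kept (`model`, `masks` and `baseline` are as in the earlier sketch).

```python
import numpy as np

def part_scores(model, image, masks, baseline, target_class):
    """Occlusion-style vs. PAMI-style score for each partitioned part."""
    p_full = float(model(image[None])[0, target_class])
    occ, pami = [], []
    for m in masks:
        m3 = m[..., None]
        x_removed = np.where(m3, baseline, image)  # occlude only this part
        x_kept = np.where(m3, image, baseline)     # preserve only this part
        occ.append(p_full - float(model(x_removed[None])[0, target_class]))
        pami.append(float(model(x_kept[None])[0, target_class]))
    return np.array(occ), np.array(pami)
```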
Because the proposed PAMI framework can treat the well-trained model as a black box, it can potentially work for various backbone structures (e.g., both CNN and Transformer backbones). In contrast, the majority of interpretation methods were proposed for the CNN backbone, and specific modifications are often required when applying existing interpretation methods (e.g., CAM or Grad-CAM) to other backbones like Transformers [9,36] and graph neural networks [3,12,55]. Another merit of the proposed PAMI framework is its potential usage in multiple tasks with different input formats. While this study mainly uses the image classification task for evaluation of the PAMI framework, PAMI in principle can be applied to various model prediction tasks, such as image caption and sentiment analysis, where the input data can be in the format of a sentence or an image. In contrast, most existing interpretation methods do not work across tasks without further modifications or extensions.
The most relevant interpretation methods are RISE [54] and ScoreCAM [73], which also estimate the importance map from a linear combination of input masks with weights from model outputs. However, RISE is based on a large set of randomly generated masks, and ScoreCAM is based on the feature maps at the last convolutional layer of the (CNN) model, both leading to low-resolution and often inappropriate importance maps.
Experiments
Experimental setup
In this study, three image classification datasets, ImageNet-2012 [59], Pascal VOC 2007 [18], and COCO 2014 [40], were mainly used for evaluation of the proposed PAMI method. In addition, the image caption dataset COCO [40] and the sentiment analysis dataset Sentiment140 [23] were also employed to show the wide applicability of the proposed method. All the models were from publicly released resources and evaluated on the corresponding validation or test set (see Table 1 for more details).
By default, for the sliding window strategy of the proposed PAMI, a circular window with radius 40 pixels and step size 6 pixels was used to generate local image regions. For the pre-segmentation strategy, four super-pixel segmentation algorithms, i.e., felzenszwalb [19], SLIC [57], and watershed [46] from the scikit-image library [71], and SEEDS [5] from the OpenCV library [7], were utilized for pre-segmentation of each image into multiple regions. Considering that the region of interest (i.e., the region relevant to the model prediction) could vary a lot over images, each segmentation algorithm was run multiple times with different hyper-parameter settings to generate sets of local regions at different scales (see the supplementary A for detailed settings). The importance maps over all hyper-parameter settings and all four pre-segmentation algorithms were averaged as the estimated importance map. A Gaussian kernel with size 49 × 49 pixels and standard deviation 100 was used to generate the smoothed (blurred) image for region masking.
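For reference, the blurred filler image with the stated kernel can be generated with OpenCV as follows (a small sketch; `image` is an H × W × 3 array).

```python
import cv2

def blurred_baseline(image):
    # Gaussian blur with a 49x49 kernel and standard deviation 100,
    # matching the region-masking setup described above.
    return cv2.GaussianBlur(image, (49, 49), sigmaX=100)
```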
The proposed PAMI was compared with widely used visualization methods for interpretation of model predictions, including Gradient [63], GradCAM [62], ScoreCAM [73], RISE [54], FullGrad [67], MASK [22], Occlusion [79], GuidedBP [66], SmoothGrad [65] and LRP [48]. The default hyper-parameter setting for each method was adopted (see supplementary B for details). Besides qualitative evaluation, quantitative evaluation was also performed using the pointing game [80] and the insertion metric [54]. The pointing game measures whether the pixel with the highest activation in the importance map falls within the image region of the object corresponding to the interpreted class, with 'hit' for success and 'miss' for failure; the average hit rate over all classes is used to measure the performance of each method. The insertion metric gradually restores the original pixels in a blurred version of the original image, with pixels having higher activation in the importance map restored earlier; a higher insertion score indicates better interpretation performance.
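A sketch of the insertion metric is given below; `saliency` is the importance map under evaluation, `baseline` the blurred image, and `model` the black-box classifier as before.

```python
import numpy as np

def insertion_score(model, image, baseline, saliency, target_class, steps=100):
    """Restore pixels from most to least important and integrate the probability curve."""
    h, w = saliency.shape
    order = np.argsort(saliency.ravel())[::-1]  # most important pixels first
    x = baseline.copy()
    probs = []
    chunk = int(np.ceil(h * w / steps))
    for i in range(0, h * w, chunk):
        ys, xs = np.unravel_index(order[i:i + chunk], (h, w))
        x[ys, xs] = image[ys, xs]               # restore the next batch of pixels
        probs.append(float(model(x[None])[0, target_class]))
    return float(np.trapz(probs, dx=1.0 / len(probs)))  # area under the curve
```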
Qualitative evaluation
The efficacy of the proposed PAMI method was extensively evaluated on the ImageNet-2012 data. Figure 2 demonstrates the visualization results on multiple representative images with the VGG19bn model. The visualization results were generated with respect to the model output of the ground-truth class for each image. It can be observed that the proposed PAMI with the pre-segmentation strategy for input partition (last column) can often precisely and largely completely localize the object regions which are actually relevant to the specific model prediction, while existing methods roughly localize either object regions at low resolution, sparse parts of object regions, disconnected pixels within object regions, or even irrelevant background regions. The multiple colors within the relevant region in the importance map from the proposed PAMI method suggest that different object regions may have different degrees of contribution to the model prediction. On the other hand, the proposed PAMI simply with the sliding window strategy can often obtain similar performance as GradCAM, but without requiring model structure and parameter details.

Figure 2. Qualitative evaluation of the proposed PAMI method on the ImageNet-2012 dataset. The first column lists the input images to the classifier. The last two columns are the importance maps from the proposed PAMI method with the two strategies respectively, and all the other columns are from representative strong baseline methods. In each importance map or heatmap, higher activation is in yellow and lower activation is in blue.
Another observation is that the proposed PAMI method works more stably than existing methods under challenging conditions. In particular, the PAMI method can well localize small-scale objects (rows 3 & 4) and relatively large-scale objects (row 7) in images, and can also precisely localize the regions of multiple object instances of the same class (rows 1 & 2). In comparison, most existing methods often perform worse under at least some of these challenging conditions. Similar observations were also obtained on the PASCAL-VOC dataset and the COCO dataset (see supplementary C), consistently supporting that the proposed PAMI method is effective in providing visual evidence for interpretation of model predictions.
Quantitative evaluation
Although the non-existence of a ground-truth or ideal interpretation for any specific model prediction makes it challenging to quantitatively evaluate any interpretation method, the pointing game [80] and the insertion metric [54] have been proposed to roughly evaluate the performance in correctly localizing regions relevant to model predictions. With the pointing game, Table 2 (columns 2, 4, and 6) shows that the proposed PAMI method with the pre-segmentation partition strategy (last row) has a similar hit rate on the ImageNet and COCO datasets compared with the best baseline GradCAM, and a higher hit rate than all the baselines on the VOC dataset, suggesting that the local region considered most relevant to the model prediction by PAMI is often actually part of the object region. Similarly, with the insertion metric (Table 2, columns 3, 5, and 7), PAMI has the best performance on the ImageNet and VOC datasets, and is close to the best baseline RISE on the COCO dataset, again supporting that PAMI can well localize image regions belonging to the interpreted class. More experimental details for quantitative evaluation can be found in the supplementary D.
Generality of the proposed PAMI method
To evaluate the generality of the proposed method, well-trained deep learning classifiers with multiple different backbones were employed, including VGG16 [64], ResNet50 [25], SE-ResNet [27], InceptionV3 [69], DenseNet121 [28], RegNet-X-16GF [56], ConvNext-Tiny [43], ViT-L-16 [14] and SwinT-Tiny [42]. From Figure 3, we can see that the proposed PAMI method can robustly and precisely localize the object regions which are relevant to the model prediction for each input image, regardless of the classifier backbone. In contrast, for each representative baseline method, the importance maps often change over model backbones, and the method may not even work for the Transformer backbones ViT and SwinT. This confirms that the proposed PAMI method is more stable and can be applied to interpretation of model predictions with various model backbones. Please see more results with consistent observations in the supplementary E.

Sensitivity and ablation study

In the proposed method, masking the majority of the input is a necessary step to estimate the contribution of each single region or part of the input. There are multiple choices for the masking operator. When the input is an image, the to-be-masked region could be replaced by a constant intensity value such as 0 (i.e., becoming black) or 255 (i.e., becoming white), or by the blurred version of the input image. Figure 4 demonstrates exemplar results with different masking operators, which shows that masking by blurred regions ('Blurred') results in better separation between the background regions and the object regions of interest at the first run, in turn leading to better importance maps with clearer boundaries between background and object regions at the second (i.e., final) run. When the majority of the input is replaced with extreme black or white pixels, the modified input image becomes further from the original class distribution in the feature space compared to the modified image with blurred regions, which makes the model prediction unstable and therefore may not faithfully represent the importance of the preserved local region.

Figure 5. Effect of applying multiple pre-segmentation algorithms. From left to right: input, importance maps from four individual pre-segmentation algorithms, and average importance maps at the first and the second run respectively.
In addition, the average of multiple importance maps from multiple pre-segmentation algorithms often results in better visualization than using a single pre-segmentation algorithm, as demonstrated in Figure 5 (columns 2-5 vs. column 6). Figure 5 also shows that the second run (last column) can often refine the importance map from the first run (column 6). This is probably because the adopted pre-segmentation algorithms sometimes cannot well separate background regions from object regions at the first run, while the initially estimated importance map from the first run provides alternative information for the pre-segmentation algorithms to separate them well. Note that the proposed PAMI is independent of the adopted pre-segmentation algorithms, and better pre-segmentation algorithms can be adopted to replace the current ones in the future. Please see more ablation study results with consistent observations in the supplementary F.
Extensive applications of the proposed PAMI
The proposed PAMI method is expected to work for multiple types of prediction tasks. For example, based on a well-trained image caption model [47], the PAMI method can well localize the image regions relevant to the predicted words (e.g., 'dog', 'laying', 'sidewalk', 'bicycle') which refer to any object or behaviour in the image (Figure 6, rows 1, 3), while the representative baseline method RISE often cannot precisely localize the relevant regions (Figure 6, rows 2, 4). Another example is the sentiment analysis task, where the model evaluates whether the viewpoint in an input sentence is positive or negative. With an input partition strategy (see supplementary G) similar to the pre-segmentation for images, the proposed PAMI method can directly and correctly estimate the contribution of each word to the final model prediction (Figure 7), as sketched in the code below. These results (also see more results in the supplementary G) confirm that the proposed PAMI method can work for various tasks with different input modalities.
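A minimal sketch of the sentence-level variant is given below: each word is the preserved part in turn, all other words are replaced by the tokenizer's mask token, and the probability of the originally predicted label serves as the word's contribution. The Hugging Face checkpoint name is an assumption for illustration, not necessarily the model used in the paper.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

NAME = "distilbert-base-uncased-finetuned-sst-2-english"  # assumed checkpoint
tok = AutoTokenizer.from_pretrained(NAME)
clf = AutoModelForSequenceClassification.from_pretrained(NAME).eval()

@torch.no_grad()
def class_probs(sentence):
    logits = clf(**tok(sentence, return_tensors="pt")).logits
    return torch.softmax(logits, dim=-1)[0]

def word_contributions(sentence):
    """PAMI for text: keep one word, mask the rest, record the model response."""
    words = sentence.split()
    label = int(class_probs(sentence).argmax())  # originally predicted class
    scores = []
    for j in range(len(words)):
        masked = [w if i == j else tok.mask_token for i, w in enumerate(words)]
        scores.append(float(class_probs(" ".join(masked))[label]))
    return words, label, scores
```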
Conclusion
In this study, we propose a novel visualization method, PAMI, for interpretation of model predictions. PAMI does not require any model parameter details and works stably across model backbones and input formats. Compared to existing visualization approaches, PAMI can more likely and more precisely find the local input regions which contribute to a specific model prediction. It can be used as a plug-in component and applied to multiple types of prediction tasks, which has been partly confirmed by the image classification, image caption, and sentiment analysis tasks. Its utility in more tasks, including various natural language processing tasks, will be evaluated in future work.
A. Hyper-parameters for pre-segmentation
The hyper-parameters of the four pre-segmentation algorithms used in both the first and the second run are summarized in Table 3. The hyper-parameter names correspond to those in the skimage and cv2 packages.

Table 3. Hyper-parameter configuration for each segmentation algorithm.

Method             Hyper-parameters
felzenszwalb [19]  scale = 250, 200, 150, 100, 70, 50; sigma = 0.8; min_size = 784
SLIC [57]          n_segments = 10, 20, 30, 40, 50, 60, 70, 80; compactness = 20
SEEDS [5]          num_superpixels = 10, 20, 30; num_levels = 5; n_iter = 10
watershed [46]     markers = 10, 20, 30; compactness = 0.0001
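As an illustration, the multi-scale configurations in Table 3 can be expanded into concrete scikit-image calls as below (SEEDS and watershed are omitted for brevity); each call yields one label map whose segments are then scored and averaged as in the main text.

```python
from skimage.segmentation import felzenszwalb, slic

def multiscale_segmentations(image):
    """One label map per hyper-parameter setting of Table 3."""
    segs = []
    for scale in (250, 200, 150, 100, 70, 50):
        segs.append(felzenszwalb(image, scale=scale, sigma=0.8, min_size=784))
    for n in (10, 20, 30, 40, 50, 60, 70, 80):
        segs.append(slic(image, n_segments=n, compactness=20))
    return segs
```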
B. Hyper-parameters in baseline methods
Details of each baseline method and the reference source code are provided in Table 4. For gradient-based methods, following related work [63], the maximum importance value among the three channels at each spatial position was used as the final importance value for that position in the importance map. For LRP [48], the values from the three channels were averaged as the final result for each spatial position.

Table 4. Hyper-parameter configuration for each baseline method.

Method          Hyper-parameters                          Code Source
GradCAM [62]    The last layer of the feature extractor   PyTorch CNN Visualizations [51]
ScoreCAM [73]   The last layer of the feature extractor   TorchCam [20]
Occlusion [79]  strides = 6, shapes = (3, 40, 40)         Captum [35]
RISE [54]       num_mask = 4000, cell_size = 7, probability = 0.

C. More qualitative evaluation

C.1. More results on ImageNet-2012
More qualitative evaluation of the proposed PAMI method on the ImageNet-2012 dataset [59] can be seen in Figure 8, supporting the effectiveness of the method.
C.2. More results on Pascal VOC 2007
The effectiveness of the proposed PAMI method was also validated on the Pascal VOC 2007 dataset [17], as shown in Figure 9.
C.3. More results on COCO 2014
The effectiveness of the proposed PAMI method was also validated on the COCO 2014 dataset [40], as shown in Figure 10.
D. Details of quantitative evaluation
For the pointing game, the process provided by TorchRay [21] was followed. On the Pascal VOC and COCO datasets, the evaluation code provided by TorchRay was directly used; on the ImageNet dataset, the same process was performed by marking points that fall within the bounding box of the object as hits and the rest as misses. For the insertion metric, a Gaussian kernel with size 49 × 49 pixels and standard deviation 100 was used to generate the blurred image from which the original pixels are gradually restored. The importance maps for quantitative evaluation were generated with respect to the model output of the ground-truth class for each image. Two exemplar results are shown in Figure 11.
E. Generality of the PAMI
More results with different model backbones are shown in Figure 12, supporting the generality of the proposed method.
F. More ablation study results
F.1. Effect of different masking operators
More visualization results based on different masking operators are provided in Figure 13, which shows that the blurred version yields better results, especially when the image background is complex.
F.2. Effect of multiple pre-segmentation algorithms
More results in Figure 14 show that using multiple pre-segmentation algorithms and two runs results in better visualizations.
G. Extensive applications of PAMI

G.1. Image caption task

More experimental results for the image caption task on the COCO dataset can be seen in Figure 15.
G.2. Sentiment analysis task
More test sentences from Sentiment140 [23] were randomly selected to evaluate the effectiveness of the proposed method on the sentiment analysis task. The experimental results can be seen in Figure 16.
Figure 1. The proposed PAMI framework with the sliding-window-based (left half) or the pre-segmentation-based (right half) input partition strategy. Each time only one local part of the input is preserved and the remaining parts are masked (blurred here).

Figure 3. Representative visualization results from the proposed method and representative baselines with different model backbones. A cross sign means the corresponding baseline does not work on that model backbone.

Figure 4. Importance map of a representative input based on different masking operators. 'Black/White/Blurred': masking types. Red boxes in images: object regions relevant to model predictions.

Figure 6. Two exemplar visualization results from the proposed method and the strong baseline RISE for the image caption task.

Figure 7. Two exemplar visualization results from the proposed method for the sentiment analysis task. The first row shows the contribution of each word to a positive emotion prediction, and the second row to a negative emotion prediction.

Figure 8. More qualitative evaluation of the proposed PAMI method on the ImageNet-2012 dataset. For each pair: the input image is on the left and the visualization result from the proposed PAMI is on the right.

Figure 9. Qualitative evaluation of the proposed PAMI method on the Pascal VOC 2007 dataset. Note that each input was resized to be square for demonstration.

Figure 10. Qualitative evaluation of the proposed PAMI method on the COCO 2014 dataset. Note that each input was resized to be square for demonstration.

Figure 11. Two examples of quantitative evaluation. First column: original input image. Second column: the importance map and the most important pixel marked with an asterisk. Third column: the image after restoring a certain percentage of pixels, and the prediction probability of the image being the ground-truth class by the model. Fourth column: the insertion curve and the area under the curve as the insertion score.

Figure 12. More representative visualization results from the proposed method with different model backbones.

Figure 13. Importance maps of more representative inputs based on different masking operators.

Figure 14. Effect of applying multiple pre-segmentation algorithms. From left to right: input, importance maps from four individual pre-segmentation algorithms, and average importance maps at the first and the second run respectively.

Figure 15. More visualization results from the proposed method for the image caption task.

Figure 16. More visualization results from the proposed method for the sentiment analysis task.
Table 1. Models and datasets used in experiments.

Task                Model source                Dataset
Classification      VGG19bn from PyTorch [52]   50,000 images of the ImageNet-2012 [59] validation set with 1000 classes
Classification      VGG16 from TorchRay [21]    First 1000 images of the Pascal VOC 2007 [18] test set with 20 classes
Classification      VGG16 from TorchRay [21]    First 1000 images of the COCO 2014 [40] instances validation set with 80 classes
Image caption       ClipCap [47]                COCO 2014 [40]
Sentiment analysis  Transformers library [74]   Sentiment140 [23]
Table 2. Quantitative evaluation of the proposed PAMI method on the three image datasets. 'Random': randomly generating a heatmap for each input image. 'Center': taking the fixed image center position as the highest activation point for each input image.

Method              ImageNet              VOC                   COCO
                    Pointing  Insertion   Pointing  Insertion   Pointing  Insertion
Random              47.89     -           33.39     -           11.27     -
Center              81.96     -           70.79     -           25.97     -
Gradient [63]       83.14     0.1928      72.61     0.3321      34.65     0.1585
GuidedBP [66]       83.95     0.2632      71.14     0.4737      32.83     0.1935
Occlusion [79]      84.53     0.5741      84.49     0.6753      54.83     0.3229
MASK [22]           84.49     0.4867      76.30     0.5616      49.78     0.2664
RISE [54]           91.58     0.5460      82.43     0.6885      56.95     0.3305
SmoothGrad [65]     86.51     0.2494      75.38     0.3824      39.45     0.1753
GradCAM [62]        93.22     0.5154      87.45     0.5720      57.95     0.2660
ScoreCAM [73]       92.01     0.5191      86.51     0.6030      55.01     0.2656
FullGrad [67]       87.01     0.5045      77.58     0.5049      44.52     0.2362
Ours (Strategy I)   89.17     0.5566      74.95     0.6133      48.19     0.2688
Ours (Strategy II)  92.32     0.5965      87.87     0.7213      56.85     0.3291
Example captions visualized in Figure 15 include: 'A bicycle parked outside of a house with a window.', 'A man and a woman riding horses on a beach.', 'A man standing next to a truck in the woods.', 'A cat standing on top of a car in a garage.', 'A man pushing a cart full of bananas.', 'A woman eating a large slice of pizza.', and 'A person riding a horse in a parade.'
Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation. PLoS ONE, 10(7):e0130140, 2015.

Pablo Barceló, Egor V. Kostylev, Mikael Monet, Jorge Pérez, Juan Reutter, and Juan-Pablo Silva. The Logical Expressiveness of Graph Neural Networks. In International Conference on Learning Representations, 2020.

David Bau, Jun-Yan Zhu, Hendrik Strobelt, Agata Lapedriza, Bolei Zhou, and Antonio Torralba. Understanding the role of individual units in a deep neural network. Proceedings of the National Academy of Sciences, 117(48):30071-30078, 2020.

Michael Van den Bergh, Xavier Boix, Gemma Roig, Benjamin de Capitani, and Luc Van Gool. SEEDS: Superpixels extracted via energy-driven sampling. In European Conference on Computer Vision, pages 13-26, 2012.

Alexander Binder, Grégoire Montavon, Sebastian Lapuschkin, Klaus-Robert Müller, and Wojciech Samek. Layer-wise Relevance Propagation for Neural Networks with Local Renormalization Layers. In International Conference on Artificial Neural Networks, pages 63-71, 2016.

G. Bradski. The OpenCV Library. Dr. Dobb's Journal of Software Tools, 2000.

Aditya Chattopadhay, Anirban Sarkar, Prantik Howlader, and Vineeth N Balasubramanian. Grad-CAM++: Generalized Gradient-Based Visual Explanations for Deep Convolutional Networks. In IEEE Winter Conference on Applications of Computer Vision, pages 839-847, 2018.

Hila Chefer, Shir Gur, and Lior Wolf. Transformer Interpretability Beyond Attention Visualization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 782-791, 2021.

Sihong Chen, Kai Ma, and Yefeng Zheng. Med3D: Transfer Learning for 3D Medical Image Analysis. arXiv preprint arXiv:1904.00625, 2019.

Marina Danilevsky, Kun Qian, Ranit Aharonov, Yannis Katsis, Ban Kawas, and Prithviraj Sen. A Survey of the State of Explainable AI for Natural Language Processing. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 447-459, 2020.

Nima Dehmamy, Albert-László Barabási, and Rose Yu. Understanding the Representation Power of Graph Neural Networks in Learning Graph Topology. Advances in Neural Information Processing Systems, 32, 2019.

Robert Desimone and John Duncan. Neural mechanisms of selective visual attention. Annual Review of Neuroscience, 18(1):193-222, 1995.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In International Conference on Learning Representations, 2020.

Andrew Elliott, Stephen Law, and Chris Russell. Explaining Classifiers using Adversarial Perturbations on the Perceptual Ball. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10693-10702, 2021.

Dumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent. Visualizing Higher-Layer Features of a Deep Network. University of Montreal, 1341(3):1-13, 2009.

Mark Everingham. The PASCAL Visual Object Classes Challenge 2007. http://www.pascal-network.org/challenges/VOC/voc2007/workshop/index.html.

M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2007 Results. http://www.pascal-network.org/challenges/VOC/voc2007/workshop/index.html.

Pedro F. Felzenszwalb and Daniel P. Huttenlocher. Efficient Graph-Based Image Segmentation. International Journal of Computer Vision, 59(2):167-181, 2004.

François-Guillaume Fernandez. TorchCam: class activation explorer. https://github.com/frgfm/torch-cam, March 2020.

Ruth Fong, Mandela Patrick, and Andrea Vedaldi. TorchRay. https://github.com/facebookresearch/TorchRay, 2019.

Ruth C. Fong and Andrea Vedaldi. Interpretable Explanations of Black Boxes by Meaningful Perturbation. In Proceedings of the IEEE International Conference on Computer Vision, pages 3429-3437, 2017.
Twitter sentiment classification using distant supervision. Alec Go, Richa Bhayani, Lei Huang, 513Alec Go, Richa Bhayani, and Lei Huang. Twit- ter sentiment classification using distant supervision. http://help.sentiment140.com/home, 2009. 5, 13
Understanding Individual Decisions of CNNs via Contrastive Backpropagation. Jindong Gu, Yinchong Yang, Volker Tresp, Asian Conference on Computer Vision. Jindong Gu, Yinchong Yang, and Volker Tresp. Understand- ing Individual Decisions of CNNs via Contrastive Backprop- agation. In Asian Conference on Computer Vision, pages 119-134, 2018. 2
Deep Residual Learning for Image Recognition. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In Proceed- ings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016. 6
Wasserstein CNN: Learning Invariant Features for NIR-VIS Face Recognition. Ran He, Xiang Wu, Zhenan Sun, Tieniu Tan, IEEE Transactions on Pattern Analysis and Machine intelligence. 417Ran He, Xiang Wu, Zhenan Sun, and Tieniu Tan. Wasser- stein CNN: Learning Invariant Features for NIR-VIS Face Recognition. IEEE Transactions on Pattern Analysis and Machine intelligence, 41(7):1761-1773, 2018. 1
Squeeze-and-Excitation Networks. Jie Hu, Li Shen, Gang Sun, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionJie Hu, Li Shen, and Gang Sun. Squeeze-and-Excitation Net- works. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7132-7141, 2018. 6
Densely Connected Convolutional Networks. Gao Huang, Zhuang Liu, Laurens Van Der Maaten, Kilian Q Weinberger, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionGao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kil- ian Q Weinberger. Densely Connected Convolutional Net- works. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4700-4708, 2017. 6
Computational modelling of visual attention. Laurent Itti, Christof Koch, Nature Reviews Neuroscience. 23Laurent Itti and Christof Koch. Computational modelling of visual attention. Nature Reviews Neuroscience, 2(3):194- 203, 2001. 3
Explaining Convolutional Neural Networks using Softmax Gradient Layer-wise Relevance Propagation. Ryohei Brian Kenji Iwana, Seiichi Kuroki, Uchida, Proceedings of the IEEE International Conference on Computer Vision Workshop. the IEEE International Conference on Computer Vision Workshop1Brian Kenji Iwana, Ryohei Kuroki, and Seiichi Uchida. Ex- plaining Convolutional Neural Networks using Softmax Gra- dient Layer-wise Relevance Propagation. In Proceedings of the IEEE International Conference on Computer Vision Workshop, pages 4176-4185, 2019. 1, 2
LayerCAM: Exploring Hierarchical Class Activation Maps for Localization. Peng-Tao Jiang, Chang-Bin Zhang, Qibin Hou, Ming-Ming Cheng, Yunchao Wei, IEEE Transactions on Image Processing. 30Peng-Tao Jiang, Chang-Bin Zhang, Qibin Hou, Ming-Ming Cheng, and Yunchao Wei. LayerCAM: Exploring Hierarchi- cal Class Activation Maps for Localization. IEEE Transac- tions on Image Processing, 30:5875-5888, 2021. 2
Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV). Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, International Conference on Machine Learning. Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, et al. Interpretability Be- yond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV). In International Conference on Machine Learning, pages 2668-2677, 2018. 2
GroupFace: Learning Latent Groups and Constructing Group-based Representations for Face Recognition. Yonghyun Kim, Wonpyo Park, Myung-Cheol Roh, Jongju Shin, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionYonghyun Kim, Wonpyo Park, Myung-Cheol Roh, and Jongju Shin. GroupFace: Learning Latent Groups and Con- structing Group-based Representations for Face Recogni- tion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5621-5630, 2020. 1
Learning how to explain neural networks: Pattern-Net and PatternAttribution. Pieter-Jan Kindermans, T Kristof, Maximilian Schütt, Klaus-Robert Alber, Dumitru Müller, Been Erhan, Sven Kim, Dähne, International Conference on Learning Representations. Pieter-Jan Kindermans, Kristof T Schütt, Maximilian Alber, Klaus-Robert Müller, Dumitru Erhan, Been Kim, and Sven Dähne. Learning how to explain neural networks: Pattern- Net and PatternAttribution. In International Conference on Learning Representations, 2018. 2
Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, Bilal Alsallakh, Jonathan Reynolds, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, arXiv:2009.07896A unified and generic model interpretability library for pytorch. arXiv preprintNarine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, Bilal Alsallakh, Jonathan Reynolds, Alexander Mel- nikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, et al. Captum: A unified and generic model interpretability library for pytorch. arXiv preprint arXiv:2009.07896, 2020. 13
TransCAM: Transformer Attention-based CAM Refinement for Weakly Supervised Semantic Segmentation. Ruiwen Li, Zheda Mai, Chiheb Trabelsi, Zhibo Zhang, Jongseong Jang, Scott Sanner, arXiv:2203.07239arXiv preprintRuiwen Li, Zheda Mai, Chiheb Trabelsi, Zhibo Zhang, Jongseong Jang, and Scott Sanner. TransCAM: Transformer Attention-based CAM Refinement for Weakly Supervised Semantic Segmentation. arXiv preprint arXiv:2203.07239, 2022. 5
Medical Image Segmentation using Squeeze-and-Expansion Transformers. Shaohua Li, Xiuchao Sui, Xiangde Luo, Xinxing Xu, Yong Liu, Rick Siow Mong Goh, International Joint Conferences on Artificial Intelligence. 2021Shaohua Li, Xiuchao Sui, Xiangde Luo, Xinxing Xu, Yong Liu, and Rick Siow Mong Goh. Medical Image Segmenta- tion using Squeeze-and-Expansion Transformers. In Inter- national Joint Conferences on Artificial Intelligence, 2021. 1
Graph Neural Network for Interpreting Task-fMRI Biomarkers. Xiaoxiao Li, C Nicha, Yuan Dvornek, Juntang Zhou, Pamela Zhuang, James S Ventola, Duncan, In International Conference on Medical Image Computing and Computer-Assisted Intervention. 2Xiaoxiao Li, Nicha C Dvornek, Yuan Zhou, Juntang Zhuang, Pamela Ventola, and James S Duncan. Graph Neural Network for Interpreting Task-fMRI Biomarkers. In In- ternational Conference on Medical Image Computing and Computer-Assisted Intervention, pages 485-493, 2019. 2
Building Reliable Explanations of Unreliable Neural Networks: Locally Smoothing Perspective of Model Interpretation. Dohun Lim, Hyeonseok Lee, Sungchan Kim, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionDohun Lim, Hyeonseok Lee, and Sungchan Kim. Building Reliable Explanations of Unreliable Neural Networks: Lo- cally Smoothing Perspective of Model Interpretation. In Pro- ceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6468-6477, 2021. 2
Microsoft COCO: Common Objects in Context. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, C Lawrence Zitnick, European Conference on Computer Vision. 513Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO: Common Objects in Context. In European Conference on Computer Vision, pages 740-755, 2014. 5, 13
Partial Class Activation Attention for Semantic Segmentation. Sun-Ao Liu, Hongtao Xie, Hai Xu, Yongdong Zhang, Qi Tian, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern Recognition14Sun-Ao Liu, Hongtao Xie, Hai Xu, Yongdong Zhang, and Qi Tian. Partial Class Activation Attention for Semantic Seg- mentation. In Proceedings of the IEEE Conference on Com- puter Vision and Pattern Recognition, pages 16836-16845, 2022. 1, 4
Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo, Proceedings of the IEEE International Conference on Computer Vision. the IEEE International Conference on Computer Vision36Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. In Proceedings of the IEEE International Conference on Com- puter Vision, pages 10012-10022, 2021. 3, 6
A ConvNet for the 2020s. Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionZhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feicht- enhofer, Trevor Darrell, and Saining Xie. A ConvNet for the 2020s. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 11976-11986, 2022. 6
Understanding the Effective Receptive Field in Deep Convolutional Neural Networks. Wenjie Luo, Yujia Li, Raquel Urtasun, Richard Zemel, Advances in Neural Information Processing Systems. 293Wenjie Luo, Yujia Li, Raquel Urtasun, and Richard Zemel. Understanding the Effective Receptive Field in Deep Convo- lutional Neural Networks. Advances in Neural Information Processing Systems, 29, 2016. 3
Understanding Deep Image Representations by Inverting Them. Aravindh Mahendran, Andrea Vedaldi, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionAravindh Mahendran and Andrea Vedaldi. Understanding Deep Image Representations by Inverting Them. In Proceed- ings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5188-5196, 2015. 2
Color Image Segmentation. Fernand Meyer, International Conference on Image Processing and its Applications. 413Fernand Meyer. Color Image Segmentation. In International Conference on Image Processing and its Applications, pages 303-306, 1992. 4, 5, 13
Ron Mokady, Amir Hertz, Amit H Bermano, arXiv:2111.09734Clip-Cap: CLIP Prefix for Image Captioning. 5arXiv preprintRon Mokady, Amir Hertz, and Amit H Bermano. Clip- Cap: CLIP Prefix for Image Captioning. arXiv preprint arXiv:2111.09734, 2021. 5, 8
Explaining nonlinear classification decisions with deep Taylor decomposition. Grégoire Montavon, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek, Klaus-Robert Müller, Pattern Recognition. 6513Grégoire Montavon, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek, and Klaus-Robert Müller. Explain- ing nonlinear classification decisions with deep Taylor de- composition. Pattern Recognition, 65:211-222, 2017. 1, 2, 5, 13
Neural Prototype Trees for Interpretable Fine-grained Image Recognition. Meike Nauta, Ron Van Bree, Christin Seifert, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionMeike Nauta, Ron van Bree, and Christin Seifert. Neural Prototype Trees for Interpretable Fine-grained Image Recog- nition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 14933-14943, 2021. 2
Visual vs internal attention mechanisms in deep neural networks for image classification and object detection. Abraham Montoya Obeso, Jenny Benois-Pineau, Mireya Saraí García, Alejandroálvaro Ramírez Vázquez, Acosta, Pattern Recognition. 1232108411Abraham Montoya Obeso, Jenny Benois-Pineau, Mireya Saraí García Vázquez, and AlejandroÁlvaro Ramírez Acosta. Visual vs internal attention mechanisms in deep neu- ral networks for image classification and object detection. Pattern Recognition, 123:108411, 2022. 2
Pytorch cnn visualizations. Utku Ozbulak, Utku Ozbulak. Pytorch cnn visualizations. https : / / github . com / utkuozbulak / pytorch -cnn - visualizations, 2019. 13
PyTorch: An Imperative Style, High-Performance Deep Learning Library. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary Devito, Advances in Neural Information Processing Systems. Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith ChintalaAdam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Rai- son, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An Im- perative Style, High-Performance Deep Learning Library. In Advances in Neural Information Processing Systems, pages 8024-8035, 2019. 5
The Attention System of the Human Brain: 20 Years After. E Steven, Michael I Petersen, Posner, Annual Review of Neuroscience. 35373Steven E Petersen and Michael I Posner. The Attention Sys- tem of the Human Brain: 20 Years After. Annual Review of Neuroscience, 35:73, 2012. 3
RISE: Randomized Input Sampling for Explanation of Black-box Models. Vitali Petsiuk, Abir Das, Kate Saenko, Proceedings of the British Machine Vision Conference. the British Machine Vision Conference713Vitali Petsiuk, Abir Das, and Kate Saenko. RISE: Random- ized Input Sampling for Explanation of Black-box Models. In Proceedings of the British Machine Vision Conference, 2018. 1, 2, 4, 5, 6, 7, 13
Explainability Methods for Graph Convolutional Neural Networks. Soheil Phillip E Pope, Mohammad Kolouri, Charles E Rostami, Heiko Martin, Hoffmann, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionPhillip E Pope, Soheil Kolouri, Mohammad Rostami, Charles E Martin, and Heiko Hoffmann. Explainability Methods for Graph Convolutional Neural Networks. In Pro- ceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10772-10781, 2019. 5
Designing Network Design Spaces. Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionKaiming He, and Piotr DollárIlija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, and Piotr Dollár. Designing Network Design Spaces. In Proceedings of the IEEE Conference on Com- puter Vision and Pattern Recognition, pages 10428-10436, 2020. 6
Learning a Classification Model for Segmentation. Xiaofeng Ren, Jitendra Malik, Proceedings of the IEEE International Conference on Computer Vision. the IEEE International Conference on Computer Vision413Xiaofeng Ren and Jitendra Malik. Learning a Classification Model for Segmentation. In Proceedings of the IEEE In- ternational Conference on Computer Vision, pages 10-10, 2003. 4, 5, 13
Why Should I Trust You?": Explaining the Predictions of Any Classifier. Sameer Marco Tulio Ribeiro, Carlos Singh, Guestrin, Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining1Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. In Proceedings of the ACM SIGKDD In- ternational Conference on Knowledge Discovery and Data Mining, pages 1135-1144, 2016. 1, 2
ImageNet Large Scale Visual Recognition Challenge. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C Berg, Li Fei-Fei, International Journal of Computer Vision. 115313Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, San- jeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpa- thy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recogni- tion Challenge. International Journal of Computer Vision, 115(3):211-252, 2015. 5, 13
ProtoPShare: Prototypical Parts Sharing for Similarity Discovery in Interpretable Image Classification. Dawid Rymarczyk, Łukasz Struski, Jacek Tabor, Bartosz Zieliński, Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. the ACM SIGKDD International Conference on Knowledge Discovery and Data MiningDawid Rymarczyk, Łukasz Struski, Jacek Tabor, and Bartosz Zieliński. ProtoPShare: Prototypical Parts Sharing for Sim- ilarity Discovery in Interpretable Image Classification. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1420- 1430, 2021. 2
FCN-Transformer Feature Fusion for Polyp Segmentation. Edward Sanderson, J Bogdan, Matuszewski, Annual Conference on Medical Image Understanding and Analysis. 2022Edward Sanderson and Bogdan J Matuszewski. FCN- Transformer Feature Fusion for Polyp Segmentation. In Annual Conference on Medical Image Understanding and Analysis, pages 892-907, 2022. 1
Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization. R Ramprasaath, Michael Selvaraju, Abhishek Cogswell, Ramakrishna Das, Devi Vedantam, Dhruv Parikh, Batra, Proceedings of the IEEE International Conference on Computer Vision. the IEEE International Conference on Computer Vision57Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization. In Proceedings of the IEEE In- ternational Conference on Computer Vision, pages 618-626, 2017. 1, 2, 4, 5, 7
Deep inside convolutional networks: visualising image classification models and saliency maps. K Simonyan, A Vedaldi, Zisserman, International Conference on Learning Representations. 713K Simonyan, A Vedaldi, and A Zisserman. Deep inside con- volutional networks: visualising image classification models and saliency maps. In International Conference on Learning Representations, 2014. 5, 7, 13
Very Deep Convolutional Networks for Large-Scale Image Recognition. Karen Simonyan, Andrew Zisserman, International Conference on Learning Representations. Karen Simonyan and Andrew Zisserman. Very Deep Convo- lutional Networks for Large-Scale Image Recognition. In In- ternational Conference on Learning Representations, 2015. 6
Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, Martin Wattenberg, arXiv:1706.03825SmoothGrad: removing noise by adding noise. 57arXiv preprintDaniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, and Martin Wattenberg. SmoothGrad: removing noise by adding noise. arXiv preprint arXiv:1706.03825, 2017. 5, 7
Striving for Simplicity: The All Convolutional Net. J Springenberg, Alexey Dosovitskiy, Thomas Brox, M Riedmiller, International Conference on Learning Representations Workshop. 57J Springenberg, Alexey Dosovitskiy, Thomas Brox, and M Riedmiller. Striving for Simplicity: The All Convolutional Net. In International Conference on Learning Representa- tions Workshop, 2015. 1, 2, 4, 5, 7
Full-Gradient Representation for Neural Network Visualization. Suraj Srinivas, François Fleuret, Advances in Neural Information Processing Systems. 3213Suraj Srinivas and François Fleuret. Full-Gradient Represen- tation for Neural Network Visualization. Advances in Neural Information Processing Systems, 32, 2019. 1, 2, 4, 5, 7, 13
Axiomatic Attribution for Deep Networks. Mukund Sundararajan, Ankur Taly, Qiqi Yan, International Conference on Machine Learning. Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic Attribution for Deep Networks. In International Conference on Machine Learning, pages 3319-3328, 2017. 1, 2, 4
Rethinking the Inception Architecture for Computer Vision. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, Zbigniew Wojna, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionChristian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the Inception Ar- chitecture for Computer Vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818-2826, 2016. 6
Rethinking Perturbations in Encoder-Decoders for Fast Training. Sho Takase, Shun Kiyono, Conference of the North American Chapter of the Association for Computational Linguistics. Sho Takase and Shun Kiyono. Rethinking Perturbations in Encoder-Decoders for Fast Training. In Conference of the North American Chapter of the Association for Computa- tional Linguistics, pages 5767-5780, 2021. 1
Emmanuelle Gouillart, Tony Yu, and the scikit-image contributors. scikit-image: image processing in Python. Johannes L Stéfan Van Der Walt, Juan Schönberger, François Nunez-Iglesias, Joshua D Boulogne, Neil Warner, Yager, PeerJ. 25453Stéfan van der Walt, Johannes L. Schönberger, Juan Nunez- Iglesias, François Boulogne, Joshua D. Warner, Neil Yager, Emmanuelle Gouillart, Tony Yu, and the scikit-image con- tributors. scikit-image: image processing in Python. PeerJ, 2:e453, 6 2014. 5
Interpreting Predictions of NLP models. Eric Wallace, Matt Gardner, Sameer Singh, Proceedings of the Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts. the Conference on Empirical Methods in Natural Language Processing: Tutorial AbstractsEric Wallace, Matt Gardner, and Sameer Singh. Interpreting Predictions of NLP models. In Proceedings of the Confer- ence on Empirical Methods in Natural Language Process- ing: Tutorial Abstracts, pages 20-23, 2020. 2
Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks. Haofan Wang, Zifan Wang, Mengnan Du, Fan Yang, Zijian Zhang, Sirui Ding, Piotr Mardziel, Xia Hu, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshop. the IEEE Conference on Computer Vision and Pattern Recognition Workshop713Haofan Wang, Zifan Wang, Mengnan Du, Fan Yang, Zijian Zhang, Sirui Ding, Piotr Mardziel, and Xia Hu. Score-CAM: Score-Weighted Visual Explanations for Convolutional Neu- ral Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshop, pages 24-25, 2020. 1, 2, 4, 5, 7, 13
Transformers: State-of-the-Art Natural Language Processing. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Proceedings of the Conference on Empirical Methods in Natural Language Processing: system demonstrations. the Conference on Empirical Methods in Natural Language Processing: system demonstrationsThomas Wolf, Lysandre Debut, Victor Sanh, Julien Chau- mond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. Transform- ers: State-of-the-Art Natural Language Processing. In Pro- ceedings of the Conference on Empirical Methods in Natural Language Processing: system demonstrations, pages 38-45, 2020. 5
Lijun Wu, Juntao Li, Yue Wang, Qi Meng, Tao Qin, Wei Chen, Min Zhang, Tie-Yan Liu, Regularized Dropout for Neural Networks. Advances in Neural Information Processing Systems. 34Lijun Wu, Juntao Li, Yue Wang, Qi Meng, Tao Qin, Wei Chen, Min Zhang, Tie-Yan Liu, et al. R-Drop: Regularized Dropout for Neural Networks. Advances in Neural Informa- tion Processing Systems, 34:10890-10905, 2021. 1
BERT, mBERT, or BiBERT? A Study on Contextualized Embeddings for Neural Machine Translation. Haoran Xu, Benjamin Van Durme, Kenton Murray, Proceedings of the Conference on Empirical Methods in Natural Language Processing. the Conference on Empirical Methods in Natural Language ProcessingHaoran Xu, Benjamin Van Durme, and Kenton Murray. BERT, mBERT, or BiBERT? A Study on Contextualized Embeddings for Neural Machine Translation. In Proceed- ings of the Conference on Empirical Methods in Natural Language Processing, pages 6663-6675, 2021. 1
VarGFaceNet: An Efficient Variable Group Convolutional Neural Network for Lightweight Face Recognition. Mengjia Yan, Mengao Zhao, Zining Xu, Qian Zhang, Guoli Wang, Zhizhong Su, International Conference on Computer Vision Workshop. Mengjia Yan, Mengao Zhao, Zining Xu, Qian Zhang, Guoli Wang, and Zhizhong Su. VarGFaceNet: An Efficient Vari- able Group Convolutional Neural Network for Lightweight Face Recognition. In International Conference on Computer Vision Workshop, 2019. 1
Understanding Neural Networks Through Deep Visualization. Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, Hod Lipson, International Conference on Machine Learning Workshop. Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hod Lipson. Understanding Neural Networks Through Deep Visualization. In International Conference on Machine Learning Workshop, 2015. 2
Visualizing and Understanding Convolutional Networks. D Matthew, Rob Zeiler, Fergus, European Conference on Computer Vision. 713Matthew D Zeiler and Rob Fergus. Visualizing and Under- standing Convolutional Networks. In European Conference on Computer Vision, pages 818-833, 2014. 1, 2, 4, 5, 7, 13
Top-Down Neural Attention by Excitation Backprop. Jianming Zhang, Sarah Adel Bargal, Zhe Lin, Jonathan Brandt, Xiaohui Shen, Stan Sclaroff, International Journal of Computer Vision. 126106Jianming Zhang, Sarah Adel Bargal, Zhe Lin, Jonathan Brandt, Xiaohui Shen, and Stan Sclaroff. Top-Down Neu- ral Attention by Excitation Backprop. International Journal of Computer Vision, 126(10):1084-1102, 2018. 5, 6
Object Detectors Emerge in Deep Scene CNNs. Bolei Zhou, Aditya Khosla, Àgata Lapedriza, Aude Oliva, Antonio Torralba, International Conference on Learning Representations. Bolei Zhou, Aditya Khosla,Àgata Lapedriza, Aude Oliva, and Antonio Torralba. Object Detectors Emerge in Deep Scene CNNs. In International Conference on Learning Rep- resentations, 2015. 2
Learning Deep Features for Discriminative Localization. Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, Antonio Torralba, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionBolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning Deep Features for Discrimi- native Localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2921- 2929, 2016. 1, 2, 4
Visualizing Deep Neural Network Decisions: Prediction Difference Analysis. Luisa M Zintgraf, S Taco, Tameem Cohen, Max Adel, Welling, International Conference on Learning Representations. Luisa M Zintgraf, Taco S Cohen, Tameem Adel, and Max Welling. Visualizing Deep Neural Network Decisions: Pre- diction Difference Analysis. In International Conference on Learning Representations, 2017. 2
Gauge-invariant absorption of light from a coherent superposition of states

Axel Stenquist, Felipe Zapata, and Jan Marcus Dahlström
Department of Physics, Lund University, 22100 Lund, Sweden

Abstract: Absorption and emission of light is studied theoretically for excited atoms in coherent superpositions of states subjected to isolated attosecond pulses in the extreme ultraviolet range. A gauge-invariant formulation of transient absorption theory is motivated using the energy operator from Yang's gauge theory. The interaction, which simultaneously couples both bound and continuum states, is simulated by solving the time-dependent Schrödinger equation for hydrogen and neon atoms. A strong dependence on the angular momentum and the relative phase of the states in the superposition is observed. Perturbation theory is used to disentangle the fundamental absorption processes and a rule is established to interpret the complex absorption behaviour. It is found that non-resonant transitions are the source of asymmetry in energy and phase, while resonant transitions to the continuum contribute symmetrically to absorption of light from coherent superpositions of states.
I. INTRODUCTION
Pulses of attosecond temporal duration in the extreme ultraviolet (XUV) regime can be created through a nonlinear optical process that is known as high-order harmonic generation (HHG) [1]. As a result, coherent dynamical processes in quantum systems can be studied on the attosecond timescale [2]. In 2010, the first attosecond transient absorption spectroscopy (ATAS) experiment was conducted using a pump-probe setup to investigate the dynamics of ions in superposition states [3]. Here, and in subsequent works, an intense ultra-short laser field was used to create ions by strong-field ionization, while the transient absorption of a weak XUV attosecond pulse was monitored to interpret the evolution and coherence of the ions [3][4][5]. Modified ATAS schemes have been used to investigate various light-matter phenomena, such as Autler-Townes splittings, Lorentz-Fano line shapes and light-induced structures in photoabsorption spectra [6][7][8][9][10][11][12][13][14][15]. Recent ATAS experiments have triggered the electronic exchange interaction in complex systems such as SF$_6$ molecules, opening the opportunity to laser-control effective electron-electron interactions in molecular systems [16]. ATAS has also been used to study dynamics in semiconductors, such as band-gap dynamics in silicon [17] and separate electron and hole relaxation dynamics in germanium [18]. Reviews on experimental developments of ATAS are given in Refs. [19,20].
The foundation of our theoretical work is based on the ATAS tutorial by Wu et al. [21]. In the present article, however, the ATAS theory is reformulated to be consistent with Yang's gauge theory, which is based on the so-called energy operator [22], and it forms a natural finale to the question of the optimal gauge in time-resolved ATAS simulations [23,24]. Previous characterizations of ATAS structures have mainly been focused on bound states driven or dressed by a laser field using few-state models, see e.g. Refs. [14,21,25]. ATAS features in this regime were characterized in Ref. [25] through the creation of an analytical model. Macroscopic propagation of the fields can give rise to additional effects, but such effects can be avoided by considering thin optical media [21]. Coherent superpositions of s-wave Rydberg states in hydrogen and helium atoms have been predicted to show dependence on the relative phase of the states in the superposition when the XUV field couples to the p-wave continuum [23], but experimental studies of continuum effects using ATAS are rare [12]. Analogously, coherent superpositions have been investigated in ionization studies, where a clear dependence on the relative phase is found [26][27][28][29]. This type of phenomena is often referred to as quantum beating of superposition states. Although numerous investigations have been performed on a large range of complex systems, the fundamental processes that lay the foundations of ATAS have not yet been systematically explored from coherent superpositions of Rydberg states in atoms. Here a perturbation theory model is presented, which goes beyond the use of few-state models, to disentangle the key processes in atomic ATAS. In this way, it is possible to identify two kinds of light-matter interaction processes, resonant and off-resonant processes, by their different symmetries in ATAS experiments. Results for hydrogen and neon are presented, but the general conclusions are expected to be valid in any atom excited into a coherent superposition of Rydberg states. The article is organized as follows. Section II presents the formulation of the gauge-invariant transient absorption theory. In Section III, the disentanglement of the fundamental ATAS processes is performed. In Section IV, results for hydrogen and neon are discussed. Finally, Section V contains our conclusion and outlook. Atomic units are used throughout this text, $e = \hbar = m = 4\pi\epsilon_0 = 1$, unless otherwise stated.
II. TRANSIENT ABSORPTION THEORY
The Hamiltonian for an electron with mass $m = 1$, charge $q = -1$ and canonical momentum $\mathbf{p} = -i\nabla$ in the presence of a classical time-dependent electromagnetic field and a static potential can be written in the minimal-coupling form:
$$H(\mathbf{A}, A^0) = \frac{1}{2m}\left(\mathbf{p} - q\mathbf{A}\right)^2 + qA^0 + V, \tag{1}$$
where $\mathbf{A}(\mathbf{r}, t)$ and $A^0(\mathbf{r}, t)$ correspond to the vector and scalar potentials of the time-dependent field, and $V(\mathbf{r})$ to the static and conservative potential of the target [30]. In this work we will consider the specific case of the hydrogen atom, where the conservative potential corresponds to the Coulomb interaction between the electron and its nucleus, $V(r) = q/r$. The corresponding time-dependent Schrödinger equation (TDSE) is given by
$$i\frac{\partial}{\partial t}\psi(t) = H(\mathbf{A}, A^0)\,\psi(t) = \frac{1}{2m}\left(\mathbf{p} - q\mathbf{A}\right)^2\psi(t) + qA^0\psi(t) + V\psi(t). \tag{2}$$
An important property of Eq. (2) is that it is form invariant when the wave function $\psi \to \psi'$ and the potentials $(\mathbf{A}, A^0) \to (\mathbf{A}', A^{0\prime})$ are gauge transformed in the correct way [31]. The expectation values of physical observables must be independent of the choice of gauge, i.e.
$$\langle\psi(t)|O|\psi(t)\rangle = \langle\psi'(t)|O'|\psi'(t)\rangle. \tag{3}$$
According to the theory of Wu et al., the physical observables in ATAS can be derived from an energy-conservation principle between the atom (quantum system) and the electromagnetic radiation (classical field). The exchanged energy is defined as
$$\Delta E = \int_{-\infty}^{+\infty} \Delta\dot{E}(t)\,dt, \tag{4}$$
with $\Delta\dot{E}(t)$ being the instantaneous power transferred to the quantum system due to its coupling with the classical field [21]. Intuitively, this instantaneous power should describe instantaneous absorption/emission processes of radiation by the atom. According to Wu et al., $\Delta\dot{E}(t)$ should be defined as the time derivative of the expectation value of the atomic Hamiltonian, i.e.
$$\Delta\dot{E} = \frac{d}{dt}\langle\psi(t)|H(\mathbf{A}, A^0)|\psi(t)\rangle, \tag{5}$$
where, furthermore, the particular case of the length gauge was employed. In previous works, we have stressed that $\Delta\dot{E}(t)$ is an elusive quantity that is not gauge invariant [24,32]. The origin of this gauge ambiguity is related to the fact that the expectation value of the minimal-coupling Hamiltonian $H(\mathbf{A}, A^0)$ is not gauge invariant in the presence of an electromagnetic field [31], i.e.
$$\langle\psi(t)|H(\mathbf{A}, A^0)|\psi(t)\rangle \neq \langle\psi'(t)|H(\mathbf{A}', A^{0\prime})|\psi'(t)\rangle. \tag{6}$$
In order to fully avoid gauge ambiguities, a consistent gauge-invariant formulation of transient absorption theory must be used. To the best of our knowledge, such a gauge-invariant formulation has not yet been proposed in the context of ATAS.
A. Gauge-invariant formulation of ATAS

As the minimal-coupling Hamiltonian $H(\mathbf{A}, A^0)$ is not gauge invariant, it cannot represent a physical observable [31]. This issue forces us to search for an unambiguous quantum-mechanical operator that can be used to describe the instantaneous energy of the quantum system. In the context of semi-classical light-matter interaction theory, Yang [22] proposed a gauge-invariant formalism based on the so-called energy operator:
$$H(\mathbf{A}, 0) = \frac{1}{2m}\left(\mathbf{p} - q\mathbf{A}\right)^2 + V, \tag{7}$$
which satisfies the gauge condition:
$$\langle\psi(t)|H(\mathbf{A}, 0)|\psi(t)\rangle = \langle\psi'(t)|H(\mathbf{A}', 0)|\psi'(t)\rangle. \tag{8}$$
In order to associate this Hamiltonian with the instantaneous energy operator, Yang [22] applied the correspondence principle of quantum mechanics. The equation of motion for the expectation value of the energy operator can be derived using Ehrenfest's theorem and is given by [22,31]
$$\frac{d}{dt}\langle\psi(t)|H(\mathbf{A}, 0)|\psi(t)\rangle = \frac{q}{2}\,\langle\psi(t)|\mathbf{v}\cdot\mathbf{E} + \mathbf{E}\cdot\mathbf{v}|\psi(t)\rangle, \tag{9}$$
where the velocity operator is given by
$$\mathbf{v} = \frac{1}{m}\left(\mathbf{p} - q\mathbf{A}\right), \tag{10}$$
and $\mathbf{E}(\mathbf{r}, t)$ is the electric field of the time-dependent external field (which does not include any contribution from the conservative potential, $V$). If a classical particle is subject to a combination of forces, $\mathbf{F} = \mathbf{F}_0 + \mathbf{F}_1$, consisting of a conservative force, $\mathbf{F}_0(\mathbf{r}) = -\nabla V$, and a non-conservative (in our case explicitly time-dependent) force, $\mathbf{F}_1 = \mathbf{F}_1(t)$, then the change in total energy of the particle $E_T$ is given by $dE_T = \mathbf{F}_1\cdot\mathbf{v}\,dt$, c.f. Appendix A of Ref. [22]. Thus, the time derivative of the energy operator in Eq. (9) is a power caused by the electric-field force, $\mathbf{F}_1 = q\mathbf{E}$. By means of the correspondence principle, the Hamiltonian $H(\mathbf{A}, 0)$ can be associated with the instantaneous energy operator of the quantum system. Consequently, the gauge-invariant power in ATAS theory should be defined as
$$\Delta\dot{E}(t) = \frac{q}{2}\,\langle\psi(t)|\mathbf{v}\cdot\mathbf{E} + \mathbf{E}\cdot\mathbf{v}|\psi(t)\rangle. \tag{11}$$
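The classical statement $dE_T = \mathbf{F}_1\cdot\mathbf{v}\,dt$ is easy to verify numerically. The following is a minimal sketch (our illustration, not taken from the paper) of a 1D particle in a harmonic potential driven by a Gaussian force pulse; the potential, the driving force and the initial conditions are arbitrary illustrative choices. The check is that the change in total mechanical energy equals the work accumulated from the non-conservative force alone:

```python
import numpy as np

# Classical check of dE_T = F_1 . v dt: a 1D particle (m = 1) in a
# harmonic potential V(x) = x^2/2, driven by a Gaussian force pulse.
m = 1.0
V = lambda x: 0.5 * x**2
F0 = lambda x: -x                      # conservative force, -dV/dx
F1 = lambda t: 0.1 * np.exp(-t**2)     # non-conservative driving force

dt = 1e-3
x, v = 1.0, 0.0                        # illustrative initial conditions
E_start = 0.5 * m * v**2 + V(x)
work = 0.0

for t in np.arange(-10.0, 10.0, dt):
    # velocity-Verlet step under the total force F0 + F1
    acc = (F0(x) + F1(t)) / m
    x_new = x + v * dt + 0.5 * acc * dt**2
    acc_new = (F0(x_new) + F1(t + dt)) / m
    v_new = v + 0.5 * (acc + acc_new) * dt
    # accumulate the work done by the non-conservative force only
    work += 0.5 * (F1(t) * v + F1(t + dt) * v_new) * dt
    x, v = x_new, v_new

E_end = 0.5 * m * v**2 + V(x)
print(E_end - E_start, work)  # agree to integrator accuracy
```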
Within the electric-dipole approximation, and assuming the Coulomb gauge, $\nabla\cdot\mathbf{A} = 0$, the gauge-invariant instantaneous power can be written as
$$\Delta\dot{E}(t) = q\,\mathbf{E}(0, t)\cdot\mathbf{v}(t), \tag{12}$$
where $\mathbf{v}(t) = \langle\psi(t)|\mathbf{v}|\psi(t)\rangle$ is the gauge-invariant expectation value of the velocity operator given by Eq. (10). If a linearly polarized laser field along the z-axis is considered, the power is given by
$$\Delta\dot{E}_z(t) = q\,E(t)\,v_z(t), \tag{13}$$
and the time-dependent exchanged energy is
$$\Delta E_z(t) = q\int_{-\infty}^{t} dt'\,E(t')\,v_z(t'), \tag{14}$$
where $v_z(t)$ is the expectation value of the velocity operator along the polarization axis. In order to derive the energy-domain picture, which is required for comparison with experimental measurements [21], the total exchanged energy in Eq. (14) is rewritten as
$$\Delta E_z(\infty) = 2q\int_{0}^{+\infty} \mathrm{Re}\!\left[\tilde{v}_z(\omega)\tilde{E}^*(\omega)\right]d\omega, \tag{15}$$
where $\tilde{v}_z(\omega) = \tilde{v}_z^*(-\omega)$ and $\tilde{E}(\omega) = \tilde{E}^*(-\omega)$ are the Fourier transforms of the real functions $v_z(t)$ and $E(t)$, respectively [Fourier transform convention: $\tilde{f}(\omega) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(t)\,e^{i\omega t}\,dt$ and $f(t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\tilde{f}(\omega)\,e^{-i\omega t}\,d\omega$]. This implies that the energy-resolved gauge-invariant gain by the atom is given by
$$\Delta\tilde{E}_z(\omega) = 2q\,\mathrm{Re}\!\left[\tilde{v}_z(\omega)\tilde{E}^*(\omega)\right], \tag{16}$$
where the energy argument is positive, $\omega \geq 0$. Alternatively, by inserting the Ehrenfest relation $\dot{z}(t) = v_z(t)$ into Eq. (14), the gauge-invariant energy-resolved gain can be written as
$$\Delta\tilde{E}_z(\omega) = 2q\omega\,\mathrm{Im}\!\left[\tilde{z}(\omega)\tilde{E}^*(\omega)\right], \tag{17}$$
which is identical to the expression derived by Wu et al. in the length gauge [21]. While the energy-domain expressions for absorption, Eqs. (16) and (17), are fully consistent with previous results, c.f. Refs. [21,23,24], the time-dependent exchanged energy in Eq. (14) differs from the corresponding expression derived from a length-gauge Hamiltonian [23]. This "paradox" is now lifted, because it is easy to understand that the gauge-invariant power can be substituted by the (incorrect) length-gauge expression, $qE(t)\dot{z}(t) \to -q\dot{E}(t)z(t)$, only under time integrals with boundary terms that vanish in integration by parts. In practical situations, such conditions are met because pulses vanish at early and late times, but we believe that these insights are useful to better interpret ATAS experiments in the time domain.
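The boundary-term argument can be illustrated with a short numerical check. In the sketch below (our illustration), $z(t)$ is an arbitrary smooth trajectory that decays at early and late times, standing in for the dipole expectation value; the two time-integrated powers then agree even though the integrands differ at every instant:

```python
import numpy as np

dt = 1e-3
t = np.arange(-40.0, 40.0, dt)

# Gaussian XUV-like pulse (vanishes at early/late times) and an
# arbitrary smooth "dipole" trajectory z(t), used only for illustration.
E = np.cos(1.5 * t) * np.exp(-(t / 5.0)**2)
z = np.sin(0.3 * t) * np.exp(-(t / 20.0)**2)

Edot = np.gradient(E, dt)
zdot = np.gradient(z, dt)

# Time-integrated powers: equal up to a vanishing boundary term.
I1 = np.sum(E * zdot) * dt     #  integral of  E(t) z'(t) dt
I2 = -np.sum(Edot * z) * dt    # -integral of  E'(t) z(t) dt
print(I1, I2)                  # agree to discretization accuracy
```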
B. Implementation of the gauge-invariant theory
In our numerical implementation we consider a linearly polarized light pulse along the z-axis, within the electric dipole approximation, such that $\mathbf{A}(\mathbf{r}, t) \approx \mathbf{A}(0, t) = A(t)\hat{z}$. The velocity-gauge wave function, $\psi^V(t)$, is obtained from Eq. (2) in the form
$$i\frac{\partial}{\partial t}\psi^V(t) = H(A^V, A^{0,V})\,\psi^V(t) = \left[\frac{p^2}{2m} + V - \frac{q}{m}A^V(t)\,p_z\right]\psi^V(t), \tag{18}$$
where the gauge transformations $A^V(t) = A(t)$ and $A^{0,V}(t) = -\frac{q}{2m}A^2(t)$ have been chosen. The electric field is related to the vector potential as
$$E(t) = -\frac{\partial}{\partial t}A^V(t). \tag{19}$$
The expectation value of the velocity operator is
$$v_z(t) = \frac{1}{m}\left[p_z^V(t) - qA^V(t)\right], \tag{20}$$
where $p_z^V(t)$ is the expectation value of the z-component of the canonical momentum computed in velocity gauge, i.e. $\langle\psi^V(t)|p_z|\psi^V(t)\rangle$. Thus, Eqs. (19) and (20) can be used to rewrite Eq. (16) as follows,
$$\Delta\tilde{E}_z(\omega) = \frac{2q}{m}\,\omega\,\mathrm{Im}\!\left[\tilde{p}_z^V(\omega)\tilde{A}^{V*}(\omega)\right], \tag{21}$$
where the second term, proportional to $\tilde{A}^V(\omega)\tilde{A}^{V*}(\omega)$, has been disregarded as it is real [23]. In the following, the superscript $V$ is dropped, $A^V \to A$, because all calculations will be performed in velocity gauge. The attosecond XUV pulse is described by a vector potential with a Gaussian-shaped envelope, defined as
$$A(t) = A_0\cos(\omega_0 t + \varphi)\,e^{-at^2}, \tag{22}$$
where $a = 2\ln 2/\tau_e^2$, and $A_0$, $\omega_0$, $\varphi$ and $\tau_e$ are the amplitude, central frequency, carrier-envelope phase (CEP) and pulse duration, respectively. The frequency-dependent vector potential is obtained through the Fourier transform as
$$\tilde{A}^{\pm}(\omega) = \frac{A_0}{2\sqrt{2a}}\exp\!\left[\pm i\varphi - \frac{(\omega\pm\omega_0)^2}{4a}\right], \tag{23}$$
where $\tilde{A}^{\pm}$ denotes the terms of the decomposition $\tilde{A} = \tilde{A}^+ + \tilde{A}^-$. Note that in this expression the positive component is negligible for positive frequencies.
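As a consistency check of Eq. (23), the pulse of Eq. (22) can be transformed numerically using the Fourier-transform convention stated above. The minimal sketch below (our illustration) uses the pulse parameters quoted later in Sec. IV ($\omega_0 = 1.5$, $\tau_e = 7$, $A_0 = 10^{-3}$, all in atomic units) and compares a direct quadrature of the Fourier integral with the analytic $\tilde{A}^-(\omega)$ branch, which dominates for $\omega > 0$:

```python
import numpy as np

# Gaussian-envelope pulse of Eq. (22) and a check of its analytic
# spectrum, Eq. (23). Atomic units; parameters as in Sec. IV.
A0, w0, cep, tau = 1e-3, 1.5, 0.0, 7.0
a = 2.0 * np.log(2.0) / tau**2

dt = 0.01
t = np.arange(-200.0, 200.0, dt)
A = A0 * np.cos(w0 * t + cep) * np.exp(-a * t**2)

def ft(f, w):
    # f~(w) = (2 pi)^(-1/2) * integral f(t) exp(i w t) dt
    return np.sum(f * np.exp(1j * w * t)) * dt / np.sqrt(2.0 * np.pi)

def A_minus(w):
    # the A~^-(w) branch of Eq. (23); A~^+(w) is negligible for w > 0
    return A0 / (2.0 * np.sqrt(2.0 * a)) \
           * np.exp(-1j * cep - (w - w0)**2 / (4.0 * a))

for w in (1.2, 1.5, 1.8):
    print(w, ft(A, w), A_minus(w))   # each pair of values agrees
```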
III. DISENTANGLEMENT OF FUNDAMENTAL PROCESSES
Consider a superposition of the general form
$$|\psi_0(t)\rangle = U_0(t, -\infty)\,|\psi_0(-\infty)\rangle = \sum_j^N c_j\,e^{-i\epsilon_j t}\,|j\rangle, \tag{24}$$
where $c_j$ and $\epsilon_j$ are the interaction-picture amplitude and energy of stationary state $|j\rangle$, respectively, and $N$ is the total number of coherently prepared states. Fig. 1 shows a specific scenario where a hydrogen-like atom is prepared in a superposition of the states $|2p_0\rangle$ and $|3p_0\rangle$ with common angular-momentum quantum numbers, $\ell = 1$ and $m = 0$, and with equal interaction amplitudes (for simplicity denoted "2p+3p" in the following). Through the interaction with an attosecond XUV pulse, three fundamental processes may take place. The first process (I) is the resonant continuum contribution, represented by red lines, which consists of absorption of light due to transitions from the initial superposition to the continuum. Interference may occur as different paths to the same continuum state are allowed (absorption profiles are represented by partially overlapping Gaussian functions). The second process (II) is the resonant bound contribution, represented by blue lines, given by the emission (and absorption) of resonant dipole-allowed transitions to the bound states. The third process (III) is the off-resonant contribution, represented by grey rectangles, and it covers the whole spectrum of bound and continuum states. All processes change the angular momentum as $\ell' = \ell \pm 1$ within the electric dipole approximation. In order to characterize and disentangle these different processes, we will control the atomic quantum phases, $\phi_j = \arg(c_j)$.
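For concreteness, the field-free evolution of such a superposition can be written down directly. The sketch below (our illustration, with hydrogenic energies $\epsilon_n = -1/(2n^2)$ a.u. and the equal-weight amplitudes introduced later in Eq. (35)) evolves the 2p + 3p amplitudes and shows that the relative phase recurs with the quantum-beat period $2\pi/(\epsilon_3 - \epsilon_2) \approx 90.5$ a.u. $\approx 2.2$ fs:

```python
import numpy as np

# Field-free evolution of the 2p + 3p superposition, Eq. (24), with
# equal-weight amplitudes. Hydrogenic energies in atomic units.
eps = np.array([-1.0 / (2 * 2**2), -1.0 / (2 * 3**2)])   # eps_2p, eps_3p
phi = 0.0                                 # relative superposition phase
c = np.array([1.0, np.exp(1j * phi)]) / np.sqrt(2.0)

def amplitudes(t):
    # interaction-picture amplitudes c_j exp(-i eps_j t) of Eq. (24)
    return c * np.exp(-1j * eps * t)

# The relative phase between the two states evolves at the beat
# frequency eps_3 - eps_2 = 5/72 a.u., period ~ 90.5 a.u. ~ 2.2 fs.
T = 2.0 * np.pi / (eps[1] - eps[0])
a0, aT = amplitudes(0.0), amplitudes(T)
print(T, np.angle(a0[1] / a0[0]), np.angle(aT[1] / aT[0]))  # phase recurs
```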
A. Perturbative treatment
The fundamental ATAS processes are "hidden" in Eq. (21), where the Fourier transform of the momentum, $\tilde{p}_z(\omega)$, is the key quantity, which contains the response of the electron to the electromagnetic field, $\tilde{A}(\omega)$. Due to the low intensity of the pulse, perturbation theory can be applied to compute the momentum $p_z(t) = \langle\psi(t)|p_z|\psi(t)\rangle$ and its corresponding Fourier transform $\tilde{p}_z(\omega)$.
The time-dependent wave function is given by
$$|\psi(t)\rangle = U(t, -\infty)\,|\psi_0(-\infty)\rangle, \tag{25}$$
where $U$ is the evolution operator and the initial-state superposition is given by Eq. (24). The propagation is rewritten in terms of the unperturbed evolution operator, $U_0$, using the well-known Dyson series expansion [30]
$$|\psi(t)\rangle \approx U_0(t, -\infty)|\psi_0(-\infty)\rangle - i\int_{-\infty}^{t} dt'\,U_0(t, t')\,H_{\mathrm{int}}(t')\,U_0(t', -\infty)|\psi_0(-\infty)\rangle = \sum_j^N c_j|j\rangle e^{-i\epsilon_j t} - i\sum_j^N ⨋_f\, c_j\,e^{-i\epsilon_f t}|f\rangle\langle f|p_z|j\rangle \int_{-\infty}^{t} dt'\,A(t')\,e^{i\epsilon_{fj}t'}, \tag{26}$$
where $\epsilon_{fj} = \epsilon_f - \epsilon_j$ is the difference of the energies of the states $|f\rangle$ and $|j\rangle$, and the velocity-gauge interaction Hamiltonian is defined as $H_{\mathrm{int}} = A(t)\,p_z$ from Eq. (18), with $q = -1$.
Eq. (22) is inserted into Eq. (26) and the time integral is evaluated using properties of the error function. Thus, the vector-potential contribution is given by
$$\mathcal{A}^{\pm} = \int_{-\infty}^{t} dt'\,\frac{A_0\,e^{\pm i(\omega_0 t' + \varphi)}}{2}\,e^{-at'^2}\,e^{i\epsilon_{fj}t'} = \frac{A_0}{4}\sqrt{\frac{\pi}{a}}\;e^{\pm i\varphi}\exp\!\left[-\frac{(\epsilon_{fj}\pm\omega_0)^2}{4a}\right]\left\{\mathrm{erf}\!\left[\sqrt{a}\,t - \frac{i(\epsilon_{fj}\pm\omega_0)}{2\sqrt{a}}\right] + 1\right\}. \tag{27}$$
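The closed form in Eq. (27) can be checked against brute-force quadrature. The sketch below (our illustration) evaluates the near-resonant "−" branch for an illustrative transition energy $\epsilon_{fj}$, using scipy's error function, which accepts complex arguments; the "−" branch is chosen so that the result is not exponentially suppressed:

```python
import numpy as np
from scipy.special import erf

# Check of the closed form in Eq. (27) against direct quadrature of
# the time integral, for the "-" branch and one transition energy.
A0, w0, cep = 1e-3, 1.5, 0.0
a = 2.0 * np.log(2.0) / 7.0**2
e_fj = 1.4            # illustrative transition energy (a.u.)
sgn = -1              # the "-" branch of Eq. (27), near-resonant here
t_up = 3.0            # upper limit of the time integral

# left-hand side: numerical quadrature over t' in (-inf, t_up]
dt = 1e-3
tp = np.arange(-100.0, t_up, dt)
integrand = 0.5 * A0 * np.exp(sgn * 1j * (w0 * tp + cep)) \
            * np.exp(-a * tp**2) * np.exp(1j * e_fj * tp)
lhs = np.sum(integrand) * dt

# right-hand side: the error-function expression of Eq. (27)
arg = np.sqrt(a) * t_up - 1j * (e_fj + sgn * w0) / (2.0 * np.sqrt(a))
rhs = (A0 / 4.0) * np.sqrt(np.pi / a) * np.exp(sgn * 1j * cep) \
      * np.exp(-(e_fj + sgn * w0)**2 / (4.0 * a)) * (erf(arg) + 1.0)

print(lhs, rhs)  # agree to quadrature accuracy
```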
The momentum expectation value $p_z(t)$ is computed with respect to the wave function given by Eq. (26) and is expressed as
$$p_z^{\pm}(t) = \frac{-iA_0\sqrt{\pi}}{4\sqrt{a}}\,⨋_f \sum_{jj'}^{N} c_{j'}^{*}c_j\,\langle j'|p_z|f\rangle\langle f|p_z|j\rangle\;e^{-i\epsilon_{fj'}t}\left\{\mathrm{erf}\!\left[\sqrt{a}\,t - \frac{i(\epsilon_{fj}\pm\omega_0)}{2\sqrt{a}}\right] + 1\right\}e^{\pm i\varphi - \frac{(\epsilon_{fj}\pm\omega_0)^2}{4a}} + \mathrm{c.c.}, \tag{28}$$
where parity has been taken into account by setting matrix elements between states of the same parity to zero.
The sums over the initial states are labelled with the indices $j$ and $j'$ in such a way that the expression has been simplified to its present form. Finally, $\tilde{p}_z(\omega)$ can be obtained by the Fourier transform of Eq. (28) and is given by
$$\tilde{p}_z^{\pm}(\omega) = \frac{A_0}{2\sqrt{2a}}\,⨋_f \sum_{jj'}^{N} \Bigg\{ c_{j'}^{*}c_j\,\langle j'|p_z|f\rangle\langle f|p_z|j\rangle\,e^{\pm i\varphi - \frac{(\omega - \epsilon_{jj'} \pm \omega_0)^2}{4a}}\left[\frac{1}{\omega - \epsilon_{fj'}} - i\pi\delta(\omega - \epsilon_{fj'})\right] - \left[c_{j'}^{*}c_j\,\langle j'|p_z|f\rangle\langle f|p_z|j\rangle\right]^{*}e^{\mp i\varphi - \frac{(\omega + \epsilon_{jj'} \mp \omega_0)^2}{4a}}\left[\frac{1}{\omega + \epsilon_{fj'}} - i\pi\delta(\omega + \epsilon_{fj'})\right]\Bigg\}, \tag{29}$$
where we have used integration by parts to compute the product of a complex exponential factor and an error function, yielding the Fourier transform of a Gaussian. The boundary term vanishes as it becomes an infinitely fast oscillating function. In addition, exponential functions have been simplified using the Dirac $\delta$ functions. The sum running over the final states, $⨋_f$, is split into bound and continuum states, yielding the final expression
$$\tilde{p}_z^{\pm}(\omega) = \frac{A_0}{2\sqrt{2a}}\sum_{jj'}^{N}\Bigg\{ c_{j'}^{*}c_j\,e^{\pm i\varphi - \frac{(\omega - \epsilon_{jj'} \pm \omega_0)^2}{4a}}\left[O_{c-}^{jj'} + R_{c}^{jj'} + O_{b-}^{jj'} + R_{b-}^{jj'}\right] - c_{j'}c_j^{*}\,e^{\mp i\varphi - \frac{(\omega + \epsilon_{jj'} \mp \omega_0)^2}{4a}}\left[O_{c+}^{jj'} + O_{b+}^{jj'} + R_{b+}^{jj'}\right]\Bigg\}. \tag{30}$$
The resonant-continuum contribution, $R_c^{jj'}$, process I in Fig. 1, is given by
$$R_c^{jj'} = -i\pi\,\langle j'|p_z|\epsilon_k\rangle\langle\epsilon_k|p_z|j\rangle\Big|_{\epsilon_k = \omega + \epsilon_{j'}}, \tag{31}$$
where $\langle\epsilon_k|p_z|j\rangle$ is the bound-continuum matrix element, given by Eq. (A1) in Appendix A. The matrix elements are evaluated for the intermediate continuum state with the energy $\epsilon_k = \omega + \epsilon_{j'}$. The resonant-bound contribution, $R_{b\pm}^{jj'}$, process II in Fig. 1, is given by
$$R_{b\pm}^{jj'} = -\frac{i\pi}{\Delta\omega}\,\langle j'|p_z|n\rangle\langle n|p_z|j\rangle\Big|_{n = (n_{j'}^{-2} \pm 2\omega)^{-1/2}}, \tag{32}$$
where $\langle n|p_z|j\rangle$ is the bound-bound matrix element, given by Eq. (A1) in Appendix A. The matrix elements are evaluated for the intermediate bound state with principal quantum number $n = (n_{j'}^{-2} \pm 2\omega)^{-1/2}$. The expression is resolved on a numerical grid as described in Appendix B, $\Delta\omega$ being the distance between grid points.
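The $1/\Delta\omega$ factor in Eq. (32) is the standard grid representation of a Dirac delta. A minimal sketch of this representation (our illustration, assuming a uniform frequency grid) confirms that placing $1/\Delta\omega$ in the bin containing the resonance preserves the defining property of the delta under a Riemann sum:

```python
import numpy as np

# Grid representation of the Dirac delta underlying the 1/dw factor
# in Eq. (32): delta(w - w_r) -> 1/dw in the bin that contains w_r,
# so that sum_i f(w_i) delta_i dw reproduces f(w_r).
dw = 0.01
w = np.arange(0.0, 3.0, dw)
w_r = 1.2345                          # resonance position (illustrative)

delta = np.zeros_like(w)
delta[np.argmin(np.abs(w - w_r))] = 1.0 / dw

f = np.exp(-(w - 1.0)**2)             # arbitrary smooth test function
print(np.sum(f * delta) * dw, np.exp(-(w_r - 1.0)**2))  # nearly equal
```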
Finally, the off-resonant-continuum and the off-resonant-bound contributions, $O_{c\pm}^{jj'}$ and $O_{b\pm}^{jj'}$, respectively, process III in Fig. 1, are given by
$$O_{c\pm}^{jj'} = \int_{0}^{\infty} d\epsilon_k\,\langle j'|p_z|\epsilon_k\rangle\langle\epsilon_k|p_z|j\rangle\,\frac{\mathrm{p.v.}}{\omega \pm \epsilon_{kj'}}, \tag{33}$$
and
$$O_{b\pm}^{jj'} = \sum_n \frac{\langle j'|p_z|n\rangle\langle n|p_z|j\rangle}{\Delta\omega}\int_{\omega}^{\omega+\Delta\omega} d\omega'\,\frac{\mathrm{p.v.}}{\omega' \pm \epsilon_{nj'}}, \tag{34}$$
where p.v. denotes the principal-value integral and $\epsilon_n$ denotes the energy of the bound state $|n\rangle$.
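The bin-averaged principal value in Eq. (34) can be made concrete with a short sketch (our illustration, assuming a uniform grid). The integral of $1/(\omega' - \omega_r)$ across each bin is evaluated analytically, so the bin that contains the pole gives a finite, symmetric contribution:

```python
import numpy as np

# Bin-averaged principal value used in Eq. (34): the integral of
# 1/(w' - w_r) over one bin is done analytically, which keeps the
# pole bin finite.
def pv_bin(wa, wb, w_r):
    # p.v. of the integral from wa to wb of dw' / (w' - w_r)
    if wa < w_r < wb:
        return np.log((wb - w_r) / (w_r - wa))
    return np.log(abs((wb - w_r) / (wa - w_r)))

dw = 0.01
w = np.arange(0.5, 2.5, dw)
w_r = 1.337                            # pole position (illustrative)

# p.v. integral of a smooth f(w') / (w' - w_r) over the whole grid
f = np.exp(-(w - 1.5)**2)
pv = sum(f[i] * pv_bin(w[i], w[i] + dw, w_r) for i in range(w.size))
print(pv)   # finite, despite the pole inside the integration range
```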
IV. RESULTS AND DISCUSSION
Results for hydrogen and neon atoms are presented and discussed in this section. The hydrogen atom has been chosen as a benchmark case for two main reasons: (i) all electronic states that are required for the perturbative model are analytically known, see e.g. [33], and (ii) the time propagation of the TDSE can be considered numerically exact within the dipole approximation. In Sec. IV A 1, time-resolved ATAS features are explored for the hydrogen atom. Subsequently, in Sec. IV A 2, the perturbative model is validated in the energy domain by direct comparison with numerical simulations from the TDSE. In Sec. IV A 3, the perturbative model is used to disentangle the fundamental processes in ATAS from a superposition of hydrogenic states. In Sec. IV A 4, the role of the different angular momentum channels is investigated. After validating our model, the dynamics of various coherent superpositions of states in neon is presented and interpreted in Sec. IV B.
A. Hydrogen atom
Our numerical study is limited to equal-population two-state superpositions with the following initial amplitudes,
$$c_1 = \frac{1}{\sqrt{2}}\,; \qquad c_2 = \frac{1}{\sqrt{2}}\exp(i\phi), \tag{35}$$
where $\phi$ is the relative superposition phase (RSP). While the case of $ns + n's$ superposition states has been considered previously [23], the more general case of $n\ell + n'\ell$ has not been studied and, as we will show, there are subtle dependencies on the angular momentum, $\ell$, in the superposition. Extension of our theory to more complex superpositions, which may include more states or different angular momenta, is straightforward and it may be the subject of further studies if experiments are performed on such targets in the future.
1. Interpretation of time-dependent absorption
The time-resolved energy gain of a hydrogen atom that interacts with an attosecond pulse is shown in Fig. 2 (a) and (b), for the prepared superposition states 2s + 3s and 2p + 3p, respectively. The attosecond pulse has central frequency $\omega_0 = 1.5$ a.u., pulse length $\tau_e = 7$ a.u. and vector-potential magnitude $A_0 = 10^{-3}$ a.u. The energy gain is computed using Eq. (14) with the momentum in Eq. (28). It represents the integrated atomic power, $-E(t)v_z(t)$ in Eq. (13), which is shown in Fig. 2 for 2s + 3s (c) and 2p + 3p (d), also resolved over the RSP, $\phi$. While the gauge-invariant energy gain is quite similar in shape and magnitude for the two cases (c-d), the final energy gain is much larger for 2s + 3s (a) than for 2p + 3p (b). In order to interpret this puzzling observation, we present in (a-b) a comparison of the atomic energy gain with a relative energy gain that corresponds to the gain of the atom minus the gain of a free electron with no initial velocity, $v_z(-\infty) = p_z^V = 0$, as proposed in Ref. [24]. The power of a free electron with no initial velocity is $\dot{A}^V(t)A^V(t)$, which obviously does not lead to any net energy gain in a laser field with $A^V(\pm\infty) = 0$. The relative gain allows us to interpret the atomic absorption process with this "virtual" free-electron gain removed. The relative gain is computed using the following power: $p_z^V(t)\dot{A}^V(t)$. Here all quantities are computed using the velocity gauge, see Eq. (18), but we stress that the results are gauge invariant due to the usage of the energy operator of Yang [22]. The relative energy gain is slowly changing and we propose that this quantity can be interpreted as a gradual net energy gain of the atom in the field. For 2s + 3s (a) it is positive at all times, while for 2p + 3p (b) it has an interval of energy loss during the interaction with the pulse. This energy loss is the reason for the much smaller energy gain of 2p + 3p when compared with 2s + 3s at the end of the pulse. The RSP-resolved energy gains in (c) and (d) are similar, as they are dominated by the power of the free electron, but significant differences between the two cases are observed in the relative energy gain presented in (e) and (f). For 2s + 3s only absorption (positive energy gain) is observed at all RSP, while for 2p + 3p emission (negative energy gain) is found during the beginning and middle of the pulse, and absorption is established only towards the end of the pulse. The time-resolved energy gain is symmetric around $\phi = 0$. The magnitude of the relative energy gain is small for out-of-phase RSP, $\phi \approx \pm\pi$, while it is stronger for synchronized RSP, $\phi \approx 0$. Finally, we have found that the CEP of the pulse, $\varphi$, determines the peak structure in the energy gain, but that the CEP does not affect the total energy gain by the atom at the considered pulse parameters. This is in contrast to the RSP, $\phi$, which strongly affects the magnitude of the energy gain.
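The two time-domain diagnostics discussed above are simple post-processing steps once $p_z^V(t)$ is available. The sketch below (our illustration) assembles them from a placeholder $\langle p_z\rangle(t)$; in a real calculation this array would come from propagating Eq. (18). Note that the gauge-invariant gain and the relative gain coincide at the end of the pulse, since the free-electron power $\dot{A}^V A^V$ integrates to zero:

```python
import numpy as np

# Time-domain diagnostics of Fig. 2, assembled from a placeholder
# momentum array (a real calculation propagates Eq. (18)). q = -1.
dt = 0.01
t = np.arange(-40.0, 40.0, dt)
a = 2.0 * np.log(2.0) / 7.0**2
A = 1e-3 * np.cos(1.5 * t) * np.exp(-a * t**2)   # A^V(t), Eq. (22)
Adot = np.gradient(A, dt)
E = -Adot                                        # E(t) = -dA/dt, Eq. (19)

p_V = 1e-3 * np.sin(1.4 * t) * np.exp(-a * t**2) # placeholder <p_z>(t)
v_z = p_V + A                                    # Eq. (20), q = -1, m = 1

# gauge-invariant energy gain, Eq. (14): q * cumulative int of E v_z
gain = -np.cumsum(E * v_z) * dt
# relative gain: free-electron power Adot*A removed, leaving p_V*Adot
rel_gain = np.cumsum(p_V * Adot) * dt
print(gain[-1], rel_gain[-1])   # equal once the pulse is over
```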
2. Validation of perturbation model in the energy domain
The absorption, in the energy domain, of a hydrogen atom in two different superpositions (2s + 3s and 2p + 3p) is computed with Eq. (21) and shown in Fig. 3. In the top row, results are obtained using perturbation theory with the momentum given by Eq. (30). In the bottom row, results are computed by numerically propagating the TDSE in the velocity gauge given in Eq. (18). The absorption is given as a function of the angular frequency and the phase of the superposition for an attosecond pulse with the same parameters as in Section IV A 1. As expected, there is good agreement between the results from perturbation theory and exact numerical propagation. Further, our result for the 2s + 3s superposition is in good agreement with previous studies [23]. The maximal absorption is obtained when the two states are roughly in phase, which corresponds to the case when the atom can be photoionized without destructive quantum interference from the two states in the superposition. The exact phase for maximal absorption depends on the angular frequency and is shown with a dotted black line. Interestingly, the exact phase has a negative slope over the photon energy for the 2s + 3s case, while it has a positive slope for the 2p + 3p case. A further significant difference between the superpositions is that the 2s + 3s case is associated with mostly absorption of light (shown in red colour), while the 2p + 3p superposition exhibits large spectral regions with emission of light (shown in blue colour). Fig. 3 (c) and (f) show lineouts of the 2p + 3p superposition at three different RSPs: ϕ = 0 and ϕ = ±3π/4. This demonstrates that the intricate absorption and emission phenomena are consistently manifested in both analytical and numerical results. The transition from symmetric to asymmetric curves in Fig. 3 (c) and (f) is reminiscent of Fano line shapes [34]. While spectrally narrow atomic absorption lines have been manipulated from symmetric Lorentz line shapes to asymmetric Fano line shapes using ATAS [7], the present result shows that the entire broad bandwidth of attosecond pulses can be manipulated using the phases of an atom in a prepared superposition. Thus, we believe that the present result may provide a novel way to tailor the spectral content of isolated attosecond pulses using atoms in time-dependent superpositions. We note that regions of emission are observed in both the energy domain and the time domain for the 2p + 3p superposition.
Fundamental processes in ATAS
Having verified the perturbation theory model, we now analyse its different contributions in detail. In Fig. 4 we show the ATAS result for the 2p + 3p case separated into the fundamental terms of Eq. (30). These terms are illustrated in Fig. 1, with (I) being the resonant transitions to the continuum, R jj′ c in Eq. (31), (II) the resonant transitions to the bound states, R jj′ b± in Eq. (32), and (III) the off-resonant transitions, O jj′ c± and O jj′ b±, in Eqs. (33) and (34), respectively. The resonant continuum contribution has a broad Gaussian-like shape over angular frequency, as shown in Fig. 4 (a), while the resonant bound contribution shows narrow absorption and emission lines, Fig. 4 (b). The width of the narrow lines is determined by the resolution of photon energy, see Appendix B, and the strongest absorption/emission is found for zero phase, ϕ = 0. Interestingly, absorption is observed in the high-frequency emission line in the out-of-phase case, presumably due to a redistribution of energy. All resonant absorption features are symmetric with respect to the phase transformation: ϕ → −ϕ.
The off-resonant terms exhibit an absorption and emission checkerboard pattern. There are two off-resonant contributions, coming from the − and + components of O jj′ c± and O jj′ b±, as shown in Fig. 4 (c) and Fig. 4 (d), respectively. Unlike the resonant case, the absorption/emission features are antisymmetric with respect to the phase transformation: ϕ → −ϕ. Further, the two off-resonant contributions (±) have opposite properties. Thus, the relative magnitude of the off-resonant contributions will determine the slope of the phase for maximal absorption. In the 2p + 3p case, the off-resonant contribution with + is dominant, which implies that the slope of the phase is positive, in agreement with the results in Fig. 3 (b,e). We have found that increasing the photon energy of the pulse increases the steepness of the slope due to an increased relative contribution of the off-resonant terms. In the limit of only off-resonant contributions, the slope will become infinitely steep at the central frequency, as the phase for maximal absorption changes from ±π/2 → ∓π/2.
Role of angular momentum channels
The absorption of a hydrogen atom in a 2p + 3p superposition depends on the angular momentum channels s and d, as shown in Fig. 1. The separated contributions to absorption and emission from the two partial waves are shown in Fig. 5 (a) and (b) for the s and d channel, respectively. Similar to the 2s + 3s case, the d channel contribution from 2p + 3p has a negative slope for maximal absorption. In contrast, the s channel contribution has a positive slope with clear regions of emission that resemble the off-resonant + contribution in Fig. 4 (d). Clearly, the s channel dominates the total absorption and emission for the 2p + 3p case.
The rule of slope: We have found that off-resonant + contributions dominate over − contributions when it is possible for an atom in a superposition to go to an intermediate state with lower energy. As an example, the 2p + 3p case makes an off-resonant transition towards the 1s state, which means that the + contribution will dominate and the slope will be positive. If there are no dipole-allowed intermediate states with lower energy, the off-resonant − contribution will dominate and the slope will be negative. We have verified that the rule of slope is valid for general superpositions of two states with equal angular momentum (ℓ = ℓ′). Superpositions with higher angular momenta, such as the 3d + 4d case, have a less dominant off-resonant + contribution compared with the 2p + 3p case. The reason for this is that the off-resonant + contribution is more dominant if there is a dipole-allowed virtual state with lower energy and if the transition matrix element to this state is larger. Hence, if the energy difference is smaller or the transition is weaker, then the off-resonant + contribution is less dominant.
While the results shown in this subsection were computed for the hydrogen atom, we have verified that they exhibit the same behaviour for the helium atom in two-state superpositions: 1s −1 (2s + 3s) and 1s −1 (2p + 3p), using TDCIS theory [35]. In the next section, we study the more complex case of the neon atom, which has 6 electrons in the outermost 2p shell.
B. Neon atom
The dynamics in neon atoms can be approximated by TDCIS theory [35], provided that the role of doubly excited states is not essential for the physical process under consideration. While it is known that a detailed description of the ground state, containing double electron correlations, is essential for a quantitative description of one-photon ionization cross-sections of noble gas atoms [36,37], the TDCIS theory provides a reasonable approximation for the neon atom. Rydberg states are found by diagonalizing the field-free problem including Coulomb interactions at the level of CIS to find eigenstates, 2p −1 nℓ L, where L is the total angular momentum. The total magnetic quantum number is zero, M = 0. We use the gerade ansatz for TDCIS [38], which provides a symmetry-adapted basis for an atom excited by linearly polarized light: |Φ p,m=0 a,m=0⟩ and (1/√2)(|Φ p,m=1 a,m=1⟩ + |Φ p,m=−1 a,m=−1⟩), where m labels the magnetic quantum number of the hole (which equals that of the particle, m = m a = m p). Here, we construct two-state superpositions of diagonalized states, 2p −1 (nℓ L + n′ℓ′ L′), in the neon atom. In Table I the states used in the neon simulations are presented with the corresponding quantum numbers, symmetries and energies. Here m max is the most probable magnitude for the magnetic quantum number (probability in parentheses). The energies were validated with other computational methods and compared to NIST values, where a discrepancy is found due to the neglect of electron correlation in the CIS method.

Using Eq. (21) we obtain the absorption and emission of a neon atom using the TDCIS approach with 2s and 2p as active orbitals, shown in Fig. 6. We investigate the prepared superpositions (a) 2p −1 (3s + 4s) L=1, (b) 2p −1 (3p + 4p) L=2, (c) 2p −1 (3d + 4d) L=3 and (d) 2p −1 (3p + 4p) L=0. We use a pulse with central frequency ω 0 = 3 au to exclude resonant transitions to bound states; the other pulse parameters are the same as in Section IV A. As our method does not allow for assigning definite phases between the two initial states, we shift the phase to centre the absorption on zero phase. We clearly find that the rule of slope describes the behaviour of the neon system, showing the applicability of the perturbative model to systems of higher complexity. We see a clear dominance of the off-resonant + contribution in Fig. 6(a,c) due to the off-resonant transition to the 2p state. It is especially dominant for the 2p −1 (3d + 4d) L=3 superposition presented in Fig. 6(c), as the transition between the nd and 2p states is strong due to the large overlap of the wave functions. However, for the 2p −1 (3p + 4p) superpositions in Fig. 6(b,d) the off-resonant + contribution is less dominant, in accordance with the rule of slope, as the transition to the 2p hole is forbidden. In Fig. 6(d) the superposition 2p −1 (3p + 4p) L=0 is constructed of mostly (67%) m = 1 orbitals, see Table I. As the m quantum number is conserved, the transition to the s angular momentum channel is suppressed, further limiting the number of dipole-allowed virtual states with energies below the prepared superposition (inhibiting transitions to 3s). Hence, the off-resonant − contribution is dominant, in accordance with the rule of slope. The effect of the 2s to 2p transition was determined by comparing with results where only the 2p orbital was active, finding that the effect was small.
V. CONCLUSION
In this work, we have presented a general gauge-invariant formulation of ATAS using the energy operator of Yang [22]. This allowed us to unambiguously simulate absorption processes within a semi-classical description of light-matter interactions. In particular, we have considered the case of a hydrogen atom in a superposition state that is subjected to a weak attosecond pulse in the XUV regime that couples directly to the continuum. We have constructed a model using perturbation theory that allows us to simulate the energy gain of atoms in both time and energy domains. It is found that the nature of the superposition, such as its quantum phases and angular momentum, determines the complex absorption process. Broad emission features are found in the energy domain, with corresponding emissions in the time domain being identified. Absorption processes are disentangled and it is shown that resonant contributions are symmetric, while off-resonant contributions are anti-symmetric, with respect to the phase of the superposition. In more detail, the off-resonant contribution was shown to be dependent on the dipole-allowed virtual states, and a rule of slope was proposed to interpret the phase that maximizes the energy-resolved absorption of the attosecond pulse. Our model was validated by numerical simulations of the TDSE for the case of the hydrogen atom. Simulations of helium and neon atoms were also performed, which indicated the applicability of our model to more complex atoms. Our model can be easily adapted to investigate weak absorption of light between bound states in atoms, but its strength lies in its proper treatment of continuum states. This may prove useful to study dynamics below, or across, the ionization threshold, in both time and energy. Further application of our model to study XUV absorption of laser-dressed atoms is a natural continuation of this work.
"
[...] an operator represents a physical quantity with a classical analogue only if the equation of motion for the expectation value of the operator is of the same form as the equation of motion for the corresponding classical Newtonian quantity".
FIG. 1. Representation of a hydrogen-like atom prepared in a coherent superposition of states, 2p + 3p, with all processes that are induced by an attosecond XUV pulse indicated by Roman numerals. (I) Resonant transitions to the continuum (red lines). (II) Resonant transitions to the bound states, which may reside below or above the initial states in the superposition (blue lines). (III) Off-resonant transitions to all states allowed by dipole-selection rules (grey rectangles). The bandwidth of the attosecond pulse is assumed to be larger than the separation of the non-degenerate states in the superposition, which implies that quantum beat phenomena may occur that depend on the phases (or more generally the coherence) of the states.
FIG. 2. Time-resolved energy gain of a hydrogen atom in the coherent superposition states 2s + 3s (top row) and 2p + 3p (bottom row) interacting with an attosecond pulse. The left column presents the total gauge-invariant gain in full lines and the relative energy gain (without the free-electron contribution) in dashed lines for the synchronized RSP, ϕ = 0. The perturbation model results are validated by solving the TDSE numerically (dotted lines). The middle column presents the RSP-resolved total gain, whilst the right column shows the corresponding relative gain. Positive gain implies absorption (red), while negative gain implies emission (blue) of energy by the atom.
FIG. 3. Analytical (top row) and numerical (bottom row) energy-resolved absorption of an attosecond pulse by a hydrogen atom in the prepared superpositions: 2s + 3s and 2p + 3p. The data is resolved over the angular frequency of the field, ω, and the RSP, ϕ. The left column presents 2s + 3s, the middle column 2p + 3p; the dotted black line shows the phase of maximal absorption and the purple line is a contour showing where the absorption is zero. Negative absorption is interpreted as emission of energy to the field by the atom (blue). The right column presents 2p + 3p with lineouts for the phases ϕ = 0 (solid line), ϕ = 3π/4 (dashed line) and ϕ = −3π/4 (dash-dotted line).
FIG. 4. Disentangled fundamental processes in energy-resolved absorption as a function of RSP, ϕ, for a hydrogen atom in the superposition 2p + 3p. (a) Resonant-continuum contribution, c.f. Fig. 1 (I). (b) Resonant-bound contribution, c.f. Fig. 1 (II). (c) and (d) Off-resonant − and + frequency contributions, respectively, c.f. Fig. 1 (III). Transitions to bound states (below the ionization threshold) are shown in (b), while transitions to continuum states (above the ionization threshold) are shown in (a,c,d). The R jj′ b− contribution has been scaled up by a factor of 10 for clarity of view in (b).

FIG. 5. Energy-resolved absorption of a hydrogen atom in the prepared superposition 2p + 3p, subject to an attosecond pulse, with the intermediate angular momentum restricted to s-wave and d-wave in (a) and (b), respectively.

FIG. 6. Energy-resolved absorption by a neon atom subject to an attosecond pulse in the superposition (a) 2p −1 (3s + 4s) L=1, (b) 2p −1 (3p + 4p) L=2, (c) 2p −1 (3d + 4d) L=3 and (d) 2p −1 (3p + 4p) L=0, with an estimated scale for RSP, ϕ. The dotted black line shows the phase of maximal absorption (interpolation is used between discrete simulated values), which can be interpreted using the rule of slope (see main text).
ACKNOWLEDGMENTS

The authors acknowledge support from the Swedish Research Council: 2018-03845, the Olle Engkvist Foundation: 194-0734 and the Knut and Alice Wallenberg Foundation: 2017.0104 and 2019.0154.
TABLE I. Singly excited state energy levels of neon for the series 2s^2 2p^5 n′ℓ′ computed at the CIS level of theory.

Configuration   L   ℓ_a^max   ℓ_p^max   m_a,p^max   Sym.   Level (eV)
2s^2 2p^6       0   -         -         -           g       0.0000
2s^2 2p^5 3s    1   1         0         0 (100%)    u      18.3625
2s^2 2p^5 3p    2   1         1         0 (67%)     g      20.1184
2s^2 2p^5 3p    0   1         1         1 (67%)     g      20.6010
2s^2 2p^5 4s    1   1         0         0 (100%)    u      21.2768
2s^2 2p^5 3d    1   1         2         1 (60%)     u      21.6172
2s^2 2p^5 3d    3   1         2         0 (60%)     u      21.6181
2s^2 2p^5 4p    2   1         1         0 (67%)     g      21.7613
2s^2 2p^5 4p    0   1         1         1 (67%)     g      21.9236
2s^2 2p^5 5s    1   1         0         0 (100%)    u      22.1503
2s^2 2p^5 4d    3   1         2         0 (60%)     u      22.2850
2s^2 2p^5 4d    1   1         2         1 (60%)     u      22.2862
Appendix A: Matrix elements

The bound-continuum and bound-bound matrix elements of the momentum operator, p̂ z = −i d/dz, are computed using the relation for the z derivative of the product of the spherical harmonics and a generic r-dependent function f(r) given in Eq. (A.37) in Ref. [33]. Here |ℓm⟩ are the spherical harmonics and |R ℓ⟩ is either the radial wave function of the bound states in hydrogen, described by Eq. (3.17), or the continuum states described by energy-normalized Coulomb waves, given by Eq. (4.23) in Ref. [33].

Appendix B: Resolving on a grid

The energy-domain absorption calculated using Eq. (21) with the momentum given by Eq. (30) can be written in a form where the continuum states are contained in C, which we handle as constant in ω on small intervals. The bound-state contribution is represented by K for the resonant and Q for the non-resonant contributions, with the diverging parts given explicitly. In order to handle the diverging elements, we represent the absorption on the numerical grid, where ∆ω is the resolution of the grid, and we treat the singularity in the integral as a principal value.
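As a minimal numerical sketch of this principal-value binning (illustrative only; the function name, grid, and pole position are assumptions, and the analytic bin average of 1/(ω − ω k) is a standard identity rather than the paper's exact expression):

```python
import numpy as np

def pv_kernel(w, wk, dw):
    """Bin-averaged principal value of 1/(w - wk) on a grid of resolution dw.

    Each grid point stands for the interval [w - dw/2, w + dw/2]; averaging
    1/(w' - wk) analytically over that interval gives log(upper/lower)/dw,
    which stays finite even when the pole wk falls inside the bin.
    """
    upper = np.abs(w + dw / 2.0 - wk)
    lower = np.abs(w - dw / 2.0 - wk)
    eps = 1e-30                        # guard against a bin edge hitting the pole
    return np.log(np.maximum(upper, eps) / np.maximum(lower, eps)) / dw

w = np.linspace(0.0, 3.0, 601)         # angular-frequency grid (au), illustrative
dw = w[1] - w[0]
kernel = pv_kernel(w, wk=1.5, dw=dw)   # finite everywhere, antisymmetric about wk
```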
[1] P. Antoine, A. L'Huillier, and M. Lewenstein, Attosecond Pulse Trains Using High-Order Harmonics, Phys. Rev. Lett. 77, 1234 (1996).
[2] P. B. Corkum and F. Krausz, Attosecond science, Nature Phys. 3, 381 (2007).
[3] E. Goulielmakis, Z.-H. Loh, A. Wirth, R. Santra, N. Rohringer, V. S. Yakovlev, S. Zherebtsov, T. Pfeifer, A. M. Azzeer, M. F. Kling, S. R. Leone, and F. Krausz, Real-time observation of valence electron motion, Nature 466, 739 (2010).
[4] A. Wirth, M. T. Hassan, I. Grguraš, J. Gagnon, A. Moulet, T. T. Luu, S. Pabst, R. Santra, Z. A. Alahmed, A. M. Azzeer, V. S. Yakovlev, V. Pervak, F. Krausz, and E. Goulielmakis, Synthesized Light Transients, Science 334, 195 (2011).
[5] M. Sabbar, H. Timmers, Y.-J. Chen, A. K. Pymer, Z.-H. Loh, S. Sayres, S. Pabst, R. Santra, and S. R. Leone, State-resolved attosecond reversible and irreversible dynamics in strong optical fields, Nature Phys. 13, 472 (2017).
[6] H. Wang, M. Chini, S. Chen, C.-H. Zhang, F. He, Y. Cheng, Y. Wu, U. Thumm, and Z. Chang, Attosecond Time-Resolved Autoionization of Argon, Phys. Rev. Lett. 105, 143002 (2010).
[7] C. Ott, A. Kaldun, P. Raith, K. Meyer, M. Laux, J. Evers, C. H. Keitel, C. H. Greene, and T. Pfeifer, Lorentz Meets Fano in Spectral Line Shapes: A Universal Phase and Its Laser Control, Science 340, 716 (2013).
[8] C. Ott, A. Kaldun, L. Argenti, P. Raith, K. Meyer, M. Laux, Y. Zhang, A. Blättermann, S. Hagstotz, T. Ding, R. Heck, J. Madroñero, F. Martín, and T. Pfeifer, Reconstruction and control of a time-dependent two-electron wave packet, Nature 516, 374 (2014).
[9] W.-C. Chu and C. D. Lin, Absorption and emission of single attosecond light pulses in an autoionizing gaseous medium dressed by a time-delayed control field, Phys. Rev. A 87, 013415 (2013).
[10] C. L. M. Petersson, L. Argenti, and F. Martín, Attosecond transient absorption spectroscopy of helium above the N = 2 ionization threshold, Phys. Rev. A 96, 013403 (2017).
[11] A. Chew, N. Douguet, C. Cariker, J. Li, E. Lindroth, X. Ren, Y. Yin, L. Argenti, W. T. Hill, and Z. Chang, Attosecond transient absorption spectrum of argon at the L2,3 edge, Phys. Rev. A 97, 031407 (2018).
[12] P. Birk, V. Stooß, M. Hartmann, G. D. Borisova, A. Blättermann, T. Heldt, K. Bartschat, C. Ott, and T. Pfeifer, Attosecond transient absorption of a continuum threshold, J. Phys. B: At. Mol. Opt. Phys. 53, 124002 (2020).
[13] M. Hartmann, V. Stooß, P. Birk, G. Borisova, C. Ott, and T. Pfeifer, Attosecond precision in delay measurements using transient absorption spectroscopy, Opt. Lett. 44, 4749 (2019).
[14] V. Leshchenko, S. J. Hageman, C. Cariker, G. Smith, A. Camper, B. K. Talbert, P. Agostini, L. Argenti, and L. F. DiMauro, Kramers-Kronig relation in attosecond transient absorption spectroscopy, Optica 10, 142 (2023).
[15] X. Shi, Y. Wu, J. G. Wang, V. Kimberg, and S. B. Zhang, X-ray transient absorption spectroscopy by an ultrashort x-ray-laser pulse in a continuous-wave IR field, Phys. Rev. A 101, 023401 (2020).
[16] P. Rupprecht, L. Aufleger, S. Heinze, A. Magunia, T. Ding, M. Rebholz, S. Amberg, N. Mollov, F. Henrich, M. W. Haverkort, C. Ott, and T. Pfeifer, Laser Control of Electronic Exchange Interaction within a Molecule, Phys. Rev. Lett. 128, 153001 (2022).
[17] M. Schultze, K. Ramasesha, C. Pemmaraju, S. Sato, D. Whitmore, A. Gandman, J. S. Prell, L. J. Borja, D. Prendergast, K. Yabana, D. M. Neumark, and S. R. Leone, Attosecond band-gap dynamics in silicon, Science 346, 1348 (2014).
[18] M. Zürch, H.-T. Chang, L. J. Borja, P. M. Kraus, S. K. Cushing, A. Gandman, C. J. Kaplan, M. H. Oh, J. S. Prell, D. Prendergast, C. D. Pemmaraju, D. M. Neumark, and S. R. Leone, Direct and simultaneous observation of ultrafast electron and hole dynamics in germanium, Nat. Commun. 8, 15734 (2017).
[19] A. R. Beck, D. M. Neumark, and S. R. Leone, Probing ultrafast dynamics with attosecond transient absorption, Chemical Physics Letters 624, 119 (2015).
[20] R. Geneaux, H. J. B. Marroux, A. Guggenmos, D. M. Neumark, and S. R. Leone, Transient absorption spectroscopy using high harmonic generation: a review of ultrafast X-ray dynamics in molecules and solids, Phil. Trans. R. Soc. A 377, 20170463 (2019).
[21] M. Wu, S. Chen, S. Camp, K. J. Schafer, and M. B. Gaarde, Theory of strong-field attosecond transient absorption, J. Phys. B: At. Mol. Opt. Phys. 49, 062003 (2016).
[22] K.-H. Yang, Gauge transformations and quantum mechanics I. Gauge invariant interpretation of quantum mechanics, Annals of Physics 101, 62 (1976).
[23] J. M. Dahlström, S. Pabst, and E. Lindroth, Attosecond transient absorption of a bound wave packet coupled to a smooth continuum, J. Opt. 19, 114004 (2017).
[24] F. Zapata, J. Vinbladh, E. Lindroth, and J. M. Dahlström, Implementation and validation of the relativistic transient absorption theory within the dipole approximation, Electron. Struct. 3, 014002 (2021).
[25] J. J. Rørstad, J. E. Baekhøj, and L. B. Madsen, Analytic modeling of structures in attosecond transient-absorption spectra, Phys. Rev. A 96, 013430 (2017).
[26] L. Fechner, N. Camus, J. Ullrich, T. Pfeifer, and R. Moshammer, Strong-Field Tunneling from a Coherent Superposition of Electronic States, Phys. Rev. Lett. 112, 213001 (2014).
[27] S. Pabst and J. M. Dahlström, Eliminating the dipole phase in attosecond pulse characterization using Rydberg wave packets, Phys. Rev. A 94, 013411 (2016).
[28] D. B. Milošević, B. Fetić, and P. Ranitovic, High-order above-threshold ionization from a coherent superposition of states, Phys. Rev. A 106, 013109 (2022).
[29] K. Klünder, P. Johnsson, M. Swoboda, A. L'Huillier, G. Sansone, M. Nisoli, M. J. J. Vrakking, K. J. Schafer, and J. Mauritsson, Reconstruction of attosecond electron wave packets using quantum state holography, Phys. Rev. A 88, 033404 (2013).
[30] J. J. Sakurai and J. Napolitano, Modern Quantum Mechanics, 2nd ed. (Cambridge University Press, 2017).
[31] D. H. Kobe and A. L. Smirl, Gauge invariant formulation of the interaction of electromagnetic radiation and matter, American Journal of Physics 46, 624 (1978).
[32] J. M. Dahlström, S. Pabst, and E. Lindroth, Pulse analysis by delayed absorption from a coherently excited atom, APL Photonics 4, 011101 (2019).
[33] H. A. Bethe and E. E. Salpeter, Quantum Mechanics of One- and Two-Electron Atoms (Springer Science & Business Media, 2013).
[34] U. Fano, Effects of Configuration Interaction on Intensities and Phase Shifts, Phys. Rev. 124, 1866 (1961).
[35] L. Greenman, P. J. Ho, S. Pabst, E. Kamarchik, D. A. Mazziotti, and R. Santra, Implementation of the time-dependent configuration-interaction singles method for atomic strong-field processes, Phys. Rev. A 82, 023406 (2010).
[36] M. Y. Amusia, Atomic Photoeffect (Springer US, Boston, MA, 1990).
[37] A. F. Starace, Theory of Atomic Photoionization, in Corpuscles and Radiation in Matter I / Korpuskeln und Strahlung in Materie I, Vol. 6/31, edited by S. Flügge and W. Mehlhorn (Springer Berlin Heidelberg, Berlin, Heidelberg, 1982).
[38] S. Pabst, L. Greenman, D. A. Mazziotti, and R. Santra, Impact of multichannel and multipole effects on the Cooper minimum in the high-order-harmonic spectrum of argon, Phys. Rev. A 85, 023411 (2012).
| []
|
[
"CALCULATION OF THE HIGH-ENERGY NEUTRON FLUX FOR ANTICIPATING ERRORS AND RECOVERY TECHNIQUES IN EXASCALE SUPERCOMPUTER CENTRES",
"CALCULATION OF THE HIGH-ENERGY NEUTRON FLUX FOR ANTICIPATING ERRORS AND RECOVERY TECHNIQUES IN EXASCALE SUPERCOMPUTER CENTRES"
]
| [
"Hernán Asorey \nMedical Physics Department & Instituto de Tecnologías en Detección y Astropartículas Comisión Nacional de Energía Atómica Centro Atómico Bariloche Av. E. Bustillo\nTechnology Department Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT) Av. Complutense 40\n9500 8400, 28040San Carlos de Bariloche, MadridArgentina, Spain\n",
"Rafael Mayo-García \nMedical Physics Department & Instituto de Tecnologías en Detección y Astropartículas Comisión Nacional de Energía Atómica Centro Atómico Bariloche Av. E. Bustillo\nTechnology Department Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT) Av. Complutense 40\n9500 8400, 28040San Carlos de Bariloche, MadridArgentina, Spain\n"
]
| [
"Medical Physics Department & Instituto de Tecnologías en Detección y Astropartículas Comisión Nacional de Energía Atómica Centro Atómico Bariloche Av. E. Bustillo\nTechnology Department Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT) Av. Complutense 40\n9500 8400, 28040San Carlos de Bariloche, MadridArgentina, Spain",
"Medical Physics Department & Instituto de Tecnologías en Detección y Astropartículas Comisión Nacional de Energía Atómica Centro Atómico Bariloche Av. E. Bustillo\nTechnology Department Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT) Av. Complutense 40\n9500 8400, 28040San Carlos de Bariloche, MadridArgentina, Spain"
]
| []
| The age of exascale computing has arrived and the risks associated with neutron and other atmospheric radiation are becoming more critical as the computing power increases; hence, the expected Mean Time Between Failures will be reduced because of this radiation. In this work, a new and detailed calculation of the neutron flux for energies above 50 MeV is presented. This has been done by using state-of-the-art Monte Carlo astroparticle techniques and including real atmospheric profiles at each one of the next 23 exascale supercomputing facilities. The atmospheric impact on the flux and seasonal variations were observed and characterised, and the barometric coefficients for high-energy neutrons at each site were obtained. With these coefficients, potential risks of errors associated with the increase in the flux of energetic neutrons, such as the occurrence of single event upsets or transients, and the corresponding failure-in-time rates, can be anticipated just by using the atmospheric pressure before the assignation of resources to critical tasks at each exascale facility. For more clarity, examples of how the rate of failures is affected by cosmic rays are included, so administrators will better anticipate which more or less restrictive actions they could take for overcoming errors. Traditionally, general fault-tolerant behaviour has been achieved by redundancy and checkpointing mechanisms. Isolated redundancy is not an ideal approach for HPC as it leads to performance loss, but it has provided good results in HTC environments (Desktop, Grid, Cloud) or combined with additional methods. Checkpointing techniques have provided good results on a three-fold basis (system-, user-, and application-level) and have demonstrated a wide scenario of solutions on coordinated and uncoordinated actions, roll-back and roll-forward strategies, mono- and multilevel checkpointing, etc. Even more, and beyond the proper interest of resilience, a consequence of the increase of parallelism both on the hardware and application sides was a series of problems related to task scheduling. The idea was to assign tasks to resources trying to avoid starvation, deadlocks, and performance losses, all while having the cluster as full as possible. This computing efficiency improvement could be achieved by profiting from a proactive (not reactive to failures) checkpointing strategy that could be designed as part of the resource manager scheduler. For example, and among other results, the user-level checkpointing library DMTCP was seamlessly integrated into Slurm [3]. By designing several dynamic scheduling algorithms and profiting from a new command (smigrate), a more resilient system was provided in which proactive checkpointing actions could also be performed for enhancing the computing and energy efficiency by dynamically migrating tasks previously saved with such a checkpoint, with low overhead. This fact has opened the door to new possibilities such as non-invasive maintenance operations, job preemption, more advanced priority policies, lower energy consumption, etc. Then, further advances must be envisioned once traditional checkpointing and rollback recovery strategies have been accomplished. In this regard, Silent Data Corruption (SDC) errors, or simply, silent errors (SE), have become a cornerstone in the path to exascale computing. Soft errors can be mainly classified into two categories: bit-flipping errors (e.g., 1 becomes 0) in RAM, and computation errors (e.g., 1 + 1 = 3) in floating point units.
Traditionally, bit-flipping errors have been handled by the Error Correcting Code (ECC) technique, and computation errors are dealt with by redundancy methods (ECC cannot handle computation errors). Unlike the aforementioned fail-stop failures, such latent errors cannot be detected immediately, and a mechanism to detect and overcome them must be provided, as they are becoming a major drawback as supercomputer complexity grows. In other words, failures become a normal part of application executions and, among them, SEs are nowadays those with scarce valid solutions properly tested on real environments. It has been shown that SE are not unusual and must also be accounted for [4]. The cause may be soft errors in the L1 cache, arithmetic errors in the Arithmetic Logic Unit (ALU), (double) bit flips due to cosmic radiation, etc. The problem is that the detection of a latent error is not immediate, because the error is identified only when the corrupted data is activated. One must then account for the detection interval required to detect the error in the error recovery protocol. Indeed, if the last checkpoint saved an already corrupted state, it may not be possible to recover from the error. Hence the necessity to keep several checkpoints, so a valid one could roll back to the last correct state. When dealing with SE, however, faults can propagate to other processes and checkpoints, because processes continue to participate and follow the protocol during the interval that separates the occurrence of the error from its detection. Summarizing, there is a clear necessity for overcoming SE as they are becoming inevitable with the ever-increasing system scale and execution time, and new technologies that feature increased transistor density and lower voltage. Nevertheless, the question of the source of these SE arises. The answer can be found in the atmospheric cosmic-induced radiation, in which neutrons play a key role. As neutrons are produced during the interaction of cosmic rays with the atmosphere, and since the latter experiences seasonal changes, the latitude, longitude, and altitude where a data centre hosts an exascale supercomputer, as well as the atmospheric seasonal conditions, determine the number of neutrons reaching the infrastructure and, consequently, the predicted MTBF. So, in this work, using the current techniques for calculating the flux of the expected radiation at the ground originated by the cosmic ray flux, the flux of neutrons with energy E n ≥ 50 MeV averaged per season in 23 data centres is presented. Among these places, the ones already hosting or expecting to promptly host an exascale supercomputer in China, Europe, Japan, and the United States are included. The geographic distribution of the 23 exascale supercomputing centres is shown in Figure 1 and Table 1. High-energy neutrons, i.e., neutrons with an energy higher than 10 MeV, with a total flux of about 13 neutrons cm −2 h −1 in New York at sea level [5, 6], are expected to cause SE [5], but the flux of neutrons varies with the geographical location [7], altitude [8], atmospheric [9], and geomagnetic and heliospheric conditions [10].
As it will be shown later in this work (see the 8th column of Table 1 in Section 4.1), depending on the location, the averaged flux of neutrons for E n > 50 MeV could vary between (3.7 ± 0.2) cm −2 h −1 in Guangzhou, China, at sea level and (26.4 ± 1.1) cm −2 h −1 in Los Alamos, USA, at 2,125 m above sea level (asl). The whole integration of the main source of SE (cosmic radiation) jointly with their prediction process according to the geographical place where such radiation occurs (computing infrastructure location) in a specific season of the year is expected to be useful to the administrators of these supercomputers, giving a quantitative measure of the changes in the expected flux of neutrons due to changes in the barometric pressure at the ground level. With all this information, system administrators will be capable of designing and applying different mathematical and software solutions to cope
"https://export.arxiv.org/pdf/2212.07770v1.pdf"
]
| 254,686,067 | 2212.07770 | 5945dff8caba3ed91780308ba93640a68a8e124a |
CALCULATION OF THE HIGH-ENERGY NEUTRON FLUX FOR ANTICIPATING ERRORS AND RECOVERY TECHNIQUES IN EXASCALE SUPERCOMPUTER CENTRES
December 16, 2022
Hernán Asorey
Medical Physics Department & Instituto de Tecnologías en Detección y Astropartículas, Comisión Nacional de Energía Atómica, Centro Atómico Bariloche, Av. E. Bustillo 9500, 8400 San Carlos de Bariloche, Argentina
Technology Department, Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT), Av. Complutense 40, 28040 Madrid, Spain
Rafael Mayo-García
Medical Physics Department & Instituto de Tecnologías en Detección y Astropartículas, Comisión Nacional de Energía Atómica, Centro Atómico Bariloche, Av. E. Bustillo 9500, 8400 San Carlos de Bariloche, Argentina
Technology Department, Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT), Av. Complutense 40, 28040 Madrid, Spain
CALCULATION OF THE HIGH-ENERGY NEUTRON FLUX FOR ANTICIPATING ERRORS AND RECOVERY TECHNIQUES IN EXASCALE SUPERCOMPUTER CENTRES
December 16, 2022
neutron flux · supercomputing · HPC · exascale · atmospheric radiation
The age of exascale computing has arrived and the risks associated with neutron and other atmospheric radiation are becoming more critical as the computing power increases; hence, the expected Mean Time Between Failures will be reduced because of this radiation. In this work, a new and detailed calculation of the neutron flux for energies above 50 MeV is presented. This has been done by using state-of-the-art Monte Carlo astroparticle techniques and including real atmospheric profiles at each one of the next 23 exascale supercomputing facilities. The atmospheric impact on the flux and seasonal variations were observed and characterised, and the barometric coefficients for high-energy neutrons at each site were obtained. With these coefficients, potential risks of errors associated with the increase in the flux of energetic neutrons, such as the occurrence of single event upsets or transients, and the corresponding failure-in-time rates, can be anticipated just by using the atmospheric pressure before the assignation of resources to critical tasks at each exascale facility. For more clarity, examples of how the rate of failures is affected by cosmic rays are included, so administrators will better anticipate which more or less restrictive actions they could take for overcoming errors. Traditionally, general fault-tolerant behaviour has been achieved by redundancy and checkpointing mechanisms. Isolated redundancy is not an ideal approach for HPC as it leads to performance loss, but it has provided good results in HTC environments (Desktop, Grid, Cloud) or combined with additional methods. Checkpointing techniques have provided good results on a three-fold basis (system-, user-, and application-level) and have demonstrated a wide scenario of solutions on coordinated and uncoordinated actions, roll-back and roll-forward strategies, mono- and multilevel checkpointing, etc. Even more, and beyond the proper interest of resilience, a consequence of the increase of parallelism both on the hardware and application sides was a series of problems related to task scheduling. The idea was to assign tasks to resources trying to avoid starvation, deadlocks, and performance losses, all while having the cluster as full as possible. This computing efficiency improvement could be achieved by profiting from a proactive (not reactive to failures) checkpointing strategy that could be designed as part of the resource manager scheduler. For example, and among other results, the user-level checkpointing library DMTCP was seamlessly integrated into Slurm [3]. By designing several dynamic scheduling algorithms and profiting from a new command (smigrate), a more resilient system was provided in which proactive checkpointing actions could also be performed for enhancing the computing and energy efficiency by dynamically migrating tasks previously saved with such a checkpoint, with low overhead. This fact has opened the door to new possibilities such as non-invasive maintenance operations, job preemption, more advanced priority policies, lower energy consumption, etc. Then, further advances must be envisioned once traditional checkpointing and rollback recovery strategies have been accomplished. In this regard, Silent Data Corruption (SDC) errors, or simply, silent errors (SE), have become a cornerstone in the path to exascale computing. Soft errors can be mainly classified into two categories: bit-flipping errors (e.g., 1 becomes 0) in RAM, and computation errors (e.g., 1 + 1 = 3) in floating point units.
Traditionally, bit-flipping errors have been handled by the Error Correcting Code (ECC) technique, and computation errors are dealt with by redundancy methods (ECC cannot handle computation errors). Unlike the aforementioned fail-stop failures, such latent errors cannot be detected immediately, and a mechanism to detect and overcome them must be provided, as they are becoming a major drawback as supercomputer complexity grows. In other words, failures become a normal part of application executions and, among them, SEs are nowadays those with scarce valid solutions properly tested on real environments. It has been shown that SE are not unusual and must also be accounted for [4]. The cause may be soft errors in the L1 cache, arithmetic errors in the Arithmetic Logic Unit (ALU), (double) bit flips due to cosmic radiation, etc. The problem is that the detection of a latent error is not immediate, because the error is identified only when the corrupted data is activated. One must then account for the detection interval required to detect the error in the error recovery protocol. Indeed, if the last checkpoint saved an already corrupted state, it may not be possible to recover from the error. Hence the necessity to keep several checkpoints, so a valid one could roll back to the last correct state. When dealing with SE, however, faults can propagate to other processes and checkpoints, because processes continue to participate and follow the protocol during the interval that separates the occurrence of the error from its detection. Summarizing, there is a clear necessity for overcoming SE as they are becoming inevitable with the ever-increasing system scale and execution time, and new technologies that feature increased transistor density and lower voltage. Nevertheless, the question of the source of these SE arises. The answer can be found in the atmospheric cosmic-induced radiation, in which neutrons play a key role. As neutrons are produced during the interaction of cosmic rays with the atmosphere, and since the latter experiences seasonal changes, the latitude, longitude, and altitude where a data centre hosts an exascale supercomputer, as well as the atmospheric seasonal conditions, determine the number of neutrons reaching the infrastructure and, consequently, the predicted MTBF. So, in this work, using the current techniques for calculating the flux of the expected radiation at the ground originated by the cosmic ray flux, the flux of neutrons with energy E n ≥ 50 MeV averaged per season in 23 data centres is presented. Among these places, the ones already hosting or expecting to promptly host an exascale supercomputer in China, Europe, Japan, and the United States are included. The geographic distribution of the 23 exascale supercomputing centres is shown in Figure 1 and Table 1. High-energy neutrons, i.e., neutrons with an energy higher than 10 MeV, with a total flux of about 13 neutrons cm −2 h −1 in New York at sea level [5, 6], are expected to cause SE [5], but the flux of neutrons varies with the geographical location [7], altitude [8], atmospheric [9], and geomagnetic and heliospheric conditions [10].
As it will be shown later in this work (see the 8th column of Table 1 in Section 4.1), depending on the location, the averaged flux of neutrons for E n > 50 MeV could vary between (3.7 ± 0.2) cm −2 h −1 in Guangzhou, China, at sea level and (26.4 ± 1.1) cm −2 h −1 in Los Alamos, USA, at 2,125 m above sea level (asl). The whole integration of the main source of SE (cosmic radiation) jointly with their prediction process according to the geographical place where such radiation occurs (computing infrastructure location) in a specific season of the year is expected to be useful to the administrators of these supercomputers, giving a quantitative measure of the changes in the expected flux of neutrons due to changes in the barometric pressure at the ground level. With all this information, system administrators will be capable of designing and applying different mathematical and software solutions to cope
Introduction
Exascale computing presents several issues, with fault tolerance being one of the main ones: while the Mean Time Between Failures (MTBF) of the hardware components (from coolers to memories or random issues) does not grow as fast as the number of resources, the number of cores on a hardware unit experiences continuous growth, and so the probability of one or more tasks being affected by a failure increases [1]. For example, large parallel jobs may fail as frequently as once every 30 minutes on exascale platforms [2]. Also, the higher the number of tasks composing a job, the higher the computational and economic losses associated with the increasing number of failures. Although these issues pose enough of a risk, additional factors are now coming into play: clusters designed for lower energy consumption and fed with a lower voltage; smaller circuits that are more easily upset because they carry smaller charges and are more prone to hardware failures; supercomputers (partially) built with GPU cards counting an enormous number of cores; much more complex software being executed; etc. All of the above results in a higher failure rate and, thus, lower values of the MTBF. There is therefore a need to develop tools and frameworks that reduce the impact of task and job failures on exascale supercomputers.

Figure 1: Geographic locations of the 23 exascale supercomputing centres that are being built around the World

With all this information, system administrators will be capable of designing and applying different mathematical and software solutions to cope with these SE that will produce more or less overhead. This work is expected to be a decision-making tool for the exascale supercomputers' administrators, as they will be able to determine in advance which mitigation methodologies need to be applied for overcoming SE depending on the forecasted neutron flux in a specific period of the year.
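To make the failure-rate scaling at the start of this introduction concrete, the back-of-the-envelope sketch below reproduces the order of magnitude of the 30-minute figure quoted above. The node count and per-node MTBF are round illustrative numbers (assumptions, not measurements from the cited platforms), under the usual assumption of independent, exponentially distributed node failures:

```python
# Back-of-the-envelope platform MTBF: with independent, exponentially
# distributed node failures, system MTBF = node MTBF / node count.
HOURS_PER_YEAR = 24 * 365.25

def system_mtbf_hours(node_mtbf_years: float, n_nodes: int) -> float:
    return node_mtbf_years * HOURS_PER_YEAR / n_nodes

# Assumed round numbers for an exascale-class machine (illustrative only)
mtbf_h = system_mtbf_hours(node_mtbf_years=5.0, n_nodes=100_000)
print(f"system MTBF ~ {mtbf_h * 60:.0f} minutes")   # ~26 minutes
```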
The main results from this exercise will be higher resilience, better computational efficiency, and less energy misuse in exascale supercomputers.
Related work
Fault tolerance can be defined as the capability of a system to overcome hardware, software, or communication problems and continue with the execution of applications. This field embraces several areas: the detection of failures, their avoidance if possible, and the recovery from them if not.
To achieve computational resilience, there are several methodologies for overcoming errors produced at runtime. Technical progress in resilience has been achieved in the last decade, but the problem is not actually solved and the community is still facing the challenge of ensuring that exascale applications complete and generate correct results while running on unstable systems [11]. In this regard, it should be pinpointed that current systems do not have a fully integrated approach to fault tolerance: the different subsystems (hardware, parallel environment software, parallel file system) have their own mechanisms for error detection, notification, recovery, and logging.
The current status is mostly described in a few articles. In [12], different approaches towards failure detection and prediction are presented. A state-of-the-art description of the approaches to overcome these failures is included in [11], where a more detailed explanation of checkpoint solutions is also presented. An updated status can be found in the compilation of fault detection, fault prediction, and recovery techniques in HPC systems, from electronics to system level, which also analyzes their strengths and limitations and identifies promising paths to meet the reliability levels of exascale systems [13]. These references clearly show that the problem being faced is of real interest for the next generations of supercomputers.
After a failure has been detected (even pre-emptively), checkpoints are a widely used tool devoted to saving the status of the running tasks. A recent survey of checkpointing protocols can be found in the book edited by Hérault and Robert [14]. Strategies range from coordinated checkpointing (including full and incremental ones) to uncoordinated checkpoint and recovery with message logging, each with different strengths and drawbacks [15]. Checkpointing seeks to reduce the overhead produced by replication methodologies, even though the latter still produce valid results [16].
The coordinated checkpoint technique guarantees consistent global states by enforcing all processes to synchronize their checkpoints; it is the most common practical choice due to the simplicity of recovery [17]. The obvious issue is to find a balance between the robustness of iterated checkpoints and the induced overhead. Uncoordinated checkpointing allows different processes to take checkpoints when it is most convenient, but is subject to the domino effect and does not guarantee progress. Although this issue can be avoided with message logging [18], uncoordinated checkpointing does not represent a valid alternative in the majority of current production environments and applications. Recent advances include multi-level approaches, the use of SSD or NVRAM as secondary storage [11], and replication for redundant MPI processes [19] and threads [20]. Also regarding MPI, the initial FT-MPI, introduced to enable MPI-based software to recover from process failure [21], is remarkable, as are the enlarged capacities via the Checkpoint-on-Failure protocol for forward recovery in MPI without incurring major overhead [22]. Recently, the User Level Failure Mitigation (ULFM) interface has provided new opportunities in this field, enabling the implementation of resilient MPI applications, system runtimes, and programming language constructs able to detect and react to failures without aborting their execution [23]. Another development is MANA (MPI-Agnostic Network-Agnostic transparent checkpointing) for MPI [24], which proposes a new solution especially suited for exascale [25]. The three major approaches to implementing checkpoint systems are application-, user-, and system-level (or kernel-level) implementations [26], the last one being always transparent to the user. The most popular approach is the application-level checkpoint [11], where the programmer defines which state is to be stored by injecting the checkpointing routines directly into the code, or by using some automated pre-processors. This approach keeps being of interest as new solutions are proposed, such as application-based focused recovery (ABFR) [27]. This alternative has, however, been mostly abandoned in place of the other two, and to the authors' knowledge there are currently no significant projects in the area.
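As a minimal sketch of what such an application-level scheme looks like (illustrative only; the file name, state layout, and checkpoint interval are assumptions, and this is not ABFR's or any cited library's API), the programmer decides what constitutes the restartable state and injects save/restore calls into the main loop:

```python
import os
import pickle

CKPT = "state.ckpt"                  # illustrative checkpoint file name

def initial_data():
    return [0.0] * 1024              # stand-in for the real application state

def save_state(step, data):
    # Write-then-rename so a crash mid-write cannot corrupt the last checkpoint
    tmp = CKPT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump({"step": step, "data": data}, f)
    os.replace(tmp, CKPT)

def load_state():
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as f:
            saved = pickle.load(f)
        return saved["step"], saved["data"]
    return 0, initial_data()

step, data = load_state()            # roll forward from the last valid state
while step < 10_000:
    data = [x + 1.0 for x in data]   # stand-in for one iteration of real work
    step += 1
    if step % 500 == 0:              # checkpoint interval chosen by the programmer
        save_state(step, data)
```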
With the user-level approach, a library is used to do the checkpointing and the application programs are linked to that library. The user level requires neither system privileges nor special kernel modules or kernel patches to operate. Among the active projects for transparent user-level checkpointing are DMTCP [28] and BLCR [29], which include support for distributed and multi-threaded applications and do not require modifying either the application executable or the kernel.
Concerning the state of the art of research on SE, (parallel) jobs can be interrupted at any time for checkpointing, at a nominal cost C. To deal with fail-stop failures, the execution of divisible-load applications is partitioned into same-size chunks, each followed by a checkpoint, and there exist well-known formulae by Young & Daly [30] to determine the optimal checkpointing period. To deal with SE, the simplest protocol is to perform a verification (at a cost V) just before taking each checkpoint. If the verification succeeds, then one can safely store the checkpoint and mark it as valid. If the verification fails, then an error has struck since the last checkpoint, which is known to be correct since it was verified, and one can safely recover (which takes a time R) from that checkpoint to resume the execution of the application. This protocol with verifications zeroes out the risk of fatal errors that would force restarting the execution from scratch, but the key point is to find a pattern that minimizes the expected execution time of the application. Finding the best trade-off between the error-free overhead (what is paid due to the resilience method when there is no failure during the execution) and the execution time when errors strike is not trivial [31].
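As a concrete illustration of that trade-off, the first-order Young/Daly expression gives the checkpointing period that minimizes the expected time overhead. A minimal sketch follows (the variable names and example figures are ours, chosen only for illustration):

```python
import math

def young_daly_period(mtbf_s: float, checkpoint_cost_s: float) -> float:
    """First-order Young/Daly optimal checkpointing period:
    W_opt = sqrt(2 * C * mu), with C the checkpoint cost and mu the
    platform MTBF. Valid in the usual regime where C << mu."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

# Example: a platform MTBF of 24 h and a 60 s checkpoint cost
mtbf = 24 * 3600.0   # seconds
C = 60.0             # seconds
W = young_daly_period(mtbf, C)
print(f"Optimal checkpointing period: {W / 60:.1f} min")  # ~53.7 min
```

Note how the period grows only with the square root of the MTBF: halving the platform reliability does not halve the optimal period, which is why checkpoint cost reduction is so valuable at exascale.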
Later on, a work was published on determining the real computational cost of combining replication and checkpointing [32], assessing either duplication or triplication, which can be acceptable solutions for specific scenarios (aeronautics, for example, although it also requires manufacturing specific hardware, such as the IBM S/390 in the Boeing 777 [33]). Though it does not specifically try to cope with SE, this work is of interest as it provides closed-form formulas that give the optimal checkpointing period and the optimal process count as a function of the error rate, the checkpoint cost, and the platform size. Similar work on predicting an optimal checkpointing period and its relationship with the cluster size has been recently published [34].
In addition to software techniques, SE can also be addressed with mathematical approaches. The traditional wisdom in computing no longer applies, as unorthodox, new algorithmic techniques linked to the exascale requirements are emerging. Aspects related to communication-avoiding algorithms, mixed single-double precision computations, or the inclusion of new kinds of randomised algorithms embedded in the deterministic portions of the codes are of major concern in the context of faster and more reliable solvers [35].
These new methods are insensitive to the quality of the randomness and produce highly accurate results, besides being simple and fast [36,37]. Hence, there is currently a large interest in conducting further research on them [38,39]. Specific recent works applied to GMRES [40] or to parallel stencil computations [41] also demonstrate the interest in this topic.
Last but not least, there are some works on irradiating computing hardware. More than twenty years ago, it was demonstrated that neutrons originating in cosmic radiation are the dominant source of soft errors in DRAM devices [42], and cosmic-ray induced soft error rates were measured on 16-Mb DRAM memory chips [43]. Later on, in 2002 and 2003, to prove to the manufacturers that the errors appearing in ASC-Q at Los Alamos National Laboratory were due to cosmic rays, the staff placed one of the servers in a beam of neutrons, causing the errors to spike [44]. The Jaguar supercomputer logged single-bit ECC errors at a rate of 350 min⁻¹ in 2006, as well as double-bit errors once per day, the latter being detected, but not corrected, by the ECC technique, as previously stated. Also, BlueGene/L at Lawrence Livermore National Laboratory suffered from radioactive lead in the solder causing bad data in the L1 cache, a problem that ended in slower computations as the L1 cache had to be bypassed.
The main effects of radiation on semiconductors are the total ionizing dose (TID), the occurrence of Single Event Effects (SEE), and Displacement Damage (DD). For high-energy neutrons, both elastic and inelastic interactions are possible, and the scattering produces a displacement of atoms from their position in the lattice site, resulting in defects that alter the electronic properties of the crystal, one of the main mechanisms of device degradation [45]. The neutron interacts with atoms creating DD and generating secondary charged ionizing particles: a neutron of energy E_n = 100 MeV can produce a cascade of secondary particles including secondary neutrons, protons, ions, photons and δ electrons with energies above 100 eV, extending temporal effects and permanent damage far away from the first interaction site [46]. Detailed simulations show that, while the elastic neutron-²⁸Si interaction cross-section decreases from ∼1,000 mb for E_n ≈ 8 MeV down to 450 mb at E_n ≈ 100 MeV and remains constant up to E_n ≈ 1,000 MeV, the corresponding inelastic cross-section curve starts at ∼800 mb for E_n ≈ 10 MeV, peaks at 1,000 mb at 80 MeV and then stabilizes at 200 mb for E_n ≈ 1 GeV (see Fig. 3 of [46]), where 100 mb = 0.1 barn = 10⁻²⁵ cm², meaning that about 4.2% of the incident neutrons interact with the ²⁸Si. Some typical reactions observed involve different mechanisms with energy thresholds between 2.75 and 12.99 MeV, producing α particles, such as ²⁸Si(n, α)²⁵Mg and ²⁸Si(n, 2α)²¹Ne, or neutrons, such as ²⁸Si(n, nα)²⁴Mg, or neutrons and protons, such as ²⁸Si(n, np)²⁷Al [47]. Similar reactions occur between neutrons and oxygen, increasing the probability of having errors with the incident energy, as SiO₂ is typically in the proximity of the active junction areas [48]. Alia et al.
[49] exposed commercial SRAM devices to different fluxes of protons and neutrons (5-300 MeV) and measured the effective σ_err for both types of SEE: soft errors, also known as single event upsets (SEU) in the literature, and hard (or catastrophic) errors such as the single event latch-up (SEL). By fitting their experimental data to Weibull functions, they compared the σ_SEU for neutrons at different energies with the same magnitude for energetic protons, and observed that the behaviour of σ_n,SEU depends both on the neutron energy and on the internal geometry of the device, and that σ_n,SEU tends to the σ_p,SEU of protons at E_p = 250 MeV for E_n ≳ 25 MeV (see Figure 3 of [49]).
As the incident neutron energy gets higher, the number of new reactions in the pathway increases, extending the damage and the probability of having errors from a single reaction. As will be detailed in section 4, it is possible to characterize the radiation-induced errors in computing devices by defining an effective cross-section, σ_err, a widely used magnitude to directly evaluate the radiation sensitivity of a particular device [47]. As it is an effective metric, it accounts for all the possible sources of neutron-induced computing errors, and it is experimentally measured by placing different devices in a neutron beam and calculating the ratio of the observed rate of neutron-induced errors to the injected neutron flux [50]. The Los Alamos Neutron Science Center (LANSCE) irradiation facility is one of the neutron sources typically used to measure the number of fatal soft errors; such measurements were performed in the ASC-Q supercomputer, one of the world's fastest supercomputers in 2005 [44], and in the Titan supercomputer which, composed of more than 18,000 Kepler GPUs, has a radiation-induced MTBF of the order of dozens of hours [50].
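For reference, the effective cross-section just described reduces to the ratio between the number of observed errors and the injected neutron fluence. A minimal sketch of this bookkeeping (the beam figures below are illustrative, not measured values):

```python
def effective_cross_section(n_errors: int, beam_flux_cm2_s: float,
                            exposure_s: float) -> float:
    """sigma_err = observed errors / neutron fluence, in cm^2.
    The fluence is the injected flux integrated over the exposure time."""
    fluence = beam_flux_cm2_s * exposure_s   # neutrons / cm^2
    return n_errors / fluence

# Illustrative beam test: 120 errors after 2 h in a 1e6 n/cm^2/s beam
sigma = effective_cross_section(120, 1.0e6, 2 * 3600)
print(f"sigma_err = {sigma:.2e} cm^2")       # ~1.67e-08 cm^2
```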
Thus, new works on this radiation-induced SE problem have been published more recently, focusing on determining the reliability of GPUs [6] and Xeon Phis, also applying high-level fault injection [51], where the relative σ_err for each device exposed to high-energy neutrons has been obtained. A further step forward has been the comparison between the effects of high-energy and thermal neutrons on the error rates of Commercial Off-The-Shelf (COTS) devices [52], by exposing AMD APUs (4 Steamroller CPUs + 1 AMD Radeon R7), Intel Xeon Phi processors, Nvidia K20, TitanX and TitanV GPUs and a Zynq-7000 FPGA to two beams of neutrons with energies in the range from 1 meV to 1 GeV in the ChipIR and Rotax neutron beam-lines at the ISIS Neutron and Muon Source [52]. They conclude that, while high-energy neutrons are the most important source of SE, for some applications in some computing devices thermal neutrons can account for up to 59% of the total MTBF. In a later work, an experimental evaluation of the effective cross-section σ_err of a high-energy vs a thermal neutron to generate an error in the same computing devices is provided, as well as an estimation of the thermal neutron flux modification due to materials heavily present in a supercomputer room [53].
These works also quantify and qualify the radiation effects on the applications' output, correlating the number of corrupted elements with their spatial locality, and provide the mean relative error (dataset-wise) to evaluate the magnitude of radiation-induced errors. It should not be forgotten that, as transistors get smaller, the amount of energy it takes to spontaneously flip a bit gets smaller too, i.e., as exascale arrives, the number of bit-flip errors caused by radiation increases. Also, the previous references on irradiating computing hardware are associated with either a neutron flux originated in a laboratory for quantitatively estimating SE rates, or with demonstrating how cosmic rays actually affect computations; but what about determining the natural flux that is received at any place in the world? Hence, the contribution of non-thermal neutrons to the error rate of computing devices can now be calculated for the 23 exascale data centres around the World from the work carried out in the previous references and the results provided in this work.
Atmospheric production of energetic neutrons
Cosmic rays are high-energy particles and atomic nuclei with energies from a few GeV up to 10²⁰ eV [54]. After the pioneering works of Rossi and Auger in the 1930s [55], it is well established that cosmic rays interact with the atmosphere producing cascades of particles via radiative and decay processes, collectively known as Extensive Air Showers (EAS) [56]. Depending on the energy E_p of the primary cosmic ray, an EAS can contain up to ∼10¹⁰ particles at the moment of its maximum development. The detailed analysis of these phenomena is highly complex, as many different processes can be involved as more and more particles are produced. Essentially, the shower starts in the atmosphere at the first interaction point, occurring at an atmospheric depth X₀ that depends on the primary composition and energy, where the primary interacts with an atomic nucleus of the air constituents (see for example [57]). Due to the enormous difference in energy when compared with the incoming cosmic ray, the target nuclei can be considered at rest. Since the transfer of transverse momentum at these energies is small, the increasing number of secondaries move towards the ground in the approximate direction of the primary. However, they can be dispersed, and the small transfer of transverse momentum during radiative or decay processes produces a slow drift moving the particles away from the shower axis; the particles finally remain contained in a curved, thin disk known as the shower front, which moves down to the ground in the direction pointed by the initial momentum of the primary particle. The distribution of secondary particles in the shower front is axially symmetric and the particle density decreases as a power law with the distance r to the shower axis, being well described by the Nishimura-Kamata-Greisen (NKG) lateral distribution function (LDF) [58].
Electromagnetic (EM) showers are initiated by photons or electrons, and most of the processes are mediated by QED interactions. These cascades are mainly ruled by two interaction channels: (i) e± Bremsstrahlung, and (ii) e± pair production. It is important to notice that both processes are coupled at high energies, as photons produce e± pairs by (ii), which in turn produce high-energy γs by (i). These processes continue producing EM particles that can initiate new EM sub-cascades, and more energy is transferred to the EM channel, which in turn produces new EM secondaries of lower energy. At some point in the cascade evolution during the propagation through the atmosphere, the rate of occurrence of radiative processes begins to decrease, as the mean energy at atmospheric depth X, i.e., E(X) = E_p/N(X), where N is the total number of secondaries in the cascade, drops below the critical energy E_c and the ionization losses start to dominate over the radiative losses. At this point, the cascade reaches its maximum development, with a total number of particles N_max ∝ E_p, occurring at an atmospheric depth X_max ∝ log(E_p). The cascade continues collectively moving down through the atmosphere and, once X_max is surpassed, the total number of particles N(X) starts to monotonically decrease because: (i) the radiative processes are strongly suppressed for E(X) < E_c; and (ii) the atmospheric absorption rises as the air density increases at lower altitudes.
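The scalings quoted above, N_max ∝ E_p and X_max ∝ log(E_p), can be recovered from the simplified Heitler picture of an EM cascade, in which the number of particles doubles after every splitting length until the energy per particle drops to E_c. A toy sketch, not part of the ARTI/CORSIKA machinery (the air radiation length and critical energy used below are approximate textbook values):

```python
import math

X0_EM = 37.0    # EM radiation length in air, g/cm^2 (approximate)
E_C   = 85.0e6  # critical energy in air, eV (approximate, ~85 MeV)

def heitler_em_cascade(E_p_eV: float):
    """Heitler toy model: N(X) = 2**(X/d) with splitting length
    d = X0*ln(2); multiplication stops when E_p/N drops to E_c."""
    d = X0_EM * math.log(2.0)
    n_max = E_p_eV / E_C                  # N_max proportional to E_p
    x_max = d * math.log2(E_p_eV / E_C)   # X_max proportional to log(E_p)
    return n_max, x_max

for E in (1e15, 1e17):                    # 1 PeV and 100 PeV photons
    n, x = heitler_em_cascade(E)
    print(f"E_p = {E:.0e} eV -> N_max ~ {n:.1e}, X_max ~ {x:.0f} g/cm^2")
```

For a 1 PeV photon this toy model already gives X_max ≈ 600 g/cm², of the right order for real EM showers, even though it ignores the stochastic fluctuations the full simulation must track.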
Instead, a hadron-initiated EAS typically produces new hadrons through fragmentation, and mesons through the hadronization of the resulting fragments. Those mesons, typically π± and π⁰, have different energy losses in air and, most importantly, their corresponding lifetimes and decay products are very different, having in the end a major impact on how these cascades develop. Almost all π⁰, with a lifetime of τ_π⁰ = 8.4 × 10⁻¹⁷ s [59], decay very close to their production point into two energetic γs that initiate new EM showers, transferring more energy into the EM channel. Instead, charged pions can propagate through the atmosphere down to typical altitudes of 4-6 km due to their longer lifetime, τ_π± = 2.6 × 10⁻⁸ s [59]. At these altitudes, they start to decay into charged muons µ±, generating the muonic component of the cascade. As the shower develops, energy is continuously transferred to the EM and µ channels through the decays of neutral and charged mesons. Close to the ground, 85-90% of E_p is in the EM channel, and the particle number ratios are typically 10²:1:10⁻² for the EM, muon and hadronic channels respectively [60]. The latter component is produced by hadronic interactions and so it remains close to the shower axis, as most of the hadrons move in a direction close to the original one due to the reduced transfer of transverse momentum produced by the leading-particle effect of hadronic interactions, see e.g. [61,60,62]. Therefore, the hadronic component is located in a small region close to the shower axis and is mainly composed of energetic neutrons and protons, with some light nuclei and charged pions, and small traces of other hadrons. Neutrons are mainly produced by spallation processes of protons on ¹⁴N and other nuclei in the atmosphere [63,64]. As they are the only quasi-stable neutral hadrons present in the cascade² and no ionization or radiative processes affect their propagation in the atmosphere, their evolution is only determined by elastic and quasi-elastic scattering and hadronic interactions. As explained in section 2, the energy distributions of atmospheric neutrons at different places exhibit some similarities, and the main variations are related to the location and altitude of the observation site [10,5,65,64]. Energy losses in the atmosphere produce two typical structures in the neutron energy spectrum: first, a single peak in the number of neutrons is observed at E_n ≈ 100 MeV, the so-called quasi-elastic peak; and a complex structure is observed in the 0.1 ≲ E_n ≲ 10 MeV range, caused by many resonances in the cross-sections depending on the target nuclei. At lower energies, the spectrum follows a typical E_n⁻¹ power-law distribution with the neutron energy. The exact energy at which these spectral features appear depends on several factors, such as the altitude above sea level, the geomagnetic field conditions and Solar activity, and the water vapour content in the air [66]. Due to their energy and the way they propagate through the atmosphere, these neutrons arrive at the ground with a considerable and measurable time delay with respect to the primary cascade [67].
Properly simulating the cascade evolution, taking into account all the involved physical processes and the propagation and tracking of up to ∼10¹⁰ secondary particles, is a heavily demanding computing task. To do so, several tools have been developed, but the most extended and validated one is CORSIKA [68], a program for the detailed simulation of extensive air showers initiated by high-energy cosmic ray particles, written in FORTRAN and continuously upgraded [69]. However, while it incorporates the possibility of selecting a specific atmospheric model, the values of the components of the local geomagnetic field and the altitude of the observation level, CORSIKA lacks the possibility of changing those values in a dynamic way and, most importantly, it cannot directly calculate the secondary particles at the ground produced by the integrated flux of the primary cosmic rays. These factors are significant for the calculation of the expected background radiation at any particular site around the World and under specific and time-evolving atmospheric and geomagnetic conditions. When calculating the expected flux of secondary particles, the composition of the primary flux, the local atmospheric profile and its variations along the year, and the secular changes and fast disturbances introduced by the Solar activity in the Earth's magnetic field have to be taken into account, as they affect the number of primaries impinging on the Earth's atmosphere, the evolution of the EAS in the air, and the consequent flux of secondary particles at the ground.
To accomplish these tasks in a semi-autonomous way, the Latin American Giant Observatory (LAGO) [70] developed ARTI [71], a toolkit designed to effortlessly calculate and analyze the total background flux of secondaries, and the corresponding detector signals, produced by the atmospheric response to the primary flux of galactic cosmic rays (GCR). ARTI is publicly available at the LAGO GitHub repository [72].
LAGO operates a network of water Cherenkov detectors (WCD) at different sites in Latin America, spanning different altitudes and geomagnetic rigidity cutoffs [73]. The geographic distribution of the LAGO sites, combined with the new electronics for control, atmospheric sensing, and data acquisition, allows the realisation of diverse astrophysics studies at a continental scale [74]. By using ARTI, LAGO is capable of better characterizing its distributed detection network and of determining its sensitivity to the different phenomena studied, such as the measurement of space weather phenomena [75] or the observation of high-energy transients [76].
ARTI is a computational tool that integrates CORSIKA, MAGNETOCOSMICS and Geant4 with its own control and data analysis codes, allowing the calculation of the expected integrated flux of atmospheric radiation at any geographic location under realistic and time-evolving atmospheric and geomagnetic conditions [77]. The expected flux at the ground calculated by ARTI has been contrasted and verified with measurements performed at different astroparticle observatories, as most of them take advantage of the atmospheric muon background for detector calibration [74,78,79,80,81]. ARTI has also been extensively used for different applications, such as the characterization of new high-altitude sites for the observation of steady gamma sources or astrophysical transients, such as the sudden occurrence of a gamma-ray burst [76]; the study of the impact of space weather phenomena from the ground level by using water Cherenkov detectors [74,82,83]; the calculation of the most statistically significant flux of high-energy muons at underground laboratories [83,84]; the assessment of active volcano risks in Latin America [85,86,87,88]; and even the detection of improvised explosive devices at warfare fields in Colombia [89]. In particular, we have used ARTI to estimate the expected response of water Cherenkov detectors, commonly used for astroparticle observation, to the atmospheric neutron flux and its relation with the observation of space weather phenomena [90], and for the design of new safeguard neutron detectors for the identification of traffic of fissile materials [91,92], which involves in both cases the calculation of the expected flux of atmospheric neutrons and the corresponding detector responses [90,92].
In addition to the intrinsic complexity of tracking all the relevant interactions of up to billions of particles with the atmosphere for just a single EAS, the atmospheric radiation at the ground level originates from the convolution of the cascade developments of billions of cosmic rays that simultaneously impinge on the Earth's atmosphere. Therefore, to obtain a statistically significant distribution of secondary particles at the ground, the integration time should be long enough to avoid statistical fluctuations [74,71]. For example, a typical calculation of the expected number of secondaries per square metre per day for a high-latitude site involves the computation of ∼10⁹ EAS. For this reason, ARTI is prepared to run on high-performance computing (HPC) clusters operating with the SLURM workload manager, and in Docker containers running on virtualized cloud-based environments such as the European Open Science Cloud (EOSC), being capable of storing and accessing the produced data catalogues at federated cloud storage servers [93,83].
In this work, previous calculations of the expected flux of particles at the ground level are extended, with special emphasis on the neutron flux as one of the possible sources of silent and non-silent errors, as described in the previous sections. For doing this, we selected the minimum available value of the kinetic energy cuts for hadrons in CORSIKA, i.e., E_hmin = 5 × 10⁻² GeV; thus, in the case of neutrons, they are not tracked anymore once they reach this kinetic energy limit of E_nmin = 50 MeV, which corresponds to a total energy of 989.6 MeV.
As can be inferred from the development of the showers described above, the atmosphere plays a crucial role in the final distribution of particles at the ground. Any atmospheric model describes the atmosphere's main parameters (such as the atmospheric density profile) at a given time and position. So, to account for the atmospheric impact on the cascade development, ARTI can use four different types of atmospheric models: i) the broad MODTRAN atmospheric model [94], which assigns a general profile to different areas of the World depending on latitude and season (tropical, subtropical summer and winter, arctic or antarctic summer and winter) [94]; ii) local atmospheric profiles based on Linsley's layer model [95] for predefined sites; iii) real-time atmospheric profiles extracted from the Global Data Assimilation System (GDAS)³ [96] using Linsley's model; and iv) the typically monthly-averaged atmospheric profiles calculated for a given location [9,83,93]. As we will show in the next section, by using these functionalities we can model the expected seasonal variation of the flux of secondary particles at the ground level for each one of the 23 exascale data centres shown in Figure 1.
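For reference, in Linsley's parametrization the vertical atmospheric depth of each of the lower atmospheric layers follows X(h) = a_i + b_i exp(−h/c_i). A minimal sketch with coefficients commonly quoted for the U.S. standard atmosphere in the CORSIKA documentation (the numerical values are reproduced here from memory and intended only as an illustration; a real calculation would take them from the selected model or from GDAS):

```python
import math

# Linsley layers for the U.S. standard atmosphere (approximate values as
# quoted in the CORSIKA documentation): X(h) = a + b*exp(-h/c), with a, b
# in g/cm^2 and c in cm, valid below ~100 km.
LAYERS = [  # (h_min_cm, h_max_cm, a, b, c)
    (0.0e5,    4.0e5, -186.5562, 1222.6562, 994186.38),
    (4.0e5,   10.0e5,  -94.9199, 1144.9069, 878153.55),
    (10.0e5,  40.0e5,    0.61289, 1305.5948, 636143.04),
    (40.0e5, 100.0e5,    0.0,      540.1778, 772170.16),
]

def vertical_depth(h_cm: float) -> float:
    """Vertical atmospheric depth X(h) in g/cm^2 at altitude h (cm)."""
    for h_min, h_max, a, b, c in LAYERS:
        if h_min <= h_cm < h_max:
            return a + b * math.exp(-h_cm / c)
    raise ValueError("altitude outside the parametrized range")

print(f"X(0 m)    = {vertical_depth(0.0):7.1f} g/cm^2")     # ~1036
print(f"X(2125 m) = {vertical_depth(2.125e5):7.1f} g/cm^2")  # LANL altitude
```

As a sanity check, the parametrization gives ∼1036 g/cm² at sea level and ∼800 g/cm² at the LANL altitude, consistent with the reference pressures discussed below.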
Given that all the relevant primaries are charged particles and nuclei, another important factor that should be taken into account is the secular variation of the Earth's magnetic field (EMF) and its fast disturbances. These effects can be significant for high-latitude sites, such as the CSC Kajaani data centre in Finland. As described in [77,71], ARTI incorporates specific modules to calculate the status of the EMF by using the different EMF models, taking into account both the secular variation of the EMF and its disturbances.
In the next section, we show the expected flux of atmospheric radiation at the ground and its corresponding seasonal variations for the 23 exascale supercomputing centres.

Effects in the flux of high-energy neutrons

The first step in the calculation of the expected flux at the ground is to obtain the magnetic field components B_x (north component) and B_z (vertical component) from the current version of the International Geomagnetic Reference Field (IGRF) model (IGRF13-2019) [97]. To reduce the impact produced by Solar activity, all the calculations were performed using the configuration of the EMF for December 20th, 2021, as no disturbances in the magnetosphere were observed on that day.
Once the EMF components are defined, the next step is to obtain the atmospheric profiles to be used at each of the 23 sites. For this calculation we use the monthly atmospheric profiles for 2020 at each site, obtained from two local daily profiles extracted from the GDAS database and averaged following the ARTI methodology [9], resulting in 23 × 12 = 276 atmospheric profiles. A sample of the obtained density profiles and their seasonal variations can be seen in the left panel of Figure 2, where the seasonal density profiles of Los Alamos are shown as a function of the altitude above sea level. The density profiles follow the expected seasonal variations, with denser air at the ground level in winter and a decrease in density in the summer's warm air. In the right panel of the same Figure, the expected variations along the year are shown for each atmospheric layer between the ground level and 8 km asl. These variations are characterised by the minimum, the maximum and the one-sigma deviation from the mean observed during 2020. We also included the variations observed at the High-Performance Computing Center Stuttgart (HLRS, 453 m asl), the Centre de Calcul Recherche et Technologie (CCRT, 94 m asl) and the Minho Advanced Computing Centre (MACC, 207 m asl) for comparative analysis. The observed differences in the density profiles along the year are small, at the level of a few per cent, but they are critical when observing the atmospheric radiation at the ground level, as the atmospheric depth at a given altitude h_i, defined as the integral of the atmospheric density profile within the atmospheric layer of thickness δh_i, X(h_i) = ∫_{δh_i} ρ(h′) dh′, has a direct impact on particle production, interactions and absorption at each particular layer (especially for altitudes below ∼15 km asl) and, therefore, on the final secondary particle distribution at the ground.
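The atmospheric depth and the barometric pressure used throughout this section follow from the density profile by direct numerical integration. A minimal sketch, using an isothermal exponential profile as a stand-in for the GDAS-averaged profiles (ρ₀ and the scale height H below are nominal sea-level values, not site data):

```python
import numpy as np

g = 980.665        # cm/s^2
rho0 = 1.225e-3    # g/cm^3, nominal sea-level density (assumption)
H = 8.4e5          # cm, nominal isothermal scale height (assumption)

h = np.linspace(0.0, 110.0e5, 20001)   # altitude grid up to ~110 km
rho = rho0 * np.exp(-h / H)            # stand-in density profile

def depth_and_pressure(h0_cm: float):
    """X(h0) = integral of rho above h0 (g/cm^2); P(h0) = g*X(h0) (hPa),
    with 1 hPa = 1000 dyn/cm^2."""
    mask = h >= h0_cm
    X = np.trapz(rho[mask], h[mask])   # g/cm^2
    P = g * X / 1000.0                 # hPa
    return X, P

X0, P0 = depth_and_pressure(0.0)
print(f"Sea level: X = {X0:.0f} g/cm^2, P = {P0:.0f} hPa")  # ~1029, ~1009
```

In the real calculation the tabulated GDAS densities replace the exponential profile, but the integration step is exactly this one.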
Given the stochastic nature of the development of the EAS, a large sample of showers is needed to observe these effects on the expected flux at the ground in a statistically significant manner. So the third step in the calculation is to integrate the primary spectrum j to determine the total number of primary cosmic rays, N(A, Z) = ∫∫∫∫ j dΩ dt dS dE_p, of each relevant nucleus (identified by its atomic mass A and number Z), which needs to be injected for a given integration time t, observation area S, solid angle interval Ω, and primary energy E_p range.
The cosmic ray energy spectrum ranges from the GeV scale up to more than 100 EeV and can be very well approximated by a simple monotonically decreasing power law, i.e.,
Φ(E_p, A, Z) ≃ Φ₀(E₀, A, Z) × (E_p/E₀)^α(E_p, A, Z),    (1)
where Φ(E_p) is the expected flux of the considered primary nucleus (A, Z), Φ₀ is the reference flux at a certain energy E₀ for this particular nucleus, and α is the spectral index, which depends on the primary energy and, while it can vary slightly from nucleus to nucleus, can be well approximated by α ≈ −3 for the whole spectrum. Thus, we can use this property of the primary flux to limit the upper energy when calculating the total number of primaries of each species that need to be injected. Moreover, at the PeV scale, the spectral index becomes steeper at the so-called knee of the cosmic ray spectrum, i.e., α ≈ −3.3 at E_p = 4.5 PeV [98]. At the lowest energies, primaries are much more abundant, but secondary particle production is limited and most of them are absorbed by the atmosphere before reaching the ground level. For all these reasons, we limit the primary energy range for the calculation of the expected background at the ground to E_min < E_p < 10⁶ GeV, where E_min = m(A, Z)c² + 0.1 GeV, with m(A, Z) the mass of the injected primary [71].
The second important parameter to be considered is the total integration time t. While shorter times reduce the total number of primaries that need to be simulated, the risk of the calculation being dominated by a statistical fluctuation increases as t decreases. So, in the end, a compromise has to be struck between the saving of computing resources and the statistical significance of the calculations. While typical values of t in astrophysics studies are up to a few hours [77,82], in this case we want to evaluate the atmospheric impact on the flux of secondary particles, and so we considered a total integration time t of 1.5 days, i.e., t = 129,600 s, for each month at S = 1 m² at each one of the 23 sites to reduce statistical fluctuations.
Finally, since at these energies the primary flux is isotropic, we considered all the primaries following a uniform distribution in solid angle over the complete sky hemisphere around each site, i.e., −π ≤ ϕ ≤ π and 0 ≤ θ ≤ π/2 for the local azimuth and zenith angles respectively.
Once the integration intervals are defined, the expected primary flux is integrated for all the relevant cosmic nuclei, obtaining N ≈ 1.6 × 10⁹ primaries from protons to iron (1 ≤ Z ≤ 26) for each month at each site, resulting in 4.3 × 10¹¹ simulated showers in 12 × 23 = 276 individual runs. Calculations and analysis were done using the ARTI framework v1r9 [72], including CORSIKA v7.7402 [68] for the EAS simulations, and the QGSJET-II-04 [99] and GHEISHA-2002 libraries to account for the high- and low-energy interactions respectively. The total flux of secondaries, Ξ_All, ranges from ∼700 to ∼2,000 particles per square metre per second, depending mainly on the EMF conditions, which affect the low-energy sector of the primary flux [77], and on the atmospheric profile, which has a direct influence on particle production and absorption. All the computations were performed on the ACME (equipped with Intel Gold 6138 processors) and TURGALIUM (Intel Gold 6254) clusters, demanding ∼450 kCPU·hours and occupying a storage space of 1 TB for the final binary compressed files.
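The integration of the primary spectrum described above reduces, for each nucleus, to a closed-form integral of the power law of equation (1) over energy, time, area and solid angle. A minimal sketch of this bookkeeping (the normalization Φ₀ below is an illustrative proton-like value; the actual ARTI runs use the measured spectra of each nucleus plus the local geomagnetic rigidity cutoffs, so this sketch is not expected to reproduce the quoted N ≈ 1.6 × 10⁹):

```python
import math

def n_primaries(phi0, E0, alpha, Emin, Emax, S_m2, t_s):
    """Integrate Phi(E) = phi0*(E/E0)**alpha over [Emin, Emax] (GeV), the
    observation area S, the time t and the hemisphere. For an isotropic
    flux through a flat horizontal surface, the effective solid angle is
    pi sr (the cos(theta)-weighted hemisphere)."""
    # Closed form of the energy integral, valid for alpha != -1:
    k = phi0 / (E0 ** alpha)
    energy_int = k * (Emax ** (alpha + 1) - Emin ** (alpha + 1)) / (alpha + 1)
    return energy_int * math.pi * S_m2 * t_s

# Illustrative proton-like spectrum (units: m^-2 s^-1 sr^-1 GeV^-1), with
# Emin = m_p c^2 + 0.1 GeV as in the text and t = 1.5 days
N = n_primaries(phi0=1.0e4, E0=1.0, alpha=-3.0,
                Emin=1.038, Emax=1.0e6, S_m2=1.0, t_s=129_600)
print(f"N ~ {N:.2e} primaries")
```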
Typically, secondary particles are grouped into three main components: the electromagnetic component, composed of γs and e±; the hadronic component, composed of neutrons, protons, nuclei and other baryons and mesons; and the muon µ± component. In Figure 3, the secondary momentum p_s spectra are shown for these different components for the Minho Advanced Computing Centre (MACC) in Portugal, at an altitude of 207 m asl, in February 2020. Several important features of the cascade development can be inferred from this Figure. At low p_s values the flux is dominated by the electromagnetic (EM) component. As explained in the previous section, as the shower evolves in the atmosphere, more and more energy is transferred to the EM component via particle decay and radiative processes; moreover, EM particles are coupled to each other through different radiative processes and, thus, the EM component becomes the most important one in the shower development. In the left panel of Figure 3, a significant increase in the photon flux at the 510-520 keV energy bin is seen, corresponding to the production of E_γ = 511 keV photons via pair annihilation e⁺e⁻ → γγ processes in the atmosphere. The high-energy flux, shown in the right panel of Figure 3, is dominated by muons, charged leptons that carry the same interaction charges as e± but are ∼200 times as massive. Thus, their energy losses are relatively small compared with their typical energies: dE/dX is in the range of 2-6 MeV cm² g⁻¹, i.e., 5-15 MeV cm⁻¹ in silicon, for muons in the 10⁰-10³ GeV energy range [100]. Muons at the TeV scale, such as those observed in Figure 3, possess enough energy to traverse hundreds and up to thousands of metres of rock, and can be the main source of signal in muography studies [101] or of background noise at underground laboratories [102]. For the same reason, it is almost impossible to shield critical devices from muons, where they can induce SET and SEU soft errors by ionization (for both muon charges), plus nuclear capture only for low-energy negative muons (∼50% of the total muon flux). Recent works started to analyse the impact of atmospheric muons producing soft errors in different types of devices [103,104].
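To put the quoted dE/dX figures in perspective, a continuous-slowing-down estimate gives the hundreds-to-thousands of metres of rock mentioned above. A rough sketch (it assumes a constant, near-minimum ionizing loss and deliberately ignores radiative losses, which become important above several hundred GeV, so the highest-energy range is an overestimate):

```python
RHO_ROCK = 2.65   # g/cm^3, standard rock (approximate)
DEDX_MIP = 2.0    # MeV cm^2/g, near-minimum ionizing loss (approximate)

def csda_range_m(E_GeV: float) -> float:
    """Continuous-slowing-down range of a muon in standard rock,
    assuming a constant ionization loss of DEDX_MIP."""
    range_g_cm2 = E_GeV * 1000.0 / DEDX_MIP   # g/cm^2
    return range_g_cm2 / RHO_ROCK / 100.0     # metres

for E in (10, 100, 1000):   # GeV
    print(f"E_mu = {E:5d} GeV -> range ~ {csda_range_m(E):7.0f} m of rock")
```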
Finally, at intermediate values of p_s, the non-thermal flux of atmospheric neutrons produces an important contribution to the total flux, especially at high-altitude sites. The impact of the altitude and of the local atmospheric conditions can also be seen in the same Figure, where we also included the total flux of secondaries at the MACC site for April 2020, and at Los Alamos National Laboratory (LANL, US, 2,125 m asl) for February 2020. Except for the flux of high-energy muons, which is essentially not affected by atmospheric absorption, the altitude effect is, by far, the dominant one when comparing the flux between different sites: an increase of up to 3 times in the flux of secondary particles can be observed between the MACC and LANL sites. A smaller but still statistically significant change in the flux, originating from the change in the atmospheric profile at MACC between February and April 2020, is also noticeable.
A denser atmosphere produces more absorption during the final stages of the development of the EAS and, thus, a lower number of secondary particles at the ground, producing the well-known anti-correlation between the atmospheric pressure and the rate of particles at the ground level [105]. The atmospheric effect can be easily observed by studying the atmospheric pressure P(h₀) at the ground level⁴ and the relative temporal variations of the expected flux of the secondary type j, i.e.,
ζ_j = ΔΞ_j/Ξ̄_j = Ξ_j(t)/Ξ̄_j − 1,    (2)
⁴ Atmospheric pressure at a certain altitude, P(h), can be obtained from the atmospheric profiles by simply integrating the density profile, i.e., P(h) = ∫_h^∞ g ρ(h′) dh′, where g is the acceleration due to gravity.
where Ξ_j(t) is the instantaneous flux at time t and Ξ̄_j is the reference flux. In Figure 4, the values of ζ_j for high-energy neutrons, muons and the total number of secondaries are shown together with the atmospheric pressure at the ground for the supercomputing centres of the National Energy Research Scientific Computing Center (NERSC, USA, h₀ ≈ 210 m asl) and the National Supercomputing Center in Wuxi (NSCW, China, h₀ ≈ 10 m asl). Depending on the secondary type, the atmospheric dependence can be more or less important. For example, in the right panel of Figure 4 the flux of electromagnetic particles is reduced due to the air absorption in the denser layers of the low atmosphere and, thus, the barometric modulation in Wuxi for the total flux is not as large as for neutrons. For muons, instead, the atmospheric absorption effect can be considered negligible, as can be appreciated in both panels of Figure 4, where even a correlation can be observed during part of the year at some sites. This can be explained by recalling that muons are mainly produced after charged pion decays, and so local changes in the density profiles at the muon production atmospheric depth are more relevant than the integral effect, which is related to the absorption.
On the other hand, the atmosphere has a greater impact on neutron production, propagation, moderation and absorption, as can also be seen in Figure 5, where the average, the deviation and the extrema of the expected number of neutrons at the ground per square metre and hour are shown as a function of their energy for the complete year 2020 at four sites: Los Alamos National Laboratory (LANL, 2,125 m asl), the High-Performance Computing Center Stuttgart (HLRS, 453 m asl), the Centre de Calcul Recherche et Technologie (CCRT, 94 m asl) and the Minho Advanced Computing Centre (MACC, 207 m asl). While the altitude effect is still dominant, the seasonal atmospheric variations have a noticeable effect on the flux of these high-energy neutrons (E_n > 50 MeV), even at higher energies. A detailed view of the 60-110 GeV neutron energy range is included, where a slightly significant deviation from the averaged power law is observed at E_n ≈ 75 GeV for all the sites. This deviation originates from the convolution of the decreasing energy at the production level with the increase of the neutron-nucleon cross-section at the 100 GeV scale [106].
In the right panel of the same Figure, the flux and its variations are detailed in the range 50 ≤ E_n/MeV < 450, where the neutron flux increases by a factor of 1-2 and the impact of the seasonal effects is enlarged. At LANL, for example, the expected neutron flux in the 100 MeV energy bin can vary by +15%, from 3.5 × 10⁴ up to 4.0 × 10⁴ neutrons per hour per square metre, due only to the seasonal effect.
To get a quantitative measure of the impact of the temporal variations of the atmosphere, in Figure 6 the relative variation of the flux, ζ_j, for the different types of secondaries j is shown as a function of the variation of the local atmospheric pressure at all the low-altitude (h < 1,000 m asl) data centres. The barometric effect has a different impact on each component of the showers due to their different development in the atmosphere, as is visible in this Figure from the large differences in the observed slopes for each type of particle. The biggest impact is on neutrons and other hadrons, evidencing global variations of up to +40% for a −4% decrease in the atmospheric pressure, with the flux Ξ_n ranging from 42,500 up to 68,500 neutrons per square metre per hour.
It is important to notice that, besides the obvious influence of the temperature on the air density, temperature also impacts the single-shower distribution of particles at the ground due to local changes in the lateral development of the cascade [107]. However, we are not interested in studying single EAS but in the global effect over the development of the whole primary flux in the air producing the atmospheric radiation at the ground. So, given the GCR flux isotropy and uniformity at the relevant energy ranges for this study, and the stochastic (Poissonian) and self-similar [71] nature of the atmospheric radiation production, the only effect that needs to be considered is related to the integral variation of the air density profile, i.e., the atmospheric pressure at the ground level.

Figure 6: Effect of the changes in the barometric pressure on the expected flux of electromagnetic radiation (green triangles), muons (light blue rhombuses), neutrons (blue squares) and other hadrons (yellow stars) at the ground level for the low-altitude data centres (h < 1,000 m asl). Large variations are observed in the neutron flux for relatively small changes in the pressure. Due to their different atmospheric development, each type of particle evidences a very different response to changes in the barometric pressure, as shown by the slopes of these curves. For muons, local variations are influenced more by changes in the atmospheric profile at the muon production layers than by the barometric pressure. The exponential dependence of the flux on the atmospheric pressure is visible in the slight deviation from a straight line.
Thereby, it is possible to take advantage of these effects to anticipate the flux of neutrons in different energy ranges at each data centre facility, simply by using the local atmospheric pressure at the ground as a tracer of the expected number of neutrons.
Local variations at each site are not as large as those shown in Figure 6, where all the observed seasonal variations with respect to the global means of the barometric pressure and of the flux of each type of particle are shown together for the 22 low-altitude (h < 1,000 m asl) data centres. The slight deviation from the straight line is evidence of the exponential dependence of the flux of any secondary particle j, Ξ_j(t), on the local barometric pressure p(t) at time t. Nevertheless, the observed variations of the barometric pressure at every single site are significantly smaller than the global ones and, thus, they can be modelled by:
ζ_j = β_j ΔP,    (3)
where β_j is the barometric coefficient for the secondary type j and ΔP = P(t) − P̄ is the variation of the atmospheric pressure with respect to the local reference P̄. As this can also be done for different energy ranges, in this work we considered three of them: the complete simulated energy range, (E_n ≥ 50) MeV; (50 ≤ E_n ≤ 1,000) MeV; and (E_n > 1,000) MeV; respectively labelled as i = 0, i = 1 and i = 2. For the sake of clarity, and given that we are mainly focused on the neutron flux, we can omit the subscript n, and so equation (3) can be written as:
ζ_i = β_i ΔP,    (4)
where now the subscript i refers to the corresponding neutron energy range, i = 0, 1, 2, described above. The results obtained for all the sites are compiled in Table 2. It is important to notice that slight differences can be observed in both the total flux and the barometric coefficients at sites with similar altitudes, due to differences in the atmospheric profiles and their impact on the neutron flux.
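For completeness, the barometric coefficients of Table 2 can be recovered from a monthly series of fluxes and pressures by a no-intercept least-squares fit of equation (3). A minimal sketch with synthetic data (the series below is generated from an assumed β to mimic a low-altitude site; it is not taken from the actual simulations):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic monthly barometric pressures mimicking a low-altitude site
P = np.array([998, 1001, 996, 993, 990, 989, 991, 992, 995, 997, 1000, 1002],
             dtype=float)                       # hPa
beta_true = -7.0e-3                             # per hPa (illustrative)
xi_mean = 4.3e4                                 # neutrons m^-2 h^-1

dP = P - P.mean()
# Monthly fluxes generated from equation (3) plus a little noise
xi = xi_mean * (1.0 + beta_true * dP + rng.normal(0.0, 2e-3, P.size))

zeta = xi / xi.mean() - 1.0                     # equation (2)
beta_fit = np.sum(zeta * dP) / np.sum(dP * dP)  # no-intercept least squares
print(f"beta_fit ~ {beta_fit:.2e} per hPa (true {beta_true:.1e})")
```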
From these values and using (4), it is possible to estimate the expected flux of high-energy neutrons and its variations at each site just by measuring the local atmospheric pressure, since β_i corresponds to the relative decrease (increase) in the neutron flux for a 1 hPa increase (decrease) in the local barometric pressure. For example, from the second and the fourth columns of Table 2, the reference atmospheric pressure and the global barometric coefficient for the site of Los Alamos (LANL) are P̄ = 777 hPa and β₀ = −9.2 × 10⁻³ hPa⁻¹ respectively. Therefore, on a typical sunny day at LANL, when the barometric pressure should be higher than usual, say, P(t) = 779 hPa, a reduction of β₀(P(t) − P̄) = −9.2 × 10⁻³ hPa⁻¹ × 2 hPa = −1.84 × 10⁻² ≈ −2% in the E_n ≥ 50 MeV neutron flux shall be expected. Thunderstorms, on the other hand, are preceded by a drop in the atmospheric pressure of several hPa in a few hours, with typical drop rates of at least −1 hPa h⁻¹. So, at sea level, the barometric pressure could be as low as 1,002 hPa, or even less, during a thunderstorm. Thus, for example, during the prelude of a thunderstorm at the RIKEN Center for Computational Science (RCCS) in Kobe, Japan, where the average atmospheric pressure is P̄ = 1,010 hPa, an increase of ∼6% in the flux of neutrons with energies above 50 MeV could be expected⁵, and the situation could be even worse when considering the effective moderation of neutrons produced by rain. As a consequence, an increase (decrease) in the flux of high-energy neutrons will result in a similar increase (decrease) in the probability of errors produced in the supercomputer.
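The same arithmetic can be packaged as a small estimator driven by Table 2; the sketch below reproduces the two worked examples above using the tabulated P̄ and β₀ values for LANL and RCCS:

```python
TABLE2 = {  # site: (P_ref_hPa, beta0_per_hPa), from Table 2
    "LANL": (777.0, -9.2e-3),
    "RCCS": (1010.0, -6.7e-3),
}

def flux_variation(site: str, P_now_hPa: float) -> float:
    """Relative variation of the E_n >= 50 MeV neutron flux:
    zeta_0 = beta_0 * (P(t) - P_ref), equation (4)."""
    P_ref, beta0 = TABLE2[site]
    return beta0 * (P_now_hPa - P_ref)

print(f"LANL, sunny day (779 hPa):  {flux_variation('LANL', 779.0):+.1%}")
print(f"RCCS, pre-storm (1002 hPa): {flux_variation('RCCS', 1002.0):+.1%}")
# -> about -1.8% and +5.4% (quoted as ~ -2% and ~ +6% in the text)
```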
For muons, local changes in the atmospheric profile at the muon production depth are the dominant effect. The expected average muon flux at each data centre for E_µ ≳ 15 MeV is also included in Table 1.
Space weather phenomena, such as the disturbances of the magnetosphere produced by the passage of an interplanetary coronal mass ejection (iCME) by Earth [108], also impact the flux of high-energy neutrons and, for this reason, atmospheric neutrons have been used for decades to monitor Solar activity [109]. These phenomena are observed as decreases in the total flux of atmospheric neutrons, where reductions of up to 35% could be expected for E_n ≳ 100 MeV neutrons during severe geomagnetic storms [77], and some astroparticle observatories, such as LAGO, are focused on enhancing their neutron detection capabilities [91,90].
These scenarios are important when anticipating possible errors associated with the flux of high-energy neutrons at supercomputer centres, as will be discussed in subsection 4.2.
High-energy neutrons modulations and soft error rates at supercomputers
A typical magnitude used to describe the device performance in terms of its sensitivity to radiation is the FIT (failures-in-time) rate, i.e., the number of observed failures of a certain (or any) kind in 10⁹ (one billion) hours of device operation; the total FIT rate is thus just the sum over each kind of failure: FIT = Σ_k FIT_k. From this definition, the MTBF measured in hours is just the reciprocal of the FIT rate times 10⁹:
MTBF = 10⁹/FIT.    (5)
It is possible to obtain the FIT rate from the effective cross-section σ_err, as it is just an effective measure of the probability that a neutron triggers a certain type of error in a device, and it is typically expressed in units of area (cm²) [47]. Thus, in general,

FIT_err = 10⁵ Ξ σ_err,    (6)

when the flux Ξ is expressed in units of m⁻² h⁻¹. Then, by combining this result with equations (2) and (4) for neutrons:

FIT_err(t) = 10⁵ σ_err Ξ̄_i [1 + β_i (P(t) − P̄)],    (7)
in the i-th neutron energy range, for the pressure expressed in hPa and σ_err in cm².
Oliveira et al. [53,52] irradiated different types of commercial off-the-shelf (COTS) devices by exposing them to neutron beams at energy scales from thermal up to 1 GeV, obtaining the device sensitivity to neutrons measured through the identification of detected unrecoverable errors (DUE) or SDC in APUs (CPUs+GPUs integrated in the same device), FPGAs and DDR memories. Unfortunately, they only present cross-sections "relative to the lowest one measure for each vendor to prevent the leakage of business-sensitive data" [53]. However, it is possible to see that, for all the tested devices, the thermal neutron cross-sections are far from negligible [53], although in most cases they are still considerably smaller than the corresponding effective cross-sections for high-energy neutrons (the observed differences are up to one order of magnitude for APUs). Similar conclusions can be obtained from Figure 6 of [52], where it is possible to observe that, in the presence of the nominal atmospheric flux of high-energy neutrons (E_n > 10 MeV), the FIT rates are totally dominated by them.
As mentioned in section 2, Tiwari et al. [50] analyzed the error logs of two GPU supercomputing facilities: the Titan supercomputer at the Oak Ridge National Laboratory (ORNL), consisting of 18,688 K20X GPUs, and the Moonlight GPGPU cluster at Los Alamos National Laboratory (LANL), consisting of 616 M2090 GPGPUs. By exposing K20X GPUs to the ISIS and LANSCE white neutron sources, which emulate the atmospheric neutron flux in the 10 < E_n < 750 MeV energy range [110], they were able to obtain the SDC and program-crash effective cross-sections σ_err, which are compiled in table 2 of [50] and, for the worst-case scenario, can be averaged to obtain σ_SDC = (4.8 ± 0.4) × 10⁻⁷ cm² and σ_crash = (2.7 ± 0.2) × 10⁻⁷ cm² respectively. While the energy range of the neutron sources used for the irradiation of the K20X devices is narrower than the complete energy range simulated in this work, it is possible to assume that the neutron-error cross-sections in the energy range E_n > 1,000 MeV should not be far from the reported values. Moreover, at these high energies the flux is considerably lower than in the 50 ≤ E_n/MeV ≤ 1,000 energy range, and so the error rates will be dominated by the flux within this range. Therefore, following equation (7) and using the tabulated values of P̄, Ξ̄₁ and β₁ for the ORNL site, the expected FIT_SDC rate when the atmospheric pressure drops by, say, −5 hPa with respect to the barometric reference pressure should be⁶ FIT_SDC ∼ 2,300 and so, from equation (5), the corresponding MTBF for the whole Titan supercomputer should be about 23 hours, i.e., about one silent error per day due to the expected flux of neutrons with 50 < E_n < 1,000 MeV when the atmospheric pressure drops by −5 hPa.
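Putting equations (5) and (7) together with the ORNL row of Table 2 and the averaged σ_SDC quoted above reproduces the estimate worked out in the text and in footnote 6. A minimal sketch (the per-GPU FIT rate is multiplied by the GPU count to obtain the system-level MTBF, which is the convention implicitly used above):

```python
def fit_rate(sigma_cm2, xi_ref_m2h, beta_per_hPa, P_hPa, P_ref_hPa):
    """Equation (7): FIT = 1e5 * sigma * Xi_ref * (1 + beta*(P - P_ref)),
    with the flux in m^-2 h^-1 and sigma in cm^2."""
    return 1.0e5 * sigma_cm2 * xi_ref_m2h * (
        1.0 + beta_per_hPa * (P_hPa - P_ref_hPa))

# ORNL row of Table 2 (energy range i = 1: 50-1,000 MeV) and sigma_SDC
# averaged from [50] for K20X GPUs
sigma_sdc = 4.8e-7                         # cm^2
xi1, beta1, P_ref = 4.7e4, -7.9e-3, 984.0
n_gpus = 18_688                            # Titan

fit_per_gpu = fit_rate(sigma_sdc, xi1, beta1, P_hPa=979.0, P_ref_hPa=P_ref)
mtbf_system_h = 1.0e9 / (fit_per_gpu * n_gpus)   # equation (5), whole system
print(f"FIT_SDC ~ {fit_per_gpu:.0f} per GPU; "
      f"Titan MTBF ~ {mtbf_system_h:.0f} h")     # ~2345 FIT, ~23 h
```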
Once the expected flux of neutrons is determined for each site, the calculation of the effective flux at the computing devices, including CPUs, GPUs, APUs, storage and memories, has to take into account the geometry and materials of the computing racks, the buildings and other infrastructures in the surroundings, and even the supercomputer cooling system, especially those using water or other aqueous solutions as coolants. All these components will have a profound impact on the flux of high-energy neutrons, producing thermal and epithermal neutrons with different cross-sections for the materials used in the different types of devices available in any data centre.
As a final remark, given the linearity of equations (4) and (6), it is easy to see that the relative variation of the FIT rates,
ψ_err = FIT_err(t)/FIT̄_err − 1,
where FIT̄_err is the reference FIT_err rate at the site, is equal to the relative variation ζ of the high-energy neutron flux, i.e., ψ = ζ, and so:

ψ_err = β ΔP,    (8)
that is, the FIT rate associated with the flux of high-energy neutrons at each site should evidence a small anti-correlation (β > −1% hPa⁻¹) with the local changes in the barometric pressure, whose magnitude increases with the altitude of the supercomputing centre.
Conclusions
In this work we presented the calculation of the expected flux of atmospheric neutrons of E_n ≥ 50 MeV and of its seasonal variations at each one of the 23 future sites of the next generation of exascale supercomputing facilities. This was done by simulating the interaction of the measured galactic cosmic ray flux with the atmosphere, including the real atmospheric and geomagnetic conditions at each site, and using state-of-the-art techniques and codes heavily used, tested and validated by the astroparticle physics community.
By using real atmospheric profiles, extracted from the GDAS database and averaged to obtain the atmospheric conditions for each month of 2020, the expected flux of high-energy neutrons with E_n ≥ 50 MeV and its seasonal variations at each exascale supercomputing centre were obtained and parametrised. The dependence of the total flux of particles and of the neutron flux on the atmospheric pressure was observed, and the barometric pressure coefficients for neutrons in different energy ranges were obtained; they are summarised in Table 2. The reported barometric coefficients β_i correspond to the relative change in the expected flux in the different energy ranges when the atmospheric pressure changes by ±1 hPa. The provided information makes it possible to easily estimate the expected flux of neutrons under different atmospheric conditions (equation (4)) and to evaluate the corresponding FIT rates of silent errors due to high-energy neutrons (equation (7)) and their relative seasonal variations (equation (8)). This can be done by using the instantaneous barometric pressure, which can be easily measured at each facility, providing a simple and direct way to anticipate potential silent and non-silent errors that could appear during critical calculations to be performed soon at the next generation of exascale supercomputing facilities.
To overcome the intrinsic limitation of CORSIKA for low-energy neutrons, we are currently developing a special module in ARTI, based on FLUKA [111], to extend the current calculations down to the meV neutron energy scale. Extensions of the atmospheric flux simulations using the real atmospheres presented here, but including other effects such as rain, which could double the thermal neutron flux at the ground as water droplets act as neutron moderators, as well as the corresponding Geant4 [112] simulations of the neutron moderation in the infrastructures, are being considered and will be published as a follow-up of the analysis presented here.
Declarations
Availability of data and materials
The datasets generated and analysed during the current study are available in the Zenodo repository, DOI:10.5281/zenodo.6721615. The ARTI code is available in the LAGO GitHub repository: github.com/lagoproject/arti, DOI:10.5281/zenodo.7316555.
Figure 2: Left: the atmospheric density profiles at LANL for the Winter (dotted black line), Spring (dash-dotted green line), Summer (solid red line) and Autumn (dashed yellow line) of 2020. These profiles were extracted from the GDAS database and averaged for each month. Differences of up to 7.5% can be observed in the density at the ground level at the LANL site, at an altitude of 2,125 m asl. The atmospheric profiles used extend up to an altitude of ∼110 km, corresponding to the limit of the Earth's atmosphere according to Linsley's atmospheric model [95]. Right: density variations observed at different altitudes along 2020 at LANL (solid red line), HLRS (dashed yellow line), CCRT (dotted green line) and MACC (dash-dotted black line). For each altitude between 0 and 8 km asl, candlesticks show the minimum, the maximum and the 1-sigma deviation from the mean of the density at each atmospheric layer. See Table 1 for a summary of the characteristics of each site. Altitudes were slightly shifted for the sake of clarity.
Figure 3: Linear-log (left) and log-log (right) distributions of the momentum p_s of the secondary particles expected at the ground level at MACC (207 m asl) in February 2020. The main components of the showers, i.e., the electromagnetic component (dot-dashed green line), the muons µ± (dot-long-dashed light blue line), the neutrons (solid blue line) and the other hadrons (dot-dashed yellow line), are identifiable by their own characteristics, as described in the text. The total flux for February 2020 (dotted black line) and for April 2020 is also shown to evidence the seasonal effects. However, the major impact is produced by the altitude above sea level, as can be seen by comparison with the neutron (dotted yellow line) and total (dotted red line) fluxes expected in February 2020 at the ground level at LANL (2,125 m asl).
Figure 4: Expected relative flux variations ζ_j for neutrons (blue solid line, empty squares), muons (light blue dot-dashed line, empty triangles) and all the secondaries (black dotted line, empty circles), together with the local atmospheric pressure at the ground (red dashed line, empty rhombuses, right axis), shown for each month of 2020 at the data centres of the National Energy Research Scientific Computing Center (NERSC, USA, h₀ ≈ 210 m asl, left) and the National Supercomputing Center in Wuxi (NSCW, China, sea level, right). As described in the text, except for muons, the anti-correlation is remarkable at all the studied sites, especially for the neutron flux.
Figure 5: Energy distribution of the expected flux of neutrons, Ξ_n, and its variations along 2020 at four sites: LANL (red squares), HLRS (yellow circles), CCRT (green rhombuses) and MACC (black triangles). On the left, Ξ_n in the energy range 50 < E_n < 10⁵ MeV is shown, together with the 1-sigma variation observed along the year. A slight increment in the flux is observed at E_n ≈ 80 GeV (inset), consistent with the increase of the neutron-nucleon cross-section in this energy range. The significant peak in Ξ_n, observed at E_n ≈ 100 MeV, is detailed in the right panel, where the mean, the 1σ deviations and the extrema of Ξ_n for each energy bin are also shown. It can be noticed that the flux within each energy bin is not symmetric around the mean.
Table 1: Altitude, latitude and longitude of the 23 new exascale facilities, including the total averaged flux Ξ (in m⁻² hour⁻¹) of all the secondary particles, muons and neutrons.
Table 2: Reference pressure P and neutron flux Ξ_i, and barometric coefficients β_i (in hPa⁻¹) for the i-th energy range at the 23 exascale facilities. With these values, it is possible to calculate, using equation (4), the local variations in the flux of neutrons just from the local barometric pressure (use Ξ0 and β0 for the neutron flux with E_n 50 MeV). Pressure is given in hPa; the tabulated fluxes Ξ_i are in units of 10⁴ m⁻² hour⁻¹ and the barometric coefficients β_i in units of 10⁻³ hPa⁻¹.

Site    Alt. (m)   P (hPa)   Ξ0     β0     Ξ1     β1     Ξ2    β2
LANL    2,125      777       26.4   -9.2   25.7   -9.2   7.0   -9.7
NUDT    750        927       7.7    -6.9   7.5    -6.9   1.9   -7.1
MAD     700        927       7.7    -7.7   7.5    -7.7   1.9   -7.5
SOFIA   565        940       6.9    -6.8   6.8    -6.8   1.7   -7.0
LRZ     471        950       6.4    -7.8   6.2    -7.8   1.6   -7.9
HLRS    453        952       6.3    -8.1   6.1    -8.1   1.6   -8.3
IZUM    280        974       5.3    -6.8   5.2    -6.8   1.3   -7.0
DC2     275        973       5.3    -7.9   5.2    -7.9   1.3   -8.2
IT4     261        975       5.2    -7.6   5.1    -7.6   1.3   -7.8
ORNL    250        984       4.8    -7.9   4.7    -7.9   1.2   -8.2
ANL     214        983       4.9    -8.8   4.8    -8.8   1.2   -9.6
NERSC   210        984       4.8    -7.0   4.7    -7.0   1.2   -6.4
MACC    207        986       4.8    -7.6   4.6    -7.6   1.2   -8.0
LLNL    188        987       4.7    -7.1   4.6    -7.1   1.1   -7.7
CSCF    128        978       5.1    -8.4   5.0    -8.4   1.3   -8.7
BSC     100        997       4.3    -7.7   4.2    -7.7   1.1   -7.9
JSC     100        993       4.5    -7.7   4.4    -7.8   1.1   -7.7
PSNC    100        993       4.5    -7.3   4.4    -7.3   1.1   -7.7
CCRT    94         995       4.4    -8.2   4.3    -8.2   1.1   -8.4
BOLT    40         1002      4.2    -7.2   4.1    -7.2   1.0   -7.2
NSCG    10         1015      3.7    -6.9   3.6    -6.9   0.9   -7.2
NSCW    10         1014      3.8    -6.4   3.7    -6.4   0.9   -6.5
RCCS    10         1010      3.9    -6.7   3.8    -6.7   0.9   -6.7
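As a usage sketch for this table, the snippet below applies the linear barometric form implied by the caption, Ξ_i(t) = Ξ_i [1 + β_i (P(t) − P)], to the RCCS entries. The function name corrected_flux and the example pressure of 1002 hPa are our own choices; the exact statement of equation (4) is not reproduced in this excerpt.

```python
def corrected_flux(xi_ref, beta, p, p_ref):
    """Pressure-corrected flux from the linear barometric form implied by eq. (4)."""
    return xi_ref * (1.0 + beta * (p - p_ref))

# Table 2 entries for RCCS: P = 1010 hPa, Xi0 = 3.9e4 m^-2 h^-1, beta0 = -6.7e-3 hPa^-1
xi0_ref, beta0, p_ref = 3.9e4, -6.7e-3, 1010.0

# A pressure 8 hPa below the reference enhances the neutron flux by ~5-6%,
# consistent with the estimate quoted in the footnote below.
xi0 = corrected_flux(xi0_ref, beta0, 1002.0, p_ref)
print(f"Xi0(1002 hPa) = {xi0:.3e} m^-2 h^-1 ({100 * (xi0 / xi0_ref - 1):+.1f}%)")
```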
It is possible to consider neutrons as quasi-stable particles, since their lifetime is several orders of magnitude larger than the characteristic time of the cascade evolution.
Data assimilation is the adjustment of the parameters of any specific atmospheric model to the real state of the atmosphere as measured by meteorological observations.
Since, according to Table 2, for RCCS: β0(P(t) − P) = −6.7 × 10⁻³ hPa⁻¹ × (−8) hPa ≃ 6%.
F_ITSDC = (10⁵)(4.7 × 10⁴)(4.8 × 10⁻⁷)[1 + (−7.9 × 10⁻³)(979 − 984)] = 2,345 ≃ 2,300 failures in 10⁹ device·hours of operation.
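A short sketch of this arithmetic follows. Our reading of the prefactor is an assumption, since the defining equation is not reproduced in this excerpt: the overall 10⁵ combines the 10⁹ device·hours of exposure with the m⁻²-to-cm⁻² conversion of the flux, so that σ is a per-device failure cross section in cm²; all variable names are ours.

```python
# Reproduces the worked F_ITSDC estimate above.
device_hours = 1e9        # total exposure (device * hours)
xi = 4.7e4 * 1e-4         # neutron flux: 4.7e4 m^-2 h^-1 converted to cm^-2 h^-1
sigma = 4.8e-7            # effective per-device failure cross section (cm^2)
beta = -7.9e-3            # barometric coefficient (hPa^-1)
p, p_ref = 979.0, 984.0   # instantaneous and reference pressures (hPa)

failures = device_hours * xi * sigma * (1.0 + beta * (p - p_ref))
print(f"expected failures per 1e9 device-hours: {failures:.0f}")  # ~2345
```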
Acknowledgments

This work has been partially funded by the co-funded Spanish Ministry of Science and Innovation project CODEC-OSE (RTI2018-096006-B-I00) with European Regional Development Fund (ERDF) funds, by the co-funded European Union Horizon 2020 research and innovation programme project EOSC-SYNERGY (grant agreement No 857647), and by the co-funded Comunidad de Madrid project CABAHLA-CM (S2018/TCS-4423). This work was also partially supported by the computing facilities (Turgalium) of the Extremadura Research Centre for Advanced Technologies (CETA-CIEMAT), likewise funded by the ERDF.

The authors are grateful to Antonio Juan Rubio-Montero and Angelines Alberto-Morillas from CIEMAT, Alfonso Pardo-Diaz from CETA/CIEMAT and Iván Sidelnik from CNEA for their continuous support and fruitful discussions. HA thanks Rafael Mayo-García for his warm welcome and continuous support during his stay at CIEMAT in Madrid, Spain.
On spinodal points and Lee-Yang edge singularities
March 20, 2018
arXiv:1707.06447v2 [hep-th], doi:10.1088/1742-5468/aaac4a
X. An
Department of Physics, University of Illinois at Chicago, 845 West Taylor Street, Chicago, IL 60607, USA

D. Mesterházy
Albert Einstein Center for Fundamental Physics, Institute for Theoretical Physics, University of Bern, Sidlerstrasse 5, 3012 Bern, Switzerland

M. A. Stephanov
Department of Physics, University of Illinois at Chicago, 845 West Taylor Street, Chicago, IL 60607, USA
We address a number of outstanding questions associated with the analytic properties of the universal equation of state of the φ⁴ theory, which describes the critical behavior of the Ising model and ubiquitous critical points of the liquid-gas type. We focus on the relation between spinodal points that limit the domain of metastability for temperatures below the critical temperature, i.e., T < T_c, and Lee-Yang edge singularities that restrict the domain of analyticity around the point of zero magnetic field H for T > T_c. The extended analyticity conjecture (due to Fonseca and Zamolodchikov) posits that, for T < T_c, the Lee-Yang edge singularities are the closest singularities to the real H axis. This has interesting implications, in particular, that the spinodal singularities must lie off the real H axis for d < 4, in contrast to the commonly known result of the mean-field approximation. We find that the parametric representation of the Ising equation of state obtained in the ε = 4 − d expansion, as well as the equation of state of the O(N)-symmetric φ⁴ theory at large N, are both nontrivially consistent with the conjecture. We analyze the reason for the difficulty of addressing this issue using the ε expansion. It is related to the long-standing paradox associated with the fact that the vicinity of the Lee-Yang edge singularity is described by Fisher's φ³ theory, which remains nonperturbative even for d → 4, where the equation of state of the φ⁴ theory is expected to approach the mean-field result. We resolve this paradox by deriving the Ginzburg criterion that determines the size of the region around the Lee-Yang edge singularity where mean-field theory no longer applies.
Introduction
The universality of critical phenomena makes the knowledge of the equation of state of the Ising model or, more broadly, φ 4 field theory, important to the study of a wide range of phenomena [1] from Curie points in magnets and liquid-gas transitions, to the cosmologically relevant phase transition in the gauge-Higgs sector of the Standard model [2], and the phase diagram of QCD at finite density studied in heavy-ion collisions [3,4]. Owing to universality and scaling, the equation of state sufficiently close to the critical point, i.e., in the scaling region, can be characterized by a universal function of a single argument, a scale-invariant combination of two relevant variables -the magnetic field and the temperature. Determining this function is a well-posed mathematical problem which to this day, however, remains unsolved, at least in the analytically exact sense. Nevertheless, a lot is known about the equation of state [1]. This includes celebrated exact results, such as the Onsager solution of the two-dimensional Ising model in the absence of the magnetic field [5] or the Lee-Yang theorem regarding the distribution of zeros of the partition function in the complex plane of the magnetic field variable [6,7]. The equation of state near the upper critical dimension, d = 4, is also understood in terms of the perturbative Wilson-Fisher fixed point using the ε = 4 − d expansion [8][9][10][11][12]. Furthermore, there are numerous numerical studies based on the high-temperature series expansion [13,14], perturbative field-theory expansions [15], Monte Carlo lattice simulations [16][17][18][19][20], the exact renormalization group [21], as well as the truncated free-fermion space approach [22].
In this paper we focus on the analytic properties of the universal equation of state in the scaling regime near the Ising critical point as a function of a complex magnetic field H. Two notable facts will guide our discussion. The first is Lee and Yang's observation that the singularities in the complex magnetic field plane terminate two (complex conjugate) branch cuts, which according to the Lee-Yang theorem [7], must lie on the imaginary axis. These branch points, or Lee-Yang edge singularities, "pinch" the real axis as the temperature T approaches its critical value T c from above, resulting in a singularity on the real axis at zero magnetic field -the Ising critical point. The second is the observation by Fisher that the thermodynamic singularity at the Lee-Yang edge point corresponds to the critical point in the φ 3 theory [23]. The upper critical dimension of this theory is six, which means that below this dimension the critical exponent σ that characterizes the vanishing of the discontinuity at the Lee-Yang branch point is not simply given by its mean-field value 1/2. This includes the case d = 4 − ε, where the Ising equation of state is believed to be described by mean-field theory with corrections suppressed by ε. Here, we address the apparent contradiction between the conclusions of Fisher's analysis and the ε expansion around d = 4.
Analyticity of the equation of state allows one to connect high-and low-temperature domains near the critical point [24,25]. In particular, using the mean-field equation of state one can show that the Lee-Yang edge singularities, which reside on the imaginary magnetic field axis, are analytically connected to singularities that limit the domain of metastability -so-called spinodal singularities [26][27][28]. The latter reside on another Riemann sheet reachable by analytic continuation through the branch cut along the real magnetic field axis, describing the first-order phase transition at zero magnetic field. The position of these singularities on the real axis, however, is an artifact of the mean-field approximation. In fact, in 4 − ε dimensions the position of the spinodal point shifts into the complex plane by an amount of order ε 2 . We analyze this phenomenon in the framework of the ε expansion employing parametric representations of the equation of state [29][30][31][32]. Our goal is to confront the extended analyticity conjecture advanced by Fonseca and Zamolodchikov [22], which states that the complexified spinodal point is the nearest singularity to the real axis of the magnetic field.
We point out that our analysis is not complete, since the ε expansion fails to capture certain nonperturbative aspects of the universal Ising equation of state, most notably the Langer cut [33]. However, as we shall see, other important questions can nevertheless be addressed within such an approach. One has to bear in mind also that our results apply to the scaling region where the universal behavior is observed. However, many of the conclusions, such as those pertaining to the Langer cut, associated metastability and the shift of the spinodal point into the complex H-plane due to fluctuations should, arguably, remain true outside the scaling region.
We hope that the insights our study provides will contribute to a more complete picture of the φ 4 theory. In particular, our work could help develop better parametrizations of the equation of state by taking into account its correct analytic properties. The knowledge of the complex singularities of the equation of state is also important for determining the position of the QCD critical point using lattice Taylor expansion methods [34].
The outline of this article is as follows: In Sec. 2 we review the properties of the mean-field equation of state of the scalar φ 4 theory, and introduce the Lee-Yang edge singularities with their low-temperature image -the spinodal points. Next, in Sec. 3, we discuss the limitations of the mean-field approximation. In particular, we derive the Ginzburg criterion which quantifies the breakdown of mean-field theory near the Lee-Yang edge singularities. Thereafter, in Sec. 4, we employ the ε = 4 − d expansion and examine the nature of the complex-field singularities in the framework of parametric representations of the Ising equation of state. In Sec. 5 we consider the same problem from the point of view of the O(N )-symmetric φ 4 theory in the large-N limit. In the concluding section, Sec. 6, we summarize our findings. We argue that they are consistent with the extended analyticity conjecture put forward by Fonseca and Zamolodchikov and discuss the difficulty of establishing the latter rigorously in the ε expansion.
Critical equation of state and the mean-field approximation
The scalar φ 4 theory in d dimensions can be defined by the Euclidean action (or, depending on the context, the Hamiltonian divided by temperature)
S = \int d^d x \left[ \frac{1}{2}(\partial_\mu φ)^2 + \frac{r_0}{2} φ^2 + \frac{u_0}{4!} φ^4 − h_0 φ \right]. (2.1)
The expectation value of the field φ, φ , can be found by differentiating the logarithm of the partition function (free energy) with respect to h 0 . The relation between the expectation value φ and the bare parameters r 0 and h 0 (and, generally, also u 0 as well as the ultraviolet cutoff) defines the equation of state.
More specifically, we are interested in the critical point of this theory, i.e., the point in the parameter space where the correlation length ξ, measured in units of the cutoff scale, diverges. This point can be reached at h 0 = 0, by tuning r 0 → r c for any given u 0 . In fact, below the upper critical dimension, i.e., d < 4, the effective coupling runs into an infrared (IR) fixed point, the Wilson-Fisher fixed point [35] and, as a result, the dependence on the coupling u 0 and the cutoff disappears -the equation of state becomes a relation between three variables: φ , h 0 , and r 0 .
The critical φ 4 theory provides a universal description of critical phenomena in many physically different systems such as liquid-gas or binary fluid mixtures or spin systems such as uniaxial ferromagnets. For the latter, the parameter h 0 can be mapped onto the applied external magnetic field, i.e.,
h_0 ∼ H, while t ≡ r_0 − r_c, (2.2)
is proportional to the deviation of the temperature from the critical (Curie) point, i.e., t ∼ T − T_c. In terms of conveniently rescaled variables Φ = \sqrt{u_0/6}\, φ and H = \sqrt{u_0/6}\, h_0, the action Eq. (2.1) takes the form

S = \frac{6}{u_0} \int d^d x \left[ \frac{1}{2}(\partial_\mu Φ)^2 + V(Φ) \right], (2.3)

with the potential

V(Φ) = \frac{r_0}{2} Φ^2 + \frac{1}{4} Φ^4 − HΦ. (2.4)
It is clear from Eq. (2.3) that for small u_0 fluctuations are suppressed and the path integral defining the partition function of the theory can be evaluated in the saddle-point, or mean-field, approximation. In this approximation the expectation value of the field, ⟨Φ⟩ = M, is a coordinate-independent constant that minimizes the potential (2.4), i.e.,

V'(M) = −H + r_0 M + M^3 = 0. (2.5)
The correlation length ξ is defined in terms of the second derivative of the (effective) potential V at its minimum and, in the mean-field case, it is given by

ξ^{−2} = V''(M) = r_0 + 3M^2. (2.6)
The Ising critical point, ξ → ∞, is reached at H = M = r_0 = 0, and therefore

t = r_0. (2.7)
The implicit (multivalued) function M(t, H) defined by Eq. (2.5) represents the mean-field equation of state of the φ⁴ theory (or Ising model). It is clear from Eq. (2.6) that above the critical temperature of the Ising model, i.e., for t = r_0 > 0, the correlation length is finite for all real values of H. However, solving for V'(M) = V''(M) = 0, we find points on the imaginary axis, where ξ → ∞ for t > 0:

M_{LY} = ±\frac{i}{\sqrt{3}}\, t^{1/2} and H_{LY} = ±\frac{2i}{3\sqrt{3}}\, t^{3/2}. (2.8)
For t > 0, these branch points of M (H), known as Lee-Yang (LY) edge singularities, terminate cuts that lie on the imaginary H axis (according to the Lee-Yang theorem [6,7]). They pinch the real H axis as the temperature T approaches its critical value T c , i.e., t → 0. On the other hand, below the critical temperature, t < 0, the mean-field approximation predicts that the correlation length, given by Eq. (2.6), diverges at real values of M and H. These so-called spinodal points are located on the metastable branch and limit the domain of metastability [26][27][28], as shown in Fig. 1.
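As a quick check, Eq. (2.8) follows directly from Eqs. (2.5)-(2.7): setting V''(M) = t + 3M^2 = 0 gives M^2 = −t/3, i.e., M_{LY} = ±\frac{i}{\sqrt{3}}\, t^{1/2} for t > 0, and substituting into V'(M) = 0 then yields

H_{LY} = t M_{LY} + M_{LY}^3 = M_{LY}\left(t + M_{LY}^2\right) = \frac{2t}{3}\, M_{LY} = ±\frac{2i}{3\sqrt{3}}\, t^{3/2}.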
An important property of the critical equation of state is scaling 1 [26,36]: the relation between M, t, and H is invariant under simultaneous rescaling of these variables according to their scaling dimensions,

t → λt, H → λ^{βδ} H, and M → λ^{β} M, (2.9)
where β and δ are standard critical exponents. The mean-field equation of state in Eqs. (2.5), (2.7) scales with exponents

β = 1/2 and δ = 3 (mean field). (2.10)
Scaling implies that the equation of state can be expressed as a relation between only two scaling-invariant variables. Depending on the choice of these variables it may be represented in several different ways. For example, we may express the equation of state in the Widom scaling form [36],

y = f(x), with x ∼ t M^{−1/β} and y ∼ H M^{−δ}, (2.11)

where the symbols '∼' reflect arbitrary normalization constants which can be chosen to bring the function f(x) into canonical form. Here, we express the mean-field scaling function f(x) as

f(x) = 1 + x, (2.12)

with the scaling-invariant variables defined as

x = t M^{−1/β} and y = H M^{−δ}. (2.13)
However, the analytic properties as a function of H at fixed t are more manifest in another representation of the scaling equation of state,

w = F(z), with w ∼ H t^{−βδ} and z ∼ M t^{−β}. (2.14)

Again, the normalization constants in Eqs. (2.14) can be chosen to achieve a conventional (canonical) form for the equation of state. We choose to express the mean-field scaling function F(z) in the following form,

F(z) = z(1 + z^2), (2.15)

with the variables

w = H t^{−βδ} and z = M t^{−β}. (2.16)
The inverse of the (mean-field) function F(z), i.e., z(w), is multivalued and has three Riemann sheets associated with the high- and low-temperature regimes of the mean-field equation of state. The principal sheet, which represents the equation of state M(H) for t > 0, features two branch points. They are located on the imaginary axis in the complex w plane,

w_{LY} = ±\frac{2i}{3\sqrt{3}}, (2.17)

and correspond to the Lee-Yang edge singularities at imaginary values of the magnetic field H, cf. Eqs. (2.8).
Going under either one of the associated branch cuts, e.g., by following the path shown in Fig. 2, one arrives on the secondary sheet, which corresponds to the metastable branch of the equation of state at t < 0. The same branch point in Eq. (2.17) viewed from this sheet represents the spinodal point located at real negative H. To arrive on the stable t < 0 branch, i.e., H > 0, one has to follow the circular path further in the anticlockwise direction, as shown in Fig. 2 (right). We conclude that, in the mean-field approximation, the spinodal points and the Lee-Yang edge singularities are manifestations of the same singularities of the scaling equation of state z(w).
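This continuation is easy to trace numerically. The following is a minimal Python sketch, assuming the canonical mean-field form w = z + z^3 of Eq. (2.15); the values H = |t| = 1 are arbitrary illustrative choices. It rotates arg t from 0 to −π at fixed H > 0, follows the root z(w) of the cubic continuously, and recovers a real, positive magnetization on the stable low-temperature branch, as in Fig. 2:

import numpy as np

# Trace z(w) for the mean-field equation of state w = z + z^3 (Eq. (2.15))
# along the path of Fig. 2: t = |t| e^{i*phase}, phase from 0 to -pi, H fixed.
H, abs_t = 1.0, 1.0          # illustrative values; any H, |t| > 0 would do
z_prev = None
for phase in np.linspace(0.0, -np.pi, 400):
    t = abs_t * np.exp(1j * phase)
    w = H * t**(-1.5)                        # w = H t^{-beta*delta}, beta*delta = 3/2
    roots = np.roots([1.0, 0.0, 1.0, -w])    # the three sheets: z^3 + z - w = 0
    if z_prev is None:
        z = roots[np.argmin(np.abs(roots.imag))]      # real root on the t > 0 branch
    else:
        z = roots[np.argmin(np.abs(roots - z_prev))]  # continue the branch smoothly
    z_prev = z

# On the stable branch (t < 0, H > 0) the magnetization M = z t^{1/2} is real:
M = z_prev * (abs_t * np.exp(-1j * np.pi))**0.5
print(M)   # approximately 1.3247 + 0j, the real root of M^3 - M = 1

Following the root of the cubic continuously is precisely what implements the passage between the three Riemann sheets: along this path the variable w crosses the Lee-Yang cut at w = i onto the metastable sheet and reaches the stable branch at arg t = −π.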
Beyond the mean-field approximation
The mean-field approximation relies on the smallness of the coupling u_0. This is justifiable for d ≥ 4, where the coupling runs into the Gaussian IR fixed point and becomes arbitrarily small as ξ → ∞. For d < 4, the Wilson-Fisher (WF) fixed-point value of the coupling, u_0^{WF} = O(ε), is also small as long as ε = 4 − d ≪ 1. However, for the most interesting case d = 3 the theory is nonperturbative and we cannot rely on the mean-field approximation. We would like to address the following question: what happens with the spinodal points and Lee-Yang edge singularities in this case? We shall begin with general considerations and later consider the case of small ε.
Langer cut and Fonseca-Zamolodchikov conjecture
According to the Lee-Yang theorem [6,7] the singularities of the Ising model, and thus, by universality, of the φ⁴ theory, must be located on the imaginary axis of H. Thus the result of the mean-field theory that the Lee-Yang edge singularities (and their associated cuts) are on the imaginary axis holds in general. 2 What happens to the spinodal singularities away from mean-field? As we discussed, the scaling equation of state z(w) describes both high- and low-temperature branches of M(H), which correspond to primary and secondary Riemann sheets of the variable w. The Lee-Yang edge singularities are described by w_{LY}, which lie on the imaginary w axis because w ∼ Ht^{−βδ} and for t > 0 the value of H at the singularity, H_{LY}, is imaginary. Thus analyticity and scaling imply that there must also be singularities on the low-temperature branch t < 0, at values of H given by:
H_{sp} ∼ w_{LY}\, t^{βδ} = ∓\,|w_{LY}\, t^{βδ}|\, e^{±iπ(βδ − 3/2)}, t < 0, (3.1)
where the different signs correspond to the two (complex conjugate) values of w_{LY} and to the two possible directions of rotation from t to −t = e^{±iπ} t. Thus, in general, the spinodal points H_{sp} are displaced from the (negative) real H axis by a phase

Δφ = π\left(βδ − \frac{3}{2}\right), (3.2)

where βδ > 3/2 below the upper critical dimension, i.e., for d < 4 (cf. Eq. (4.3)), and βδ = 3/2 for d ≥ 4.

Figure 3: Analytic continuation t → −t from the principal, i.e., high-temperature sheet (left panel) to the low-temperature sheet (right panel) of the scaling function z(w) of the Ising theory as conjectured by Fonseca and Zamolodchikov, where w ∼ Ht^{−βδ}, while keeping the magnetic field H > 0 fixed at d = 4 − ε. After analytic continuation the metastable branch H < 0 can be accessed by rotating H clockwise in the complex plane, while keeping t < 0 fixed. The line representing the Langer cut is rotated away from the imaginary axis by an angle Δφ, cf. Eq. (3.2).
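To make the phase in Eq. (3.1) explicit, pair, say, w_{LY} = +i|w_{LY}| with the continuation t → |t|\,e^{−iπ}; then

H_{sp} ∼ w_{LY}\, t^{βδ} = |w_{LY}|\,|t|^{βδ}\, e^{iπ(1/2 − βδ)} = −|w_{LY}|\,|t|^{βδ}\, e^{−iπ(βδ − 3/2)},

i.e., the point sits at an angle Δφ = π(βδ − 3/2) away from the negative real H axis (the conjugate pairing gives the mirror point). For βδ = 3/2 (mean field) it lies exactly on the axis.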
In order to understand the position of the points described by Eq. (3.1) it is important to take into account another property of the equation of state in the low-temperature domain, the Langer cut [33]. It is well known that the Ising equation of state is weakly singular at H = 0 for t < 0, due to the presence of an essential singularity [26,37,38] associated with the decay of the metastable vacuum [39][40][41]. The rate of this decay gives the imaginary part of the free energy F(t, H) for H on the metastable branch at t < 0 and, since M = ∂F/∂H, also the imaginary part of the magnetization M(t, H). Near d = 4, it takes the form (for w ≪ 1)

\mathrm{Im}\, M(t, H) ∼ \exp\left(−\frac{\mathrm{const}}{\tilde u_0\, |w|^3}\right), (3.3)
demonstrating that there is an essential singularity, which is nonperturbative in u 0 . Not only is this singularity absent in the mean-field equation of state, but it cannot be seen at any finite order of the ε expansion. The imaginary part of M is discontinuous (changes sign by Schwarz reflection principle) across the real axis of H on the metastable branch, which corresponds to a cut, known as the Langer cut [33]. This cut can be reached from the stable low-temperature branch (H > 0, t < 0) by rotating H along a semicircle in the complex H plane, such that H → −H. Thus, its location in the complex w plane should be as shown in Fig. 3. If we translate Fig. 3 into the H plane, using w ∼ Ht −βδ (with t < 0), we find that the spinodal point can be found under the Langer cut as shown in Fig. 4, assuming, of course, that we start from the stable H > 0 branch. It is therefore natural to expect that the spinodal singularity (which is also the Lee-Yang edge singularity) is the closest singularity to the real axis (i.e., to the Langer cut). This is the essence of the "extended analyticity" conjecture put forward by Fonseca and Zamolodchikov [22]. Here, our goal is to see what one can say about the singularities of the equation of state and the validity of the conjecture using the ε expansion as well as large-N limit of the O(N )-symmetric φ 4 theory.
Lee-Yang edge singularities and Ginzburg criterion
As we discussed in Sec. 2, the mean-field (saddle-point) approximation is controlled by the quartic coupling u_0. For d < 4, in the scaling regime, the coupling is given by the IR (Wilson-Fisher) fixed-point value of order ε = 4 − d. This means that the true scaling equation of state should approach the mean-field one as ε → 0. However, this approach is not uniform, especially at the Lee-Yang edge singularities, which are the focus of this study.
The issue was first raised by Fisher, who observed that the singular behavior near the Lee-Yang point is described by a φ³ theory [23]. This theory has an IR fixed point, albeit somewhat formally, since it occurs at imaginary values of the cubic coupling. The exponents (anomalous dimensions) can be calculated by an expansion around the upper critical dimension d = 6 of the φ³ theory, where the theory becomes perturbative in 6 − d. However, the φ³ theory is nonperturbative in d = 4. In particular, the singular behavior in the vicinity of the Lee-Yang point,

M − M_{LY} ∼ (H − H_{LY})^{σ}, (3.4)
is characterized by the exponent σ ≈ 0.26 in d = 4 [42][43][44][45], which differs significantly from the mean-field result σ = 1/2. We appear to be facing a paradox. On the one hand, mean-field theory should become valid as d → 4. On the other hand, this approximation fails to account for the correct exponent at the Lee-Yang point in the same limit. There is no contradiction, of course. The reason that mean-field theory becomes precise for d → 4 is that the importance of fluctuations diminishes as the fixed-point value of the coupling vanishes at d = 4. However, at any given value of ε (and t), the magnitude of the fluctuations themselves increases as we approach the Lee-Yang points, since the correlation length ξ diverges at those points. In other words, the (squared) magnitude of fluctuations is proportional to the isothermal susceptibility M'(H), which diverges as H → H_{LY}.
We are, therefore, led to seek a condition, similar to the Ginzburg criterion in the theory of superconductors [46], which determines how close the Lee-Yang edge singularity can be approached before mean-field theory breaks down. Even though the critical exponents, such as σ, cannot be determined reliably in the mean-field approximation, the domain of the validity of that approximation can be.
At the Lee-Yang point H = H_{LY} the mean-field potential Eq. (2.4) takes the following form,

V(Φ)\big|_{H = H_{LY}} = \frac{t^2}{12} + \frac{i}{\sqrt{3}}\, t^{1/2}\, (Φ − M_{LY})^3 + \frac{1}{4}(Φ − M_{LY})^4. (3.5)
It describes a massless φ³ theory with imaginary cubic coupling (t > 0). When H ≠ H_{LY} a quadratic (mass) term appears. Expanding in H − H_{LY} we find

V(Φ) = \frac{t^2}{12} + (−3t)^{1/4}(H − H_{LY})^{1/2}(Φ − M_{LY})^2 + \frac{i}{\sqrt{3}}\, t^{1/2}\, (Φ − M_{LY})^3 + \frac{1}{4}(Φ − M_{LY})^4 + \ldots, (3.6)
where we show only the leading-order contribution to each of the coefficients and the ellipsis denotes the subleading terms. From Eq. (3.6) we can determine the correlation length, given by Eq. (2.6), for small H − H_{LY}. The result can be written in the following scaling form,

ξ^{−2} = 2(−3)^{1/4}\, t\, (w − w_{LY})^{1/2} + \ldots, (3.7)
where w and w_{LY} are given by (2.16) and (2.17), respectively. This analysis relies on the mean-field approximation and, therefore, assumes that fluctuations can be neglected. The relative importance of fluctuations is determined by the quartic coupling u_0, which is most evident in Eq. (2.3), where u_0 controls the applicability of the saddle-point approximation to the path integral. In 4 − ε dimensions, this coupling runs to the Wilson-Fisher fixed point in the IR, i.e., u_0 → u_0^{WF} = O(ε), and therefore a mean-field (saddle-point) analysis is justified for sufficiently small ε. How small ε, or u_0, should be, however, depends on the value of the scaling-invariant variable w. For a generic value away from w_{LY} the condition is simply ε ≪ 1. However, as w → w_{LY} the correlation length diverges, fluctuations are enhanced, and the condition on ε becomes more restrictive.
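Tracing the powers of t behind Eq. (3.7) explicitly: the quadratic term in Eq. (3.6) gives ξ^{−2} = V''(M) = 2(−3t)^{1/4}(H − H_{LY})^{1/2} + …, and writing H − H_{LY} = t^{3/2}(w − w_{LY}) (cf. Eq. (2.16)) one finds

ξ^{−2} = 2(−3)^{1/4}\, t^{1/4}\, t^{3/4}\, (w − w_{LY})^{1/2} = 2(−3)^{1/4}\, t\, (w − w_{LY})^{1/2},

as quoted above.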
As w → w_{LY}, the relative importance of fluctuations is controlled by the most relevant coupling, the cubic coupling g_3, which can be read off as the coefficient of the (Φ − M)³ term in Eq. (3.6), i.e., g_3 ∼ i(u_0 t)^{1/2}. Note that a factor \sqrt{u_0} must be included in order to restore the canonical normalization of the field, Φ = \sqrt{u_0/6}\, φ. The mass dimension of the cubic coupling g_3 is (6 − d)/2 and thus its relative importance is determined by the dimensionless combination \tilde g_3 ≡ g_3\, ξ^{(6−d)/2} which, according to Eq. (3.7), is given by

\tilde g_3 ∼ \tilde u_0^{1/2}\, |w − w_{LY}|^{−(6−d)/8} + \ldots, (3.8)

where \tilde u_0 ≡ u_0\, t^{(d−4)/2} = u_0\, t^{−ε/2}.
The mean-field analysis is applicable near the Lee-Yang edge singularity only as long as \tilde g_3 ≪ 1. 3 For 0 < ε ≪ 1, this yields the following requirement,

|w − w_{LY}| ≫ ε^2, (3.9)
where we replaced \tilde u_0 with its IR fixed-point value \tilde u_0^{WF} ∼ ε. Eq. (3.9) is the Ginzburg criterion that determines the size of the critical region around the Lee-Yang point. Inside this region the mean-field approximation breaks down and the correct scaling near that point is given by the fixed point of the φ³ theory, which is nonperturbative in d = 4. One can also say that a typical condition for the mean-field approximation to apply, ε ≪ 1, is not sufficient near the Lee-Yang points, where a stronger condition becomes necessary: Eq. (3.9).
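The arithmetic behind Eq. (3.9) is worth spelling out: squaring Eq. (3.8) at d = 4 − ε, where (6 − d)/4 = (2 + ε)/4 → 1/2 as ε → 0, and using \tilde u_0 ∼ ε gives

\tilde g_3^2 ∼ ε\, |w − w_{LY}|^{−1/2},

so demanding \tilde g_3^2 ≪ 1 is equivalent to |w − w_{LY}| ≫ ε^2, which is Eq. (3.9).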
It is instructive to consider also the case 4 < d < 6. The critical behavior simplifies as the scaling is now controlled by the Gaussian IR fixed point. However, we cannot simply set the coupling u_0 to zero since the action becomes singular in this limit (cf. Eq. (2.3)). In other words, for d > 4, the coupling u_0 is a dangerously irrelevant variable [47]. In this case the equation of state depends on u_0 in addition to the variables t and H. Repeating the arguments leading to Eq. (3.8) we conclude that, for 4 < d < 6, no matter how small the coupling u_0 is, the mean-field approximation will break down sufficiently close to the Lee-Yang point, with the Ginzburg criterion given by

|w − w_{LY}| ≫ (\tilde u_0)^{4/(6−d)}. (3.10)
Finally, for d ≥ 6 the variable w is not constrained by the Ginzburg criterion, and the condition \tilde u_0 ≪ 1 is sufficient for mean-field theory to apply for all w. This corresponds to the fact that d = 6 is the upper critical dimension of the φ³ theory, and the exponent σ in Eq. (3.4) takes the mean-field value 1/2 for d ≥ 6, in accordance with [23].

Scaling equation of state at d = 4 − ε

In this section we shall review the known results on the ε expansion of the equation of state relevant for our discussion. Since the fixed-point value of the coupling u_0 is small near d = 4, the equation of state can be calculated perturbatively in ε = 4 − d [8][9][10][11][12]. In terms of the rescaled variables t, M, and H, one finds to order ε² [10,11] 4

\frac{H}{M} = t + \frac{u}{8}\, r\left[1 + \ln(r) − \frac{ε}{4}\ln^2(r)\right] − \frac{u^2}{64}\, r\left[4 + π^2 − 8λ − \ln^2(r)\right] + M^2\left\{1 − \frac{3}{32}\, u^2\left[6 + \frac{1}{2}π^2 − 4λ + 3\ln(r) + \frac{1}{2}\ln^2(r)\right]\right\}, (4.1)

which reduces to the mean-field equation of state Eq. (2.5) with Eq. (2.7) when u → 0. Here, the parameter u denotes the (Wilson-Fisher) fixed-point value of the conveniently normalized quartic coupling (the normalization involves S_d, the area of the unit sphere in d dimensions), and λ = \frac{1}{9}\left[3Ψ'(1/3) − 2π^2\right] involves the first derivative of the digamma function Ψ(z) = d/dz \ln Γ(z). The inverse isothermal susceptibility r = (∂H/∂M)_t is a function of the variables t and M, i.e., r = r(t, M), and therefore Eq. (4.1) constitutes a relation between t, M, and H. As mentioned in Sec. 1, in the scaling region, Eq. (4.1) can be written in a scaling form, in terms of scale-invariant combinations of two relevant variables. Among various choices it is more convenient for our work to write Eq. (4.1) in terms of the scaling variables w ∼ Ht^{−βδ} and z ∼ Mt^{−β}, i.e., w = F(z). Expressing the critical exponents β and δ to order ε², the "gap" exponent is given by

βδ = \frac{3}{2} + \frac{1}{12}\, ε^2 + O(ε^3), (4.3)
and the series expansion in ε of the scaling function F(z) reads

F(z) = F_0(z) + F_1(z)\, ε + F_2(z)\, ε^2 + O(ε^3), (4.4)
with

F_0(z) = z + z^3, (4.5a)
F_1(z) = \frac{1}{6}\left[−3z^3 + (z + 3z^3)\, L(z)\right], (4.5b)
F_2(z) = \frac{1}{648}\left[−150z^3 + 2(25z − 6z^3)\, L(z) + 9(z + 9z^3)\, L^2(z)\right], (4.5c)
and L(z) = \ln(1 + 3z^2). 5 Note that the mean-field equation of state (2.15) is recovered in the limit ε → 0. Here, the normalization of the scaling variables w and z in Eq. (2.14) is chosen in such a way that the two lowest-order terms in the Taylor expansion
F(z) = z + z^3 + \sum_{n=2}^{\infty} F_{2n+1}\, z^{2n+1}, (4.6)

are fixed and the coefficients F_{2n+1} = O(ε) for all n ≥ 2.
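One can verify this normalization directly from Eqs. (4.5): expanding L(z) = \ln(1 + 3z^2) = 3z^2 − \frac{9}{2}z^4 + … gives

F_1(z) = \frac{1}{6}\left[−3z^3 + (z + 3z^3)\left(3z^2 − \frac{9}{2}z^4 + …\right)\right] = \frac{3}{4}\, z^5 + O(z^7),

and in F_2(z) the z^3 terms likewise cancel (−150z^3 against 2 · 25z · 3z^2 = 150z^3), so the ε corrections indeed first enter in the coefficients F_{2n+1} with n ≥ 2.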
Since the singularities of the equation of state are associated with a diverging correlation length, the equation of state must be analytic away from the Ising critical point located at t = M = H = 0. Thus, if any one of these parameters is set to a nonzero value, the relation between the other two must be analytic. This translates into the following two properties of F (z) often referred to as Griffiths' analyticity [49]. First, for fixed t > 0, we find that F (z) ∼ H is an analytic function of z ∼ M in the vicinity of z = 0, which should also be odd under reflection H → −H and M → −M . This is easily seen in the explicit expressions for F (z) in Eqs. (4.5). Second, for fixed M > 0 we find that the function z −δ F (z) ∼ H must be an analytic function of the variable z −1/β ∼ t in the vicinity of t = 0 (z = ∞). The behavior of F (z) at large z is not manifest in Eqs. (4.5) since the ε expansion of this function does not converge uniformly, due to the presence of large logarithms.
In this case, it is better to introduce the scaling variables x ∼ tM −1/β and y ∼ HM −δ (as in Eq. (2.11)), and express the equation of state Eq. (4.1) as the Widom scaling function y = f (x) [15], whose ε expansion is convergent when x → 0 (corresponding to z ∼ x −β → ∞). However, in this representation the analyticity at large x (corresponding to small z) is obscured, again due to lack of convergence of the ε expansion.
Thus it would be useful to have a representation of the equation of state where the analyticity is manifest in both regimes, i.e., a representation for which the ε expansion converges uniformly. The so-called parametric representations [29,30], reviewed below, are designed to fulfill this requirement.
Parametric equation of state
As we discussed in the previous section, the problem with representations using the pairs of scaling variables such as w and z, or y and x, is that the two points z = 0 and x ∼ z −1/β → 0, where each of them is analytic, correspond to infinitely separated points z = 0 and z = ∞ and similarly for x. This problem can be addressed by introducing a new scaling variable, θ, by means of a nonlinear variable transformation (t, M ) → (R, θ):
t = R\, k(θ), (4.7)
M = R^{β}\, m(θ), (4.8)
with analytic functions k(θ) and m(θ), chosen such that the two points x ∼ tM^{−1/β} = 0 and z ∼ M t^{−β} = 0 are placed at positions θ = 1 and θ = 0, respectively. The simplest choice satisfying these conditions is

k(θ) = 1 − θ^2 and m(θ) = \bar m\, θ, (4.9)
also known as the linear parametric model (LPM) [29,31]. Here, \bar m is a normalization constant, which can be chosen to bring the equation of state into canonical form (e.g., see Eq. (4.6)).
In the parametric representation, the equation of state becomes a relationship between H and the parameters R and θ, i.e.,

H = R^{βδ}\, h(θ). (4.10)

While R scales as the reduced temperature t, the variable θ is invariant under rescaling in Eq. (2.9). Therefore, the scaling variables w and z can be expressed in terms of θ alone, i.e.,

z ∼ M t^{−β} ∼ θ(1 − θ^2)^{−β} and w ∼ Ht^{−βδ} ∼ h(θ)(1 − θ^2)^{−βδ}. (4.11)
Inserting these expressions into the equation of state w = F(z) one can determine the function h(θ) (as well as the normalization constant \bar m) order by order in the ε expansion [15]. 6 In this section, we shall carefully examine the parametric representation obtained by matching the equation of state to order ε². Our goal is to determine the location of singularities and their uncertainty due to higher orders of the ε expansion. To focus on relevant features we present the results in the minimal form necessary for the argument, and collect explicit expressions needed for the derivation in Appendices A and B.
It is known that to order ε² the function h(θ) is given by a cubic polynomial [8,10,11],

h(θ) = \bar h\,(θ + h_3 θ^3), (4.12)

where \bar h is a normalization parameter. As we shall see, the number of singularities is determined by the order of this polynomial while their positions are related to the coefficient h_3, which can be determined by matching to the equation of state (4.6). For ε = 0 (mean-field equation of state), h_3 = −2/3.
In order to study the dependence of our results on ε we shall expand h_3 in ε. To this end we adopt the historical notation of Refs. [10,11,29,31] and express h_3 in terms of the parameter b defined by h(θ = b) = 0, i.e., the closest zero to θ = 0. Obviously,

h_3 = −\frac{1}{b^2}. (4.13)
The coefficients of the ε expansion of b²,

b^2 = \frac{3}{2} + b_1\, ε + b_2\, ε^2 + O(ε^3), (4.14)

cannot be determined by matching at order ε² (or ε³ for that matter, cf. Appendix B). It is a common choice [10,11] to set b_1 = 0, but it is not necessary and we shall allow this parameter to have an arbitrary real value. It will be helpful for understanding the ε dependence of our results.
We shall now study the singularities that arise in the linear parametric representation in order to infer the analytic properties of the scaling equation of state. Specifically, we examine the equation of state to order ε 2 in the form w = F (z), represented parametrically using Eqs. (4.11). This allows us to directly access the singularities in the complex w plane by examining the rescaled inverse isothermal susceptibility, given by
r\, t^{−γ} ∼ F'(z) = \frac{w'(θ)}{z'(θ)}, (4.15)
whose zeros correspond to the branching points of the multivalued function z(w).
In terms of the linear parametric representation, Eqs. (4.7)-(4.10) and Eq. (4.12), the scaling variables z and w are given by

z = \bar z\, \frac{θ}{(1 − θ^2)^{β}} and w = \bar w\, \frac{θ + h_3 θ^3}{(1 − θ^2)^{βδ}}, (4.16)
where the normalization parameters \bar z and \bar w, determined by matching the parametric model to the canonical equation of state Eq. (4.6) to order ε², are needed below to find the position of singularities to that order and are given by Eqs. (A.2). Substituting into Eq. (4.15) we arrive at the following expression for the inverse susceptibility,
F'(θ) = \frac{\bar w}{\bar z}\, (1 − θ^2)^{−γ}\, \frac{1 + (2βδ + 3h_3 − 1)\, θ^2 + (2βδ − 3)\, h_3\, θ^4}{1 − (1 − 2β)\, θ^2}, (4.17)

where the scaling exponents β, γ, and δ, as well as the parameters h_3, \bar w and \bar z, should be expanded to order ε².
If we set ε = 0 and use the mean-field critical exponents, β = 1/2, γ = 1, and βδ = 3/2, we observe that the only zeros of F'(θ) = (1 − θ^2)^{−1} lie at complex infinity (in the θ plane). Of course, this is consistent with the mean-field result, which is easily confirmed by examining the limit |θ| → ∞ in Eqs. (4.16), i.e., \lim_{|θ| → ∞} w(θ) = ±2i/(3\sqrt{3}), and comparing with Eq. (2.17). At nonzero ε, however, the structure of the singularities of Eq. (4.17) becomes more complicated. Now, the polynomial in the numerator has four zeros. There are also two zeros in the denominator, giving rise to two poles. In addition, there are two branch-point singularities at θ = ±1. Since w(θ = 1) = z(θ = 1) = ∞ the latter can be seen to correspond to the behavior F(z) ∼ z^{δ} (and, therefore, F'(z) ∼ z^{γ/β}) at large z, required by Griffiths' analyticity. The four zeros and two poles, on the other hand, occur at finite, albeit large, values of θ^2 = O(ε^{−1}). We shall now focus on these finite-w singularities.
Since F'(θ) is an even function of θ it is convenient to consider its singularities as a function of θ². The numerator of Eq. (4.17) vanishes at two distinct values θ²_n, which we label by indices n = 1, 2. These solutions can be expanded in powers of ε where the leading contribution appears at order ε^{−1}, i.e.,

θ_n^2 = \frac{c_n}{ε}\left[1 + O(ε)\right]. (4.18)
Substituting θ_n into Eq. (4.16), w_n ≡ w(θ_n), and expanding in ε we get

w_n = ±\frac{2i\,(−\hat c_n)^{3/2 − βδ}}{3\sqrt{3}}\left[1 + \left(ω^{(2)}(c_n, b_1) + \frac{1}{12}\ln ε\right)ε^2 + O(ε^3)\right], (4.19)
where \hat c_n ≡ c_n/|c_n|. Remarkably, only the leading-order coefficient of θ²_n, c_n, appears in this expression. 7 The coefficient c_n is a function of b_1 and for the two solutions θ²_n, n = 1, 2, we obtain

c_n = 3\left(2b_1 + (−1)^n\sqrt{1 + 4b_1^2}\right), n = 1, 2, (4.20)

with c_1 < 0 and c_2 > 0 for all real values of b_1. Note that the absolute value of w_n is determined by ω^{(2)}(c_n(b_1), b_1), a function of b_1 (see Eq. (A.3)), while the dependence on n appears only via c_n in Eq. (4.20). Nontrivially, there are no O(ε) terms in Eq. (4.19) (they cancel) and there is no dependence on b_2 to this order.
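Explicitly, the quadratic behind Eq. (4.20) arises as follows. Using βδ = \frac{3}{2} + \frac{ε^2}{12} and h_3 = −\frac{2}{3}\left(1 − \frac{2}{3}\, b_1 ε\right) + O(ε^2) from Eqs. (4.13), (4.14), the coefficients in the numerator of Eq. (4.17) become 2βδ + 3h_3 − 1 = \frac{4b_1}{3}\, ε + O(ε^2) and (2βδ − 3)\, h_3 = −\frac{ε^2}{9} + O(ε^3). Setting θ^2 = c/ε, the numerator reduces at leading order to

1 + \frac{4b_1}{3}\, c − \frac{c^2}{9} = 0, i.e., c^2 − 12 b_1 c − 9 = 0,

whose two roots are precisely Eq. (4.20).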
Inserting the coefficient c_1 into (4.19), we find

w_1 = ±\frac{2i}{3\sqrt{3}}\left[1 + \left(ω^{(2)}(c_1, b_1) + \frac{1}{12}\ln ε\right)ε^2 + O(ε^3)\right], (4.21)

which corresponds to the pair of Lee-Yang edge singularities (cf. Eq. (2.17)). As in the mean-field case they are located on the imaginary axis, in accordance with the Lee-Yang theorem. However, comparing with the mean-field result, we observe that its absolute value receives corrections of order ε², which depend on the parameter b_1. Since b_1 cannot be determined at this order of the ε expansion, practically, the position of the singularity also cannot be established to precision of order ε². This agrees with our earlier observation in Sec. 3.2 that the nonperturbative domain around the Lee-Yang edge singularities has size O(ε²), according to the Ginzburg criterion Eq. (3.9). The second pair of singularities, w_2, is located off the imaginary axis:

w_2 = ±\frac{2i}{3\sqrt{3}}\,(−1)^{3/2 − βδ}\left[1 + \left(ω^{(2)}(c_2, b_1) + \frac{1}{12}\ln ε\right)ε^2 + O(ε^3)\right]. (4.22)
One can easily see that they lie precisely where we expect the Langer cut (see Fig. 3). But what is their significance? Before answering this question, let us first consider the poles of F'(z), which can be obtained by solving

1 − (1 − 2β)\, θ^2 = 0. (4.23)
The solution θ² to this equation, which we label by the index n = 0, can also be expanded in powers of ε. The corresponding leading coefficient c_0 (cf. Eq. (4.18)) is given by

c_0 = 3, (4.24)

and according to Eq. (4.19), we find

w_0 = ±\frac{2i}{3\sqrt{3}}\,(−1)^{3/2 − βδ}\left[1 + \left(ω^{(2)}(c_0, b_1) + \frac{1}{12}\ln ε\right)ε^2 + O(ε^3)\right]. (4.25)
The position of the singularities w_0, w_1, and w_2 is shown schematically in Fig. 5 in the upper half of the complex w plane and for a generic value of b_1, according to Eq. (A.3). We observe that w_0 and w_2 lie on the same ray, which corresponds to the Langer cut of the exact equation of state. The distance between these points is given by

w_2 − w_0 = O(ε^2), (4.26)

and depends on the value of b_1. For the common and particular choice b_1 = 0, when c_2 = c_0, the two points coincide and the zero and the pole cancel each other to order ε². It is also important to note that both w_0 and w_2 (on the Langer cut) are within distance O(ε²) from the Lee-Yang edge singularity, since βδ − 3/2 = O(ε²). Therefore, according to the Ginzburg criterion in Eq. (3.9), these singularities and their position are nonperturbative. This is in agreement with the fact that we cannot determine the parameter b_1 within the ε expansion to establish their position. Furthermore, as we show in Appendix B, extending the linear parametric model to the next order, ε³, leads to terms in w_n that contribute at order ε² (in addition to the expected ε³ contribution). Thus, the procedure based on matching to increasing orders of the ε expansion does not converge in the usual sense. In spite of this, it is still tempting to speculate that the sequence of (alternating) zeros and poles line up along the ray at angle Δφ relative to the imaginary axis and will eventually coalesce into the Langer cut, a purely nonperturbative feature which cannot be reproduced at any finite order of the ε expansion. In fact, such a scenario is common in rational-function (Padé) approximations of functions with branch cuts. 8 Summarizing, we see that the Ginzburg criterion (3.9) sets the limit on the information that can be gained about the Lee-Yang edge singularities. The precision that we can reach, ε², is not sufficient to study the region between the Lee-Yang edge singularity and the Langer cut, which is necessary to test the Fonseca-Zamolodchikov conjecture. Nevertheless, the results we find are nontrivially consistent with the conjecture.
Singularities in the O(N )-symmetric φ 4 theory
An alternative point of view on the question of extended analyticity and the nature of singularities in the complex H plane can be obtained by studying the generalization of the φ⁴ theory to the N-component theory with O(N) global symmetry. This generalization is a well-known tool to study nonperturbative aspects of the theory. The finite-N cases describe the critical behavior of, e.g., the Heisenberg model (N = 3), the XY-model (N = 2), and, of course, the Ising model (N = 1). On the other hand, in the N → ∞ limit the O(N) model describes the critical behavior of the exactly solvable spherical model [50,51].
Similar to Eq. (2.1), the O(N) theory is defined by the Euclidean action (or Hamiltonian divided by temperature)

S = \int d^d x \left[ \frac{1}{2}(\partial_\mu φ)^2 + \frac{r_0}{2} φ^2 + \frac{u_0}{4!} (φ^2)^2 − h_0 \cdot φ \right]. (5.1)
Here, φ is a (real) N-component vector field and the external magnetic field h_0 has the same dimensionality. In the presence of a nonvanishing h_0 the expectation value of φ, ⟨φ⟩, is directed along the former. Due to the O(N) invariance of the theory we may choose an axis along the vector h_0 and define the equation of state as a relationship between the projections ⟨φ⟩, h_0 of ⟨φ⟩ and h_0 onto that direction, similar to the N = 1 case in Sec. 2. We are interested in analytic properties of the universal equation of state, which describes the critical behavior of the φ⁴ theory associated with the spontaneous breaking of the O(N) symmetry. However, since there can be no spontaneous symmetry breaking of continuous symmetries in d ≤ 2 [52,53], we shall limit our analysis to dimensions d > 2.
When h_0 = 0 the critical point is reached by tuning r_0 to its critical value, i.e., t = r_0 − r_c → 0. In this limit, and for d < 4, the quartic coupling u_0 runs into the O(N) Wilson-Fisher fixed point in the infrared, i.e., u_0 → u_0^{WF} [54,55], and therefore the critical equation of state becomes independent of the bare coupling u_0 as well as the ultraviolet cutoff. A systematic expansion in powers of 1/N yields a fixed-point value u_0^{WF} = O(1/N) [55,56]. But this does not necessarily mean that the equation of state of the O(N) model reduces to the mean-field result in the limit N → ∞. Indeed, the tree-level action for the longitudinal field φ receives a nontrivial contribution from integrating out the N − 1 transverse-field degrees of freedom (which, for t < 0, correspond to the massless Goldstone modes associated with the spontaneous breaking of the O(N) symmetry) [55]. Both the tree-level action, proportional to 1/u_0 ∼ O(N) (as in Eq. (2.3)), as well as the one-loop contribution of the N − 1 transverse-field modes are of order N. We may therefore apply the saddle-point approximation in the large-N limit.
We shall first consider the infinite-N case, or the spherical model, and then briefly comment on 1/N corrections below. As in Sec. 2 we introduce the rescaled field variables M = \sqrt{u_0/6}\, ⟨φ⟩ and H = \sqrt{u_0/6}\, h_0 and employ the scaling variables w = Ht^{−βδ} and z = Mt^{−β}, etc. In the N → ∞ limit the critical exponents are known [55],

β = \frac{1}{2}, γ = \frac{2}{d − 2}, and δ = \frac{d + 2}{d − 2}, for 2 < d < 4, (5.2)

and they take their mean-field values for d ≥ 4. The scaling equation of state w = F(z) is determined in terms of the scaling function

F(z) = z(1 + z^2)^{γ}. (5.3)

In d ≥ 4 dimensions, where the critical exponent γ = 1, this agrees with the mean-field equation of state Eq. (2.15), as should be expected. The branching points of the inverse function z(w) correspond to solutions of F'(z) = 0. We find two (pairs of) such solutions,

z^2 = −1 and z^2 = −\frac{1}{1 + 2γ}, (5.4)

which map onto

w = 0 and w = ±i\,(2γ)^{γ}(1 + 2γ)^{−βδ}, (5.5)

in the complex w plane. The w ≠ 0 solutions lie on the imaginary w axis. In fact, for d = 4 they are identical to the Lee-Yang edge singularities in Eq. (2.8). Thus, for t > 0, we can identify these solutions with the pair of Lee-Yang edge singularities at imaginary H. For t < 0, they lie on the real H axis for d ≥ 4, while they are shifted off the real H axis by an angle 9

Δφ = π\left(βδ − \frac{3}{2}\right) = π\, \frac{4 − d}{d − 2}. (5.6)

But what is the meaning of the solution at w = 0 (i.e., H = 0) in Eq. (5.5)? Since z² = −1 and M ∼ z t^{1/2}, this singularity corresponds to real M only for t < 0. It is located at the origin (H = 0) of the low-temperature sheet and is associated with a branch cut along the negative real H axis.
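For orientation, the computation behind Eqs. (5.4) and (5.5) is elementary: differentiating Eq. (5.3),

F'(z) = (1 + z^2)^{γ − 1}\left[1 + (1 + 2γ)\, z^2\right],

so the zeros are z^2 = −1/(1 + 2γ) and, for γ > 1, the branch point z^2 = −1. Substituting z^2 = −1/(1 + 2γ) back into Eq. (5.3), with 1 + z^2 = 2γ/(1 + 2γ) and βδ = β + γ = 1/2 + γ, yields w = ±i\,(2γ)^{γ}(1 + 2γ)^{−βδ}; in d = 3 (γ = 2, βδ = 5/2) these Lee-Yang points sit at w ≈ ±0.286\, i. The solution z^2 = −1, in turn, maps onto the w = 0 singularity at the origin of the low-temperature sheet.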
To understand the significance of this singularity and the associated branch cut, we illustrate the equation of state M (H) in Fig. 6 for t < 0 and d = 3. Unlike the N = 1 case there is no metastable regime (compare with Fig. 1). This can be understood as a consequence of the fact that, when H changes sign (relative to M ), the effective potential as a function of φ develops directions with negative curvature (i.e., the Goldstone bosons become tachyonic). This means that the false vacuum is classically unstable and, since there is no tunneling involved in the decay of the false vacuum, there is also no exponential suppression of the imaginary part, unlike the N = 1 case. That is, instead of the essential (and very weak) singularity (cf. Eq. (3.3)) the equation of state with N > 1 has a power-law singularity [54,55,57,58] which comes from the IR-divergent contributions of the Goldstone bosons [57,59]. Similar to the Langer cut in the Ising case, the N > 1 equation of state for 2 < d < 4 has a "Goldstone cut" branching off from the origin and going along the real H axis on the unstable branch (H < 0 in our convention), with discontinuity given by 10
\mathrm{Im}\, M ∼ H^{(d−2)/2}, for H → 0, t < 0. (5.7)
Furthermore, from Fig. 6, we see that, for T < T c , the Lee-Yang edge singularities must lie on another (unphysical) branch of the equation of state M (H). These singularities can be reached in the complex H plane by going under the Goldstone cut onto an ancillary Riemann sheet. In fact, the situation is very similar to the conjectured scenario shown in Fig. 4, where we observed a similar analytic structure of the equation of state. 11 The low-temperature singularities located off the Goldstone cut for d < 4 are very similar to the spinodal points. In fact, they become the spinodal points at d = 4 when the equation of state (5.3) takes the mean-field form (2.15).
Since there are no singularities in the equation of state Eq. (5.3) apart from the ones given by Eqs. (5.5), we conclude that the Fonseca-Zamolodchikov scenario is realized in the O(N ) model in the N → ∞ limit.
To complete our analysis, we finally comment on the 1/N corrections to the scaling function (5.3). Since the leading-order contribution to ∆φ in Eq. (5.6) is already O(1/N 0 ), we observe that 1/N corrections cannot change the conclusion that there are no singularities at real (nonzero) H, provided that the only effect of these corrections is to shift the position of the singularities already present in the N → ∞ limit.
The 1/N corrections can be expressed in terms of momentum integrals whose explicit form is not particularly illuminating (see Refs. [55,56] for details). For simplicity we shall consider only d = 3, which is also the case that is most relevant for applications. In three dimensions, we find that the aforementioned momentum integrals yield only two branch points at z 2 = −1 and z 2 = −1/5, which coincide with the same singularities already found in the N → ∞ limit, cf. Eq. (5.4), while the position of the corresponding points in the complex w plane is shifted by an amount of order 1/N . 12 This is consistent with our expectation that the 1/N corrections only modify the position of the singularities (as determined in the N → ∞ limit) and suggests that no new singularities appear at finite N . 13
Conclusions
In this work we studied the relationship between singularities of the universal scaling equation of state of the φ⁴ theory above and below the critical temperature. Above the critical temperature the Lee-Yang edge singularities, by the Lee-Yang theorem, lie on the imaginary magnetic field axis and limit the domain of analyticity around the origin H = 0. On the other hand, below the critical temperature, there are singularities associated with the point where the metastable state becomes locally unstable and its decay occurs via spinodal decomposition.
In the mean-field approximation to the equation of state, H = tM + M³, these spinodal points are related to the Lee-Yang edge singularities. In terms of the scaling variable w = Ht^{−βδ}, they are essentially the same singularities. These singularities occur at imaginary w and, for t > 0, they correspond to imaginary H, i.e., the Lee-Yang points. For t < 0, however, they correspond to real H on the metastable branch (since in the mean-field approximation βδ = 3/2 and i(−1)^{3/2} = −1).
Since βδ ≠ 3/2 for d < 4, one naturally has to ask the question if the spinodal singularities on the real H axis exist at all. The analyticity of the equation of state as a function of w would require the low-temperature manifestation of the Lee-Yang points to be points off the real axis by a phase Δφ = π(βδ − 3/2). Fonseca and Zamolodchikov put forward a conjecture that these are the closest singularities to the real H axis. Our aim here was to test this conjecture in the small-ε and large-N regimes.
We have used a uniform approximation to the equation of state based on parametric representations, which are especially convenient to study the equation of state in the whole complex plane of w using the ε expansion. However, the vicinity of the Lee-Yang singularity is special in that the ε expansion must break down. In fact, there is an apparent paradox, identified first by Fisher [23], which is most acute in 4 < d < 6. The equation of state is expected to be mean-field-like in this case, yet, near the Lee-Yang point the critical behavior must be given by nontrivial critical exponents of the φ 3 theory. For d < 4 the equation of state must approach the mean-field form as ε → 0, yet this cannot be true near the Lee-Yang point because the φ 3 theory is nonperturbative at d = 4. We identify and quantify the solution to this apparent paradox. We show that the ε expansion must break down and the equation of state becomes nonperturbative in the (Ginzburg) region around the Lee-Yang point whose radius is proportional to ε 2 as ε → 0.
We have considered the parametric representation to order ε 2 (and ε 3 , see Appendix) and have shown that the singularities we find are consistent with the Fonseca-Zamolodchikov conjecture (for a range of parameters controlling the form of the parametric representation). However, we have also confirmed that the expansion breaks down near the Lee-Yang edge singularities in a way consistent with the derived Ginzburg criterion. In particular, the order ε 3 contribution modifies the results obtained at order ε 2 also at order ε 2 ! In other words, the behavior near the singularities (including their position) is nonperturbative at order ε 2 . Since the distance between the Lee-Yang edge singularity at t < 0 (i.e., the spinodal point) from the real axis is itself of order βδ − 3/2 = O(ε 2 ) we conclude that the ε expansion cannot be used to confirm or invalidate the Fonseca-Zamolodchikov conjecture.
We point out that the equation of state of the O(N )-symmetric φ 4 theory satisfies the Fonseca-Zamolodchikov conjecture in the large-N limit. In particular, for d < 4 there are no singularities on the metastable branch of the real H axis. Instead the singularities can be found off the real axis, and are, in fact, the Lee-Yang branching points, as predicted by extended analyticity. We have checked that (at least in d = 3) this result is not affected by the leading 1/N corrections.
Although the Fonseca-Zamolodchikov conjecture for the Ising critical equation of state is difficult to prove using the analytic methods considered, we can conclude that it is nontrivially consistent with the various systematic approximations to the equation of state beyond the mean-field level.
The absence of singularities on the real H axis (except for the branch point at H = 0 associated with the Langer cut) could have implications for the behavior of systems undergoing cooling past the first-order phase transition (see, e.g., Refs. [65][66][67][68]). In particular, it could prove important for the understanding of the experimental signatures of the first-order phase transition separating hadron gas and quark-gluon plasma phases of QCD associated with the QCD critical point, which is being searched for using the beam energy scan heavy-ion collision experiments.
It is important to realize that in the region of the parameter space where the spinodal singularities occur the equation of state is not, strictly speaking, defined in the usual sense as a property of the system in thermal equilibrium, due to the finite lifetime of the metastable state. It is, however, defined mathematically by analytic continuation from the regime of thermodynamic stability. Many properties of the equation of state in the metastable region, such as the imaginary part and the discontinuity on the Langer cut are clearly reflecting dynamics of the system associated with the decay of the metastable state. Also the absence of the spinodal singularities at real H can be related to metastability: the presence of a thermodynamic singularity requires the correlation length to diverge and the equilibration to such a critical state requires infinite time, which is impossible due to the finite lifetime of the metastable state.
Finally, it is also interesting to note that the decay rate of the metastable state, which is controlled by the (small) coupling \tilde u_0 (see Eq. (3.3)), is no longer exponentially suppressed at the spinodal point. Moreover, for small \tilde u_0, the nucleation rate near the spinodal point has the asymptotic form \exp\left[−\mathrm{const}\,(w − w_{LY})^{(6−d)/4}/\tilde u_0\right] [69,70]. Therefore the exponential suppression disappears in the same region as defined by the Ginzburg criterion in Eq. (3.10). This is to be expected, since the fluctuations leading to the decay become important in that region. The fact that the shift of the spinodal singularity into the complex H-plane is also due to the fluctuation contribution to the "gap" exponent βδ suggests that the shift is related to metastability. It would be interesting to establish a more quantitative relation between this phenomenon and the Fonseca-Zamolodchikov conjecture. We defer full development of this connection to future work.

A Parametric equation of state at order ε 2

The parameters of the linear parametric model of Sec. 4.2, i.e., the coefficient h_3 of Eq. (4.12) and the normalization parameters \bar w and \bar z in Eqs. (4.16), are expanded to order ε² in Eqs. (A.1) and (A.2); in particular,

… \left[π^2 − 8λ − 8b_1(1 + 2b_1) + 16b_2\right] ε^2 + O(ε^3). (A.2b)

Substituting Eqs. (A.1) and (A.2) into Eq. (4.16) and using the ansatz for θ_n, Eq. (4.18), we obtain Eq. (4.19), where the O(ε) terms cancel and the O(ε²) coefficient

ω^{(2)}(c_n, b_1) = \frac{1}{24}\left[7 + π^2 − 8λ − \ln^2|c_n| − 8b_1(1 + 2b_1)\right] (A.3)

is independent of the parameter b_2.

B Parametric equation of state at order ε 3

Here, we consider the extended linear parametric model, i.e., Eqs. (4.7)-(4.10), in order to examine the robustness of our conclusions at O(ε²). In particular, we will show how the O(ε²) terms in |w_n| are modified by introducing the O(ε³) contributions, which again demonstrates the nonperturbative nature of the problem. In the extended model,

h(θ) = \bar h\left(θ + h_3 θ^3 + h_5 θ^5\right), (B.1)

where \bar h is an appropriate normalization constant. In contrast to the parametric model of Sec. 4.2, the inclusion of a fifth-order contribution in θ is necessary to match to the equation of state at order ε³ [10,11]. The coefficients h_3 and h_5 are given by expansions whose ε³ terms involve the combination

… \left[24 b_1 b_2 + 18 b_3 + 27 e\right] ε^3 + O(ε^4),

together with

b^2 = \frac{3}{2} + b_1\, ε + b_2\, ε^2 + b_3\, ε^3 + O(ε^4), (B.5)

expanded in powers of ε. The significance of these parameters becomes clear if we factor-decompose Eq. (B.1) into the following form,

h(θ) = \bar h\, θ\left[1 − (θ/b)^2\right]\left[1 + e\, ε^3\, (θ/b)^2\right], (B.6)

i.e., b and e are related to the zeros of h on the coexistence line (t < 0, H → 0). Note, while θ = ±b stays finite in the limit ε → 0, θ = ±b/\sqrt{−e\, ε^3} diverges in the same limit. To order ε³, the extended linear parametric representation depends on three real-valued parameters b_1, b_2, and b_3. These parameters cannot be fixed by matching to the equation of state alone [11], which parallels the behavior we have already observed with the order ε² parametric model (cf. Sec. 4.2). Essentially, we therefore obtain a three-parameter family of extended linear models that we employ in the following to study the complex-field singularities of the equation of state.

At order ε³ we find that the (rescaled) inverse susceptibility is given by an expression analogous to Eq. (4.17), cf. Eq. (B.7), whose numerator now vanishes at three values θ²_n = c_n/ε [1 + O(ε)], n = 1, 2, 3, with coefficients c_n(b_1) shown in Fig. 7. For b_1 ≳ 0.552 two of the corresponding points, \hat w_1 and \hat w_3, lie on the imaginary w axis, while another point is located on the Langer cut, i.e., \hat w_2 = ±i(−1)^{3/2−βδ}. At b_1 ≈ 0.552 the two zeros w_1 and w_3 collide and move off the imaginary axis, into the complex w plane, while w_2 remains on the Langer cut. On the other hand, from Eq. (B.7), we also obtain a pole, θ²_0, determined by the same equation as Eq. (4.23), albeit with the critical exponent β expanded to order ε³. Thus, this pole is displaced from the imaginary axis and located along the Langer cut, i.e., \hat w_0 = ±i(−1)^{3/2−βδ} (see Fig. 8).

Figure 7: The parameters c_n, n = 1, 2, 3 as a function of b_1. We observe a critical value b_1 ≈ 0.552 above which all solutions θ²_n, n = 1, 2, 3, are real.

Summarizing, it appears that the free parameters b_1, b_2, and b_3 can be chosen in such a way that the zeros and poles of the inverse isothermal susceptibility F'(z) align either on the Lee-Yang and/or the Langer cut. If b_1 ≳ 0.552, there are always singular points located on the Lee-Yang cut, which we might identify with the Lee-Yang edge singularities. This observation supports our earlier suspicion on the nature of rational approximations of functions with a branch cut.
Figure 1: (a) The mean-field Ising equation of state M(H) and (b) the corresponding effective potential V(M) at H = 0 in the low-temperature phase (T < T_c). The analytic continuation of the stable branch (dashed curve) is bounded by the spinodal points (red). The straight line connecting the two minima of the effective potential is determined by the Maxwell construction.
Figure 2: Analytic continuation t → −t from the principal, i.e., high-temperature sheet (left panel) to the low-temperature sheet (right panel) of the mean-field scaling function z(w) in Eq. (2.15) with w ∼ Ht^{−3/2}. Starting from H > 0 and t > 0, keeping H > 0 and |t| fixed, we rotate the phase arg t from 0 to −π and trace the corresponding movement of the variable w along the shown circular path. The principal sheet features a pair of Lee-Yang branch cuts along the imaginary w axis, which terminate in the Lee-Yang edge singularities. Going through the cut we enter the metastable low-temperature branch (H < 0, t < 0). One reaches the stable branch (H > 0, t < 0) when arg t = −π. From there one can also reach the metastable branch H < 0 by rotating arg H from 0 to ±π, which changes arg w by ±π.
Figure 4: The Fonseca-Zamolodchikov conjecture for t < 0, illustrated in the complex H plane. The line along the negative real H axis represents the Langer cut. The second cut on the ancillary sheet is the Lee-Yang cut, which is associated with the Lee-Yang edge singularity. The latter is expected to be the nearest singularity under the Langer cut.
Figure 5: We show the position of the two zeros w_1 and w_2 (solid points) and the single pole w_0 (open circle) of the parametrically represented inverse isothermal susceptibility F'(z) at order O(ε²) in the complex w plane (b_1 = 0). Note, only the singularities in the upper half of the complex w plane are shown.
Figure 6 :
6(a) Equation of state M (H) of the three-dimensional O(N ) model in the N → ∞ limit and (b) the corresponding effective potential V (M ) at H = 0 in the low-temperature phase (T < T c ). The dashed curve illustrates the analytic continuation of the stable branch (solid curve). In addition to the spinodal singularities at nonvanishing H, the presence of massless Goldstone modes induces singularities on the coexistence line (T < T c and H → 0).
The critical exponents take their mean-field values for d ≥ 4. The scaling equation of state w = F(z) is determined in terms of the scaling function

F(z) = z(1 + z²)^γ.    (5.3)

In d ≥ 4 dimensions, where the critical exponent γ = 1, this agrees with the mean-field equation of state Eq. (2.15), as should be expected. The branching points of the inverse function z(w) correspond to solutions of F′(z) = 0. We find two (pairs of) such solutions, z² = −1 and z² = −1/(1 + 2γ), which map onto

w = 0  and  w = ±i(2γ)^γ (1 + 2γ)^{−βδ},    (5.5)

in the complex w plane. The w ≠ 0 solutions lie on the imaginary w axis. In fact, for d = 4 they are identical to the Lee-Yang edge singularities in Eq. (2.8). Thus, for t > 0, we can identify these solutions with the pair of Lee-Yang edge singularities at imaginary H. For t < 0, they lie on the real H axis for d ≥ 4, while they are shifted off the real H axis by an angle ∆φ.
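The location of these branching points is easy to check symbolically. The sketch below is our own verification (using the large-N relation βδ = γ + 1/2, which follows from β = 1/2 and δ = 2γ + 1 in the N → ∞ limit); it confirms that z² = −1/(1 + 2γ) solves F′(z) = 0 and maps onto the quoted w:

import sympy as sp

g = sp.Symbol('gamma', positive=True)
z = sp.Symbol('z')
F = z*(1 + z**2)**g                          # scaling function F(z) = z(1 + z^2)^gamma, Eq. (5.3)
z0 = sp.I/sp.sqrt(1 + 2*g)                   # candidate stationary point, z0^2 = -1/(1 + 2*gamma)
print(sp.simplify(sp.diff(F, z).subs(z, z0)))    # should simplify to 0: F'(z0) = 0
w0 = F.subs(z, z0)
target = sp.I*(2*g)**g*(1 + 2*g)**(-(g + sp.Rational(1, 2)))   # i(2g)^g (1+2g)^(-beta*delta)
print(sp.simplify(w0/target))                    # should simplify to 1
print(abs(complex((w0 - target).subs(g, 2))))    # ~0 numerically at gamma = 2 (i.e., d = 3)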
… [π² − 8λ − 8b_1(1 + 2b_1) + 16b_2] ε² + O(ε³).    (A.2b)

Substituting Eqs. (A.1) and (A.2) into Eq. (4.16) and using the ansatz for θ_n, Eq. (4.18), we obtain Eq. (4.19), where the O(ε) terms cancel and the O(ε²) coefficient

ω^(2)(c_n, b_1) = (1/24)[7 + π² − 8λ − ln|c_n|² − 8b_1(1 + 2b_1)]

is independent of the parameter b_2.

B Parametric equation of state at order ε³

Here, we consider the extended linear parametric model, i.e., Eqs. (4.7)-(4.10), in order to examine the robustness of our conclusions at O(ε²). In particular, we will show how the O(ε²) terms in |w_n| are modified by introducing the O(ε³) contributions, which again demonstrates the nonperturbative nature of the problem. In the extended model,

h(θ) = h̄(θ + h_3 θ³ + h_5 θ⁵),    (B.1)

where h̄ is an appropriate normalization constant. In contrast to the parametric model of Sec. 4.2, the inclusion of a fifth-order contribution in θ is necessary to match the equation of state at order ε³ [10, 11]. The coefficients h_3 and h_5 are given by ε expansions of the form … (24b_1 b_2 + 18b_3 + 27e) ε³ + O(ε⁴) and … ε + b_2 ε² + b_3 ε³ + O(ε⁴),    (B.5)

expanded in powers of ε. The significance of these parameters becomes clear if we factor decompose Eq. (B.1) into the following form:

h(θ) = h̄ θ [1 − (θ/b)²][1 + e ε³ (θ/b)²].    (B.6)
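The factored form (B.6) fixes h_3 and h_5 in terms of the parameters b and e. A minimal symbolic check of this factor decomposition (our own sketch; the relations h_3 = (eε³ − 1)/b² and h_5 = −eε³/b⁴ are read off by expanding and are our inference, since the garbled (B.2)-(B.5) expansions are not reconstructed here):

import sympy as sp

th, b, e, eps, hbar = sp.symbols('theta b e epsilon hbar')
factored = hbar*th*(1 - (th/b)**2)*(1 + e*eps**3*(th/b)**2)   # Eq. (B.6)
h3 = (e*eps**3 - 1)/b**2                                      # inferred coefficient of theta^3
h5 = -e*eps**3/b**4                                           # inferred coefficient of theta^5
expanded = hbar*(th + h3*th**3 + h5*th**5)                    # Eq. (B.1)
print(sp.expand(factored - expanded))                         # 0: the two forms agree identically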
Figure 8: We show the distribution of the zeros w_1, w_2, and w_3 (solid points) and the pole w_0 (open circle) of the parametrized inverse isothermal susceptibility F(z) up to O(ε³). Here, b_1 ≳ 0.552, such that all singular points align either along the Lee-Yang cut (situated on the imaginary w axis) or the Langer cut (along the dashed line). Note, only the singularities in the upper half of the complex w plane are shown.
The theorem applies to singularities on physical stable branches of the function M(H) (both below and above the critical temperature) and thus cannot constrain the position of the spinodal singularities, which are located on the metastable branch.

The basic idea of the Ginzburg criterion is to compare the tree-level amplitude, or coupling, g_3 in our case, to the one-loop contribution. The latter stems from a triangle diagram, which is IR divergent when ξ → ∞. By counting dimensions (k^d from the loop integral and k^6 from the denominators) it is easy to see that the loop integral diverges as ξ^{6−d}. Thus, we need to compare g_3 to (g_3)³ ξ^{6−d}, or equivalently g_3² ξ^{6−d} to 1.

While the equation of state is known to order ε³ [10-12], for our purposes it is sufficient to consider only contributions up to order ε². We comment on some features specific to the ε³ result (in particular related to the parametric representation of the equation of state) in Appendix B.

Note, at the Lee-Yang point the argument of the logarithmic function vanishes. An imaginary part, which is analyzed by Weinberg and Wu [48], develops when the argument is negative and is associated with the cut terminating at the Lee-Yang edge singularity.

Similarly, one could also use the equation of state in the form y = f(x), with the scaling variables x ∼ θ^{−1/β}(1 − θ²) and y ∼ θ^{−δ} h(θ), to determine h(θ).

This happens because the leading corrections to the mean-field value of w are εθ^{−2} and θ^{−4}, while θ_n^{−2} ∼ ε.

The experience with Padé approximations suggests a guiding principle for constructing improved parametric representations: the choice of (polynomial) functions h(θ) and m(θ) should be such that the ranks of the polynomials in the numerator and the denominator in Eq. (4.17) increase at the same rate.

Note, the angular displacement ∆φ is of order ε, and not ε² as in the Ising-like (N = 1) case (cf. Sec. 3.1).
Acknowledgments

This material is based on work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics under Award Number DE-FG0201ER41195 and within the framework of the Beam Energy Scan Theory (BEST) Topical Collaboration. This work is also funded by the Swiss National Science Foundation.

Appendix

A Details of the parametric equation of state at order ε²

For completeness, and for possible future use, in this Appendix we collect the results which are represented schematically in Sec. 4. In particular, they show the explicit dependence (or independence) of the expansion coefficients on the (arbitrary at this order of ε) parameters b_1 and b_2. To obtain the complete expression for w_n in Eq. (4.19) up to O(ε²) one needs to expand each quantity in Eq. (4.17) up to sufficient order. In particular, we need the exponents β, γ, and δ, as well as the normalization constants w̄ and z̄, expanded to order ε³ (for details we refer to [10, 11]). The zeros of this function, which we consider in terms of θ², can be found by solving for the roots of the numerator and can be determined in closed form. We find three distinct (pairs of) zeros, θ_n², which we label by n = 1, 2, 3. It is sufficient to use the ansatz of Eq. (4.18); the coefficient in Eq. (B.9) is a function of b_1 only. In Fig. 7 we illustrate the real-valued coefficients c_n in the range of parameters −2 ≤ b_1 ≤ 2. Note that there is a "critical" value of b_1 ≈ 0.552 above which all values of c_n are real; this has interesting implications, as we show below. As in Appendix A, we finally obtain the corresponding expansion (cf. Eq. (A.2)); however, the additional corrections to the absolute value do not affect the phases of the corresponding singularities in the complex w plane. The above results lead to the following picture, which depends on the parameter b_1: if b_1 ≳ 0.552, two points w_1 and w_3 are imaginary and thus distribute along the Lee-Yang cut, while the remaining singular points align along the Langer cut (cf. Fig. 8).
References

[1] A. Pelissetto and E. Vicari, "Critical phenomena and renormalization group theory," Phys. Rep. 368 (2002) 549, arXiv:cond-mat/0012164.
[2] K. Rummukainen, M. Tsypin, K. Kajantie, M. Laine, and M. E. Shaposhnikov, "The universality class of the electroweak theory," Nucl. Phys. B 532 (1998) 283, arXiv:hep-lat/9805013.
[3] M. A. Stephanov, K. Rajagopal, and E. V. Shuryak, "Signatures of the tricritical point in QCD," Phys. Rev. Lett. 81 (1998) 4816, arXiv:hep-ph/9806219.
[4] M. A. Stephanov, "QCD phase diagram and the critical point," Prog. Theor. Phys. Suppl. 153 (2004) 139, arXiv:hep-ph/0402115. [Int. J. Mod. Phys. A 20 (2005) 4387].
[5] L. Onsager, "Crystal statistics. I. A two-dimensional model with an order-disorder transition," Phys. Rev. 65 (1944) 117.
[6] C.-N. Yang and T. D. Lee, "Statistical theory of equations of state and phase transitions. I. Theory of condensation," Phys. Rev. 87 (1952) 404.
[7] T. D. Lee and C.-N. Yang, "Statistical theory of equations of state and phase transitions. II. Lattice gas and Ising model," Phys. Rev. 87 (1952) 410.
[8] E. Brézin, D. J. Wallace, and K. G. Wilson, "Feynman graph expansion for the equation of state near the critical point (Ising-like case)," Phys. Rev. Lett. 29 (1972) 591.
[9] G. M. Avdeeva and A. A. Migdal, "Equation of state in (4 − ε)-dimensional Ising model," JETP Lett. 16 (1972) 178.
[10] D. J. Wallace and R. K. P. Zia, "The ε-expansion and parametric models for the Ising equation of state in the critical region," Phys. Lett. A 46 (1973) 261.
[11] D. J. Wallace and R. K. P. Zia, "Parametric models and the Ising equation of state at order ε³," J. Phys. C 7 (1974) 3480.
[12] J. F. Nicoll and P. C. Albright, "Crossover functions by renormalization-group matching: Three-loop results," Phys. Rev. B 31 (1985) 4576.
[13] M. Campostrini, A. Pelissetto, P. Rossi, and E. Vicari, "Improved high-temperature expansion and critical equation of state of three-dimensional Ising-like systems," Phys. Rev. E 60 (1999) 3526, arXiv:cond-mat/9905078.
[14] M. Campostrini, A. Pelissetto, P. Rossi, and E. Vicari, "25th-order high-temperature expansion results for three-dimensional Ising-like systems on the simple-cubic lattice," Phys. Rev. E 65 (2002) 066127, arXiv:cond-mat/0201180.
[15] R. Guida and J. Zinn-Justin, "3D Ising model: The scaling equation of state," Nucl. Phys. B 489 (1997) 626, arXiv:hep-th/9610223.
[16] M. M. Tsypin, "The universal effective potential for three-dimensional massive scalar field theory from the Monte Carlo study of the Ising model," arXiv:hep-lat/9401034.
[17] M. M. Tsypin, "Universal effective potential for scalar field theory in three-dimensions by Monte Carlo computation," Phys. Rev. Lett. 73 (1994) 2015.
[18] M. M. Tsypin, "Effective potential for a scalar field in three dimensions: Ising model in the ferromagnetic phase," Phys. Rev. B 55 (1997) 8911, arXiv:hep-lat/9601021.
[19] M. Caselle and M. Hasenbusch, "Universal amplitude ratios in the 3D Ising model," J. Phys. A 30 (1997) 4963, arXiv:hep-lat/9701007.
[20] M. Hasenbusch, K. Pinn, and S. Vinti, "Critical exponents of the three-dimensional Ising universality class from finite-size scaling with standard and improved actions," Phys. Rev. B 59 (1999) 11471, arXiv:hep-lat/9806012.
[21] J. Berges, N. Tetradis, and C. Wetterich, "Critical equation of state from the average action," Phys. Rev. Lett. 77 (1996) 873, arXiv:hep-th/9507159.
[22] P. Fonseca and A. Zamolodchikov, "Ising field theory in a magnetic field: Analytic properties of the free energy," J. Stat. Phys. 110 (2003) 527, arXiv:hep-th/0112167.
[23] M. E. Fisher, "Yang-Lee edge singularity and φ³ field theory," Phys. Rev. Lett. 40 (1978) 1610.
[24] V. Privman and L. S. Schulman, "Analytic properties of thermodynamic functions at first-order phase transitions," J. Phys. A 15 (1982) L231.
[25] O. Penrose, "Metastable decay rates, asymptotic expansions, and analytic continuation of thermodynamic functions," J. Stat. Phys. 78 (1995) 267.
[26] M. E. Fisher, "The theory of condensation and the critical point," Physics 3 (1967) 255.
[27] K. Binder and H. Müller-Krumbhaar, "Investigation of metastable states and nucleation in the kinetic Ising model," Phys. Rev. B 9 (1974) 2328.
[28] K. Binder, "Theory of first-order phase transitions," Rep. Prog. Phys. 50 (1987) 783.
[29] P. Schofield, "Parametric representation of the equation of state near a critical point," Phys. Rev. Lett. 22 (1969) 606.
[30] B. D. Josephson, "Equation of state near the critical point," J. Phys. C 2 (1969) 1113.
[31] P. Schofield, J. D. Litster, and J. T. Ho, "Correlation between critical coefficients and critical exponents," Phys. Rev. Lett. 23 (1969) 1098.
[32] P. Schofield, "ε-expansion of the parametric equation of state near the critical point," Phys. Lett. A 46 (1973) 197.
[33] J. S. Langer, "Theory of the condensation point," Annals Phys. 41 (1967) 108. [Annals Phys. 281 (2000) 941].
[34] M. A. Stephanov, "QCD critical point and complex chemical potential singularities," Phys. Rev. D 73 (2006) 094508, arXiv:hep-lat/0603014.
[35] K. G. Wilson and M. E. Fisher, "Critical exponents in 3.99 dimensions," Phys. Rev. Lett. 28 (1972) 240.
[36] B. Widom, "Equation of state in the neighborhood of the critical point," J. Chem. Phys. 43 (1965) 3898.
[37] A. F. Andreev, "Singularity of thermodynamic quantities at a first order phase transition point," Sov. Phys. JETP 18 (1964) 1415.
[38] S. N. Isakov, "Nonanalytic features of the first order phase transition in the Ising model," Commun. Math. Phys. 95 (1984) 427.
[39] I. Yu. Kobzarev, L. B. Okun, and M. B. Voloshin, "Bubbles in metastable vacuum," Sov. J. Nucl. Phys. 20 (1975) 644. [Yad. Fiz. 20 (1974) 1229].
[40] S. Coleman, "Fate of the false vacuum: Semiclassical theory," Phys. Rev. D 15 (1977) 2929. [Erratum: Phys. Rev. D 16 (1977) 1248].
[41] C. G. Callan, Jr. and S. Coleman, "The fate of the false vacuum. II. First quantum corrections," Phys. Rev. D 16 (1977) 1762.
[42] F. Gliozzi and A. Rago, "Critical exponents of the 3d Ising and related models from conformal bootstrap," JHEP 10 (2014) 042, arXiv:1403.6003 [hep-th].
[43] J. A. Gracey, "Four loop renormalization of φ³ theory in six dimensions," Phys. Rev. D 92 (2015) 025012, arXiv:1506.03357 [hep-th].
[44] X. An, D. Mesterházy, and M. A. Stephanov, "Functional renormalization group approach to the Yang-Lee edge singularity," JHEP 07 (2016) 041, arXiv:1605.06039 [hep-th].
[45] L. Zambelli and O. Zanusso, "Lee-Yang model from the functional renormalization group," Phys. Rev. D 95 (2017) 085001, arXiv:1612.08739 [hep-th].
[46] V. L. Ginzburg, "Some remarks on phase transitions of the 2nd kind and the microscopic theory of ferroelectric materials," Fiz. Tverd. Tela 2 (1960) 2031. [Sov. Phys. Solid State 2 (1961) 1824].
[47] D. J. Amit and L. Peliti, "On dangerous irrelevant operators," Annals Phys. 140 (1982) 207.
[48] E. J. Weinberg and A. Wu, "Understanding complex perturbative effective potentials," Phys. Rev. D 36 (1987) 2474.
[49] R. B. Griffiths, "Thermodynamic functions for fluids and ferromagnets near the critical point," Phys. Rev. 158 (1967) 176.
[50] T. H. Berlin and M. Kac, "The spherical model of a ferromagnet," Phys. Rev. 86 (1952) 821.
[51] H. E. Stanley, "Spherical model as the limit of infinite spin dimensionality," Phys. Rev. 176 (1968) 718.
[52] N. D. Mermin and H. Wagner, "Absence of ferromagnetism or antiferromagnetism in one-dimensional or two-dimensional isotropic Heisenberg models," Phys. Rev. Lett. 17 (1966) 1133.
[53] S. R. Coleman, "There are no Goldstone bosons in two-dimensions," Commun. Math. Phys. 31 (1973) 259.
[54] E. Brézin, D. J. Wallace, and K. G. Wilson, "Feynman-graph expansion for the equation of state near the critical point," Phys. Rev. B 7 (1973) 232.
[55] E. Brézin and D. J. Wallace, "Critical behavior of a classical Heisenberg ferromagnet with many degrees of freedom," Phys. Rev. B 7 (1973) 1967.
[56] R. Abe and S. Hikami, "Equation of state in 1/n expansion: n-vector model in the presence of magnetic field," Prog. Theor. Phys. 57 (1977) 1197.
[57] D. J. Wallace and R. K. P. Zia, "Singularities induced by Goldstone modes," Phys. Rev. B 12 (1975) 5340.
[58] N. J. Gunther, D. J. Wallace, and D. A. Nicole, "Goldstone modes in vacuum decay and first-order phase transitions," J. Phys. A 13 (1980) 1755.
[59] I. D. Lawrie, "Goldstone modes and coexistence in isotropic N-vector models," J. Phys. A 14 (1981) 2489.
[60] V. A. Kazakov, "Ising model on a dynamical planar random lattice: Exact solution," Phys. Lett. A 119 (1986) 140.
[61] D. V. Boulatov and V. A. Kazakov, "The Ising model on random planar lattice: The structure of phase transition and the exact critical exponents," Phys. Lett. B 186 (1987) 379.
[62] J.-E. Bourgine and I. Kostov, "On the Yang-Lee and Langer singularities in the O(n) loop model," J. Stat. Mech. 1201 (2012) P01024, arXiv:1110.1108 [hep-th].
[63] R. Abe, "Expansion of a critical exponent in inverse powers of spin dimensionality," Prog. Theor. Phys. 48 (1972) 1414.
[64] M. Bander and C. Itzykson, "Yang-Lee edge singularities in the large-N limit," Phys. Rev. B 30 (1984) 6485.
[65] J. Randrup, "Spinodal decomposition during the hadronization stage at RHIC?," Phys. Rev. Lett. 92 (2004) 122301, arXiv:hep-ph/0308271.
[66] V. Koch, A. Majumder, and J. Randrup, "Signals of spinodal hadronization: Strangeness trapping," Phys. Rev. C 72 (2005) 064903, arXiv:nucl-th/0509030.
[67] C. Sasaki, B. Friman, and K. Redlich, "Density fluctuations in the presence of spinodal instabilities," Phys. Rev. Lett. 99 (2007) 232301, arXiv:hep-ph/0702254.
[68] C. Sasaki, B. Friman, and K. Redlich, "Chiral phase transition in the presence of spinodal decomposition," Phys. Rev. D 77 (2008) 034024, arXiv:0712.2761 [hep-ph].
[69] A. Z. Patashinskii and B. I. Shumilo, "Metastable systems near the instability region," Sov. Phys. Solid State 22 (1980) 655.
[70] C. Unger and W. Klein, "Nucleation theory near the classical spinodal," Phys. Rev. B 29 (1984) 2698.
| []
|
[
"Frobenius linear translators giving rise to new infinite classes of permutations and bent functions",
"Frobenius linear translators giving rise to new infinite classes of permutations and bent functions"
]
| [
"N Cepak ",
"E Pasalic ",
"A Muratović-Ribić "
]
| []
| []
| We show the existence of many infinite classes of permutations over finite fields and bent functions by extending the notion of linear translators, introduced by Kyureghyan [12]. | 10.1007/s12095-019-00395-1 | [
"https://arxiv.org/pdf/1801.08460v1.pdf"
]
| 119,622,456 | 1801.08460 | 8e912f15242f11fda39f9088f40f6d2d3bd8650f |
Frobenius linear translators giving rise to new infinite classes of permutations and bent functions
January 26, 2018
N Cepak
E Pasalic
A Muratović-Ribić
We show the existence of many infinite classes of permutations over finite fields and bent functions by extending the notion of linear translators, introduced by Kyureghyan [12].
Introduction
The main goal of this paper is to further extend the possibilities of employing the concept of linear translators, introduced by Kyureghyan [12], for the purpose of constructing new classes of permutation polynomials over finite fields explicitly. Some of these permutations are then further used to construct bent functions. A finite field of order p^n is denoted F_{p^n}, where p is any prime and n a positive integer. A polynomial F ∈ F_{p^n}[x] is said to be a permutation polynomial if its associated mapping x ↦ F(x) over F_{p^n} is bijective. During the last few years there has been tremendous progress in construction methods and characterisation of many infinite classes of permutations; see the survey of recent works in [11] and the references therein.
This paper extends the work in [4] and [16]. In [4], several new classes of permutation polynomials

F : x ↦ L(x) + L(γ)h(f(x)),  f : F_{p^{rk}} → F_{p^k},  h : F_{p^k} → F_{p^k},    (1)

were proposed; such mappings were originally studied by Kyureghyan in [12].
In [16], permutation polynomials of the form

F : x ↦ L(x) + L(γ)(h(f(x)) + f(x)/b),  f : F_{p^{rk}} → F_{p^k},  h : F_{p^k} → F_{p^k},    (2)

are studied and then further used in the construction of bent functions.
Here γ ∈ F*_{p^{rk}} is a so-called b-linear translator of f (cf. Definition 1) and L a linear permutation. It should be noted that this construction is, in a certain sense, a generalization of the so-called switching construction [5, 7]. Akbary, Ghioca and Wang unified Kyureghyan's construction for arbitrary subsets S ⊂ F_{p^n} (not only subfields of F_{p^n}), along with proposing a few other constructions, in [1]. This general criterion is now called the AGW criterion [18, Theorem 8.1.39].
After these pioneering works, a series of papers [24, 22, 23, 25] (among others) treated the same topic of specifying new classes of permutation polynomials of the form (1). For a nice survey of recent achievements related to this particular class of permutations the reader is referred to [11]. In particular, the existence of linear translators was analyzed in [4] for some simple polynomial forms (monomials and binomials), and their efficient embedding in (1) then resulted in several explicit classes of permutation polynomials. Apart from the unified framework provided by the AGW criterion, most of the recent attempts were towards specifying suitable functions h, f and L as in (1). Alternatively, for F given by F : x ↦ L(x) + γ(f(x) + δ)^s, δ ∈ F*_{p^n}, the main idea was to specify suitable degrees s, elements δ ∈ F_{p^n}, and the function f for some particular field characteristic p, see e.g. [24], thus only giving rise to sporadic families of permutations.
The main obstacle when considering the forms (1) and (2) is that new classes of permutation polynomials can only be specified provided suitable polynomials admitting linear translators exist. For instance, it was shown in [4] that for n = rk (where r > 1), the function f(x) = βx^i + x^j, i < j, where f : F_{p^n} → F_{p^k} and β ∈ F*_{p^n}, has a linear translator if and only if n is even, k = n/2, and furthermore f(x) = T^n_k(x). This indicates that the class of polynomials f : F_{p^n} → F_{p^k} admitting linear translators is quite likely rather small. To increase its cardinality, and consequently to be able to derive other classes of permutation polynomials, we extend the original definition of linear translators to cover a wider class of functions admitting such translators. We call these translators Frobenius translators, since the derivative of f is expressed as f(x + uγ) − f(x) = u^{p^i} b, in contrast to the standard definition f(x + uγ) − f(x) = ub.
Clearly, linear translators are just a special case of Frobenius translators. To justify this extension we may for instance consider the mapping f : x ↦ T^n_k(x^{2^{ℓk}+1}) over F_{2^n}, where n = rk and 1 ≤ ℓ ≤ r − 1, which does not have a linear translator but admits a Frobenius translator, cf. Example 1. This gives us the possibility to construct permutation polynomials whose form greatly resembles (1) and (2), though using Frobenius translators instead, cf. Theorem 3 and Proposition 8. In connection to the results in [4], we also address some existence issues for the classes of functions given by f(x) = T^n_k(βx^{p^i+p^j}), where n = rk, admitting linear translators, and we specify exactly the value of γ in this case. In addition, another class of permutations of the form F(x) = L(x) + (x^{p^k} − x + δ)^s is proposed by specifying those L, s, and δ that satisfy the condition given recently in [4].
In the second part of this article we focus on the use of suitable quadruples of bent functions and Frobenius translators in order to provide new secondary constructions of bent functions. Recently, many works have been devoted to secondary constructions of bent functions; for an exhaustive list of the main contributions the reader is referred to [3]. Here, we mainly focus on the construction of Mesnager et al. [15, 17], where bent functions are constructed using a suitable set of permutations whose duals are also explicitly defined. This is a nice property, since in general computing the dual of a bent function is a hard problem. Many secondary constructions rely on initial bent functions whose duals satisfy certain properties. In [15], the required property for three bent functions f_1, f_2, f_3, whose sum f_4 = f_1 + f_2 + f_3 is again bent, is that f*_1 + f*_2 + f*_3 + f*_4 = 0, where f*_i denotes the dual of f_i. This problem has been partially solved in [15], and a general method for finding quadruples of so-called anti-self-dual bent functions is given in [21]. Nevertheless, a slightly different approach [9], which uses a quadruple of bent functions f_1, …, f_4 sharing the above properties but this time with f*_1 + f*_2 + f*_3 + f*_4 = 1 instead, also leads to the design of secondary bent functions. The problem of finding such quadruples satisfying f*_1 + f*_2 + f*_3 + f*_4 = 1 was left open in [9]. We provide an efficient and generic solution to this problem, which allows us to explicitly specify further secondary classes of bent functions and their duals. Based on the use of linear translators, the authors of [17] derived several infinite families of bent functions by defining suitable permutations from which initial quadruples of bent functions are obtained. These results are generalized in a straightforward manner using Frobenius translators, thus offering a much wider class of secondary bent functions.
The rest of this article is organized as follows. The concept of linear translators and the generic method of specifying new permutations based on their use are given in Section 2. In Section 3 we generalize the concept of linear translators by introducing the notion of Frobenius translators, which proves useful for specifying some classes of permutations in those cases where linear translators are inefficient. In Section 4 we consider a special class of permutations derived from mappings that permute a certain subspace of F_{p^n}. In Section 5, we employ Frobenius translators to specify some secondary classes of bent functions and their duals. Some concluding remarks are given in Section 6.
Linear translators -preliminaries
For clarity, we recall the original definition of linear translators given in [12] by Kyureghyan. Throughout this article p designates any prime and n = rk.
Definition 1 Let f be a function from F p n to F p k , γ ∈ F * p n and b fixed in F p k . Then γ is a b-linear translator for f if f (x + uγ) − f (x) = ub for all x ∈ F p n and all u ∈ F p k .
In particular, when k = 1, γ is usually said to be a b-linear structure of the function f
(where b ∈ F p ), that is f (x + γ) − f (x) = b for all x ∈ F p n .
We denote by Tr(·) the absolute trace on F_{2^n} and by T^n_k(·) the trace function from F_{p^n} to F_{p^k}: T^n_k(β) = β + β^{p^k} + ··· + β^{p^{(n/k−1)k}}. We also recall that an F_{p^k}-linear function on F_{p^n} is of the type

L : F_{p^n} → F_{p^n},  L(x) = Σ_{i=0}^{r−1} λ_i x^{p^{ki}},  λ_i ∈ F_{p^n}.
The following general theorem is given in [12] without proof, since the proof is equivalent to those given in [5] and [6] for k = 1 and k = n, respectively.
Theorem 1 A function f from F_{p^n} to F_{p^k} has a linear translator if and only if there is a non-bijective F_{p^k}-linear function L on F_{p^n} such that

f(x) = T^n_k(H ∘ L(x) + βx)

for some H : F_{p^n} → F_{p^n} and β ∈ F_{p^n}. In this case the kernel of L is contained in the subspace of linear translators (including 0 by convention).
The construction of permutations based on linear translators, introduced by Kyureghyan in [12, Theorem 1], is given below.
Theorem 2 [12, Theorem 1] Let n = rk, k > 1. Let L be an F_{p^k}-linear permutation of F_{p^n}. Let f be a function from F_{p^n} onto F_{p^k}, h : F_{p^k} → F_{p^k}, γ ∈ F*_{p^n}, and let b be fixed in F_{p^k}. Assume that γ is a b-linear translator of f. Then
F (x) = L(x) + L(γ)h(f (x))
permutes F p n if and only if g : u → u + bh(u) permutes F p k .
Frobenius translators
The main restriction of Theorem 2 is that it only gives new permutation polynomials for linear translators of f satisfying the conditions in Definition 1.
Example 1 Let p = 2, n = rk and f : x ↦ T^n_k(x^{2^{ℓk}+1}) with 1 ≤ ℓ ≤ r − 1. Let γ ∈ F_{2^n} and let u be any element of F_{2^k}. Then

f(x) + f(x + uγ) = T^n_k( x^{2^{ℓk}+1} + (x + γu)^{2^{ℓk}+1} )
                 = T^n_k( x^{2^{ℓk}} γu + x(γu)^{2^{ℓk}} + (γu)^{2^{ℓk}+1} )
                 = u T^n_k( x(γ^{2^{ℓk}} + γ^{2^{n−ℓk}}) ) + u² T^n_k( γ^{2^{ℓk}+1} ).

This shows that f(x) + f(x + uγ) = u² T^n_k(γ^{2^{ℓk}+1}), for all x and all u ∈ F_{2^k}, if and only if γ^{2^{ℓk}} + γ^{2^{n−ℓk}} = 0, which is equivalent to γ^{2^{2ℓk}} = γ.
In the above example, γ is not a b-linear translator of f with b = T^n_k(γ^{2^{ℓk}+1}), since for γ satisfying γ^{2^{2ℓk}} = γ we obtain f(x + γu) + f(x) = u² b instead of ub on the right-hand side. Finding other (non-affine) functions f which have b-linear translators appears to be a difficult problem. A global description is given in [12, Section 2], but precise instances would be useful for some constructions. In particular, extending Definition 1 to cover other cases, as illustrated in the above example, would be useful for deducing other families of permutation polynomials.
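Example 1 is easy to confirm by brute force on a small instance. The following Python sketch is our own check (not part of the original text): it takes n = 6, k = 2, r = 3, ℓ = 1 and assumes the realization GF(2^6) = F_2[x]/(x^6 + x + 1); the condition γ^{2^{2ℓk}} = γ then selects exactly the elements of the subfield GF(4).

IRR, N = 0b1000011, 6        # GF(2^6) = F_2[x]/(x^6 + x + 1); elements are 6-bit integers

def gf_mul(a, b):            # carry-less multiplication with reduction mod IRR
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):
            a ^= IRR
    return r

def gf_pow(a, e):            # square-and-multiply exponentiation
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def tr62(y):                 # trace T^6_2(y) = y + y^4 + y^16 onto GF(4)
    return y ^ gf_pow(y, 4) ^ gf_pow(y, 16)

def f(x):                    # f(x) = T^6_2(x^(2^(lk)+1)) = T^6_2(x^5) for l = 1, k = 2
    return tr62(gf_pow(x, 5))

F4 = [u for u in range(64) if gf_pow(u, 4) == u]                # the subfield GF(4)
translators = [y for y in range(1, 64) if gf_pow(y, 16) == y]   # gamma^(2^(2lk)) = gamma
for gamma in translators:
    b = tr62(gf_pow(gamma, 5))                                  # b = T^6_2(gamma^(2^(lk)+1))
    assert all(f(x ^ gf_mul(u, gamma)) ^ f(x) == gf_mul(gf_mul(u, u), b)
               for x in range(64) for u in F4)
print("Example 1 checked: each such gamma is a (1, b)-Frobenius translator of f")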
To accomplish this, we extend the definition of linear translators to cover the case when f(x + γu) − f(x) = u^{p^i} b, as given below.

Definition 2 Let n = rk, 1 ≤ k ≤ n. Let f be a function from F_{p^n} to F_{p^k}, γ ∈ F*_{p^n} and b fixed in F_{p^k}. Then γ is an (i, b)-Frobenius translator for f if f(x + uγ) − f(x) = u^{p^i} b for all x ∈ F_{p^n} and all u ∈ F_{p^k}, where i ∈ {0, …, k − 1}.
Notice that taking i = 0 in the above definition recovers the standard definition of linear translators. The next proposition generalizes the standard properties of linear translators to the case of Frobenius translators.
Proposition 1 Let γ_1, γ_2 ∈ F_{p^n} be (i, b_1)- and (i, b_2)-Frobenius translators, respectively, of the function f : F_{p^n} → F_{p^k}. Then
• γ_1 + γ_2 is an (i, b_1 + b_2)-Frobenius translator of f,
• cγ_1 is an (i, c^{p^i} b_1)-Frobenius translator of f, for any c ∈ F*_{p^k}.
Proof.

f(x + u(γ_1 + γ_2)) − f(x) = f(x + uγ_1) + u^{p^i} b_2 − f(x) = f(x) + u^{p^i} b_1 + u^{p^i} b_2 − f(x) = u^{p^i}(b_1 + b_2),
f(x + u(cγ_1)) − f(x) = f(x + (uc)γ_1) − f(x) = (uc)^{p^i} b_1 = u^{p^i}(c^{p^i} b_1). ⋄
The corollary below will be useful for satisfying the conditions of the constructions in Section 5.

Corollary 1 In the binary case, the sum of any three (i, b)-Frobenius translators γ_1, γ_2, γ_3 such that γ_1 + γ_2 + γ_3 ≠ 0 is again an (i, b)-Frobenius translator.
Proof. By Proposition 1, γ_1 + γ_2 + γ_3 is an (i, b + b + b)-Frobenius translator, and in the binary case b + b + b = b, so it is an (i, b)-Frobenius translator. ⋄

Theorem 3 For n = rk, let h : F_{p^k} → F_{p^k} be an arbitrary mapping and let γ ∈ F_{p^n} be an (i, b)-Frobenius translator of f : F_{p^n} → F_{p^k}, that is, f(x + uγ) − f(x) = u^{p^i} b for all x ∈ F_{p^n} and all u ∈ F_{p^k}. Then the mapping

G(x) = L(x)^{p^i} + L(γ)^{p^i} h(f(x)),    (3)

where L : F_{p^n} → F_{p^n} is an F_{p^k}-linear permutation, permutes F_{p^n} if and only if the mapping g(u) = u + b h(u) permutes F_{p^k}.
Proof. We follow the same steps as in the proof of [12, Theorem 6]. Let us first consider the special case L(x) = x, thus the function F(x) = x^{p^i} + γ^{p^i} h(f(x)). Assume that x, y ∈ F_{p^n} satisfy F(x) = F(y). Then

F(x) = x^{p^i} + γ^{p^i} h(f(x)) = y^{p^i} + γ^{p^i} h(f(y)) = F(y),

and hence

x^{p^i} = y^{p^i} + γ^{p^i}(h(f(y)) − h(f(x))) = y^{p^i} + γ^{p^i} a,  where a = h(f(y)) − h(f(x)) ∈ F_{p^k}.

This is equivalent to saying that x = y + γa^{p^{n−i}}, thus we suppose that F(y) = F(y + γa^{p^{n−i}}). Then, using

F(y + γa^{p^{n−i}}) = y^{p^i} + (γa^{p^{n−i}})^{p^i} + γ^{p^i} h(f(y + γa^{p^{n−i}})) = y^{p^i} + γ^{p^i} a + γ^{p^i} h(f(y) + ab),

we get

y^{p^i} + γ^{p^i} h(f(y)) = y^{p^i} + γ^{p^i} a + γ^{p^i} h(f(y) + ab),

which can be rewritten as

h(f(y)) = a + h(f(y) + ab).    (4)

The mapping F is a permutation of F_{p^n} if and only if the only a satisfying (4) is a = 0. Using exactly the same arguments as in [12], one can conclude that F is a permutation if and only if g(u) = u + b h(u) permutes F_{p^k}. To show that G(x) is a permutation it is enough to notice that G(x) = L(F(x)). ⋄

Remark 1
The condition imposed on h, which applies to both linear and Frobenius translators, namely that for a given b the function x + b h(x) is a permutation of F_{p^k}, is easily satisfied. Indeed, given any permutation g over F_{p^k} we can define h(x) = b^{-1}(g(x) − x), so that x + b h(x) = g(x) is a permutation. Thus, the main challenge is to specify functions f : F_{p^n} → F_{p^k} which admit linear/Frobenius translators. Each such translator then gives different permutations over F_{p^n} for different permutations g over F_{p^k}.
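As a concrete illustration of this recipe, the following Python code is our own sketch (the realization GF(2^4) = F_2[x]/(x^4 + x + 1) and the choices f = T^4_2, g(u) = u² are assumptions, not taken from the original): it builds h(x) = b^{-1}(g(x) − x) from a linear translator γ with b = T^4_2(γ) ≠ 0 and verifies that the resulting F from Theorem 2 (with L = id) permutes GF(2^4).

IRR, N = 0b10011, 4          # GF(2^4) = F_2[x]/(x^4 + x + 1)

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):
            a ^= IRR
    return r

def gf_pow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def tr42(y):                 # T^4_2(y) = y + y^4, onto GF(4)
    return y ^ gf_pow(y, 4)

F4 = [u for u in range(16) if gf_pow(u, 4) == u]
g = {u: gf_mul(u, u) for u in F4}                    # g(u) = u^2, a permutation of GF(4)
gamma = next(y for y in range(16) if tr42(y) != 0)   # any gamma outside the trace kernel
b = tr42(gamma)                                      # gamma is a b-linear translator of f = T^4_2
binv = gf_pow(b, 2**N - 2)                           # b^{-1} = b^(2^n - 2)
h = {u: gf_mul(binv, g[u] ^ u) for u in F4}          # h(x) = b^{-1}(g(x) - x)
assert all(u ^ gf_mul(b, h[u]) == g[u] for u in F4)  # u + b h(u) = g(u) permutes GF(4)
F = [x ^ gf_mul(gamma, h[tr42(x)]) for x in range(16)]   # Theorem 2 with L = id
assert len(set(F)) == 16
print("F(x) = x + gamma*h(f(x)) permutes GF(2^4)")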
Apart from Example 1, one can for instance find Frobenius translators by combining trace functions, more precisely by defining f(x) = T^n_k(x) + T^n_{2k}(x) for n = 4k, as shown below.
Proposition 2 For n = 4k, the function f : F_{p^n} → F_{p^{2k}} defined by f(x) = T^n_k(x) + T^n_{2k}(x) always has a 0-linear translator γ if γ + γ^{p^{2k}} = 0. In the binary case, every γ is a (k, γ^{p^k} + γ^{p^{3k}})-Frobenius translator.
Proof. Let n = 4k and f(x) = T^n_k(x) + T^n_{2k}(x). Let also γ ∈ F*_{p^{4k}} and u ∈ F_{p^{2k}}. Then

f(x + uγ) − f(x) = T^{4k}_k(x + uγ) + T^{4k}_{2k}(x + uγ) − T^{4k}_k(x) − T^{4k}_{2k}(x)
                = T^{4k}_k(uγ) + T^{4k}_{2k}(uγ)
                = 2uγ + (uγ)^{p^k} + 2(uγ)^{p^{2k}} + (uγ)^{p^{3k}}
                = 2u(γ + γ^{p^{2k}}) + u^{p^k}(γ^{p^k} + γ^{p^{3k}}).

For p ≠ 2 the only possibility for f to have a linear translator is γ + γ^{p^{2k}} = 0, which results in a 0-translator. In the binary case, we have f(x + uγ) − f(x) = u^{2^k}(γ^{2^k} + γ^{2^{3k}}) for any x ∈ F_{2^{4k}} and any u ∈ F_{2^{2k}}, which means that γ is a (k, γ^{2^k} + γ^{2^{3k}})-Frobenius translator. ⋄
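The binary claim can be checked exhaustively in the smallest case k = 1, n = 4 (our own sketch, again over the assumed realization GF(2^4) = F_2[x]/(x^4 + x + 1)); here F_{2^{2k}} = GF(4) and the asserted translator constant is γ² + γ^8.

IRR, N = 0b10011, 4          # GF(2^4) = F_2[x]/(x^4 + x + 1); here k = 1, 2k = 2, n = 4

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):
            a ^= IRR
    return r

def gf_pow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def f(x):                    # f(x) = T^4_1(x) + T^4_2(x)
    t41 = x ^ gf_pow(x, 2) ^ gf_pow(x, 4) ^ gf_pow(x, 8)
    t42 = x ^ gf_pow(x, 4)
    return t41 ^ t42

F4 = [u for u in range(16) if gf_pow(u, 4) == u]     # GF(2^(2k)) = GF(4)
for gamma in range(1, 16):
    b = gf_pow(gamma, 2) ^ gf_pow(gamma, 8)          # gamma^(2^k) + gamma^(2^(3k))
    assert all(f(x ^ gf_mul(u, gamma)) ^ f(x) == gf_mul(gf_mul(u, u), b)
               for x in range(16) for u in F4)
print("Proposition 2 checked: every gamma is a (1, gamma^2 + gamma^8)-Frobenius translator")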
Some existence issues
In this section we specify exactly the Frobenius translators for certain classes of mappings f : F_{p^n} → F_{p^k}, which gives us the possibility to specify some new infinite classes of permutations. The following existence results are similar to the ones presented in [4], with the difference that here we consider Frobenius translators in the sense of Definition 2.

Proposition 3 Let f(x) = x^d, f : F_{p^n} → F_{p^k}, […]

Proposition 4 Let f(x) = βx^i + x^j, i < j, where f : F_{p^n} → F_{p^k}, β ∈ F*_{p^n} and n = rk, where r > 1. Then the function f has a (linear or Frobenius) translator γ if and only if n is even and k = n/2. Furthermore, f(x) = x^{p^{i′}} + x^{p^{i′+n/2}} and γ is an (i′, γ^{p^{i′}} + γ^{p^{i′+n/2}})-Frobenius translator.
Proof. The same method as in [4, Proposition 2], which uses Lucas' theorem and the formula relating the coefficients of a given function to the coefficients of its derivative [20], shows that for f to have a translator (either linear or Frobenius) we necessarily have i = p^{i′} and j = p^{j′} for some i′ and j′.

Let us now analyse f(x) = βx^{p^{i′}} + x^{p^{j′}}. Since f maps to the subfield F_{p^k}, the following must be satisfied for all x:

(βx^{p^{i′}} + x^{p^{j′}})^{p^k} − βx^{p^{i′}} − x^{p^{j′}} = 0,
β^{p^k} x^{p^{i′+k}} + x^{p^{j′+k}} − βx^{p^{i′}} − x^{p^{j′}} = 0.

Hence, the exponents {p^{i′+k}, p^{j′+k}, p^{i′}, p^{j′}} cannot be pairwise distinct. This forces p^{i′+k} ≡ p^{j′} (mod p^n − 1) and p^{j′+k} ≡ p^{i′} (mod p^n − 1). It follows that j′ = i′ + k, k = n/2 and β = 1. Then,

f(x + uγ) − f(x) = (x + uγ)^{p^{i′}} + (x + uγ)^{p^{i′+n/2}} − x^{p^{i′}} − x^{p^{i′+n/2}} = u^{p^{i′}} γ^{p^{i′}} + u^{p^{i′+n/2}} γ^{p^{i′+n/2}} = u^{p^{i′}}(γ^{p^{i′}} + γ^{p^{i′+n/2}}),

and γ is an (i′, γ^{p^{i′}} + γ^{p^{i′+n/2}})-Frobenius translator. ⋄
We conclude this section by specifying exactly the Frobenius translators related to quadratic mappings of the form f(x) = T^n_k(βx^{p^i+p^j}), as discussed in [4].
Lemma 1 [4] Let n = rk and f(x) = T^n_k(βx^{p^i+p^j}), where i < j. Then f has a derivative independent of x, that is, f(x + uγ) − f(x) = T^n_k(β(uγ)^{p^i+p^j}) for all x ∈ F_{p^n} and all u ∈ F_{p^k}, if and only if β, γ ∈ F*_{p^n} are related through

βγ^{p^{i+lk}} + β^{p^{(r−l)k}} γ^{p^{i+(r−l)k}} = 0,    (5)

where 0 < l < r satisfies j = i + kl.
Nevertheless, the relation between β and γ imposed by (5), and their existence, were not investigated in [4]. Below, we specify the exact relationship between β and γ, thus making it possible to define some infinite classes of permutations explicitly.

Proposition 5 Let n, r, k, l be as in Lemma 1, let α be a primitive element of F_{p^n}, and let γ = α^a, β = α^b ∈ F_{p^n}. Then βγ^{p^{i+lk}} + β^{p^{(r−l)k}} γ^{p^{i+(r−l)k}} = 0 if and only if

b = −a p^{i+lk}(p^{(r−l)k} + 1) mod (p^n − 1),  for p = 2,
b = −a p^{i+lk}(p^{(r−l)k} + 1) + ((p^n − 1)/2)(1 − p^{(r−l)k})^{−1} mod (p^n − 1),  for p ≠ 2.
Proof. Expressed in terms of α, the equation

−α^{b+ap^{i+lk}} = α^{bp^{(r−l)k}+ap^{i+(r−l)k}}

is considered separately for the binary and non-binary case.

Let p = 2. In this case α^{b+ap^{i+lk}} = α^{bp^{(r−l)k}+ap^{i+(r−l)k}}. Therefore,

b + ap^{i+lk} ≡ bp^{(r−l)k} + ap^{i+(r−l)k} (mod p^n − 1)
b(1 − p^{(r−l)k}) ≡ ap^{i+lk}(p^{2(r−l)k} − 1) (mod p^n − 1)
b(1 − p^{(r−l)k}) ≡ ap^{i+lk}(p^{(r−l)k} − 1)(p^{(r−l)k} + 1) (mod p^n − 1)
b ≡ −ap^{i+lk}(p^{(r−l)k} + 1) (mod p^n − 1).

Let p ≠ 2. In this case −1 = α^{(p^n−1)/2} and α^{(p^n−1)/2} α^{b+ap^{i+lk}} = α^{bp^{(r−l)k}+ap^{i+(r−l)k}}. Therefore,

b + ap^{i+lk} + (p^n − 1)/2 ≡ bp^{(r−l)k} + ap^{i+(r−l)k} (mod p^n − 1)
2b(1 − p^{(r−l)k}) ≡ 2a(p^{i+(r−l)k} − p^{i+lk}) (mod p^n − 1)
2b(1 − p^{(r−l)k}) ≡ 2ap^{i+lk}(p^{2(r−l)k} − 1) (mod p^n − 1)
2b(1 − p^{(r−l)k}) ≡ 2ap^{i+lk}(p^{(r−l)k} − 1)(p^{(r−l)k} + 1) (mod p^n − 1)
2b ≡ −2ap^{i+lk}(p^{(r−l)k} + 1) (mod p^n − 1). ⋄
The Frobenius translators related to the function f in Lemma 1 are further specified in the result below.
Theorem 4 Let n = rk and f(x) = T^n_k(βx^{p^i+p^{i+kl}}), where r > 1 and 0 < l < r. Assume that γ ∈ F*_{p^n} is an (s, b)-Frobenius translator of f, where b = T^n_k(βγ^{p^i+p^{i+lk}}). Then:

i) If p = 2, condition (5) in Lemma 1 must be satisfied and s = i + 1. In particular, if β ∈ F_{2^k} then γ = 1 is a 0-translator of f if r is even, and γ = 1 is an (i + 1, β)-translator if r is odd.

ii) If p > 2 we necessarily have b = 0. In particular, if β ∈ F_{p^k} then n is even and γ must satisfy γ^{p^{2kl}−1} = −1 and T^n_k(γ^{p^i+p^{i+lk}}) = 0.

Proof. If (5) is satisfied then

f(x + uγ) − f(x) = u^{2p^i} T^n_k(βγ^{p^i+p^{i+lk}}).

i) Let p = 2. Then u^{2p^i} = u^{p^{i+1}} and γ is an (i + 1, b)-translator. In particular, if β ∈ F_{2^k} then γ = 1 is a solution to (5). Then b = β T^n_k(γ^{2^i+2^{i+lk}}) = β T^n_k(1) = 0 if r is even, and b = β for odd r.

ii) For p > 2 we would need 2p^i ≡ p^t (mod p^k − 1) for some positive integer t, which implies 2p^i = m(p^k − 1) + p^t for some integer m. Since p is odd, the left-hand side of this equation is even and the right-hand side is odd, which is impossible. The only remaining option is for γ to be a 0-translator.
The rest follows directly from [4, Theorem 4]. ⋄
The following example specifies a function having a Frobenius translator constructed in this way.

Example 2 Let us consider f(x) = T^n_k(βx^{p^i+p^j}) as given in Lemma 1, where p = 2. The relevant parameters are n = rk = 8, r = 4, k = 2 and i = 2, l = 1, j = i + kl = 4. Let α be a primitive element of the field F_{2^8}. We fix an arbitrary element γ = α^a by setting e.g. a = 3. Now the function f : F_{2^8} → F_{2^2} having a Frobenius translator can be specified using condition (5) in Lemma 1. The element β = α^b is then computed, using Proposition 5, by specifying b to be

b = −a p^{i+lk}(p^{(r−l)k} + 1) mod (p^n − 1) = −3 · 2^{2+1·2} · (2^{(4−1)·2} + 1) mod 255 = 195.

By Theorem 4, it follows that f(x) = T^n_k(βx^{p^i+p^j}) = T^8_2(α^{195} x^{2^2+2^4}) has the (s, b) = (3, T^8_2(α^{195} α^{3(2^2+2^4)}))-Frobenius translator γ = α³.
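Example 2 can be verified by brute force. The Python sketch below is our own check; it assumes the realization GF(2^8) = F_2[x]/(x^8 + x^4 + x^3 + x^2 + 1), a standard primitive polynomial for which α = x generates the multiplicative group. Note that βγ^{2^2+2^4} = α^{195+60} = α^{255} = 1, so the translator constant is b = T^8_2(1) = 0 here (r = 4 is even), i.e., γ is in fact a 0-translator; the code confirms both condition (5) and the translator identity.

IRR, N = 0b100011101, 8      # GF(2^8) = F_2[x]/(x^8 + x^4 + x^3 + x^2 + 1), alpha = x primitive

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):
            a ^= IRR
    return r

def gf_pow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

alpha = 0b10
gamma = gf_pow(alpha, 3)     # a = 3
beta = gf_pow(alpha, 195)    # b = 195 from Proposition 5
# Condition (5): beta*gamma^(2^4) equals beta^(2^6)*gamma^(2^8), so their char-2 sum is 0
assert gf_mul(beta, gf_pow(gamma, 2**4)) == gf_mul(gf_pow(beta, 2**6), gf_pow(gamma, 2**8))

def tr82(y):                 # T^8_2(y) = y + y^4 + y^16 + y^64
    return y ^ gf_pow(y, 4) ^ gf_pow(y, 16) ^ gf_pow(y, 64)

def f(x):                    # f(x) = T^8_2(beta * x^(2^2 + 2^4)) = T^8_2(beta * x^20)
    return tr82(gf_mul(beta, gf_pow(x, 20)))

F4 = [u for u in range(256) if gf_pow(u, 4) == u]
b = tr82(gf_mul(beta, gf_pow(gamma, 20)))            # the translator constant (0 here)
assert all(f(x ^ gf_mul(u, gamma)) ^ f(x) == gf_mul(gf_pow(u, 8), b)
           for x in range(256) for u in F4)
print("Example 2 verified; translator constant b =", b)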
Permuting subspaces and derived permutations
In this section we consider a special class of polynomials for which the permutation property is scaled down to the same property restricted to a certain subspace of the field F_{p^n}. It will be shown that the special form considered here, together with the restriction of the permutation property to this subspace, easily leads to a large class of permutations of the form

F(x) = L(x) + (x^{p^k} − x + δ)^s.
We recall the following result that was derived recently in [4].
Theorem 5 Let p be an odd prime, n = 2k and F : F_{p^n} → F_{p^n} with

F(x) = L(x) + (x^{p^k} − x + δ)^s,  δ ∈ F_{p^n},    (6)

where L ∈ F_{p^k}[x] is a linear permutation and s is any integer in the range [0, p^n − 2]. Then F is a permutation of F_{p^n} if and only if the function

G(y) = −L(y) + (y + δ)^s − (y + δ)^{p^k s}

is a permutation of the subspace S = {y ∈ F_{p^n} | T^n_k(y) = 0}. In particular, if s satisfies p^k s ≡ s (mod p^n − 1) then F is a permutation.
We notice that the form of F above corresponds to x + bh(x) when L(x) = x and b = 1. Furthermore, as already noticed in [4], L induces a permutation of S. By noting that T^n_k(α) = 0 if and only if there exists β ∈ F_{p^n} such that α = β − β^{p^k}, we can write S = {y ∈ F_{p^n} | T^n_k(y) = 0} = {β − β^{p^k} | β ∈ F_{p^n}}. Clearly, G : S → S, since S is a subspace and (y + δ)^s − (y + δ)^{p^k s} ∈ S.
We first consider the special case when δ ∈ S.
Proposition 6 Let p be odd, n = 2k, and S = {y ∈ F_{p^n} | T^n_k(y) = 0}. Then the mapping

G(x) = −L(x) + (x + δ)^s − (x + δ)^{p^k s}

permutes the set S for any δ ∈ S, any linear permutation L, and any even s ∈ {2, 4, …, p^n − 1}. Consequently,

F(x) = L(x) + (x^{p^k} − x + δ)^s

is a permutation for any δ ∈ S, any such L, and any even s ∈ {2, 4, …, p^n − 1}.
Proof. Since s is even, write s = 2s′ and let a ∈ S be arbitrary. Because a ∈ S we can write a = b − b^{p^k} for some b ∈ F_{p^n}, and

(b − b^{p^k})^{2s′ p^k} = (b^{p^k} − b^{p^{2k}})^{2s′} = (b^{p^k} − b)^{2s′} = (−(b − b^{p^k}))^{2s′} = (b − b^{p^k})^{2s′}.

Since x + δ is an element of S for every x, δ ∈ S, the function G(x), restricted to S, can also be written as

G(x) = −L(x) + (x + δ)^{2s′} − (x + δ)^{2s′ p^k} = −L(x) + (x + δ)^{2s′} − (x + δ)^{2s′} = −L(x).

Since L(x) is a linear permutation, and we already observed that it induces a permutation of S, G(x) must be a permutation of S. From Theorem 5 it then follows that F(x) = L(x) + (x^{p^k} − x + δ)^s is a permutation. ⋄
This result provides us with many infinite classes of permutations of the form (6), as illustrated by the following example.

Example 3 Let p = 3, n = 2k, k = 3, let L(x) be any F_{3^3}-linear permutation polynomial of F_{3^6}, and let δ ∈ F_{3^6} be such that T^6_3(δ) = 0. It then follows from Proposition 6 that the mapping

G(x) = −L(x) + (x + δ)^s − (x + δ)^{3^3 s}

permutes the set S = {y ∈ F_{3^6} | T^6_3(y) = 0} for any even s. Further, by Theorem 5,

F(x) = L(x) + (x^{3^3} − x + δ)^s

is a permutation for any such δ and any even s.
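Example 3 lives in GF(3^6), which is somewhat large for a quick check; the Python sketch below is our own choice of a smaller instance (p = 3, k = 1, n = 2, with GF(9) = F_3[x]/(x² + 1) and elements written as pairs (a, b) ↔ a + bx) and verifies Proposition 6 with L = id.

P = 3
FIELD = [(a, b) for a in range(P) for b in range(P)]     # GF(9) = F_3[x]/(x^2 + 1)

def add(u, v):
    return ((u[0] + v[0]) % P, (u[1] + v[1]) % P)

def sub(u, v):
    return ((u[0] - v[0]) % P, (u[1] - v[1]) % P)

def mul(u, v):               # (a + b i)(c + d i) with i^2 = -1
    a, b = u
    c, d = v
    return ((a*c - b*d) % P, (a*d + b*c) % P)

def powe(u, e):
    r = (1, 0)
    for _ in range(e):
        r = mul(r, u)
    return r

def frob(u):                 # Frobenius x -> x^(p^k), here x -> x^3
    return powe(u, P)

S = [d for d in FIELD if add(d, frob(d)) == (0, 0)]      # S = {y : T^2_1(y) = y + y^3 = 0}
for delta in S:
    for s in (2, 4, 6):                                  # even exponents s
        F = [add(x, powe(add(sub(frob(x), x), delta), s)) for x in FIELD]
        assert len(set(F)) == 9                          # F(x) = x + (x^3 - x + delta)^s permutes GF(9)
print("Proposition 6 checked on GF(9) for all delta in S and even s in {2, 4, 6}")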
A closely related issue in this context is whether there are suitable L(y) and exponents s when δ ∉ S.

Proposition 7 Let p be odd, n = 2k, and S = {y ∈ F_{p^n} | T^n_k(y) = 0}. Then the mapping

G(x) = −L(x) + (x + δ)^s − (x + δ)^{p^k s}

permutes the set S for any δ, any linearized permutation L, and any s = t(p^k + 1), where t is an integer. Consequently,

F(x) = L(x) + (x^{p^k} − x + δ)^{t(p^k+1)}

is a permutation for any δ, any L, and any integer t.
Proof. For every x ∈ F_{p^n} we see that

x^{t(p^k+1)} − x^{p^k t(p^k+1)} = x^{t(p^k+1)} − x^{t p^{2k}} x^{t p^k} = x^{t(p^k+1)} − x^t x^{t p^k} = x^{t(p^k+1)} − x^{t(p^k+1)} = 0,

using x^{p^{2k}} = x. It follows that

G(x) = −L(x) + (x + δ)^{t(p^k+1)} − (x + δ)^{p^k t(p^k+1)} = −L(x).

As before, G restricted to S is therefore a permutation of S, and it follows from Theorem 5 that F(x) is a permutation of F_{p^n}. ⋄
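Both the key identity x^{t(p^k+1)} = x^{p^k t(p^k+1)} and the resulting permutation property can be checked on the same toy field GF(9) (our own sketch, p = 3, k = 1, t = 1, so s = p^k + 1 = 4; δ now ranges over the whole field).

P = 3
FIELD = [(a, b) for a in range(P) for b in range(P)]     # GF(9) = F_3[x]/(x^2 + 1)

def add(u, v): return ((u[0] + v[0]) % P, (u[1] + v[1]) % P)
def sub(u, v): return ((u[0] - v[0]) % P, (u[1] - v[1]) % P)
def mul(u, v):
    a, b = u
    c, d = v
    return ((a*c - b*d) % P, (a*d + b*c) % P)
def powe(u, e):
    r = (1, 0)
    for _ in range(e):
        r = mul(r, u)
    return r
def frob(u): return powe(u, P)

s = 1 * (P + 1)                                          # s = t(p^k + 1) with t = 1
assert all(powe(x, s) == powe(frob(x), s) for x in FIELD)    # x^s = x^(p^k s) for all x
for delta in FIELD:                                      # arbitrary delta, not restricted to S
    F = [add(x, powe(add(sub(frob(x), x), delta), s)) for x in FIELD]
    assert len(set(F)) == 9
print("Proposition 7 checked on GF(9): F permutes for every delta with s = p^k + 1")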
Application to bent functions
In this section we provide a generalization of the results in [16] by using Frobenius translators instead of standard linear translators, when p = 2. This allows us to specify some new infinite classes of permutations and their inverses, similarly to the approach in [16], which in turn gives rise to suitable quadruples of permutations from which secondary classes of bent functions can be deduced. Furthermore, we also solve the open problem from [9] mentioned in the introduction, which concerns the existence of quadruples of bent functions whose duals sum to one.
Generalization of certain permutations using Frobenius translators
The main result of the method in [15] is the condition imposed on the duals of four bent functions f_1, …, f_4 (where f_4 = f_1 + f_2 + f_3), given by f*_1 + f*_2 + f*_3 + f*_4 = 0, where f*_i denotes the dual of f_i. This condition was shown to be both necessary and sufficient for the function H = f_1 f_2 + f_1 f_3 + f_2 f_3 to be bent. This naturally leads to the employment of the Maiorana-McFarland class of bent functions, where a bent function f_j : F_{2^n} × F_{2^n} → F_2 in this class is defined as f_j(x, y) = Tr^n_1(xφ_j(y) + θ_j(y)), for some permutation φ_j of F_{2^n} and an arbitrary function θ_j on F_{2^n}. It was shown in [17] that the above quadruples of bent functions are easily identified using a set of permutations defined by means of linear translators. We show that this approach extends easily to Frobenius translators as well, which induces larger classes of these sets of permutations suitable for defining new bent functions.
Proposition 8 (Generalization of Proposition 3 in [16]) Let f : F_{2^n} → F_{2^k}, let L : F_{2^n} → F_{2^n} be an F_{2^k}-linear permutation of F_{2^n}, and let g : F_{2^k} → F_{2^k} be a permutation. Assume γ ∈ F*_{2^n} and a ∈ F*_{2^k} are such that γ is an (i, a)-Frobenius translator of f with respect to F_{2^k}. Then the function φ : F_{2^n} → F_{2^n},

φ(x) = L(x) + L(γ)( g(f(x)) + f(x)/a )^{2^{n−i}},    (7)

is a permutation polynomial of F_{2^n}, and

φ^{-1}(x) = L^{-1}(x) + γ a^{−2^{n−i}} ( f(L^{-1}(x)) + g^{-1}( f(L^{-1}(x))/a ) )^{2^{n−i}}.
Proof. Let us define h : F_{2^n} → F_{2^n} as

h(x) = x + γ( g(f(x)) + f(x)/a )^{2^{n−i}}.

Setting y = x + γ( g(f(x)) + f(x)/a )^{2^{n−i}} leads to

f(y) = f( x + γ( g(f(x)) + f(x)/a )^{2^{n−i}} ) = f(x) + a( ( g(f(x)) + f(x)/a )^{2^{n−i}} )^{2^i} = a g(f(x)).

Therefore, f(x) = g^{-1}( f(y)/a ) and

x = y + γ( g(f(x)) + f(x)/a )^{2^{n−i}} = y + γ a^{−2^{n−i}} ( f(y) + g^{-1}( f(y)/a ) )^{2^{n−i}}.

This means that h is a permutation of F_{2^n} and its inverse is

h^{-1}(x) = x + γ a^{−2^{n−i}} ( f(x) + g^{-1}( f(x)/a ) )^{2^{n−i}}.

Now we can define φ as φ = L ∘ h,

φ(x) = L(h(x)) = L( x + γ( g(f(x)) + f(x)/a )^{2^{n−i}} ) = L(x) + L(γ)( g(f(x)) + f(x)/a )^{2^{n−i}},

and φ^{-1} as φ^{-1} = h^{-1} ∘ L^{-1},

φ^{-1}(x) = L^{-1}(x) + γ a^{−2^{n−i}} ( f(L^{-1}(x)) + g^{-1}( f(L^{-1}(x))/a ) )^{2^{n−i}}. ⋄
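The pair (φ, φ^{-1}) can be validated numerically. The Python sketch below is our own check: it reuses f(x) = T^6_2(x^5) from Example 1 over the assumed realization GF(2^6) = F_2[x]/(x^6 + x + 1), a translator γ ∈ GF(4)* with a = T^6_2(γ^5) and i = 1, the involutive permutation g(u) = u² of GF(4), and L = id.

IRR, N = 0b1000011, 6

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):
            a ^= IRR
    return r

def gf_pow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def tr62(y):
    return y ^ gf_pow(y, 4) ^ gf_pow(y, 16)

def f(x):                                     # f(x) = T^6_2(x^5); any gamma in GF(4)* is a (1, a)-translator
    return tr62(gf_pow(x, 5))

gamma = next(y for y in range(2, 64) if gf_pow(y, 4) == y)   # an element of GF(4)* other than 1
a = tr62(gf_pow(gamma, 5))
assert a != 0
ainv = gf_pow(a, 2**N - 2)                    # a^{-1} in the field
E = 1 << (N - 1)                              # 2^(n-i) with i = 1

def g(u):  return gf_mul(u, u)                # squaring: an involutive permutation of GF(4)
def ginv(u): return gf_mul(u, u)

def phi(x):
    return x ^ gf_mul(gamma, gf_pow(g(f(x)) ^ gf_mul(f(x), ainv), E))

def phi_inv(x):
    core = f(x) ^ ginv(gf_mul(f(x), ainv))
    return x ^ gf_mul(gamma, gf_mul(gf_pow(ainv, E), gf_pow(core, E)))

assert all(phi(phi_inv(x)) == x and phi_inv(phi(x)) == x for x in range(64))
print("phi and phi^{-1} from Proposition 8 are mutually inverse on GF(2^6)")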
In order to use these permutations for constructing new secondary classes of bent functions, they must satisfy condition (A_n), which was first introduced by Mesnager in [15] and later employed in [17].
Definition 3 Three pairwise distinct permutations φ_1, φ_2, φ_3 of F_{2^n} are said to satisfy (A_n) if the following conditions hold:
• ψ = φ_1 + φ_2 + φ_3 is a permutation of F_{2^n},
• ψ^{-1} = φ_1^{-1} + φ_2^{-1} + φ_3^{-1}.
The main challenge is to define suitable permutations φ_j as in (7) such that ψ = φ_1 + φ_2 + φ_3 is also a permutation satisfying condition (A_n), quite similarly to the approach taken in [16]. To achieve this, the simplest way is to use the same L, f, g for all φ_j, j ∈ {1, 2, 3}, so that the functions φ_j differ only in the term L(γ_j). More precisely, the function f admits different (i, a)-Frobenius translators γ_j, for some fixed i and a, with the additional condition that γ_1 + γ_2 + γ_3 is also an (i, a)-Frobenius translator of f. In the non-binary case, finding such triples of Frobenius translators can be difficult, but in the binary case the sum of any three (i, a)-Frobenius translators (when nonzero) is again an (i, a)-Frobenius translator, as Corollary 1 shows. Then
ψ(x) = L(x) + L(γ_1 + γ_2 + γ_3)( g(f(x)) + f(x)/a )^{2^{n−i}},
ψ^{-1}(x) = L^{-1}(x) + (γ_1 + γ_2 + γ_3) a^{−2^{n−i}} ( f(L^{-1}(x)) + g^{-1}( f(L^{-1}(x))/a ) )^{2^{n−i}},

and it is easily verified that the permutations φ_j satisfy condition (A_n); a small brute-force confirmation is sketched below.
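The following Python check is our own toy instance: over the assumed realization GF(2^4) = F_2[x]/(x^4 + x + 1) take f = T^4_2 with linear translators (i = 0, so the outer exponent 2^{n−i} acts trivially), g(u) = u², and three distinct γ_j in the same trace fiber T^4_2(γ_j) = a ≠ 0, so that γ_1 + γ_2 + γ_3 also has trace 3a = a and is nonzero.

IRR, N = 0b10011, 4

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):
            a ^= IRR
    return r

def gf_pow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def tr42(y):
    return y ^ gf_pow(y, 4)

F4 = [u for u in range(16) if gf_pow(u, 4) == u]
a = next(u for u in F4 if u)                      # a common translator value a != 0
gammas = [y for y in range(16) if tr42(y) == a][:3]
ainv = gf_pow(a, 2**N - 2)

def rho(x):                                       # rho(x) = g(f(x)) + f(x)/a with g(u) = u^2
    v = tr42(x)
    return gf_mul(v, v) ^ gf_mul(v, ainv)

perms = [[x ^ gf_mul(gm, rho(x)) for x in range(16)] for gm in gammas]
psi = [perms[0][x] ^ perms[1][x] ^ perms[2][x] for x in range(16)]
assert all(len(set(p)) == 16 for p in perms + [psi])
inv = [{p[x]: x for x in range(16)} for p in perms]
psi_inv = {psi[x]: x for x in range(16)}
assert all(psi_inv[x] == inv[0][x] ^ inv[1][x] ^ inv[2][x] for x in range(16))
print("phi_1, phi_2, phi_3 satisfy condition (A_n) on GF(2^4)")

This approach allows us to construct new bent functions using the result from [15, 17] below.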
Proposition 9 ([15, 17]) Let φ_1, φ_2, φ_3 be three pairwise distinct permutations satisfying (A_n). Then the Boolean function H : F_{2^n} × F_{2^n} → F_2 defined by

H(x, y) = Tr^n_1(xφ_1(y))Tr^n_1(xφ_2(y)) + Tr^n_1(xφ_1(y))Tr^n_1(xφ_3(y)) + Tr^n_1(xφ_2(y))Tr^n_1(xφ_3(y))

is bent. Furthermore, its dual function H* is given by

H*(x, y) = Tr^n_1(φ_1^{-1}(x)y)Tr^n_1(φ_2^{-1}(x)y) + Tr^n_1(φ_1^{-1}(x)y)Tr^n_1(φ_3^{-1}(x)y) + Tr^n_1(φ_2^{-1}(x)y)Tr^n_1(φ_3^{-1}(x)y).
Notice that H is essentially defined as H = f_1 f_2 + f_1 f_3 + f_2 f_3, where f_j(x, y) = Tr^n_1(xφ_j(y)), so that θ_j(y) = 0.
Remark 2 Using the same techniques the following Propositions and Theorems from [16] can be generalized as well with minor modifications.
• Theorems 1, 2, 3, 4 in [16];
• Propositions 4, 5, 6 in [16].
Due to this similarity, we only discuss the generalization of Theorem 1 in [16] and give an example of bent functions constructed using this generalization.
Theorem 6 (Generalization of Theorem 1 in [16]) Let f : F_{2^n} → F_{2^k}, let L : F_{2^n} → F_{2^n} be an F_{2^k}-linear permutation of F_{2^n}, and let g : F_{2^k} → F_{2^k} be a permutation. Assume γ_1, γ_2, γ_3 ∈ F*_{2^n} are pairwise distinct (i, a)-Frobenius translators of f with respect to F_{2^k} (a ∈ F*_{2^k}) such that γ_1 + γ_2 + γ_3 is again an (i, a)-Frobenius translator. Suppose γ_1 + γ_2 + γ_3 ≠ 0. Set

ρ(x) = ( g(f(x)) + f(x)/a )^{2^{n−i}}  and  ρ̃(x) = a^{−2^{n−i}} ( g^{-1}( f(x)/a ) + f(x) )^{2^{n−i}}.

Then

H(x, y) = Tr(xL(y)) + Tr(L(γ_1)xρ(y))Tr(L(γ_2)xρ(y)) + Tr(L(γ_1)xρ(y))Tr(L(γ_3)xρ(y)) + Tr(L(γ_2)xρ(y))Tr(L(γ_3)xρ(y))

is bent. Furthermore, its dual function H* is given by

H*(x, y) = Tr(yL^{-1}(x)) + Tr(γ_1 yρ̃(L^{-1}(x)))Tr(γ_2 yρ̃(L^{-1}(x))) + Tr(γ_1 yρ̃(L^{-1}(x)))Tr(γ_3 yρ̃(L^{-1}(x))) + Tr(γ_2 yρ̃(L^{-1}(x)))Tr(γ_3 yρ̃(L^{-1}(x))).
Proof. The only difference between Theorem 1 of [16] and the generalized version presented here is the modification of ρ and ρ̃. In the original approach ρ(x) = g(f(x)) + f(x)/a and ρ̃(x) = a^{2^i} g^{−1}( f(x)/a ) + f(x). Raising ρ and ρ̃ to the power 2^{n−i}, as has been done in the proof of Proposition 8, the proof of Theorem 6 is then the same as the proof of Theorem 1 in [16]. ⋄

Example 4 Let n = 8, let ω be a primitive element of F_{2^8}, let L be an arbitrary F_{2^4}-linear permutation of F_{2^8} and let h be an arbitrary permutation of F_{2^4}. Suppose we want the function f : F_{2^8} → F_{2^4} to be a binomial and to use it in the construction of a bent function via Theorem 6. Using only the standard definition of a linear translator, we would be forced to define f(x) = Tr^8_4(x) according to Proposition 2 from [4]. But using Proposition 4 we can define f(x) = x^{2^i} + x^{2^{i+4}} for any i, with any γ ∈ F_{2^8} being a (γ^{2^i} + γ^{2^{i+4}}, i)-Frobenius translator of f.
To use Theorem 6, we need three pairwise distinct (a,i)-Frobenius translators, i.e. three distinct γ_1, γ_2, γ_3 such that
γ_1^{2^i} + γ_1^{2^{i+4}} = γ_2^{2^i} + γ_2^{2^{i+4}} = γ_3^{2^i} + γ_3^{2^{i+4}} = (γ_1 + γ_2 + γ_3)^{2^i} + (γ_1 + γ_2 + γ_3)^{2^{i+4}} = a.
This would imply that γ_1, γ_2, γ_3, γ_1 + γ_2 + γ_3 are all (a,i)-Frobenius translators. A quick computation shows that γ_1 + γ_2, γ_1 + γ_3, γ_2 + γ_3 ∈ F_{2^4} is required. We select γ_1 = ω, γ_2 = ω^3, γ_3 = ω^{16} and, for example, if we fix i = 2, we get
γ_1^{2^i} + γ_1^{2^{i+4}} = γ_2^{2^i} + γ_2^{2^{i+4}} = γ_3^{2^i} + γ_3^{2^{i+4}} = (γ_1 + γ_2 + γ_3)^{2^i} + (γ_1 + γ_2 + γ_3)^{2^{i+4}} = ω^{136}
and ω + ω^3 + ω^{16} = ω^{48} ≠ 0. Let ρ, ρ̃ and H be defined as in Theorem 6. It follows that H is a bent function.
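The translator property used in this example is easy to verify by machine. The sketch below checks, for every x ∈ F_{2^8} and every u in the subfield F_{2^4}, that f(x + uγ) = f(x) + u^{2^i} a. The reduction polynomial and the test element γ are our own choices of representation, so the concrete value of a printed here is not literally the ω^{136} of the example.

```python
MOD = 0x11D              # x^8 + x^4 + x^3 + x^2 + 1

def mul(p, q):           # multiplication in F_{2^8}
    r = 0
    while q:
        if q & 1:
            r ^= p
        q >>= 1
        p <<= 1
        if p & 0x100:
            p ^= MOD
    return r

def pw(x, e):            # exponentiation in F_{2^8}
    r = 1
    while e:
        if e & 1:
            r = mul(r, x)
        x = mul(x, x)
        e >>= 1
    return r

F16 = [t for t in range(256) if pw(t, 16) == t]    # the subfield F_{2^4}
i = 2
f = lambda x: pw(x, 2 ** i) ^ pw(x, 2 ** (i + 4))

gamma = 2                 # any gamma outside F_{2^4} gives a nonzero a
a = f(gamma)              # a = gamma^(2^i) + gamma^(2^(i+4))
assert a != 0 and a in F16

for x in range(256):
    assert f(x) in F16    # f maps F_{2^8} into F_{2^4}
    for u in F16:         # f(x + u*gamma) = f(x) + u^(2^i) * a
        assert f(x ^ mul(u, gamma)) == f(x) ^ mul(pw(u, 2 ** i), a)
print("gamma is an (a, i)-Frobenius translator with a =", a)
```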
New bent functions from suitable quadruples of bent functions
In contrast to the above approach, which preserves the variable space of the input functions, another method of constructing secondary bent functions on an extended variable space was recently proposed in [9]. Nevertheless, a quite similar set of conditions on the initial bent functions f_1, f_2, f_3, which was left as an open problem in [9], is imposed in order that the resulting function F defined on a larger variable space is bent.
Open Problem 1 [9] Find bent functions f_1, f_2, f_3 such that f_1 + f_2 + f_3 = f_4 is again a bent function and f_1* + f_2* + f_3* + f_4* = 1.
The design rationale is illustrated by Example 4.9 of [9], where using f_1, f_2, f_3 : F_{2^n} → F_2 that satisfy the above condition implies that F : F_{2^n} × F_2 × F_2 → F_2 defined as
F(X, y_1, y_2) = f_1(X) + y_1(f_1 + f_3)(X) + y_2(f_1 + f_2)(X)
is bent.
Below we present a construction that solves this open problem and gives an example of its use.
Theorem 7 Let f_i(X) = f_i(x,y) = Tr(xφ_i(y)) + h_i(y) for i ∈ {1, 2, 3}, where the φ_i satisfy the condition (A_n) and x, y ∈ F_{2^{n/2}}. If the functions h_i satisfy
h_1(φ_1^{−1}(y)) + h_2(φ_2^{−1}(y)) + h_3(φ_3^{−1}(y)) + (h_1 + h_2 + h_3)((φ_1 + φ_2 + φ_3)^{−1}(y)) = 1,   (8)
then f_4 = f_1 + f_2 + f_3 is again a bent function and f_1* + f_2* + f_3* + f_4* = 1.

Proof. Since the permutations φ_i satisfy the condition (A_n), their sum ψ = φ_1 + φ_2 + φ_3 is again a permutation and f_4 is a bent Maiorana-McFarland function. Its dual is f_4*(x,y) = Tr(y ψ^{−1}(x)) + (h_1 + h_2 + h_3)(ψ^{−1}(x)), while f_i*(x,y) = Tr(y φ_i^{−1}(x)) + h_i(φ_i^{−1}(x)). Since ψ^{−1} = φ_1^{−1} + φ_2^{−1} + φ_3^{−1} by (A_n), the trace parts cancel in f_1* + f_2* + f_3* + f_4*, and what remains is precisely the left-hand side of (8). ⋄
The following example illustrates the procedure of defining three suitable bent functions on F_{2^n} used to specify a bent function F on F_{2^n} × F_2 × F_2. The condition (8) imposed on h_i in the definition of suitable f_i(x,y) = Tr(xφ_i(y)) + h_i(y) turns out to be easily satisfied.
Example 5 Let α be a primitive element of F_{2^6}. For simplicity, we define the permutations φ_i over F_{2^6} as φ_1(y) = y + α, φ_2(y) = y + α^2, φ_3(y) = y + α^3, which are self-inverse, and it is straightforward to verify that they satisfy the condition (A_n). Define the Boolean functions h_2, h_3 : F_{2^6} → F_2 as h_2(y) = 0, h_3(y) = 1. After that, we define the Boolean function h_1 in such a way that
h_1(φ_1^{−1}(y)) + h_2(φ_2^{−1}(y)) + h_3(φ_3^{−1}(y)) + (h_1 + h_2 + h_3)((φ_1 + φ_2 + φ_3)^{−1}(y)) = 1,
which reduces to
h_1(φ_1^{−1}(y)) + h_1((φ_1 + φ_2 + φ_3)^{−1}(y)) = 1, i.e. h_1(y + α) + h_1(y + α + α^2 + α^3) = 1.
This condition is easily satisfied. We just construct the truth table of the Boolean function h_1 in such a way that for every y ∈ F_{2^6} we have h_1(y) = h_1(y + α^2 + α^3) + 1. Now we construct bent Maiorana-McFarland functions f_i : F_{2^6} × F_{2^6} → F_2, f_i(x,y) = Tr(xφ_i(y)) + h_i(y) and use them in the construction from Example 4.9 of [9]. We define F : F_{2^{12}} × F_2 × F_2 → F_2, F(X, y_1, y_2) = f_1(X) + y_1(f_1 + f_3)(X) + y_2(f_1 + f_2)(X).
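The truth-table construction of h_1 is a one-line pairing argument, made explicit in the sketch below. Only the value of the constant c = α^2 + α^3 matters; the primitive polynomial is our choice of representation, not one fixed by the example.

```python
MOD = 0b1000011          # x^6 + x + 1, primitive over F_2

def mul(p, q):           # multiplication in F_{2^6}
    r = 0
    while q:
        if q & 1:
            r ^= p
        q >>= 1
        p <<= 1
        if p & 0b1000000:
            p ^= MOD
    return r

alpha = 0b10
c = mul(alpha, alpha) ^ mul(alpha, mul(alpha, alpha))  # alpha^2 + alpha^3

h1 = {}
for y in range(64):
    if y not in h1:      # pick one value freely on each {y, y + c} pair,
        h1[y] = 0        # the partner is then forced
        h1[y ^ c] = 1

assert all(h1[y] ^ h1[y ^ c] == 1 for y in range(64))
```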
The function F was implemented and tested using the programming package Magma. It was confirmed that F is a bent function.
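The Magma check can be reproduced with a fast Walsh-Hadamard transform in any language; the generic bentness test below (truth table as a 0/1 list of length 2^m, m even, so m = 14 for F above) is our own sketch and not the authors' code.

```python
from math import isqrt

# A Boolean function on m variables (m even) is bent iff every Walsh
# coefficient has absolute value 2^(m/2).
def is_bent(tt):
    w = [1 - 2 * b for b in tt]           # (-1)^f(x)
    n = len(w)
    h = 1
    while h < n:                          # in-place fast WHT, O(n log n)
        for s in range(0, n, 2 * h):
            for j in range(s, s + h):
                w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
        h *= 2
    return all(abs(c) == isqrt(n) for c in w)
```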
Remark 3 In [21, Remark 3], a method to define anti-self-dual bent functions f_1, f_2, f_3, f_1 + f_2 + f_3 (thus f_i* = f_i + 1) is given, which implies that f_1* + f_2* + f_3* + f_4* = 0. Another construction of f_1, f_2, f_3 that satisfies this condition can be found in [26, Section 5], where f_1, f_2, f_3 all belong to the partial spread (PS) class of Dillon [8]. It is based on a well-known property of the PS class that the dual f* of a PS function f is obtained by substituting all the disjoint n/2-dimensional subspaces in its support by their orthogonal subspaces [2]. It follows that f_4* = f_1* + f_2* + f_3* and consequently f_1* + f_2* + f_3* + f_4* = 0.
Some new infinite families of bent functions
In [4] many infinite families of permutations based on linear translators were introduced, some of which were already generalized in previous sections. It turns out that in the binary case some of those families satisfy the condition (A_n).
Proposition 10 ([4]) Let k > 1 (n = rk), f : F_{2^n} → F_{2^k}, g : F_{2^k} → F_{2^k}, and let γ be a 0-linear translator. Then
F(x) = x + γ g(f(x))
is an involution.
Note that if γ is a 0-translator it is irrelevant to differentiate between linear and Frobenius translators.
Proposition 11 Let γ_1, γ_2, γ_3 be pairwise distinct 0-linear translators, and let F_i(x) = x + γ_i g(f(x)) for i ∈ {1, 2, 3}. Then the functions F_i satisfy the condition (A_n).
Proof. By Proposition 1, γ_1 + γ_2 + γ_3 must again be a 0-linear translator. Then
F_1(x) + F_2(x) + F_3(x) = x + γ_1 g(f(x)) + x + γ_2 g(f(x)) + x + γ_3 g(f(x)) = x + (γ_1 + γ_2 + γ_3) g(f(x)),
so by Proposition 10, F_1 + F_2 + F_3 is again a permutation and an involution. This immediately implies that the second requirement of condition (A_n) is satisfied as well. ⋄

It therefore follows that we can use the above presented permutations in constructing new families of bent functions, as was done in Proposition 9. Since the proof also follows the same steps, it is in this case skipped.
Theorem 8 Let k > 1 (n = rk), f : F_{2^n} → F_{2^k}, g : F_{2^k} → F_{2^k}, and let γ_i be pairwise distinct 0-linear translators. Then
H(x,y) = Tr(xy) + Tr(γ_1 x g(f(y)))Tr(γ_2 x g(f(y))) + Tr(γ_1 x g(f(y)))Tr(γ_3 x g(f(y))) + Tr(γ_2 x g(f(y)))Tr(γ_3 x g(f(y)))
is a self-dual bent function.
Another family of permutations that turns out to satisfy the condition (A_n) was introduced in [4]:

Corollary 2 ([4]) Let k > 1 (n = rk), let L be any F_{2^k}-linear permutation, and let f(x) = Tr^n_k(βx) be such that Tr^n_k(βγ) = 0. Then the functions
F(x) = L(x) + L(γ) g(Tr^n_k(βx))
are permutations for any g : F_{2^k} → F_{2^k}. Moreover, F^{−1}(x) = L^{−1}(x) + γ g(Tr^n_k(βL^{−1}(x))).
In a similar way as before we can show that F_i(x) = L(x) + L(γ_i) g(Tr^n_k(βx)) satisfy the condition (A_n) if Tr^n_k(γ_i β) = 0. It follows that these permutations can also be used in constructing new families of bent functions.
where n = rk and r > 1. Then the function f does not have Frobenius translators in the sense of Definition 2.

Proof. This result follows directly from the proof of Proposition 1 in [4] by direct calculation. ⋄

On the other hand, binomial mappings of the form f(x) = βx^i + x^j still admit Frobenius translators, as shown below.
Theorem 9 Let L be any F_{2^k}-linear permutation, f(x) = Tr^n_k(βx), g : F_{2^k} → F_{2^k}, and let γ_i be such that Tr^n_k(γ_i β) = 0. Then
H(x,y) = Tr(xL(y)) + Tr(L(γ_1)g(Tr^n_k(βx)))Tr(L(γ_2)g(Tr^n_k(βx))) + Tr(L(γ_1)g(Tr^n_k(βx)))Tr(L(γ_3)g(Tr^n_k(βx))) + Tr(L(γ_2)g(Tr^n_k(βx)))Tr(L(γ_3)g(Tr^n_k(βx)))
is a bent function and its dual is
H*(x,y) = Tr(yL^{−1}(x)) + Tr(L(γ_1)g(Tr^n_k(βL^{−1}(x))))Tr(L(γ_2)g(Tr^n_k(βL^{−1}(x)))) + Tr(L(γ_1)g(Tr^n_k(βL^{−1}(x))))Tr(L(γ_3)g(Tr^n_k(βL^{−1}(x)))) + Tr(L(γ_2)g(Tr^n_k(βL^{−1}(x))))Tr(L(γ_3)g(Tr^n_k(βL^{−1}(x)))).

Conclusions

In this article several classes of permutations and bent functions are derived using the concepts of linear and Frobenius translators. These Frobenius translators allow us to specify suitable sets of permutations based on which many new secondary classes of bent functions and their duals can be derived. The most interesting open problem in our opinion regards the existence of non-quadratic functions admitting linear/Frobenius translators. It might be the case that there are only a few classes of quadratic mappings having this kind of translators, discussed in [4] and in this article, which are suitable for this type of construction.

Acknowledgements

Enes Pasalic is partly supported by the Slovenian Research Agency (research program P3-0384 and research project J1-6720). Nastja Cepak is supported by the Slovenian Research Agency (research program P3-0384 and Young Researchers Grant).
References

[1] A. Akbary, D. Ghioca, and Q. Wang, On constructing permutations of finite fields, Finite Fields and Their Applications, vol. 17(1) (2011), pp. 51-67.
[2] C. Carlet, Boolean functions for cryptography and error correcting codes, Boolean Models and Methods in Mathematics, Computer Science, and Engineering 2 (2010), pp. 257-397.
[3] C. Carlet and S. Mesnager, Four decades of research on bent functions, Designs, Codes and Cryptography 78.1 (2016), pp. 5-50.
[4] N. Cepak, P. Charpin, and E. Pasalic, Permutations via linear translators, Finite Fields and Their Applications, vol. 45 (2017), pp. 19-42.
[5] P. Charpin and G. Kyureghyan, When does G(x) + γTr(H(x)) permute F_{2^n}?, Finite Fields and Their Applications 15(5) (2009), pp. 615-632.
[6] P. Charpin and S. Sarkar, Polynomials with linear structure and Maiorana-McFarland construction, IEEE Trans. Inform. Theory 57 (2011), no. 6, pp. 3796-3804.
[7] P. Charpin, G. M. Kyureghyan, and V. Suder, Sparse permutations with low differential uniformity, Finite Fields and Their Applications, vol. 28 (2014), pp. 214-243.
[8] J. F. Dillon, Elementary Hadamard difference sets, Ph.D. thesis, University of Maryland, U.S.A., 1974.
[9] S. Hodžić, E. Pasalic, and Y. Wei, A general framework for secondary constructions of bent and plateaued functions, submitted manuscript.
[10] X. D. Hou, A survey of permutation binomials and trinomials over finite fields, Topics in Finite Fields, Proceedings of the 11th International Conference on Finite Fields and Their Applications, vol. 632, AMS, 2015.
[11] X. D. Hou, Permutation polynomials over finite fields - a survey of recent advances, Finite Fields and Their Applications 32 (2015), pp. 82-119.
[12] G. M. Kyureghyan, Constructing permutations of finite fields via linear translators, Journal of Combinatorial Theory, Series A 118 (2011), pp. 1052-1061.
[13] N. Li and T. Helleseth, New permutation trinomials from Niho exponents over finite fields with even characteristic, CoRR, vol. 1606.03768 (2016).
[14] R. Lidl and H. Niederreiter, Finite Fields, vol. 20, Cambridge University Press, 1997.
[15] S. Mesnager, Several new infinite families of bent functions and their duals, IEEE Trans. Inf. Theory 60(7) (2014), pp. 4397-4407.
[16] S. Mesnager, P. Ongan, and F. Özbudak, New bent functions from permutations and linear translators, C2SI 2017: Codes, Cryptology and Information Security, pp. 282-297.
[17] S. Mesnager, P. Ongan, and F. Özbudak, Further constructions of infinite families of bent functions from new permutations and their duals, Cryptography and Communications 8.2 (2016), pp. 229-246.
[18] G. L. Mullen and Q. Wang, Permutation polynomials, Chapter 8 in Handbook of Finite Fields, Chapman and Hall/CRC, Boca Raton, FL, 2013, pp. 215-230.
[19] A. Muratović-Ribić and E. Pasalic, A note on complete polynomials over finite fields and their applications in cryptography, Finite Fields and Their Applications 25 (2014), pp. 306-315.
[20] E. Pasalic, A. Muratović-Ribić, S. Hodžić, and S. Gangopadhyay, On derivatives of polynomials over finite fields through integration, Cryptology ePrint Archive, Report 2016/022, http://eprint.iacr.org/.
[21] C. Tang, Z. Zhou, Y. Qi, X. Zhang, C. Fang, and T. Helleseth, Generic construction of bent functions and bent idempotents with any possible algebraic degree, IEEE Transactions on Information Theory 63.10 (2017), pp. 6149-6157.
[22] Z. Tu, X. Zeng, and L. Hu, Several classes of complete permutation polynomials, Finite Fields and Their Applications 25 (2014), pp. 182-193.
[23] Z. Tu, X. Zeng, and Y. Jiang, Two classes of permutation polynomials having the form (x^{2^m} + x + δ)^s + x, Finite Fields and Their Applications 31 (2015), pp. 12-24.
[24] Z. Tu, X. Zeng, C. Li, and T. Helleseth, Permutation polynomials of the form (x^{p^m} − x + δ)^s + L(x) over the finite field F_{p^{2m}} of odd characteristic, Finite Fields and Their Applications 34 (2015), pp. 20-35.
[25] J. Yuan and C. Ding, Further results on permutation polynomials over finite fields, Finite Fields and Their Applications 27 (2014), pp. 88-103.
[26] F. Zhang, E. Pasalic, Y. Wei, and N. Cepak, Constructing bent functions outside the Maiorana-McFarland class using a general form of Rothaus, IEEE Transactions on Information Theory, vol. 63, issue 8, 2017.
| 10.1139/p06-030 | https://export.arxiv.org/pdf/gr-qc/0507019v1.pdf | gr-qc/0507019 |
arXiv:gr-qc/0507019v1 5 Jul 2005

Universality of Highly Damped Quasinormal Modes for Single Horizon Black Holes

Ramin G. Daghigh and Gabor Kunstatter
Department of Physics and Winnipeg Institute for Theoretical Physics, University of Winnipeg, Winnipeg, MB R3B 2E9, Canada.

Abstract

It has been suggested that the highly damped quasinormal modes of black holes provide information about the microscopic quantum gravitational states underlying black hole entropy. This interpretation requires the form of the highly damped quasinormal mode frequency to be universally of the form: ℏω_R = ln(l) kT_BH, where l is an integer, and T_BH is the black hole temperature. We summarize the results of an analysis of the highly damped quasinormal modes for a large class of single horizon, asymptotically flat black holes.
Introduction
Black hole quasinormal modes (QNM's) are the natural vibrational modes of perturbations in the spacetime exterior to an event horizon. They are defined as solutions to the wave equation for the appropriate perturbation with boundary conditions that are ingoing at the horizon and outgoing at spatial infinity. The corresponding frequency spectrum is discrete and complex. The imaginary part of the frequency signals the presence of damping, a necessary consequence of boundary conditions that require energy to be carried away from the system.

The slowly damped QNM's (for gravitational perturbations) are relevant for astrophysical observations since they describe the frequency spectrum of the gravitational radiation that is expected to emerge from black hole formation during late times. The highly damped modes, i.e. the modes for which the damping rate goes to infinity, which are the subject of this paper, are unobservable. However, it has recently been suggested that the highly damped QNM's carry fundamental information about horizon dynamics and the microstates underlying black hole entropy. We begin by summarizing this proposed connection. Numerical calculations of the QNM frequencies for Schwarzschild black holes in the early 90's revealed that in the limit of large damping, the frequency spectrum took the following form:

ℏω → 2πi(n + 1/2) kT_BH + (1.098612...) kT_BH ,   (1)

where T_BH is the Hawking temperature of the black hole. The imaginary part became equally spaced (with n large), whereas the real part of the frequency approached a constant. Note that since a Schwarzschild black hole is completely described by a single dimensionful parameter (the mass, or radius, or equivalently the temperature), it follows from dimensional grounds that the QNM frequency must be proportional to (k/ℏ) T_BH. What is interesting about this spectrum is the fact that the constant of proportionality for the real part of the frequency approaches a universal value (i.e. independent of the angular momentum of the perturbation). Moreover, as Hod [1] first noticed, the numerical value coincides to the given order with ln(3). (The fact that the coefficient was precisely ln(3) was later proved analytically by Motl [2].) Hod went on to suggest a fascinating physical interpretation for this ln(3). Suppose, he said, that the limiting value of the real part of the highly damped QNM frequency was a fundamental vibrational mode associated with the dynamics of the event horizon itself. In this case, semi-classical arguments require the existence of states in the energy spectrum that are separated by the corresponding energy quantum:

ΔE_n = ℏω Δn ,   (2)

where n is the integer labeling the states and Δn = 1. In the large n limit this expression can be integrated to yield:

n = ∫ dE/(ℏω) = (1/ln 3) ∫ dE/(kT_BH) = (1/ln 3) S_BH ,   (3)

where S_BH ∝ Area/4 is the Bekenstein-Hawking entropy [3,4] of the black hole. Its appearance is a direct consequence of the first law of black hole thermodynamics. Equation (3) implies that the entropy/area is equally spaced in the semi-classical limit:

S_BH = ln(3) n = ln(3^n) .   (4)

Amazingly, this form of the entropy is consistent with a statistical mechanical interpretation in terms of a black hole horizon made of n fundamental elements of area, each with 3 internal microstates. This microscopic picture of black hole horizons was first conjectured by Bekenstein [5] and later Mukhanov [6], who used it to argue for an equally spaced area spectrum of quantum black holes (although they assumed a binary structure for the area elements, so that the number of microstates was 2^n).

The above argument is of course highly conjectural. To have any hope of validity, it should apply in some form to any black hole event horizon, irrespective of the dynamics that lead to its formation. Specifically, it should apply to all asymptotically flat, single horizon black holes. This naturally raises the question of whether or not the coefficient of the real part of the frequency is generically ln(3). Motl and Neitzke [7] showed analytically that ln(3) is valid for higher dimensional Schwarzschild black holes, thereby verifying the conjecture in [9]. More recently, Tamaki and Nomura [10] argued that the same coefficient was correct for 4-d dilaton black holes, while Kettner et al [11] analyzed single horizon black holes in generic 2-d dilaton gravity. In a particularly elegant analysis, Das and Shankaranarayanan [12] were able to study all single horizon black holes in 4 and higher dimensions with interesting results. Finally, the present authors [13] performed an analysis that included all single horizon, asymptotically flat black holes (including those considered in [11] and [12]) using the rigorous WKB formalism of Andersson and Howls [8]. The general and rigorous nature of this latter analysis gave significant insight into the source of the famous ln(3). In particular, the numerical coefficient in the real part of the highly damped frequency is generically determined by the behaviour of the coupling of the perturbation to the gravitational field near the origin, as expressed in tortoise coordinates. The ln(3) appears if and only if this coupling depends linearly on the tortoise coordinate near the origin. The question of universality seems to require an understanding of how this behaviour may, or may not, be connected to the dynamics of the horizon.

In the next section, we set up the problem. Section 3 shows how Motl and Neitzke's monodromy calculation can be rigorously applied to generic single horizon, asymptotically flat black holes. The results, and their physical significance for quantum gravity, are presented in the conclusions, along with a discussion of prospects for the future.
QNM's For Generic Single Horizon Black Holes
We wish to consider the general 2 dimensional scalar wave equation:

∂_μ( √(−g) h(φ) g^{μν} ∂_ν ψ ) = √(−g) V(φ) ψ ,   (5)

where g_{μν} is a two metric and φ is a scalar with respect to 2-d coordinate transformations. Both the metric and dilaton are assumed static, so that one can find coordinates (x, t) in which φ = φ(x) and the metric takes the form

ds² = −f(x) dt² + (1/g(x)) dx² = f(x)( −dt² + dz² ) ,   (6)

where the second line expresses the metric in terms of the so-called "tortoise" coordinate z, defined by:

dz = dx / F(x) ,   (7)

where F(x) ≡ √(f(x) g(x)). The tortoise coordinate is distinguished by two features: the 2-metric is conformally Minkowskian, and z → −∞ logarithmically near an event horizon. The functions f(x), g(x), and h(x) ≡ h[φ(x)] are completely arbitrary at this stage, since we are making no assumptions about the gravitational dynamics or matter sources that give rise to this metric. By further restricting the coordinate system, it is possible to eliminate at most one of these functions, so the system is in fact completely specified by two arbitrary functions. In order to restrict to single horizon black hole spacetimes we assume that h(x) is monotonic and vanishes at x = 0, which is a singular point in the spacetime. Moreover, we assume f(x) and g(x) have simple zeros at the same non-zero x = x_h, the horizon location. Their ratio H(x) = f(x)/g(x) is assumed to be a regular, nowhere vanishing, analytic function of x [12]. The surface gravity of the corresponding black hole is given by:

κ = (1/2) (dF/dx)|_{x_h} ,   (8)

and the associated Hawking temperature is generically given by:

T_BH = ℏκ / 2π .   (9)

The QNM's are obtained by looking for solutions to (5) that have the product form:

ψ(x, t) = e^{iωt} ψ(x) .   (10)

If one defines a rescaled field ψ̃ = √h ψ, the wave equation in tortoise coordinates takes the simple form:

d²ψ̃/dz² + ( ω² − U_h(z) ) ψ̃ = 0 ,   (11)

where

U_h ≡ (1/2)(h″/h) − (1/4)(h′/h)² + (F/h) V(x) ,   (12)

and the prime here denotes differentiation with respect to z. The potential U_h goes to zero at both the horizon (z → −∞) and spatial infinity (z → +∞).

The boundary conditions appropriate for QNM's are:

ψ̃(z) → e^{−iωz} as z → −∞ (x → x_h) ,
ψ̃(z) → e^{+iωz} as z → +∞ (x → ∞) .   (13)

Our formalism applies to two distinct, but closely related (and overlapping), classes of black hole spacetimes. First, one can consider (5), as in [11], to describe a scalar perturbation in two spacetime dimensions non-minimally coupled to a black hole metric in generic 2-d dilaton gravity. In this case one can choose φ = x so that [14]

f(x) = g(x) = J(x) − 2GM ,   (14)

where M is the mass of the black hole and J(x) is determined by the dilaton potential, which is different for different theories. Second, one can consider a single horizon black hole in d spacetime dimensions, with metric

ds̃² = −f(r) dt² + (1/g(r)) dr² + r² dΩ²_(n) ,   (15)

where n = d − 2 and dΩ_(n) is the line element on the unit n-sphere. This class of problems was considered by Das et al [12]. By dimensionally reducing the wave equation for a minimally coupled scalar field in this background and making the identifications x = r and h(x) = r^n, one obtains precisely (5), with:

V = r^{n−2} l(l + n − 1) .   (16)

The monodromy calculation proceeds by invoking the WKB approximation and calculating the change of the WKB phase along prescribed closed contours in the complex x-plane. The boundary conditions are imposed by relating the phase change along a contour that goes to infinity, where the solution is the prescribed outgoing wave, to the phase change around a contour very close to (and encircling) the horizon, where the form of the solution is known to be an ingoing wave (in tortoise coordinates). Demanding that the phase change calculated along the two contours be consistent gives an algebraic condition on the QNM frequency.

The trick is that while calculating the phase change along arbitrary contours on the complex plane is difficult, it is relatively easy if one sticks to a contour along which the WKB phase is purely real. These are the so-called anti-Stokes lines [8]. We therefore need to determine the structure of anti-Stokes lines in the complex x-plane.

Since we are interested in the highly damped QNM's where |ω_I| → ∞, the potential U_h(z) is irrelevant in the region away from the origin. In this region the anti-Stokes lines are the lines along which ωz is purely real. A rough schematic behaviour of these lines is plotted in Fig. 1. As one can see, we have two unbounded anti-Stokes lines which extend to infinity next to a bounded anti-Stokes line that loops around the event horizon. Even if such unbounded anti-Stokes lines do not exist in one coordinate system, we can always generate such lines by a change of variable of the form x → x̃ = x^q, where q is an integer greater than one. Moving along the unbounded anti-Stokes lines in the clockwise direction and using the boundary condition at infinity, we find the monodromy around the contour A in Fig. 1 to be

ψ̃ → e^{−πω/κ} δ_0 ψ̃ ,   (17)

where the e^{−πω/κ} is from moving along the solid line on which we have a plane wave solution of the form e^{iωz}, and δ_0, to be determined later, is from moving along the dashed line in Fig. 1.

[Figure 1: Schematic of contours and anti-Stokes lines for the generic monodromy calculation.]

The monodromy around the same contour A can also be determined by observing that the only singularity inside this contour is at the event horizon. Thus the monodromy is the same as that of a small closed contour near the horizon. The boundary condition (13) requires this monodromy to be:

ψ̃ → e^{πω/κ} ψ̃ .   (18)

Comparing Eqs. (17) and (18) gives the consistency condition

e^{−πω/κ} δ_0 = e^{πω/κ} .   (19)

Once we determine δ_0, we will be able to solve for the QNM frequency ω. To determine δ_0, we need to know the behaviour of the solution near the origin where U_h(z) diverges and therefore becomes relevant. Assuming that h(z) → z^a as z (or x) → 0, we have

U_h(z) → a(a − 2) / (4z²)   (20)

close to the origin. Thus, close to the origin, the relevant equation in tortoise coordinates is simply

d²ψ̃/dz² + ( ω² − a(a − 2)/(4z²) ) ψ̃ = 0 .   (21)

This equation can be solved exactly in terms of Bessel functions for generic a. One interesting issue is that the rotation angle in the complex x-plane, which is the angle between the two unbounded anti-Stokes lines, always corresponds to a rotation by 3π in the tortoise coordinates. Once we know the solution in this region, we can move along the dashed line in the clockwise direction and find δ_0, given in Eq. (22). Plugging Eq. (22) into the consistency condition (19) will give us the condition

e^{2πω/κ} = −[ 1 + 2 cos( π(a − 1) ) ] .   (23)

Using this equation we can get the QNM frequency.
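The final step of solving (23) for ω is left implicit in the text; the short derivation below is our own rendering (choosing the principal branch of the logarithm, with n a non-negative integer) and uses T_BH = ℏκ/2π from (9). For a = 1, i.e. a coupling h(z) that is linear in the tortoise coordinate near the origin, the real part reduces to the famous ln 3.

```latex
% From (23): e^{2\pi\omega/\kappa} = -\,[\,1 + 2\cos(\pi(a-1))\,].
% Writing -1 = e^{i\pi(2n+1)} and taking logarithms gives
\frac{2\pi\omega}{\kappa}
   = \ln\!\big[\,1 + 2\cos(\pi(a-1))\,\big] + i\pi(2n+1)
\;\Longrightarrow\;
\omega = \frac{\kappa}{2\pi}\,\ln\!\big[\,1 + 2\cos(\pi(a-1))\,\big]
       + i\,\kappa\Big(n+\tfrac{1}{2}\Big).
% With T_{BH} = \hbar\kappa/2\pi this reads
% \hbar\omega_R = \ln[\,1 + 2\cos(\pi(a-1))\,]\,k T_{BH},
% which equals \ln(3)\,k T_{BH} when a = 1.
```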
Conclusions

We have summarized a calculation that rigorously determines the QNM frequencies for virtually all single horizon black holes, in any dimension. The coefficient of the real part of the QNM frequency is generically determined by the exponent a, which determines the rate at which the effective 2-d coupling (i.e. h(φ) in Eq. (5)) approaches zero at the origin, as expressed in tortoise coordinates. It is therefore at first glance difficult to see how this exponent is related to the dynamics of the horizon. Moreover, the form of the answer in the context of 2-d dilaton gravity suggests that the coefficient of the real part is only the logarithm of an integer in exceptional cases. Although at first glance this also seems to be true for higher dimensional black holes, it is nonetheless encouraging that the ln(3) appears for all higher dimensional single horizon black holes, even those with non-trivial matter fields in which extra parameters in principle could affect the result. On the other hand, these simple and elegant results do not seem to apply to multi-horizon black holes [7,8,15]. These issues are currently under investigation.
Acknowledgements

We are grateful to Joey Medved and Joanne Kettner for their collaboration in an earlier part of this work. We also thank Saurya Das, Brian Dolan and S. Shankaranarayanan.
References

[1] S. Hod, Phys. Rev. Lett. 81 (1998) 4293 (gr-qc/9812002).
[2] L. Motl, Adv. Theoret. Math. Phys. 6 (2003) 1135 (gr-qc/0212096).
[3] J. D. Bekenstein, Lett. Nuovo Cim. 4 (1972) 737.
[4] S. W. Hawking, Phys. Rev. D 14 (1976) 2460.
[5] J. D. Bekenstein, Lett. Nuovo Cim. 11 (1974) 467.
[6] V. Mukhanov, JETP Letters 44 (1986) 63; J. D. Bekenstein and V. F. Mukhanov, Phys. Lett. B 360 (1995) 7 (gr-qc/9505012).
[7] L. Motl and A. Neitzke, Adv. Theoret. Math. Phys. 7 (2003) 307.
[8] N. Andersson and C. J. Howls, Class. Quant. Grav. 21 (2004) 1623.
[9] G. Kunstatter, Phys. Rev. Lett. 90 (2003) 161301 (gr-qc/0212014).
[10] T. Tamaki and H. Nomura, Phys. Rev. D 70 (2004) 044041.
[11] J. Kettner, G. Kunstatter, and A. J. M. Medved, Class. Quant. Grav. 21 (2004) 5317 (gr-qc/0408042).
[12] S. Das and S. Shankaranarayanan, Class. Quant. Grav. 22 (2005) L7 (hep-th/0410209).
[13] R. Daghigh and G. Kunstatter, "Highly Damped Quasinormal Modes of Generic Single Horizon Black Holes", gr-qc/0505044 (2005).
[14] D. Louis-Martinez, J. Gegenberg, and G. Kunstatter, Phys. Lett. B 321 (1994) 193 (gr-qc/9309018).
[15] J. Natario and R. Schiappa, hep-th/0411267; E. Berti and K. D. Kokkotas, Phys. Rev. D 71 (2005) 124008; V. Cardoso, J. Natario, and R. Schiappa, J. Math. Phys. 45 (2004) 4698; E. Berti, V. Cardoso, and S. Yoshida, Phys. Rev. D 69 (2004) 124018; E. Berti, V. Cardoso, K. D. Kokkotas, and H. Onozawa, Phys. Rev. D 68 (2003) 124018; E. Berti and K. D. Kokkotas, Phys. Rev. D 68 (2003) 044027; V. Cardoso, R. Konoplya, and J. P. S. Lemos, Phys. Rev. D 68 (2003) 044024; V. Cardoso and J. P. S. Lemos, Phys. Rev. D 67 (2003) 084020; E. Berti and K. D. Kokkotas, Phys. Rev. D 67 (2003) 064020; V. Cardoso and J. P. S. Lemos, Phys. Rev. D 64 (2001) 084017; V. Cardoso and J. P. S. Lemos, Phys. Rev. D 63 (2001) 124015.
| 10.1109/icassp39728.2021.9413618 | https://arxiv.org/pdf/2012.06259v1.pdf | 2012.06259 |
IMPROVED ROBUSTNESS TO DISFLUENCIES IN RNN-TRANSDUCER BASED SPEECH RECOGNITION
Valentin Mendelev
Amazon Alexa
Tina Raissi [email protected]
Human Language Technology and Pattern Recognition Group
RWTH Aachen University
Germany
†
Guglielmo Camporese [email protected]
Department of Mathematics "Tullio Levi-Civita"
University of Padova
Italy
†
Manuel Giollo [email protected]
Amazon Alexa
† Denotes equal contribution; work done during an internship at Amazon.

Automatic Speech Recognition (ASR) based on Recurrent Neural Network Transducers (RNN-T) is gaining interest in the speech community. We investigate data selection and preparation choices aiming for improved robustness of RNN-T ASR to speech disfluencies with a focus on partial words. For evaluation we use clean data, data with disfluencies and a separate dataset with speech affected by stuttering. We show that after including a small amount of data with disfluencies in the training set the recognition accuracy on the tests with disfluencies and stuttering improves. Increasing the amount of training data with disfluencies gives additional gains without degradation on the clean data. We also show that replacing partial words with a dedicated token helps to get even better accuracy on utterances with disfluencies and stutter. The evaluation of our best model shows 22.5% and 16.4% relative WER reduction on those two evaluation sets.

Index Terms - Automatic speech recognition, RNN-Transducer, speech with disfluencies, stuttering
INTRODUCTION
Human speech typically contains disfluencies alongside the articulation of an intended word sequence. Speech from any speaker has filled pauses, partial words and repetitions, while certain speech disorders (e.g. stuttering) amplify these phenomena. Despite the general performance achievements of speech recognition with End-to-End (E2E) models, they are still not sufficiently robust to these phenomena. The main objective of this work is to investigate training data filtering and transcription processing choices for an ASR system based on the Recurrent Neural Network Transducer (RNN-T) [1], which may improve its robustness to speech disfluencies with a special focus on partial words and repetitions.
Motivation for this work comes from discussions with our colleagues who reported that self corrections are responsible for a significant share of entity resolution errors in several voice assistant use-cases. Also, we wanted to investigate if ASR robustness to disfluencies may be improved by overweighting the data with partial words in the training set and if this gives a higher ASR accuracy for speakers with stuttering, even though the vast majority of those data came from fluent speakers.
The main contributions of this work are: a study of the effect of different ways to represent partial words in the transcripts used to train an RNN-T system; experimental results with different fractions of data with disfluencies in the training set; and an analysis of the influence of these factors on ASR performance for speakers with stuttering.
In the next section an overview of the prior work is presented, then the datasets are described in Sec. 3. The experimental setting and results are discussed in Sec. 4, which is followed by conclusions and future work.
PRIOR WORK
The initial attention of the research community towards speech disfluencies derives from the importance of the improvement of the ASR system accuracy not only for the speech signal which is recorded under controlled conditions but also for the spontaneous speech [2,3]. The resulting task comprises the identification and the consecutive removal of the disfluency events in the recognizer output and is solved by using the noisy channel approach [4,5]. Following the Bayesian statistical framework, this entails maximization of the a-posteriori probability of the word sequence with disfluencies, given the originally intended word sequence. Since disfluency events affect different phonetic aspects of speech [6] many researchers tried to take advantage of the possible combination of different sources of knowledge on both acoustic and language model sides [7,8,9,10]. The disfluency detection task in all mentioned works is solved by using sequence labelling/tagging approaches which can rely either on a generative approach such as Hidden Markov Model or discriminative log-linear models, such as Maximum Entropy Markov Model or Conditional Random Fields as well as Bidirectional Long Short-Term Memory based networks in combination with an attention mechanism [11]. In most of the cases the overall system maintains its modular setting and therefore requires not only separate optimization criteria for different components but also in some cases hand-labeled features for the annotation of the disfluencies to train the language model. A recent work brings the focus on the acoustic side and does not take into consideration any language-dependent information [12]. To the best of the authors' knowledge, with the exception of a work on personalized ASR for dysarthric speech [13], none of the published papers are aimed to improve speech recognition accuracy of an E2E ASR system by dealing with disfluencies without solving disfluency detection task itself.
DATASETS
For our experiments we used subsets of the transcribed data pool available to train Alexa ASR models. The recordings comprising the data pool are anonymized voice assistant requests recorded with various far-field devices in compliance with terms of service. Each transcription, in addition to the spoken words, contains tags provided by the transcriber indicating additional information on the speech signal. The attribution of the described tags relies on the transcriber's perception and expertise and therefore can be a source of possible inaccuracy for both tag and spoken word annotations. This aspect is especially valid when the utterance contains unintelligible or disfluent speech. Most disfluency events, such as word or syllable prolongation, are actually not marked in the transcriptions.
In this work we use three datasets, which we call Ordinary, Disfluencies and Stutter.
The Ordinary dataset contains ordinary utterances with intelligible device-directed speech but without partial words. Acoustic conditions may be challenging because of low signal-to-noise ratio, media speech or due to the presence of multiple speakers.
The Disfluencies dataset is derived by applying a set of filters, which operate on the transcription level, to the large pool of data. The filters aim to select challenging utterances with partial words, repetitions and hesitations. More specifically, an utterance is included in this dataset if its transcription contains a partial word and its subsequent completion (e.g. 'alarm on tw-on twelve') and at least one of the following conditions is true:
- there are no more than 4 words in the transcription;
- there is at least one other partial word (not necessarily with completion);
- there are hesitations;
- there are repetitions.
This set of filters was chosen after trying several alternatives and observing that without additional conditions the dataset contained a lot of utterances with a single partial word, which were considered not challenging enough. We have to note that after most of the experiments mentioned in this work were done, we repeated some of them with the simplest filter, which accepted utterances with a single partial word in the transcript. We found that the conclusions reported in the following sections were mostly valid for datasets derived with this simple filter as well.
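To make the filter concrete, here is a minimal sketch of the selection logic. The annotation conventions it assumes (a trailing "-" marking a partial word and a "<hes>" token marking hesitations) are our own illustration; the internal transcription format is not specified in this paper.

```python
import re

PARTIAL = re.compile(r"^\w+-$")   # e.g. "tw-"
HESITATION = "<hes>"

def has_partial_with_completion(tokens):
    # A partial word followed (not necessarily immediately) by a word
    # starting with the same prefix, e.g. "tw-" ... "twelve".
    for i, tok in enumerate(tokens):
        if PARTIAL.match(tok):
            prefix = tok.rstrip("-")
            if any(w.startswith(prefix) and not PARTIAL.match(w)
                   for w in tokens[i + 1:]):
                return True
    return False

def keep_for_disfluencies_set(transcript):
    tokens = transcript.split()
    if not has_partial_with_completion(tokens):
        return False
    words = [t for t in tokens if t != HESITATION]
    extra_partials = sum(bool(PARTIAL.match(t)) for t in tokens) > 1
    has_hesitation = HESITATION in tokens
    has_repetition = any(a == b for a, b in zip(words, words[1:]))
    return (len(words) <= 4 or extra_partials
            or has_hesitation or has_repetition)
```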
The Ordinary and Disfluencies datasets include train, dev and test partitions, which do not have speaker overlap.
The Stutter dataset was recorded by a vendor and contains speech samples provided by 11 speakers with stuttering. The speakers were reading prompts containing possible requests to a voice assistant in a quiet acoustic environment. This dataset is used for the evaluation purpose only.
Size of the datasets is presented in Table 1. We restricted the amount of training data in the Ordinary dataset to have faster turnaround time. Also, the data for the Disfluencies dataset were selected from an order of magnitude bigger data pool in comparison to the Ordinary Train to enable experiments with increased relative amount of challenging data in the training sets.
Handling Partial Words in Transcriptions
Once data with partial words are included in the training set, there are several options for how to mark such words. In this work we assume that our goal is to have ASR output free from disfluencies. This can be achieved if the system: (1) ignores them, (2) outputs a label instead of the partial word, (3) concatenates a partial label with the disfluency content, which later can be removed via post-processing (e.g. for the recording with the reference transcript 'p-play' the system may output (1) 'play', (2) 'pw play', (3) 'p pw play'). We decided to test the 3 options mentioned plus the one where only the first letter of the partial word remains and is appended with pw on the right. The motivation behind the last two options is clear: ideally we would like to keep disfluency content in order to preserve more information for the downstream tasks. Still, the quality of the ASR output with disfluencies removed is considered as the main criterion in the current work.
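The four transcript-preparation variants can be sketched as follows. Partial words are assumed to carry a trailing "-" and the tag is rendered here as "_pw"; both are assumptions about the annotation format, not the paper's exact internal convention.

```python
def prepare(transcript, mode):
    out = []
    for tok in transcript.split():
        if not tok.endswith("-"):          # ordinary word
            out.append(tok)
        elif mode == "deleted":            # (1) drop the partial word
            continue
        elif mode == "tag":                # (2) replace it with the tag
            out.append("_pw")
        elif mode == "append_tag":         # (3) keep content, append tag
            out.append(tok.rstrip("-") + "_pw")
        elif mode == "first_letter_tag":   # (4) keep 1st letter, append tag
            out.append(tok[0] + "_pw")
    return " ".join(out)

for m in ("deleted", "tag", "append_tag", "first_letter_tag"):
    print(m, "->", prepare("p- play", m))
```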
REF:   play devotional music hindu dev- devotional music
CLEAN: play devotional music hindu devil devotional music
DISFL: play devotional music hindu devotional music
EXPERIMENTAL SETUP AND RESULTS
Experimental Setting
We train models suitable for on-line recognition. The model consists of a 5-layer deep encoder, a 2-layer deep prediction network, a joint network as in [14], and an output layer with a softmax nonlinearity. Each layer of the encoder and the prediction network comprises 1024 Long Short-Term Memory [15] units. The size of the joint network layer is 512 and the output layer size is 4001, corresponding to 4000 wordpieces and a blank symbol. The wordpiece model was trained on a large set of voice assistant requests using a unigram language model [16]. The model accepts 192-dimensional input feature vectors, each comprising three 64-dimensional Log-Mel-Filterbank vectors extracted every 10 milliseconds and stacked together.
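For concreteness, here is a rough PyTorch sketch with the stated sizes. The variable names, the embedding dimension, and the concatenation-based joint are our assumptions (the paper only says "a joint network as in [14]"), and the RNN-T loss and blank/start-token plumbing are omitted.

```python
import torch
import torch.nn as nn

class RNNT(nn.Module):
    def __init__(self, feat_dim=192, hidden=1024, joint=512, vocab=4001):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, num_layers=5,
                               batch_first=True)
        self.embed = nn.Embedding(vocab, hidden)
        self.predictor = nn.LSTM(hidden, hidden, num_layers=2,
                                 batch_first=True)
        self.joint = nn.Sequential(
            nn.Linear(2 * hidden, joint), nn.Tanh(),
            nn.Linear(joint, vocab))      # softmax applied inside the loss

    def forward(self, feats, labels):
        enc, _ = self.encoder(feats)                   # (B, T, H)
        pred, _ = self.predictor(self.embed(labels))   # (B, U, H)
        # Combine every (t, u) pair to form the transducer lattice.
        t = enc.unsqueeze(2).expand(-1, -1, pred.size(1), -1)
        u = pred.unsqueeze(1).expand(-1, enc.size(1), -1, -1)
        return self.joint(torch.cat([t, u], dim=-1))   # (B, T, U, vocab)
```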
The training objective is minimization of the RNN-T loss function [1,14] with the Adam optimizer [17], a total batch size of 1536 utterances and a warmup-hold-decay learning rate schedule. We also use SpecAugment [18].
Evaluations were done using beam decoding with a beam size of 16. Model-specific post-processing was applied on both the hypothesis and the reference in order to get transcripts free from partial words. For models trained with replacement of the full partial word with the pw tag, those tags were removed. For models where a partial word or its first letter persisted in the training transcript, the tag was removed together with all letters before the first space to the left of it.
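Both post-processing rules amount to simple token-level string operations; a minimal sketch is shown below, again assuming the "_pw" surface form for the tag.

```python
def strip_tag_tokens(hyp):
    # Models trained with the whole partial word replaced by the tag:
    # drop every standalone tag token.
    return " ".join(t for t in hyp.split() if t != "_pw")

def strip_tagged_words(hyp):
    # Models where (part of) the partial word persists with the tag
    # appended: remove the tag together with all letters back to the
    # first space on its left, i.e. the whole tagged token.
    return " ".join(t for t in hyp.split() if not t.endswith("_pw"))

print(strip_tag_tokens("_pw play"))           # -> "play"
print(strip_tagged_words("p_pw play alarm"))  # -> "play alarm"
```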
Case Studies
After training the baseline model on the Ordinary Train, the experimental models on the Ordinary Train merged with the Disfluencies Train datasets and different handling of partial words in the transcripts, we looked into the decoding results of Disfluencies Test in order to ensure that the models behave as we expect. Indeed, we observed that the baseline model produces quite a lot of insertions, while the one trained with Disfluencies Train and partial words removed does not. Example transcripts are depicted in Fig. 1.
The additional examples are presented in Fig. 2 including those derived with the model trained with the replacement of partial words by pw in transcripts for training. As one would expect, the model trained with pw produces reasonable 'alignments' in some cases capturing the amount of partial words uttered as in Fig. 2b-c, while not so reasonable in the others (Fig. 2a). In some cases (Fig. 2d) this model produces a better result than the one with partial words removed without outputting the pw tag.
One can speculate that mapping all partial words to a single tag allows the model to capture acoustic and linguistic patterns associated with partial word appearance and to preserve the integrity of the 'normal' speech patterns, which would not happen if partial words were removed from the transcripts.
Word Error Rates
In Table 2 one can find word error rates for the different models trained. The Ordinary Test results are provided to make sure that while improving on data with disfluencies we don't have degradation on this dataset. As expected, the error on the test sets with disfluencies is more than 2 times higher than on Ordinary Test, and the baseline model produced the highest number of insertions on all three test sets. By adding Disfluencies Train with removed partial words we achieved 21% and 14% relative WER reduction on the test sets with disfluencies in comparison to the baseline. If partial words are replaced by a tag, we see a modest additional reduction by 1.7% and 3.9%. When we try to preserve all or some characters of a disfluency, the reduction is much smaller (lines 4 and 5 in Table 2). It may seem surprising that WER on Stutter Test is significantly lower than on Disfluencies Test. This is explained by the nature of the data: the former was recorded in a quiet room with a limited set of popular prompts, while the latter contains challenging field data.
In order to verify reproducibility of the observed effects and to investigate how much data with disfluencies is actually needed to improve WER for speakers with stuttering, we conducted additional experiments. We used only the training sets where partial words were removed or replaced by pw, because the corresponding models demonstrated much lower WER values than the others. Trainings with 3 different seeds were performed for each dataset configuration, then each model was evaluated and the WER numbers were averaged between the runs. The results are summarized in Table 3. The models corresponding to lines 4 and 5 give lower WER on the Disfluencies and Stutter datasets than model number 6, which confirms the benefits of using a tag to denote all partial words in transcripts. Another observation is that as the amount of data with disfluencies increased, there was a gradual decrease in WER on the Disfluencies and Stutter datasets, with saturation more pronounced for the latter. Probably, in order to further improve accuracy, one needs to take into account additional aspects associated with stuttering which we have not addressed here.
We emphasise that the natural frequency of partial words appearing in the dataset available to us is rather low due to the heavy head of the requests distribution, so even 1/4 of the Disfluencies Train should be considered as oversampling (it constitutes about 0.5% of Ordinary Train size).
CONCLUSIONS
In this work we showed that RNN-T based speech recognition models tend to produce insertions when presented with speech containing partial words if data with such words were not included in the training set. This contributes to low recognition accuracy for speakers with a stuttering disorder. Adding the data with partial words to the training set and increasing their relative share leads to significant WER reduction on the test sets with disfluencies without accuracy degradation on the average data. Replacing partial words in transcripts with a tag for training allows reaching even lower WER. Relative to the baseline, the best model configuration achieved 22% reduction on the test with disfluencies and 16% on the test containing stuttering speech.
FUTURE WORK
We see two directions for the future work which benefit each other. The first is increasing ASR robustness to disfluencies occurring in fluent speech by using data augmentation, semisupervised learning approaches. The second is pushing the boundary of what an ASR system can do out-of-the-box for speakers with speech less fluent due to stutter, age or other factors.
Fig. 1: Example recognition results. REF denotes the reference transcription, CLEAN was produced by the baseline model, DISFL - by the model trained on Ordinary Train merged with Disfluencies Train and with partial words removed from training transcripts.
Fig. 2: Example recognition results. REF denotes the reference transcription, CLEAN was produced by the baseline model, DISFL - by the model trained on Ordinary Train merged with Disfluencies Train and with partial words removed from training transcripts, PW - same as DISFL but with partial words replaced by a tag. (One of the panels uses the reference "REF1: show bigger rapto-a ra-a ra-a ra-a raptor fossil".)
Table 1: Size of the datasets used in this work in hours of sound.

Dataset       Train (hours)  Test (hours)
Ordinary      ~2300          >20
Disfluencies  47             5
Stutter       -              2
Table 2: Evaluation results on different test sets depending on partial-word handling in transcripts. Model-specific postprocessing was applied before evaluation to get rid of partial words or the pw tag. Partial words were removed from reference transcriptions as well. The NWER column contains the corresponding model WER divided by the WER of the baseline model on Ordinary Test. WERR (%) is 100 * (y - x)/y, where x is the corresponding model WER and y is the WER of the baseline model on the same test set. The S, I, D columns contain shares (%) of substitutions, insertions, and deletions in the observed WER.

#  Training data        Partial words are ...         Ordinary Test          Disfluencies Test       Stutter Test
                                                      NWER WERR  S  I  D     NWER WERR   S  I  D     NWER WERR   S  I  D
1  Ordinary             absent                        1     0.0  60 18 23    3.02   0.0  29 57 13    2.35   0.0  34 49 17
2  Ordinary + Disfl.    deleted                       1     0.2  59 18 23    2.38  21.0  33 47 20    2.02  14.0  40 39 21
3  Ordinary + Disfl.    replaced by pw                1     0.5  59 17 24    2.34  22.4  33 45 22    1.93  17.9  39 34 29
4  Ordinary + Disfl.    appended by pw                1     0.2  59 18 23    2.6   13.7  31 51 18    2.33   0.9  37 45 18
5  Ordinary + Disfl.    replaced by 1st letter & pw   1     0.4  59 17 24    2.49  17.4  31 49 20    2.1   10.5  36 40 23
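To make the caption's definitions concrete, here is a minimal Python sketch (our own illustration, not part of the paper's evaluation pipeline) computing the two derived metrics from raw WER values; the numbers used are hypothetical.

```python
def nwer(model_wer: float, baseline_ordinary_wer: float) -> float:
    """NWER: model WER divided by the baseline model's WER on Ordinary Test."""
    return model_wer / baseline_ordinary_wer

def werr(model_wer: float, baseline_wer_same_test: float) -> float:
    """WERR (%): 100 * (y - x) / y, with x the model WER and y the
    baseline WER on the same test set."""
    return 100.0 * (baseline_wer_same_test - model_wer) / baseline_wer_same_test

# Hypothetical raw WER values (%), for illustration only.
baseline_ordinary = 8.0   # baseline on Ordinary Test
baseline_disfl = 24.2     # baseline on Disfluencies Test
model_disfl = 19.0        # candidate model on Disfluencies Test

print(round(nwer(model_disfl, baseline_ordinary), 2))  # an NWER cell
print(round(werr(model_disfl, baseline_disfl), 1))     # a WERR cell, in percent
```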
Table 3: WER reduction relative to the baseline model (%) depending on the fraction of the Disfluencies Train dataset used for training.

#  Training data (added to Ordinary Train)   Partial words are ...   Ordinary Test   Disfluencies Test   Stutter Test
1  -                                         absent                    0.0             0.0                 0.0
2  1/10 Disfluencies Train                   replaced by pw            0.3             8.7                 5.8
3  1/4 Disfluencies Train                    replaced by pw            0.2            13.9                 6.6
4  1/2 Disfluencies Train                    replaced by pw            0.0            18.3                14.5
5  Full Disfluencies Train                   replaced by pw            0.0            22.5                16.4
6  Full Disfluencies Train                   deleted                  -0.1            19.1                13.1
| []
|
[
"DECIDABILITY, COMPLEXITY, AND EXPRESSIVENESS OF FIRST-ORDER LOGIC OVER THE SUBWORD ORDERING",
"DECIDABILITY, COMPLEXITY, AND EXPRESSIVENESS OF FIRST-ORDER LOGIC OVER THE SUBWORD ORDERING"
]
| [
"Simon Halfon ",
"ANDPhilippe Schnoebelen ",
"Georg Zetzsche "
]
| []
| []
| We consider first-order logic over the subword ordering on finite words where each word is available as a constant. Our first result is that the Σ 1 theory is undecidable (already over two letters).We investigate the decidability border by considering fragments where all but a certain number of variables are alternation bounded, meaning that the variable must always be quantified over languages with a bounded number of letter alternations. We prove that when at most two variables are not alternation bounded, the Σ 1 fragment is decidable, and that it becomes undecidable when three variables are not alternation bounded. Regarding higher quantifier alternation depths, we prove that the Σ 2 fragment is undecidable already for one variable without alternation bound and that when all variables are alternation bounded, the entire first-order theory is decidable.State of the art. Relatively little is known about deciding the validity of (A * , ⊑) formulas and about algorithms for computing their solutions. By comparison, it is well known that the Σ 2 -theory of FO(A * , ·), the logic of strings with concatenation, is undecidable[11,34], and that its Σ 1 fragment (aka "word equations") is decidable in PSPACE[21,33]. Moreover, introducing counting predicates leads to an undecidable Σ 1 fragment[7].Regarding the logic of subwords, Comon and Treinen showed undecidability for an extended logic FO(A * , ⊑, p # ) where A = {a, b, #} has three letters and p # is a unary function that prepends # in front of a word, hence is a restricted form of concatenation [9, Prop. 9]. Kuske showed that, when only the subword predicate is allowed, the logic FO(A * , ⊑) is undecidable and already its Σ 3 fragment is undecidable when |A| ≥ 2. Kudinov et al. considered definability in (A * , ⊑) and showed that the predicates definable in (A * , ⊑) are exactly the arithmetical predicates 1 [29].Kuske's result on the Σ 3 theory leaves open the question whether smaller fragments are decidable. Karandikar and Schnoebelen showed that the Σ 2 theory is undecidable [23] and this is tight since the Σ 1 fragment is decidable, in fact NP-complete[23,30].Karandikar and Schnoebelen also showed that the two-variable fragment FO 2 (A * , ⊑) is decidable[23]and that it has an elementary complexity upper bound[25]. Decidability extends to the logic FO 2 (A * , ⊑, R 1 , R 2 , . . .) where arbitrary regular languages (monadic predicates) are allowed.Objectives of this paper. We are interested in solving constraints built with the subword ordering. This corresponds to the Σ 1 fragment but beyond deciding validity, we are interested in computing sets of solutions: a formula like ϕ 2 can be seen as a conjunctive set of constraints, "abcd ⊑ x ∧ bcde ⊑ x ∧ abcde ⊑ x" that define a set of words (a set of tuples when there are several free variables).A first difficulty is that Kuske's decidability result for the Σ 1 fragment only applies to the pure fragment, where constants are not allowed. That is, we know how to decide the validity of formulas like ϕ 1 but not like ϕ 2 . However, using constants inside constraints is natural and convenient. In particular, it makes it easy to express piecewise testable constraints (see below), and we would like to generalise Kuske's result to the extended logic. | 10.1109/lics.2017.8005141 | [
"https://arxiv.org/pdf/1701.07470v2.pdf"
]
| 3,210,504 | 1701.07470 | 202cd1b7db32e7e7e194b60d7fc16e062be49a0b |
DECIDABILITY, COMPLEXITY, AND EXPRESSIVENESS OF FIRST-ORDER LOGIC OVER THE SUBWORD ORDERING
24 Sep 2021
Simon Halfon
ANDPhilippe Schnoebelen
Georg Zetzsche
We consider first-order logic over the subword ordering on finite words where each word is available as a constant. Our first result is that the Σ1 theory is undecidable (already over two letters). We investigate the decidability border by considering fragments where all but a certain number of variables are alternation bounded, meaning that the variable must always be quantified over languages with a bounded number of letter alternations. We prove that when at most two variables are not alternation bounded, the Σ1 fragment is decidable, and that it becomes undecidable when three variables are not alternation bounded. Regarding higher quantifier alternation depths, we prove that the Σ2 fragment is undecidable already for one variable without alternation bound and that when all variables are alternation bounded, the entire first-order theory is decidable.

State of the art. Relatively little is known about deciding the validity of (A*, ⊑) formulas and about algorithms for computing their solutions. By comparison, it is well known that the Σ2-theory of FO(A*, ·), the logic of strings with concatenation, is undecidable [11, 34], and that its Σ1 fragment (aka "word equations") is decidable in PSPACE [21, 33]. Moreover, introducing counting predicates leads to an undecidable Σ1 fragment [7].

Regarding the logic of subwords, Comon and Treinen showed undecidability for an extended logic FO(A*, ⊑, p#) where A = {a, b, #} has three letters and p# is a unary function that prepends # in front of a word, hence is a restricted form of concatenation [9, Prop. 9]. Kuske showed that, when only the subword predicate is allowed, the logic FO(A*, ⊑) is undecidable and already its Σ3 fragment is undecidable when |A| ≥ 2. Kudinov et al. considered definability in (A*, ⊑) and showed that the predicates definable in (A*, ⊑) are exactly the arithmetical predicates [29].

Kuske's result on the Σ3 theory leaves open the question whether smaller fragments are decidable. Karandikar and Schnoebelen showed that the Σ2 theory is undecidable [23] and this is tight since the Σ1 fragment is decidable, in fact NP-complete [23, 30]. Karandikar and Schnoebelen also showed that the two-variable fragment FO2(A*, ⊑) is decidable [23] and that it has an elementary complexity upper bound [25]. Decidability extends to the logic FO2(A*, ⊑, R1, R2, . . .) where arbitrary regular languages (monadic predicates) are allowed.

Objectives of this paper. We are interested in solving constraints built with the subword ordering. This corresponds to the Σ1 fragment but, beyond deciding validity, we are interested in computing sets of solutions: a formula like ϕ2 can be seen as a conjunctive set of constraints, "abcd ⊑ x ∧ bcde ⊑ x ∧ abcde ⋢ x", that defines a set of words (a set of tuples when there are several free variables).

A first difficulty is that Kuske's decidability result for the Σ1 fragment only applies to the pure fragment, where constants are not allowed. That is, we know how to decide the validity of formulas like ϕ1 but not like ϕ2. However, using constants inside constraints is natural and convenient. In particular, it makes it easy to express piecewise testable constraints (see below), and we would like to generalise Kuske's result to the extended logic.
Introduction
A subsequence of a (finite) sequence u is a sequence obtained from u by removing any number of elements. For example, if u = (a, b, a, b, a) then u ′ = (b, b, a) is a subsequence of u, a fact we denote with u ′ ⊑ u. Other examples that work for any u are u ⊑ u (remove nothing) and () ⊑ u. In the rest of this paper, we shall use the terminology from formal methods and will speak of words and their subwords rather than finite sequences.
Reasoning about subwords occurs prominently in many areas of computer science, e.g., in pattern matching (of texts, of DNA strings, etc.), in coding theory, in theorem proving, in algorithmics, etc. Closer to our own motivations, the automatic verification of unreliable channel systems and related problems involves the subword ordering or some of its variants [2,6,16,24]. Our experience is that reasoning about subwords and related concepts (e.g., shuffles of words) involves ad hoc techniques quite unlike the standard tools that work well with prefixes and suffixes [22].
The logic of subwords. In this paper we consider the first-order logic FO(A * , ⊑) of words over some alphabet A = {a, b, c, . . .} equipped with the subword relation ⊑. Our main objective is to understand how and when one can decide queries formulated in this logic, or decide whether a given formula is valid.
For example, we consider formulas like

ϕ1 : ∀u, u′, u″ : u ⊑ u′ ∧ u′ ⊑ u″ =⇒ u ⊑ u″ ,
ϕ2 : ∃u : abcd ⊑ u ∧ bcde ⊑ u ∧ abcde ⋢ u ,
ϕ3 : ∀u, v : ∃s : u ⊑ s ∧ v ⊑ s ∧ ∀t : (u ⊑ t ∧ v ⊑ t =⇒ s ⊑ t) .
Here ϕ 1 states that the subword relation is transitive (which it is).
More interesting is ϕ 2 , stating that it is possible that a word contains both abcd and bcde as subwords but not abcde. This formula is true and, beyond knowing its validity, one is
also interested in solutions: can we design a constraint solver that will produce a witness, e.g., u = bcdeabcd, or more generally the set of solutions? (The third author is supported by a fellowship within the Postdoc-Program of the German Academic Exchange Service (DAAD) and by Labex DigiCosme, Univ. Paris-Saclay, project VERICONISS.)
Our third example, ϕ3, states that words ordered by subwords form an upper semilattice. This is a more complex formula with Π3 quantifier alternation. It is not valid in general (e.g., ab and ba have no lub) but this depends on the alphabet A at hand: ϕ3 holds if A is a singleton alphabet, i.e., {a}* ⊨ ϕ3 but {a, b}* ⊭ ϕ3.
We say that formulas like ϕ 1 or ϕ 3 where constants from A * do not appear are in the pure fragment. Formally, there are two logics at hand here. The pure logic is the logic of the purely relational structure (A * , ⊑) while the extended logic is over the expansion (A * , ⊑, w 1 , . . .) where there is a constant symbol w i for every word in A * .
As we just illustrated with ϕ 3 , the validity of a formula may depend on the underlying alphabet even for the pure fragment. We note that this phenomenon is not limited to the degenerate case of singleton alphabets. Indeed, observe that it is possible to state that u is a letter, i.e., is a word of length 1, in the pure fragment:
∃z : ∀x : z ⊑ x ∧ (x ⊑ u =⇒ u ⊑ x ∨ x ⊑ z ) .
Thus, even in the pure fragment, one can state that A contains 2, 3, . . . , or exactly n letters. Similarly one can state that A is infinite by saying that no word contains all letters.

Table 1. The cell in row i and column j shows the decidability/complexity of the fragment Σi,j (U stands for undecidable).

Σi,j     j = 0           j = 1   j = 2     j = 3
i = 1    NP              NP      in NEXP   U
i ≥ 2    Σ^EXP_{i−1}     U       U         U
We note that, in principle, the difference between the pure and the extended logic is only superficial since, up to automorphisms, arbitrary words can be defined in the logic (see [23, 29, 30]). However, this requires some universal quantification (even when defining the empty word) that is not allowed when restricting to the Σ1 fragment. So this avenue is closed.
Summary of results. Our first result is that, when constants are allowed, the Σ 1 fragment of FO(A * , ⊑, w 1 , . . .) is actually undecidable. In fact the Σ 1 fragment of FO(A * , ⊑, W ), where a single constant W ∈ A * can be named, is undecidable unless W is too simple. These results hold as soon as A contains two distinct letters and exhibit a sharp contrast between the pure and the extended logic. We found this very surprising because, before hitting on undecidability, we had already developed algorithms that solve large classes of Σ 1 constraints.
Our second result identifies a key factor influencing decidability: it turns out that free variables ranging over a "thin" language like L = a + bc * , are easier to handle than variables ranging over a "wide" language like L ′ = (a + b) * . The key difference is that a thin language only allows a bounded number of letter changes (in L we have a's, then b's, then c's) while a wide language contains words with arbitrarily many alternations between distinct letters.
These observations lead to a new descriptive complexity measure for the formulas in FO(A * , ⊑, w 1 , . . .). The associated fragments, denoted Σ i,j for i, j ∈ N, consist of all Σ i formulas where j variables, say x 1 , x 2 , . . . , x j can be used without any restrictions, while all the other variables must be restricted with respect to letter alternations, say using x ∈ (a * 1 a * 2 · · · a * n ) ℓ for some ℓ ∈ N and assuming that a 1 , . . . , a n is a fixed enumeration of A. In computer-aided verification, such bounded quantifications occur in the analysis of bounded context-switching protocols.
Within this classification framework, we can delineate a precise undecidability landscape. The Σ 1,2 fragment is decidable while Σ 1,3 is undecidable even for |A| = 2. The Σ 2,0 fragment is decidable while Σ 2,1 is not. In fact, when all variables are alternation bounded, the entire first-order theory is decidable.
The computational complexity of all mentioned fragments is summarized in Table 1. Note that, in this table, Σ^EXP_n denotes the n-th level of the weak EXP hierarchy, which lies between NEXP and EXPSPACE [15, 18].
Finally, we offer a series of expressiveness results showing how various predicates like concatenation or length function can, or cannot, be defined in the Σ i,j fragments. As demonstrated in the paper, expressiveness results are crucial to obtain hardness results. Beyond their theoretical interest, and since pinning down precise properties of words is not easy when only the subword ordering is available, these results provide a welcome intermediate language for defining more complex formulas.
Related work. We already mentioned works on the logic of concatenation, or the twovariable fragment FO 2 (A * , ⊑). Because undecidability appears so easily when reasoning about words, the focus is often on restricted fragments, typically Σ 1 , aka "constraint solving". Decision methods for constraints over words have been considered in several contexts but this usually does not include the subsequence predicate: these works rather consider the prefix ordering, and/or membership in a regular language, and/or functions for taking contiguous subsequences or computing the length of sequences, see, e.g., [1,13,19].
Outline of the paper. We provide in Section 2 the basic definitions and results necessary for our later developments. Then we show the undecidability of the Σ 1 fragment (Section 3) before focusing on the decidable fragments (Section 4). Finally, in Section 5, we turn to expressiveness questions.
Subwords and their logics
We consider finite words w, v, ... over a given finite alphabet A of letters like a, b, . . .. Concatenation of words is written multiplicatively, with the empty word ε as unit. We freely use regular expressions like (ab) * + (ba) * to denote regular languages.
The length of a word w is written |w| while, for a letter a ∈ A, |w| a denotes the number of occurrences of a in w. The set of all words over A is written A * .
A word v is a factor of w if there exist words w 1 and w 2 such that w = w 1 vw 2 . If furthermore w 1 = ε then v is a prefix of w, while if w 2 = ε then v is a suffix.
Subwords. We say that a word w is a subword (i.e., a subsequence) of v, written w ⊑ v, when w is some a1 · · · an and v can be written as v0 a1 v1 · · · an vn for some v0, v1, . . . , vn ∈ A*, e.g., ε ⊑ bba ⊑ ababa. We write w ⊏ v for the associated strict ordering, where w ≠ v. Two words w and v are incomparable (with respect to the subword relation), denoted w ⊥ v, if w ⋢ v and v ⋢ w. Factors are a special case of subwords.
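Operationally, w ⊑ v can be decided by a single greedy left-to-right scan (the left-most embedding). A minimal Python sketch, with names of our own choosing:

```python
def is_subword(w: str, v: str) -> bool:
    """Return True iff w is a subword (subsequence) of v.

    Greedy scan: match each letter of w against the earliest unused
    occurrence in v; this computes the left-most embedding.
    """
    it = iter(v)
    return all(c in it for c in w)

assert is_subword("", "bba")          # ε ⊑ bba
assert is_subword("bba", "ababa")     # bba ⊑ ababa
assert not is_subword("ab", "ba")     # ab and ba are incomparable
```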
With any w ∈ A* we associate its upward closure ↑w, given by ↑w ≝ {v ∈ A* | w ⊑ v}. For example, ↑ab = A*aA*bA*. The definition of ↑w involves an implicit alphabet A that will always be clear from the context.

Piecewise testable languages. Piecewise testable languages (abbreviated PT) constitute a subvariety of the languages of dot-depth one, themselves a subvariety of the star-free languages, which are a subvariety of the regular languages [10]. Among the several characterizations of PT languages, the most convenient for our purposes is the following one: L ⊆ A* is PT if, and only if, it is a boolean combination of languages of the form ↑w for some w ∈ A*. Thus the PT languages are exactly the monadic predicates that can be defined by a boolean combination of constraints of the form wi ⊑ x and/or wj ⋢ x, or equivalently by a quantifier-free ϕL(x) formula in the FO(A*, ⊑, w1, . . .) logic. For example, the solutions of ϕ2 (from the introduction) form a PT language. In the following, we often write "x ∈ L", where L is a given PT language, as an abbreviation for ϕL(x), with the understanding that this is a Σ0 formula.
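As an illustration, the solution set of ϕ2 is a Boolean combination of three upward closures, and membership can be tested with three greedy scans; a quick, self-contained sketch:

```python
def is_subword(w: str, v: str) -> bool:
    it = iter(v)                     # left-most embedding test
    return all(c in it for c in w)

def in_phi2(u: str) -> bool:
    """Membership in the PT language of ϕ2:
    abcd ⊑ u and bcde ⊑ u, but abcde ⋢ u."""
    return (is_subword("abcd", u) and is_subword("bcde", u)
            and not is_subword("abcde", u))

assert in_phi2("bcdeabcd")   # the witness mentioned in the introduction
assert not in_phi2("abcde")
```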
Logic of subwords. Let V be the set of variables with typical elements x, y, . . . , u, v, . . .. For a first-order logic formula ϕ over a structure with domain D, we denote by ⟦ϕ⟧ ⊆ D^V the set of satisfying assignments, with typical elements α, β, . . .. If ϕ has only one free variable, say x, and there is no danger of confusion, we sometimes write ⟦ϕ⟧ to mean {α(x) | α ∈ ⟦ϕ⟧}. Moreover, fv(ϕ) denotes the set of free variables in ϕ.
By FO(A * , ⊑), we denote the first-order logic over the structure (A * , ⊑). In contrast, FO(A * , ⊑, w 1 , . . .) is the first-order logic over the structure (A * , ⊑, w 1 , . . .), where for each word w ∈ A * , the signature provides a constant symbol. In the case of FO(A * , ⊑, w 1 , . . .) and FO(A * , ⊑), assignments are members of (A * ) V . We will sometimes write w to denote the assignment that maps every variable to the word w ∈ A * . Moreover, (x → w) denotes the assignment in (A * ) {x} that maps x to w.
Bounding alternations. We define a fragment of first-order logic over the relational structure (A * , ⊑, w 1 , . . .). Let A = {a 1 , . . . , a n }. The starting point for introducing the fragments Σ i,j is the observation that if every variable in a sentence ϕ is introduced by a restricted quantifier of the form ∃x ∈ (a * 1 · · · a * n ) ℓ or ∀x ∈ (a * 1 · · · a * n ) ℓ for some ℓ ∈ N, then one can reduce the truth problem of ϕ to Presburger arithmetic. Note that the language (a * 1 · · · a * n ) ℓ is PT, implying that such restrictions, which we call alternation bounds, can be imposed within FO(A * , ⊑, w 1 , . . .) and without any additional quantifiers. This raises the question of how many variables without alternation bound one can allow without losing decidability.
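Testing whether a word respects a given alternation bound, i.e., whether w ∈ (a1* · · · an*)^ℓ, is a simple greedy linear scan; a small sketch (our own helper, assuming A is given as an ordered string a1 · · · an):

```python
def within_alternation_bound(w: str, alphabet: str, ell: int) -> bool:
    """Return True iff w ∈ (a1* a2* ... an*)^ell, for alphabet = a1...an.

    Greedily consume w with ell passes over the ordered alphabet;
    greedy maximal consumption per pass is optimal, so this decides
    membership exactly.
    """
    i = 0
    for _ in range(ell):
        for a in alphabet:
            while i < len(w) and w[i] == a:
                i += 1
    return i == len(w)

assert within_alternation_bound("aabbc", "abc", 1)
assert within_alternation_bound("abab", "ab", 2)
assert not within_alternation_bound("abab", "ab", 1)
```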
In essence, Σi,j contains all formulas in the Σi fragment with j variables without alternation bound. A formalization of this for sentences could just be a syntactic restriction: every quantifier for all but at most j variables must be relativized to some (a1* · · · an*)^ℓ. However, this would not restrict free variables, which we need in order to build complex Σi,j formulas from predicates defined in Σi,j.
Formally, a formula with alternation bounds consists of a formula ϕ of FO(A*, ⊑, w1, . . .) and a function ℓ : V → N ∪ {∞}, which specifies the alternation bounds. This means the semantics ⟦(ϕ, ℓ)⟧ of (ϕ, ℓ) is defined as ⟦ϕ̂⟧, where ϕ̂ is defined as follows. First, we replace every quantifier Qx (Q ∈ {∃, ∀}) in ϕ by the relativized Qx ∈ (a1* · · · an*)^{ℓ(x)}. Then we add the conjunction ⋀_{x∈fv(ϕ), ℓ(x)<∞} x ∈ (a1* · · · an*)^{ℓ(x)} for the free variables. The fragment Σi,j consists of those formulas with alternation bounds (ϕ, ℓ) where ϕ belongs to the Σi fragment of FO(A*, ⊑, w1, . . .) and has at most j variables x ∈ V with ℓ(x) = ∞. We will always represent a formula in Σi,j by its Σi formula; the function ℓ will be clear from the context. Variables x ∈ V with ℓ(x) < ∞ will be called alternation bounded, the others alternation unbounded. In order to permit a polynomial translation into an equivalent formula in ordinary FO(A*, ⊑, w1, . . .), the alternation bounds are always encoded in unary. The fragment Πi,j is defined similarly, with Πi instead of Σi.
Sometimes we define predicates that are satisfied for words with unbounded alternations (such as "u ∈ {a, b} * " when A = {a, b, c}), but want to use the corresponding formula in a context where the variables are alternation bounded ("u ∈ {a, b} * ∧ ab ⊑ u"). In that situation, we want to record the number of alternation unbounded variables we need for the definition, disregarding the free variables. Hence, Σ ′ i,j denotes those formulas with alternation bound in Σ i , where there are at most j quantified variables without alternation bound. The semantics is defined as for Σ i,j . The fragment Π ′ i,j is defined with Π i instead of Σ i .
Undecidability
3.1. The Σ 1,3 fragment. We begin with our main result, the undecidability of the Σ 1 theory of FO(A * , ⊑, w 1 , . . .) for |A| ≥ 2. In fact, we will even prove undecidability for the Σ 1,3 fragment. We need a few ingredients. A word w ∈ A + is called primitive if there is no v ∈ A + , |v| < |w|, with w ∈ v * . The following is a well-known basic fact from word combinatorics (see e.g. [4, Exercise 2.5])
Fact 3.1. If p ∈ A + is primitive, then pw = wp is equivalent to w ∈ p * .
We also use the following version of the fact that Diophantine sets are precisely the recursively enumerable sets [31].
Theorem 3.2 [31]. For every recursively enumerable set S ⊆ N, there are m ∈ N and a finite set E of equations of the forms

x_i = x_j + x_k,    x_i = x_j · x_k,    x_i = 1

with i, j, k ∈ [0, m], such that S = {y_0 ∈ N | ∃y_1, . . . , y_m ∈ N : (y_0, . . . , y_m) satisfies E}.
We are now ready to prove our main result.

Theorem 3.3. Let |A| ≥ 2 and a ∈ A. For every recursively enumerable set S ⊆ N, there is a Σ1,3 formula ϕ over the structure FO(A*, ⊑, w1, . . .) with one free variable such that ⟦ϕ⟧ = {a^k | k ∈ S}. In particular, the truth problem for the Σ1,3 fragment over FO(A*, ⊑, w1, . . .) is undecidable.

Proof. We show how to express some basic properties of words and combine these to build more complex predicates, all the time keeping track of what fragments are involved. Here, we always use u, v, w as the free variables of the formula we currently construct.
Recall that for every PT language L ⊆ A*, we can express "u ∈ L" in Σ′0,0: we will use this silently, mainly for languages of the form ra*s where a is a letter and r, s are two words. Note also that, since "u ∈ (a + b)*" can be expressed in Σ′0,0 for a, b ∈ A, it suffices to prove the theorem in the case |A| = 2.
(1) We can express "|u| a < |v| a " in Σ ′ 1,0 :
∃x ∈ a* : x ⊑ v ∧ x ⋢ u.
(2) We can express "∃n : u = a n ∧ v = a n−1 b" in Σ 1,0 . Clearly, it suffices to show that we can express "∃n ≥ 2 : u = a n ∧ v = a n−1 b". Consider the formula:
u ∈ aaa* ∧ v ∈ a*b ∧ ∃x ∈ a*baa : |v|_a < |u|_a ∧ v ⋢ x ∧ u ⊑ x.
Suppose the formula is satisfied with u = a^n, x = a^ℓ baa and v = a^m b. Then |v|_a < |u|_a implies m < n. By v ⋢ x, we have ℓ < m and thus ℓ < m < n, hence ℓ + 2 ≤ n. On the other hand, u ⊑ x implies n ≤ ℓ + 2 and thus n = ℓ + 2 and m = n − 1. Conversely, if u = a^n and v = a^{n−1} b for some n ≥ 2, then the formula is satisfied with x = a^{n−2} baa. (A brute-force sanity check of this item is sketched after the proof.)
(3) We can express "u, v ∈ (a + b)*b ∧ |u|_a = |v|_a" in Σ′1,0:

u, v ∈ (a + b)* ∧ ∃x ∈ a* : ∃y ∈ a*b : [∃n : x = a^n ∧ y = a^{n−1} b]
∧ y ⊑ u ∧ y ⊑ v ∧ x ⋢ u ∧ x ⋢ v.
Suppose the formula is satisfied. Then a^{n−1} b ⊑ u and a^n ⋢ u imply |u|_a = n − 1. Moreover, if u ended in a, then a^{n−1} b ⊑ u would entail a^n ⊑ u, which is not the case. Since |u| ≥ 1, we therefore have u ∈ {a, b}*b. By symmetry, we have |v|_a = n − 1 and v ∈ {a, b}*b. Hence, |u|_a = n − 1 = |v|_a.
If u, v ∈ {a, b} * b with |u| a = |v| a , then the formula is satisfied with n = |u| a + 1. (4) We can express "∃n :
u = aaba n b ∧ v = aba n+1 b ∧ w = ba n+2 b" in Σ 1,0 : u ∈ aaba * b ∧ v ∈ aba * b ∧ w ∈ ba * b ∧ [u, v, w ∈ {a, b} * b ∧ |u| a = |v| a = |w| a ] .
(5) We can express "∃n : u = ba n b ∧ v = ba n+1 b" in Σ 1,0 . It suffices to show that we can express "∃n ≥ 1 : u = ba n b ∧ v = ba n+1 b". Consider the formula:
∃x ∈ aaba*b, y ∈ aba*b, z ∈ ba*b : [∃m : x = aaba^m b ∧ y = aba^{m+1} b ∧ z = ba^{m+2} b] ∧ u, v ∈ ba*b ∧ u ⊑ y ∧ u ⋢ x ∧ v ⊑ z ∧ v ⋢ y.
Suppose the formula is satisfied for u = ba^k b and v = ba^ℓ b. Then u ⊑ y and u ⋢ x imply k ≤ m + 1 and k > m, hence k = m + 1. Moreover, v ⊑ z and v ⋢ y imply ℓ ≤ m + 2 and ℓ > m + 1, hence ℓ = m + 2. Hence, with n = m + 1 we have u = ba^n b and v = ba^{n+1} b and n ≥ 1.
Conversely, if u = ba n b and v = ba n+1 b for some n ≥ 1, then the formula is satisfied with m = n − 1.
(6) We can express "∃n : u = a n ∧ v = a n+1 " in Σ 1,0 . For this, it suffices to express "∃n ≥ 1 : u = a n ∧ v = a n+1 ". As in Item 5, one verifies correctness of the following:
∃x, y, z : [∃m : x = ba^m b ∧ y = ba^{m+1} b ∧ z = ba^{m+2} b] ∧ u, v ∈ a* ∧ u ⊑ y ∧ u ⋢ x ∧ v ⊑ z ∧ v ⋢ y.

(7) We can express "v = a^{|u|_a}" in Σ′1,0:

∃x ∈ a* : [∃n : v = a^n ∧ x = a^{n+1}] ∧ v ⊑ u ∧ x ⋢ u.
(8) We can express "|u| a = |v| a " in Σ ′ 1,0 :
∃x : x = a |u|a ∧ x = a |v|a .
(9) For a ≠ b, we can express "u ∈ a* ∧ v = bu" in Σ1,0:
u ∈ a * ∧ v ∈ ba * ∧ |v| a = |u| a .
(10) For a ≠ b, we can express "u ∈ a* ∧ v = ub" in Σ1,0:
u ∈ a * ∧ v ∈ a * b ∧ |v| a = |u| a .
(11) We can express "|w|_a = |u|_a + |v|_a" for any a ∈ A in Σ′1,0. Let b ∈ A ∖ {a}:

∃x, y ∈ a* : ∃z ∈ a*ba* : x = a^{|u|_a} ∧ y = a^{|v|_a}
∧ xb ⊑ z ∧ xab ⋢ z ∧ by ⊑ z ∧ bya ⋢ z    (1)
∧ |w|_a = |z|_a    (2)
Note that we can define xb, (xa)b and b(ya) thanks to Items 6, 9 and 10. The constraints in Eq. (1) enforce that z = xby and hence |z|_a = |x|_a + |y|_a = |u|_a + |v|_a. (12) For k, n_0, . . . , n_k ∈ N and a ≠ b, let r_a(a^{n_0} b a^{n_1} · · · b a^{n_k}) = n_k, which defines a function r_a : {a, b}* → N. We can express "v = a^{r_a(u)}" in Σ′1,0: v ∈ a* ∧ ∃x ∈ b*a* : ∃y ∈ b*a* :
|x|_b = |y|_b = |u|_b ∧ |y|_a = |x|_a + 1 ∧ x ⊑ u ∧ y ⋢ u ∧ |v|_a = |x|_a.
Note that |x| b = |y| b = |u| b can be expressed according to Item 8 and |y| a = |x| a + 1 can be expressed thanks to Item 11. Write u = a n0 ba n1 · · · ba n k .
Suppose the formula is satisfied. Then |x|_b = |y|_b = |u|_b and |y|_a = |x|_a + 1 imply that x = b^k a^m and y = b^k a^{m+1} for some m ∈ N. Moreover, x ⊑ u implies m ≤ n_k and y ⋢ u implies m + 1 > n_k, thus m = n_k. Thus, |v|_a = |x|_a entails |v|_a = n_k.
Conversely, if v = a n k , then the formula is satisfied with x = b k a n k and y = b k a n k +1 .
(13) For a ∈ A, we can express "v ∈ a* ∧ w = uv" in Σ′1,0. Let b ≠ a and consider the formula

v ∈ a* ∧ ∃x ∈ a* : ∃y ∈ a* : x = a^{r_a(u)} ∧ y = a^{r_a(w)}
∧ |w|_b = |u|_b ∧ u ⊑ w    (3)
∧ |y|_a = |x|_a + |v|_a ∧ |w|_a = |u|_a + |v|_a    (4)
To show correctness, suppose the formula is satisfied with u = a^{n_0} b a^{n_1} · · · b a^{n_k} and w = a^{m_0} b a^{m_1} · · · b a^{m_ℓ}. The conditions in Eq. (3) imply that k = ℓ, i.e., w = a^{m_0} b a^{m_1} · · · b a^{m_k}, and n_i ≤ m_i for i ∈ [0, k]. The conditions in Eq. (4) then entail m_k = n_k + |v|_a and Σ_{i=0}^{k} m_i = Σ_{i=0}^{k} n_i + |v|_a, which together is only possible if m_i = n_i for i ∈ [0, k − 1]. This means we have w = uv. The converse is clear. (14) We can express "u is a prefix of v" in Σ1,3:
⋀_{a∈A} ∃x : ∃y ∈ a* : x = uy ∧ x ⊑ v ∧ |x|_a = |v|_a.
Suppose the formula is satisfied. Then uy ⊑ v for some y ∈ A * , which implies u ⊑ v. Let p be the shortest prefix of v with u ⊑ p. Observe that whenever uw ⊑ v, we also have pw ⊑ v, because the leftmost embedding of uw in v has to match up u with p. Now towards a contradiction, assume |p| > |u|. Then there is some a ∈ A with |p| a > |u| a . The formula tells us that for some m ∈ N, we have ua m ⊑ v and |u| a + m = |v| a . Our observation yields pa m ⊑ v, and hence |v| a ≥ |p| a + m > |u| a + m = |v| a , a contradiction. The converse is clear. (15) We can express "w = uv" in Σ 1,3 : Since expressibility is preserved by mirroring, we can express prefix and suffix by Item 14. Let ⊑ p and ⊑ s denote the prefix and suffix relation, respectively. We can use the formula
u ⊑_p w ∧ v ⊑_s w ∧ ⋀_{a∈A} |w|_a = |u|_a + |v|_a.

(16) For a, b ∈ A, a ≠ b, we can express "u ∈ (ab)*" in Σ1,3: By Item 15, we can use the formula ∃v : v = uab ∧ v = abu, which, according to Fact 3.1, is equivalent to u ∈ (ab)*.

(17) For a, b ∈ A, a ≠ b, we can express "|u|_a = |v|_b" in Σ1,3 by using ∃x ∈ (ab)* : |u|_a = |x|_a ∧ |v|_b = |x|_b.

(18) We can express "∃m, n : u = a^n ∧ v = a^m ∧ w = a^{m·n}" in Σ1,3:

u, v, w ∈ a* ∧ ∃x : [∃y, z : y = bu ∧ z = yx ∧ z = xy] ∧ |x|_b = |v|_a ∧ |w|_a = |x|_a.
The conditions in brackets require (bu)x = x(bu). Since bu ∈ ba * is primitive, this is equivalent to x ∈ (bu) * (cf Fact 3.1). (19) We use the fact that every recursively enumerable set of natural numbers is Diophantine. Applying Theorem 3.2 to S yields a finite set E of equations over the variables {x 0 , . . . , x m }. The formula ϕ is of the form
ϕ ≡ ∃x 1 , x 2 , . . . , x m ∈ a * : ψ,
where ψ is a conjunction of the following Σ 1,3 formulas. For each equation
x_i = 1, we add x_i = a. For each equation x_i = x_j + x_k, we add a formula expressing |x_i|_a = |x_j|_a + |x_k|_a. For each equation x_i = x_j · x_k, we add a formula expressing x_i = a^{|x_j|·|x_k|}. Then we clearly have ⟦ϕ⟧ = {a^k | k ∈ S}.
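The item-by-item constructions above lend themselves to quick machine checks. As one example, the following throwaway Python sketch (our own; a bounded search stands in for the existential quantifier over x) confirms the equivalence claimed in item (2) on short words:

```python
def is_subword(w: str, v: str) -> bool:
    it = iter(v)
    return all(c in it for c in w)

def formula_item2(u: str, v: str, max_l: int = 30) -> bool:
    """Item (2): u ∈ aaa*, v ∈ a*b, and some witness x = a^l baa
    with |v|_a < |u|_a, v ⋢ x and u ⊑ x."""
    if not (len(u) >= 2 and set(u) == {"a"}):           # u ∈ aaa*
        return False
    if not (v.endswith("b") and set(v[:-1]) <= {"a"}):  # v ∈ a*b
        return False
    return any(v.count("a") < u.count("a")
               and not is_subword(v, "a" * l + "baa")
               and is_subword(u, "a" * l + "baa")
               for l in range(max_l))

# Holds exactly for u = a^n and v = a^(n-1) b with n >= 2.
for n in range(2, 10):
    assert formula_item2("a" * n, "a" * (n - 1) + "b")
    assert not formula_item2("a" * n, "a" * n + "b")
```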
As an immediate consequence, one sees that the truth problem is also undecidable for the Σ 1 fragment of the logic of subwords without constants but enriched with predicates like "|u| a = 2" for counting letter occurrences.
It can even be shown that there is a fixed word W ∈ {a, b}* such that the truth problem of Σ1,3 over FO({a, b}*, ⊑, W) is undecidable. In order to show undecidability with a single constant, we will need the fact that each word of length at least 3 is determined by its length and its strict subwords. For two words u, v ∈ A*, we write u ∼_n v if ↓{u} ∩ A^{≤n} = ↓{v} ∩ A^{≤n}.

Lemma 3.4. Let u, v ∈ A* with |u| = |v| = n for some n ≥ 3. If u ∼_{n−1} v, then u = v.

Theorem 3.5. There is a word W ∈ {a, b}* such that for every recursively enumerable set S ⊆ N, there is a Σ1,3-formula τ over the structure FO({a, b}*, ⊑, W) such that
⟦τ⟧ = {a^k | k ∈ S}.
In particular, the truth problem for the Σ 1,3 fragment over FO({a, b} * , ⊑, W ) is undecidable.
Proof. In the proof of Theorem 3.3, we have constructed Σ1,3 formulas over FO(A*, ⊑, w1, . . .) expressing successor, addition, and multiplication, more precisely: expressing "∃n ≥ 0 : u = a^n ∧ v = a^{n+1}" and "∃m, n ≥ 0 : u = a^m ∧ v = a^n ∧ w = a^{m+n}" and "∃m, n ≥ 0 : u = a^n ∧ v = a^m ∧ w = a^{m·n}". Let W1, . . . , Wr ∈ {a, b}* be the constants occurring in these three Σ1,3 formulas, plus ε. Let m be the maximal length of any of these words, and let W = a^{m+1} b^{m+2}. Let S ⊆ N be recursively enumerable. Then, according to Theorem 3.2 and by the choice of W1, . . . , Wr, there is a Σ1,3 formula ϕ that only uses constants from W1, . . . , Wr and with ⟦ϕ⟧ = {a^k | k ∈ S}. We shall prove that using W, we can define all the words W1, . . . , Wr. Consider the formula ∃x_0, y_0, . . . , x_{2m+3}, y_{2m+3}:
x_0 ⊏ · · · ⊏ x_{2m+3} ⊑ W ∧ y_0 ⊏ · · · ⊏ y_{2m+3} ⊑ W
∧ ∃x′_0, . . . , x′_{m+1} : x′_0 ⊏ · · · ⊏ x′_{m+1} ⊑ W
∧ ∃y′_0, . . . , y′_{m+2} : y′_0 ⊏ · · · ⊏ y′_{m+2} ⊑ W
∧ x_1 ≠ y_1 ∧ x_1 ⋢ y′_{m+2} ∧ y_1 ⋢ x′_{m+1}
∧ ∃z_01 : x′_1 ⊑ z_01 ∧ x′_2 ⋢ z_01 ∧ y′_1 ⊑ z_01 ∧ y′_2 ⋢ z_01
∧ ∃z_10 : x′_1 ⊑ z_10 ∧ x′_2 ⋢ z_10 ∧ y′_1 ⊑ z_10 ∧ y′_2 ⋢ z_10
∧ z_01 ⊑ W ∧ z_01 ≠ z_10

If it is satisfied, then |x_i| = |y_i| = i for i ∈ [0, 2m + 3] and since x_1 ≠ y_1, we have {x_1, y_1} = {a, b}. Since x_1 ⋢ y′_{m+2} we get y′_i ∈ y_1* and thus y′_i = y_1^i for i ∈ [0, m + 2], which is only possible with y_1 = b. This implies x_1 = a. In particular, we get {z_01, z_10} = {ab, ba}. Since z_01 ⊑ W, we have z_01 = ab and z_10 = ba. On the other hand, if |x_i| = |y_i| = i for i ∈ [0, 2m + 3], x_1 = a, y_1 = b, x′_i = a^i, y′_j = b^j for i ∈ [0, m + 1], j ∈ [0, m + 2], z_01 = ab, and z_10 = ba, then the formula is clearly satisfied.
Hence, we can already define all words of length at most 2 and all words a i and b i for i ∈ [0, m + 1]. This lets us define other predicates.
(1) For each 0 ≤ ℓ ≤ m, we can express "|u| a = ℓ" using the formula
a^ℓ ⊑ u ∧ a^{ℓ+1} ⋢ u
Note that since ℓ + 1 ≤ m + 1, we can already define a^{ℓ+1}. The same way, we can express "|u|_b = ℓ". (2) For each 0 ≤ ℓ ≤ m, we can express "|u| = ℓ" using the formula ⋁_{i+j=ℓ} (|u|_a = i ∧ |u|_b = j).
(3) For each word w ∈ A ≤m , |w| > 2, we can define w. We proceed by induction. For |w| ≤ 2, we can already define w. Thus, suppose we can define every v ∈ A ≤n and let w ∈ A n+1 with n + 1 ≤ m. Consider the formula
|u| = n + 1 ∧ ⋀_{v∈A^{≤n}, v ⊑ w} v ⊑ u ∧ ⋀_{v∈A^{≤n}, v ⋢ w} v ⋢ u.
Clearly, if u = w, then the formula is satisfied. On the other hand, suppose the formula is satisfied. It expresses that ↓{u} ∩ A^{≤n} = ↓{w} ∩ A^{≤n}. According to Lemma 3.4, this implies u = w. Note that all the variables we introduced to define the words in A^{≤m} carry words of length at most 2m + 3, meaning that we may assume that they are alternation bounded. Thus, we can define all words in A^{≤m} using formulas in Σ1,0. Therefore, we can turn ϕ into a Σ1,3 formula τ that contains W as its only constant and satisfies ⟦τ⟧ = ⟦ϕ⟧.
It remains to show the second statement of the theorem. Let S ⊆ N be recursively enumerable but undecidable and let k ∈ N be given. We choose the formula ϕ as above, but we modify it as follows. Let ϕ 0 ≡ ϕ, and for i ∈ [1, k], let ϕ i express ∃y : ϕ i−1 (y) ∧ ∃n : x = a n ∧ y = a n+1 .
Finally, let ϕ k+1 be the formula ∃x : ϕ k (x) ∧ x = ε. Note that by the choice of W 1 , . . . , W r , we may assume that ϕ k+1 contains only the constants W 1 , . . . , W r . Note that ϕ k+1 has no free variables and is true if and only if k ∈ S. Now τ k+1 is obtained from ϕ k+1 just as τ is obtained from ϕ. It follows as above that τ k+1 is true if and only if k ∈ S. This proves the second statement of the theorem.
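Lemma 3.4, which the argument above uses to pin down words by their strict subwords, is easy to confirm exhaustively for small lengths; a throwaway Python sketch over A = {a, b}:

```python
from itertools import product

def downset_upto(u: str, n: int) -> frozenset:
    """↓{u} ∩ A^{≤n}: all subwords of u of length at most n."""
    subs = set()
    for mask in range(1 << len(u)):
        s = "".join(c for i, c in enumerate(u) if mask >> i & 1)
        if len(s) <= n:
            subs.add(s)
    return frozenset(subs)

# Distinct words of length n ≥ 3 have distinct downsets ↓{u} ∩ A^{≤ n−1}.
for n in range(3, 7):
    words = ["".join(t) for t in product("ab", repeat=n)]
    assert len({downset_upto(u, n - 1) for u in words}) == len(words)
```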
Here W must be complex enough: For instance, the Σ 1 fragment of FO({a, b} * , ⊑, ε) and of FO({a, b} * , ⊑, a), respectively, is decidable.
Theorem 3.6. The Σ 1 -fragment of FO({a, b} * , ⊑, ε, a) is decidable.
Proof. We may assume that the input formula is of the form ϕ ≡ ∃x 1 , . . . , x n : ψ, where ψ is a conjunction of literals of the following forms:
c ⊑ x    c ⋢ x    x ⊑ c    x ⋢ c    x ⊑ y    x ⋢ y
where c ∈ {ε, a} and x ∈ X = {x_1, . . . , x_n}.
For each literal x ⊑ c, we can guess whether x = ε or x = a and hence assume that these do not occur. Literals of the form ε ⊑ x are always satisfied, whereas ε ⋢ x is never satisfied. Hence, without loss of generality, these do not occur either and we may assume that all literals are of the form

a ⊑ x    a ⋢ x    x ⋢ ε    x ⋢ a    x ⊑ y    x ⋢ y.

Moreover x ⋢ ε is equivalent to x ≠ ε and the literal a ⋢ x is equivalent to x ∈ b*.
We can therefore assume that all literals are of the form
a ⊑ x    x ∈ b*    x ≠ ε    x ⋢ a    x ⊑ y    x ⋢ y
Let L ⊆ X be the set of those variables for which we have an x ∈ b* literal. Clearly, x ∈ b* and x ≠ ε together mean x ∈ b+. In the same way, x ∈ b* and x ⋢ a together mean x ∈ b+. Furthermore, a ⊑ x and x ∈ b* are mutually exclusive. Hence, we can rewrite our constraint system as follows:
• For each x ∈ L, we have either a constraint x ∈ b * or x ∈ b + .
• For each x ∈ X ∖ L, we have a set of constraints of the form a ⊑ x, x ≠ ε, or x ⋢ a.
• We have constraints of the form x ⊑ y and x ⋢ y.
As a final reformulation step, note that every u ∈ {a, b}* satisfies either u ∈ b* or a ⊑ u. Therefore, we may assume that for every x ∈ X, we have either a constraint x ∈ b* or x ∈ b+ (and hence x ∈ L) or a ⊑ x. Notice that if we already have a ⊑ x, then x ≠ ε is redundant. Thus, we have the following constraints:
(1) For each x ∈ L, we have either x ∈ b * or x ∈ b + .
(2) For each x ∈ X ∖ L, we have a ⊑ x and possibly x ⋢ a.
(3) A set of constraints of the form x ⊑ y or x ⋢ y.
We say that a partial order (X, ≤) is compatible if (1) L is downward closed and linearly ordered, and (2) for each constraint x ⊑ y (resp. x ⋢ y), we have x ≤ y (resp. x ≰ y). We claim that ϕ is satisfied in FO({a, b}*, ⊑, ε, a)
if and only if there is a compatible partial order on X. Since the latter is clearly decidable, this implies the theorem.
Of course, if ϕ is satisfied, then the subword ordering induces a compatible partial order on X. So let us prove the converse: suppose (X, ≤) is a compatible partial order and let P = X ∖ L. Then, (P, ≤) is a partial order and we can find some m ≥ 0 such that (P, ≤) embeds into the lattice {0, 1}^m of m-tuples over {0, 1} with componentwise comparison. Consider such an embedding with m ≥ 2. This embedding allows us to assign to each x ∈ P a word u_x of the form aβ_1 · · · aβ_m, where β_1, . . . , β_m ∈ {ε, b}, such that x ≤ y if and only if u_x ⊑ u_y. Now write L = {ℓ_1, . . . , ℓ_k} with ℓ_1 ≤ · · · ≤ ℓ_k. We now define a function f : P → {0, . . . , k}. Note that for each x ∈ P, the set ↓{x} ∩ L is a downward closed subset of L and hence of the form {ℓ_1, . . . , ℓ_i} for some i ≥ 0. In this case, set f(x) = i. Moreover, let
P i = {x ∈ P | ↓{x} ∩ L = {ℓ 1 , . . . , ℓ i }}.
This allows us to construct an assignment of words v_x to variables x. Let us explain the intuition. In order to ensure that v_{ℓ_1} ⋢ v_x for all x ∈ P_0, we let v_x = u_x and notice that then, the words v_x all contain at most m-many b's. Hence, we set v_{ℓ_1} = b^{m+1}. Now, we have to make sure that the words v_x with x ∈ P_1 all contain v_{ℓ_1} as a subword, so we pad the words u_x with b's on the left: We set v_x = b^{m+1} u_x. Now, in turn, we need to make sure that v_{ℓ_2} contains more than (m + 1) + m-many b's, leading to v_{ℓ_2} = b^{2(m+1)}, and so on. Thus, we set:
v_{ℓ_i} = b^{i(m+1)},    v_x = b^{f(x)·(m+1)} u_x,    for i ∈ {1, . . . , k} and x ∈ P.

Let us show that this assignment satisfies our constraint system. (A toy instantiation of this construction is sketched after the proof.)
• Consider a constraint x ⊑ y. We have x ≤ y.
- If x, y ∈ P, then ↓{x} ∩ L ⊆ ↓{y} ∩ L and thus f(x) ≤ f(y). Since also u_x ⊑ u_y, we have v_x ⊑ v_y.
- If y ∈ L, then also x ∈ L (since L is downward closed) and thus clearly v_x ⊑ v_y.
- If y ∈ P and x ∈ L, suppose x = ℓ_i and f(y) = j. By definition of f, we have i ≤ j and hence v_x = v_{ℓ_i} = b^{i(m+1)} ⊑ b^{j(m+1)} u_y = v_y.
• Consider a constraint x ⋢ y. Then x ≰ y.
- If x, y ∈ P, then u_x ⋢ u_y by choice of u_x and u_y. In particular, we have v_x = b^{f(x)·(m+1)} u_x ⋢ b^{f(y)·(m+1)} u_y = v_y.
- If x ∈ P and y ∈ L, then v_y ∈ b* and a ⊑ v_x. Thus v_x ⋢ v_y.
- If x ∈ L and y ∈ P, say x = ℓ_i. Then x ≰ y means that ℓ_i ∉ ↓{y} ∩ L and hence f(y) < i. Note that v_y = b^{f(y)(m+1)} u_y and that |u_y|_b ≤ m. Therefore |v_y|_b ≤ f(y)(m+1) + m < i(m+1), and hence v_x = v_{ℓ_i} = b^{i(m+1)} ⋢ v_y.
- If x, y ∈ L, then x = ℓ_i and y = ℓ_j with j < i. Then clearly v_x ⋢ v_y.
• Constraints x ∈ b * or x ∈ b + with x ∈ L are of course satisfied.
• Constraints x ⋢ a with x ∈ P are satisfied because |u_x|_a ≥ m ≥ 2 for every x ∈ P.
This establishes our claim and thus the theorem.
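The witness construction in this proof is entirely explicit, so it is easy to instantiate; a toy Python sketch follows (our own, assuming the embedding of (P, ≤) into {0, 1}^m is already given).

```python
def u_word(bits) -> str:
    """u_x: encode a tuple in {0,1}^m as a b^{t_1} a b^{t_2} ... a b^{t_m}."""
    return "".join("a" + "b" * t for t in bits)

def v_for_P(fx: int, bits, m: int) -> str:
    """v_x = b^{f(x)(m+1)} u_x for x in P."""
    return "b" * (fx * (m + 1)) + u_word(bits)

def v_for_L(i: int, m: int) -> str:
    """v_{ell_i} = b^{i(m+1)}."""
    return "b" * (i * (m + 1))

def is_subword(w, v):
    it = iter(v)
    return all(c in it for c in w)

# Toy check with m = 2: a variable x embedded as (1, 0) with f(x) = 1
# lies above ell_1 but not above ell_2, as the construction promises.
m = 2
vx = v_for_P(1, (1, 0), m)                  # 'bbbaba'
assert is_subword(v_for_L(1, m), vx)        # b^3 is a subword of v_x
assert not is_subword(v_for_L(2, m), vx)    # b^6 is not
```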
This raises an interesting question: For which sets {W 1 , W 2 , . . .} ⊆ A * of constants is the truth problem for Σ 1 sentences over FO(A * , ⊑, W 1 , W 2 , . . .) decidable?
3.2. The Σ 2,1 fragment. Our next result is that if we allow one more quantifier alternation, then already one variable without alternation bound is sufficient to prove undecidability.
Theorem 3.7. Let |A| ≥ 2 and a ∈ A. For each recursively enumerable set S ⊆ N, there is a Σ 2,1 formula ϕ over the structure FO(A * , ⊑, w 1 , . . .) with one free variable such that ϕ = {a k | k ∈ S}. In particular, the truth problem for Σ 2,1 is undecidable.
Proof.
(1) We can express "|u| a ≤ |v| a " in Π ′ 1,0 : ∀x ∈ a * :
x ⋢ u ∨ x ⊑ v.
Hence, we can express "|u|_a = |v|_a" in Π′1,0. (2) We can express "|u|_a > |v|_a" in Π′1,0: This follows from the fact that |u|_a ≤ |v|_a is expressible in Σ′1,0. (3) We can express "|u|_a ≠ |v|_a" in Π′1,0 according to the previous item.
(4) We can express "u ∈ a * ∧ v ∈ (bu) * " in Π 1,1 . It clearly suffices to express "u ∈ a * ∧ v ∈ (bu) + " in Π 1,1 . Consider the formula v ∈ b{a, b} * ∧ ∀x ∈ b + a * b * :
|x|_b ≠ |v|_b ∨ (|x|_a > |u|_a ∧ x ⋢ v) ∨ (|x|_a ≤ |u|_a ∧ x ⊑ v).
Note that "v ∈ b{a, b} * " is expressible in Π ′ 1,0 because v ∈ a{a, b} * is expressible in Σ ′ 1,0 (see Item 3 in the proof of Theorem 3.3). Moreover, notice that since b * a * b * = {a, b} * ↑aba, the language b + a * b * = (b * a * b * ) ∩ (↑ba ∪ (↑b ↑a)) is piecewise testable and thus definable in Σ ′ 0,0 . (5) We can express "|u| a = |v| b " in Σ ′ 2,1 : ∃x ∈ (ab) * : |u| a = |x| a ∧ |v| b = |x| b .
(6) We can express "∃m, n : u = a m ∧ v = a n ∧ w = a m·n " in Σ 2,1 :
u ∈ a * ∧ v ∈ a * ∧ w ∈ a * ∧ ∃y ∈ b * : ∃x ∈ (bu) * :
|x| b = |y| b ∧ |y| b = |v| a ∧ |x| a = |w| a .
Note that we employ the variable y because directly expressing |x|_b = |v|_a (using the previous item) would require an additional alternation unbounded variable besides x, but we can only use one. (7) Recall that "|w|_a = |u|_a + |v|_a" is expressible in Σ′1,0 (see Item 11 of Theorem 3.3) and hence "∃m, n : u = a^m ∧ v = a^n ∧ w = a^{m+n}" in Σ1,0. Thus, we can implement Diophantine equations as in the proof of Theorem 3.3.

Note that if we have no constants, we cannot define a language in a*, because all definable subsets are closed under automorphisms of (A*, ⊑). It will be useful for the next proof to have a classification of all automorphisms of (A*, ⊑). The following is shown implicitly by Kudinov et al. [29], but we include a short proof for completeness.

Lemma 3.8. The automorphisms of (A*, ⊑) are exactly the maps of the form a_1 · · · a_n ↦ π(a_1) · · · π(a_n) and a_1 · · · a_n ↦ π(a_n) · · · π(a_1), where π is a permutation of A.

Proof. Clearly, maps as described in the lemma are automorphisms. Assume µ is an automorphism. Since µ has to preserve the minimal element, we have µ(ε) = ε. It also has to preserve the set of minimal elements of A* ∖ {ε}, hence the set A. Repeating this argument yields that µ has to preserve length. Therefore, according to Lemma 3.4, if µ is identical on A^{≤n} for n ≥ 2, then it is the identity on A^{≤n+1}. By induction, this implies that if an automorphism is identical on A^{≤2}, then it is the identity on A*. Hence, if two automorphisms agree on A^{≤2}, then they are the same. It therefore suffices to show that every automorphism µ agrees on A^{≤2} with a map as described in the lemma. Since µ preserves the set A, the map π = µ|_A is a permutation of A. Moreover, for any a, b ∈ A, we have µ(ab) = µ(a)µ(b) or µ(ab) = µ(b)µ(a). If µ(ab) = µ(a)µ(b), then we cannot have µ(bc) = µ(c)µ(b), because the two words ab and bc have only one common upper bound of length three (namely abc), whereas the words µ(a)µ(b) and µ(c)µ(b) have two, namely µ(c)µ(a)µ(b) and µ(a)µ(c)µ(b). Therefore, if µ(ab) = µ(a)µ(b), then µ(bc) = µ(b)µ(c). In particular, if µ(ab) = µ(a)µ(b), then we have µ(cd) = µ(c)µ(d) for all c, d ∈ A. Hence on A^{≤2}, µ agrees with a map as described.
Corollary 3.9. Let |A| ≥ 2. For each recursively enumerable set S ⊆ N, there is a Σ 2 formula τ over the structure FO(A * , ⊑) that defines the language
τ = {a k | a ∈ A, k ∈ S}.
In particular, the truth problem for Σ 2 is undecidable.
Proof. Fix a letter a ∈ A. Let S ⊆ N be recursively enumerable, let ϕ be the Σ1,3 formula provided by Theorem 3.3 with one free variable x and with ⟦ϕ⟧ = {a^k | k ∈ S}, and let w_1, . . . , w_m ∈ A* be the constants used in the formula ϕ.
It was shown in [23] that from w_1, . . . , w_m, one can construct a Σ2 formula ψ over FO(A*, ⊑) with free variables V = {x_1, . . . , x_m} such that for α ∈ (A*)^V, we have α ∈ ⟦ψ⟧ if and only if there is an automorphism w ↦ w̄ of A* such that α(x_i) = w̄_i for every i ∈ [1, m].
Let ϕ′ be the formula obtained from ϕ by replacing every occurrence of w_i with x_i. Moreover, let τ ≡ ∃x_1, . . . , x_m : ψ ∧ ϕ′.
Then, τ is clearly a Σ 2 formula and has exactly one free variable, say x. We claim that
⟦τ⟧ = {b^k | b ∈ A, k ∈ S}.    (5)
If k ∈ S, then a^k ∈ ⟦ϕ⟧ and hence clearly b^k ∈ ⟦τ⟧ for each b ∈ A. Moreover, if w ∈ ⟦τ⟧, then for some α ∈ ⟦ψ⟧, we have α_w ⊨ ϕ′, where α_w denotes the assignment with α_w|_V = α and α_w(x) = w. This means there is an automorphism w ↦ w̄ of (A*, ⊑) such that α(x_i) = w̄_i for i ∈ [1, m]. Therefore, there is some w′ ∈ A* that satisfies ϕ such that w = w̄′. In particular, w′ = a^k for some k ∈ S and hence w = b^k for some b ∈ A. This proves Eq. (5).
We can now proceed as in Theorem 3.5 to show undecidability of the truth problem.
Complexity
In this section, we study the complexity of the truth problem for the Σ i,j fragments of FO(A * , ⊑, w 1 , . . .).
4.1.
Complexity of Σ i,0 . We begin with the case j = 0. In the following, Σ EXP n denotes the n-th level of the weak EXP hierarchy [15,18]. Theorem 4.1. If |A| ≥ 2, then the truth problem for Σ i,0 is NP-complete for i = 1 and Σ EXP i−1 -complete for i > 1.
We provide a polynomial inter-reduction with the Σ i fragment of FO(N, 0, 1, +, <), a.k.a. Presburger Arithmetic (PA), for which Haase [17] has recently proven Σ EXP i−1 -completeness for i > 1. The Σ 1 fragment of PA is NP-complete [32].
The reduction from PA to Σ 1,0 fixes a letter a ∈ A and encodes every number k ∈ N by a k . Addition can then be expressed in Σ 1,0 (Item 11 of Theorem 3.3). Note that although this ostensibly works with one letter, we need another letter in A to express addition. This is crucial: If |A| = 1, then FO(A * , ⊑, w 1 , . . .) is just FO(N, <), which has a PSPACE-complete truth problem [12,35]. Moreover, an inspection of the proof of Theorem 3.3 shows that an alternation bound of ℓ = 2 suffices to define addition, which is tight: if we only use a bound of 1, we can also easily reduce to FO(N, <).
The reduction from Σi,0 to Presburger arithmetic encodes a word w known to belong to (a1* · · · an*)^ℓ, i.e., of the form w = a1^{x_{1,1}} · · · an^{x_{1,n}} · · · a1^{x_{ℓ,1}} · · · an^{x_{ℓ,n}}, by the vector (x_{1,1}, . . . , x_{ℓ,n}) of its ℓ·n block lengths; we write w_x for the word encoded by x. It then remains to express the subword relation and its complement on these encodings: there are existential Presburger formulas ϕ and ψ of size polynomial in n and ℓ such that

ϕ(x_{1,1}, . . . , x_{ℓ,n}, y_{1,1}, . . . , y_{ℓ,n}) ⇐⇒ w_x ⊑ w_y,
ψ(x_{1,1}, . . . , x_{ℓ,n}, y_{1,1}, . . . , y_{ℓ,n}) ⇐⇒ w_x ⋢ w_y.
Let us briefly describe these formulas. Let I = [1, ℓ] × [1, n] and order the pairs (i, j) ∈ I lexicographically: (i′, j′) < (i, j) if i′ < i, or i = i′ and j′ < j. This captures the order of the a_j^{x_{i,j}} factors in w_x. We now define formulas τ_{i,j} and η_{i,j} where the t_{i,j,k}'s and e_{i,j,k}'s are extra free variables:

τ_{i,j} :  ⋀_{1≤k≤ℓ}  t_{i,j,k} =  { 0   if e_{i′,j′,k′} > 0 for some (i′, j′) < (i, j) and k′ > k
                                    { y_{k,j} − Σ_{i′=1}^{i−1} e_{i′,j,k}   otherwise

η_{i,j} :  ⋀_{1≤k≤ℓ}  e_{i,j,k} = min( t_{i,j,k} , x_{i,j} − Σ_{r=1}^{k−1} e_{i,j,r} )

These expressions define the leftmost embedding of w_x into w_y: the variable t_{i,j,k} describes how many letters from a_j^{y_{k,j}} are available for embedding the a_j^{x_{i,j}} factor of w_x into w_y. The variable e_{i,j,k} counts how many of these available letters are actually used for the a_j^{x_{i,j}} factor in the left-most embedding of w_x into w_y. Since i, k are bounded by ℓ and j by n, we have polynomially many formulas of polynomial size.
Define ξ = ⋀_{(i,j)∈I} (τ_{i,j} ∧ η_{i,j}) and the formulas ϕ, ψ as:

∃t_{1,1,1} · · · ∃t_{ℓ,n,ℓ} ∃e_{1,1,1} · · · ∃e_{ℓ,n,ℓ} : ξ ∧ ⋀_{(i,j)∈I} x_{i,j} ≤ Σ_{k=1}^{ℓ} e_{i,j,k}    (ϕ)

∃t_{1,1,1} · · · ∃t_{ℓ,n,ℓ} ∃e_{1,1,1} · · · ∃e_{ℓ,n,ℓ} : ξ ∧ ⋁_{(i,j)∈I} x_{i,j} > Σ_{k=1}^{ℓ} e_{i,j,k}    (ψ)
Since the formulas τ_{i,j} and η_{i,j} are inductive equations that uniquely define the values of t_{i,j,k} and e_{i,j,k} as functions of the x and y vectors, ψ is equivalent to the negation of ϕ. Moreover, ϕ expresses that there is enough room to embed each factor a_j^{x_{i,j}} in w_y, i.e., that w_x ⊑ w_y as claimed, and both formulas are easily constructed in polynomial time.
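The same left-most embedding can be evaluated directly on block-encoded words; the throwaway Python sketch below (our own illustration, not the reduction itself) mirrors the roles of the t- and e-variables and cross-checks the result against a plain subsequence test.

```python
import random

def embeds(x, y, n, ell):
    """Decide w_x ⊑ w_y for block-encoded words: x[i][j] is the length
    of the a_j factor in round i of w_x (likewise y for w_y).

    used[k][j] accumulates the e-variables for block (k, j) of w_y;
    the availability computed below plays the role of t_{i,j,k}."""
    used = [[0] * n for _ in range(ell)]
    last = 0  # flattened index of the earliest w_y block still usable
    for i in range(ell):
        for j in range(n):
            need, k = x[i][j], last
            while need > 0 and k < ell * n:
                kk, jj = divmod(k, n)
                if jj == j:                       # matching letter a_j
                    take = min(need, y[kk][jj] - used[kk][jj])
                    if take > 0:
                        used[kk][jj] += take
                        need -= take
                        last = k                  # embedding frontier
                k += 1
            if need > 0:
                return False
    return True

def expand(mat, letters):
    """Expand a block matrix back into the concrete word."""
    return "".join(letters[j] * mat[i][j]
                   for i in range(len(mat)) for j in range(len(letters)))

# Randomized cross-check against the direct subsequence test.
letters = "ab"
for _ in range(500):
    x = [[random.randint(0, 2) for _ in letters] for _ in range(2)]
    y = [[random.randint(0, 2) for _ in letters] for _ in range(2)]
    it = iter(expand(y, letters))
    assert embeds(x, y, 2, 2) == all(c in it for c in expand(x, letters))
```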
4.2.
Complexity of Σ 1,1 . Theorem 4.3. The truth problem for the Σ 1,1 fragment is NP-complete.
Of course, hardness is inherited from Σ 1,0 . Conversely, NP-membership is shown by a reduction to the Σ 1,0 fragment. For this reduction, we first explain how a single "unbounded" word can be made alternation bounded while respecting its relationships with other alternation bounded words.
For this we use a slightly different measure of alternation levels for words: we factor words into blocks of repeating letters, writing u = a_1^{ℓ_1} · · · a_k^{ℓ_k} with ℓ_i > 0 and a_i ≠ a_{i+1} for all i. By "an a-block of u" we mean an occurrence of a factor a_i^{ℓ_i} with a_i = a. We note that requiring some bound on the number of blocks is equivalent to bounding the number of alternations when it comes to defining the Σi,j fragments. However, counting blocks is more precise.
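Block factorizations are one groupby away; a tiny helper (our own) making the notion concrete:

```python
from itertools import groupby

def blocks(u: str):
    """Block factorization of u: pairs (a_i, l_i) with a_i != a_{i+1}."""
    return [(a, len(list(g))) for a, g in groupby(u)]

assert blocks("aabba") == [("a", 2), ("b", 2), ("a", 1)]
assert len(blocks("aabba")) == 3   # the block count used in Lemma 4.4
```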
Lemma 4.4. Let t, x 1 , . . . , x n , y 1 , . . . , y m ∈ A * such that:
• for all i, x i ⊑ t,
• for all j, y_j ⋢ t,
• for all i and j, x i and y j have less than ℓ blocks, • t has k > (m + n) · ℓ + |A| blocks. Then there exists t ′ ∈ A * such that:
• for all i, x i ⊑ t ′ ,
• for all j, y_j ⋢ t′,
• t′ has either k − 1 or k − 2 blocks.
Proof. Given u ∈ A*, we write Im u for the image of the left-most embedding of u into t. This is a set of positions in t and, in case u ⋢ t, these positions only account for the longest prefix of u that can be embedded in t. In particular, since we assumed x_i ⊑ t and y_j ⋢ t, we have | Im x_i | = |x_i| and | Im y_j | < |y_j| for all i, j.
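The set Im u is itself computed by the greedy scan; a small sketch (our own helper, returning the matched positions in t) makes the definition concrete:

```python
def im(u: str, t: str) -> list:
    """Positions in t of the left-most embedding of u into t.

    If u is not a subword of t, only the longest embeddable prefix of
    u is matched, exactly as in the definition of Im u above."""
    pos, j = [], 0
    for c in u:
        while j < len(t) and t[j] != c:
            j += 1
        if j == len(t):
            break                     # prefix exhausted: u ⋢ t
        pos.append(j)
        j += 1
    return pos

assert im("bba", "ababa") == [1, 3, 4]   # |Im u| = |u| since u ⊑ t
assert im("aaa", "aba") == [0, 2]        # proper prefix only: aaa ⋢ aba
```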
Let b_0 be an a-block of t. This block is said to be irreducible if either (1) it is the last, i.e. right-most, a-block of t, or (2) writing t under the form t = t_0 b_0 t_1 b_1 t_2, where b_1 is the next a-block, i.e. a ∉ t_1, one of the following holds:
• there is some i s.t. b_0 ∩ Im x_i ≠ ∅ and t_1 ∩ Im x_i ≠ ∅,
• there is j s.t. b_0 ∩ Im y_j ≠ ∅ and t_1 ∩ Im y_j ≠ ∅ and b_1 ∩ Im y_j = ∅.
Otherwise b_0 is said to be reducible.
Claim: t contains a reducible block.
Indeed, every irreducible block is either a right-most a-block for some a, or can be associated with a letter alternation in some x i , or in some y j . Furthermore, this association is injective. Thus there are at most (n + m) · ℓ irreducible blocks that are not right-most (and at most |A| right-most blocks). Since k > (n + m) · ℓ + |A|, t has a reducible block.
So let us pick one such reducible block, say an a-block b 0 , write t under the form t = t 0 b 0 t 1 b 1 t 2 as above, and let t ′ = t 0 t 1 b 0 b 1 t 2 .
Claim: t ′ fulfills the requirements of Lemma 4.4.
Since b_1 is an a-block, b_0 b_1 is now a block of t′ and t′ has less than k blocks. Moreover, the only other possible block merge is in t_0 t_1, thus t′ has at least k − 2 blocks. We now show that x_i ⊑ t′ and y_j ⋢ t′ for all i, j.
• Pick some i. Since x_i ⊑ t, there is a unique decomposition x_i = u_0 u_1 u_2 u_3 u_4 of x_i such that Im u_0 ⊆ t_0, Im u_1 ⊆ b_0, Im u_2 ⊆ t_1, Im u_3 ⊆ b_1, and Im u_4 ⊆ t_2. Since b_0 is reducible, one of Im u_1 or Im u_2 is empty. Thus one of u_1 or u_2 is the empty word, allowing x_i ⊑ t′.
• Assume, by way of contradiction, that for some j, y_j ⊑ t′. Let z_1 be the maximal prefix of y_j that embeds into t. We proceed to show that b_0 is irreducible.
- First, b_0 ∩ Im z_1 ≠ ∅. Otherwise, since a ∉ t_1, the leftmost embedding of z_1 into t′ = t_0 t_1 b_0 b_1 t_2 does not use t_1 at all and we would have y_j ⊑ t_0 b_0 b_1 t_2 ⊑ t.
- Secondly, t_1 ∩ Im z_1 is not empty. If it were, since b_0 is made of a's only and a ∉ t_1, the leftmost embedding of z_1 into t_0 t_1 b_0 b_1 t_2 would not use t_1 and again we would have y_j ⊑ t_0 b_0 b_1 t_2 ⊑ t.
- Lastly, b_1 ∩ Im z_1 = ∅. Otherwise, the already established fact b_0 ∩ Im z_1 ≠ ∅ implies that y_j embeds not only in t′ but in t_0 t_1 t_2, which is a subword of t.
Since b_0 is reducible, we conclude that the original assumption that y_j ⊑ t′ does not hold, i.e., that y_j ⋢ t′ as required.
We now proceed to prove Theorem 4.3. Let ϕ be a Σ_{1,1} sentence, where t is the only variable which is not alternation bounded. As a first step of our NP algorithm, we guess the set of literals occurring in ϕ that are satisfied. After guessing this subset, we check whether the formula would be satisfied if exactly this subset were true (which essentially amounts to evaluating a formula in propositional logic). If this is the case, it remains to check whether it is possible to choose words for all existential quantifiers of ϕ so that exactly this subset of literals is true. This means we are left with the task of checking satisfiability of a formula ϕ of the form ϕ ≡ ∃t : ψ, where ψ begins with existential quantifiers for alternation bounded variables, which are followed by a conjunction of literals.
Every literal in ψ that involves t is of one of the following types:
(i) x ⊑ t, where x is an alternation bounded variable, (ii) y ⋢ t, where y is an alternation bounded variable, (iii) t ⋢ u, where u is an alternation bounded variable, (iv) t ⊑ z, where z is an alternation bounded variable, (v) t ⊑ t, (vi) t ⋢ t.
Assertions of types (v) and (vi) can be replaced by their truth value. If a literal t ⊑ z of type (iv) occurs in ψ, then ϕ is equivalent to ∃t ∈ (a * 1 · · · a * n ) ℓ : ψ, where ℓ is the alternation bound of variable z.
We can thus assume that only literals of types (i) to (iii) occur in ψ. Let n be the number of variables x that occur in literals of type (i), m the number of variables y that occur in literals of type (ii), ℓ the maximum alternation level of these variables, and k the maximum alternation bound of all variables u that appear in literals of type (iii). Let p = max{(m + n) · (ℓ · |A|) + |A|, k · |A| + 3} (here ℓ and k are multiplied by |A| to obtain a number of blocks from a maximum alternation). Then ϕ is equivalent to ∃t ∈ (a_1* ⋯ a_n*)^p : ψ. Indeed, if the restricted formula has a solution, it is a solution for ψ. Conversely, if ψ is satisfiable via some t ∈ A* having more than p blocks, then by Lemma 4.4, one can also use a t having between k · |A| and p blocks. The fact that t has more than k · |A| blocks ensures that all literals t ⋢ u are still satisfied.
Finally, we can replace every ∃t in ϕ by a bounded quantification and obtain an equivalent formula in Σ_{1,0}, which proves Theorem 4.3.
To the authors' knowledge, the following was not known.

Corollary 4.5. If PT languages are represented as boolean combinations of sets of the form ↑w with w ∈ A*, then their non-emptiness problem is NP-complete.

Membership in NP follows from Theorem 4.3. For hardness, we can reduce CNF-SAT as follows. We encode an assignment α : {x_1, . . . , x_n} → {0, 1} as a word b a^{α(x_1)} b a^{α(x_2)} ⋯ b a^{α(x_n)}. With literals x_i and ¬x_i, we associate the languages K_{x_i} = ↑b^i a b^{n−i} and K_{¬x_i} = {a, b}* ∖ ↑b^i a b^{n−i}. A clause C = L_1 ∨ ⋯ ∨ L_m is then translated to K_C = K_{L_1} ∪ ⋯ ∪ K_{L_m}, and a conjunction of clauses C_1 ∧ ⋯ ∧ C_k is satisfiable if, and only if, the PT language (b(a + ε))^n ∩ K_{C_1} ∩ ⋯ ∩ K_{C_k} is nonempty.
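A small Python sketch of this reduction (the helper names are ours): an assignment is encoded as a word, and each literal becomes a membership test in the upward closure ↑b^i a b^{n−i} or in its complement.

```python
def is_subword(u, v):
    """Greedy test: is u a scattered subword of v?"""
    it = iter(v)
    return all(c in it for c in u)

def encode(assignment):
    """Encode alpha: {x_1, ..., x_n} -> {0, 1} as b a^{alpha(x_1)} ... b a^{alpha(x_n)}."""
    return "".join("b" + "a" * v for v in assignment)

def literal_holds(word, n, i, positive):
    """Membership in K_{x_i} = up(b^i a b^{n-i}), resp. its complement K_{not x_i}."""
    pattern = "b" * i + "a" + "b" * (n - i)
    member = is_subword(pattern, word)
    return member if positive else not member

# Clause (x_1 or not x_2) under the assignment x_1 = 1, x_2 = 0:
w = encode([1, 0])                       # "bab", a word of (b(a+eps))^2
assert literal_holds(w, 2, 1, True)      # w lies in K_{x_1}
assert literal_holds(w, 2, 2, False)     # w lies in K_{not x_2}
```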
In particular, given a finite number of PT languages, the problem of deciding whether they intersect non-vacuously is NP-complete. This is in contrast with general regular languages represented by DFAs (or NFAs), for which the problem is well-known to be PSPACE-complete [28].

4.3. Complexity of Σ_{1,2}. Our next result is an upper bound for the truth problem of Σ_{1,2}.

Theorem 4.6. The truth problem for Σ_{1,2} is in NEXP.
We prove Theorem 4.6 in two steps. The first step of our decidability result is to transform a Σ 1,2 formula into a system of constraints where the relations among those variables without an alternation bound have a tree shape. In the second step, we exploit the tree shape to construct an exponential-size counter automaton for the set of satisfying assignments.
Tree-shaped constraints. Let A = {a_1, . . . , a_n} and let V be a set of variables. A constraint system is a set of constraints of the form x ⊑ y, x ⋢ y, x = y, x ∈ (a_1* ⋯ a_n*)^ℓ, or x = w, where x, y ∈ V, ℓ ∈ N, and w ∈ A*. A constraint of the form x ⊑ y, x ⋢ y, or x = y is also called an (x, y)-constraint or (y, x)-constraint. Constraints of the form x ∈ (a_1* ⋯ a_n*)^ℓ are called alternation constraints. The set of assignments α ∈ (A*)^V that satisfy S is denoted by ⟦S⟧. For a subset U ⊆ V, by existentially quantifying all variables outside of U, the constraint system S also defines a set of assignments in (A*)^U, which we denote by ⟦S⟧_U.

For a constraint system S over V, we define the graph Γ(S) = (V, E) where {x, y} ∈ E if and only if S contains an (x, y)-constraint. We say that S is tree-shaped if Γ(S) is a forest. Furthermore, S is called alternation bounded if every variable occurring in S also has an alternation constraint in S.

Proposition 4.7. For any disjunction-free Σ_{1,2}-formula ϕ, one can construct polynomial-size constraint systems T and S over variables V′ ⊇ fv(ϕ) such that (1) T is tree-shaped, (2) S is alternation bounded, and (3) ⟦T ∪ S⟧_{fv(ϕ)} = ⟦ϕ⟧.
Proof. We will need the notion of quotients of constraint systems. The idea is to identify certain pairs of variables. Suppose S is a constraint system over V . Furthermore, let ∼⊆ V × V be an equivalence relation that specifies which variables we want to identify with each other. Then we define the quotient S/∼ as a constraint system over the variable set V /∼ with the constraints
S/∼ = {[x] δ [y] | δ ∈ {⊑, ⋢}, (x δ y) ∈ S} ∪ {[x] = w | (x = w) ∈ S} ∪ {[x] ∈ (a_1* ⋯ a_n*)^ℓ | (x ∈ (a_1* ⋯ a_n*)^ℓ) ∈ S}.

In the course of constructing the constraint systems, it will be convenient to assume that two constraint systems are defined over disjoint sets of variables. To this end, we need some notation to state that a constraint system is equivalent to a formula even though its variables have different names. Suppose we have a constraint system S over the set of variables V′ and ψ : fv(ϕ) → V′ is an injective map. Then, via ψ, the formula ϕ defines a set of assignments ⟦ϕ⟧_ψ ⊆ (A*)^{im(ψ)}. We say that ϕ and S are ψ-equivalent if ⟦S⟧_{im(ψ)} = ⟦ϕ⟧_ψ.
We may clearly assume that all literals are of the form x ⊑ y, x ⋢ y, or x = w for w ∈ A* (and that there are no literals w ⊑ x, etc.).
We show the following stronger statement. Let B be the set of variables in ϕ that are alternation bounded. For each disjunction-free Σ_{1,2}-formula ϕ, there is a set of variables V′, constraint systems T and S over V′, and an injective map ψ : fv(ϕ) → V′ such that the following holds. If B′ ⊆ V′ denotes the set of variables for which there is an alternation bound in S, then (i) T is tree-shaped, (ii) S is alternation bounded, (iii) T ∪ S and ϕ are ψ-equivalent, (iv) for every x ∈ B we have ψ(x) ∈ B′, and (v) if |fv(ϕ) ∖ B| = 2 with fv(ϕ) ∖ B = {x, y}, then ψ(x) and ψ(y) are either neighbors in Γ(T) or in distinct components.
To prove this statement by induction, we need to consider three cases.
(1) Literals, i.e. x ⊑ y, x ⋢ y, or x = w. There are only two variables, so we can just take the literal as the set T and let S contain the global alternation constraints for the variables in the literal. (2) Existentially quantified formulas ∃x : ϕ. Here, we just reduce the set of free variables, so it suffices to adjust the map ψ.
(3) Conjunctions ϕ ≡ ϕ_0 ∧ ϕ_1. Suppose we have constructed T_i, S_i, V′_i, ψ_i, B′_i as above for i = 0, 1. We may clearly assume V′_0 ∩ V′_1 = ∅. We construct T, S, V′, ψ, B′ as follows. Let ∼ ⊆ (V′_0 ∪ V′_1) × (V′_0 ∪ V′_1) be the smallest equivalence relation with ψ_0(x) ∼ ψ_1(x) for all x ∈ fv(ϕ) ∖ B. Then we take V′ = (V′_0 ∪ V′_1)/∼ and define T = (T_0 ∪ T_1)/∼. Moreover, let

S = S_0 ∪ S_1 ∪ {[ψ_0(x)] = [ψ_1(x)] | x ∈ fv(ϕ) ∩ B}.
Moreover, we choose ψ : fv(ϕ) → V′ so that ψ(x) = ψ_0(x) if x ∈ fv(ϕ_0) and ψ(x) = ψ_1(x) if x ∉ fv(ϕ_0). It is clear that Items (ii) to (iv) above are satisfied. It remains to verify Items (i) and (v). We distinguish the following cases.
• |fv(ϕ_i) ∖ B| ≤ 1 for some i ∈ {0, 1}. Then there is at most one variable in V′_i that is identified with a variable in V′_{1−i} by ∼. Hence, Γ(T) is obtained from Γ(T_0) and Γ(T_1) either by disjoint union or by identifying one vertex from Γ(T_0) with one vertex from Γ(T_1). In any case, Γ(T) is a forest. Hence, Item (i) is satisfied. Let us now show Item (v). Hence, assume |fv(ϕ) ∖ B| = 2 with fv(ϕ) ∖ B = {x, y}. Clearly, if fv(ϕ_0) ∖ B and fv(ϕ_1) ∖ B are disjoint, then no variables are identified and hence ψ(x) and ψ(y) are in distinct components of Γ(T). Hence, we assume that fv(ϕ_0) ∖ B and fv(ϕ_1) ∖ B have a variable in common, say x. Since |fv(ϕ_0) ∖ B| ≤ 1, this means fv(ϕ_1) ∖ B = {x, y} and hence ψ_1(x) and ψ_1(y) are neighbors in Γ(T_1) or they are in distinct components of Γ(T_1). Γ(T) is obtained from Γ(T_0) and Γ(T_1) by identifying ψ_0(x) and ψ_1(x). Therefore, ψ(x) and ψ(y) are neighbors in Γ(T) if and only if they are neighbors in Γ(T_1). Moreover, they are in disjoint components of Γ(T) if and only if they are in disjoint components of Γ(T_1). This proves Item (v).
• |fv(ϕ_0) ∖ B| = |fv(ϕ_1) ∖ B| = 2. Write fv(ϕ_0) ∖ B = fv(ϕ_1) ∖ B = {x, y}. Note that Γ(T) is obtained from Γ(T_0) and Γ(T_1) by identifying ψ_0(x) with ψ_1(x) and identifying ψ_0(y) with ψ_1(y). Since for each i ∈ {0, 1} we know that ψ_i(x) and ψ_i(y) are either neighbors in Γ(T_i) or in distinct components, this clearly implies that Γ(T) is a forest. Hence, we have shown Item (i). Moreover, if for some i ∈ {0, 1}, ψ_i(x) and ψ_i(y) are neighbors in Γ(T_i), then ψ(x) and ψ(y) are neighbors in Γ(T). Otherwise, ψ(x) and ψ(y) are in disjoint components of Γ(T). This proves Item (v).
Counter automata. In the next step, we exploit the decomposition into a tree-shaped constraint system and an alternation-bounded constraint system to reduce satisfiability to non-emptiness of counter automata. To this end, we use a type of counter automata known as Parikh automata [8,26]. In terms of expressiveness, these are equivalent to the classical reversal-bounded counter automata [20], but their syntax makes them convenient for our purposes. Let V be a finite set of variables. A counter automaton over V is a tuple A = (Q, A, C, E, q 0 , F ), where Q is a finite set of states, A is the input alphabet, C is a set of counters,
E ⊆ Q × (A ∪ {ε})^V × N^C × Q
is the finite set of edges, q_0 ∈ Q is the initial state, and F is a finite set of pairs (q, ϕ), where q ∈ Q and ϕ is an existential Presburger formula with free variables in C. A configuration of A is a tuple (q, α, µ), where q ∈ Q, α ∈ (A*)^V, µ ∈ N^C. The step relation is defined as follows: we have (q, α, µ) →_A (q′, α′, µ′) iff there is an edge (q, β, ν, q′) ∈ E such that α′ = αβ and µ′ = µ + ν. A counter automaton accepts a set of assignments, namely
L(A) = {α ∈ (A*)^V | ∃(q, ϕ) ∈ F : (q_0, ε, 0) →*_A (q, α, µ), µ ⊨ ϕ}.
We call a subset R ⊆ (A * ) V a counter relation if there is a counter automaton A with R = L(A). If |V | = 1, say V = {x}, then A defines a subset of A * , namely the language {w ∈ A * | (x → w) ∈ L(A)}. Languages of this form are called counter languages.
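The semantics can be rendered executably. The sketch below replays one candidate run of a counter automaton and tests the acceptance condition; modeling the existential Presburger formula as a Python predicate over the final counter values is a simplification we introduce for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Edge:
    src: str
    dst: str
    out: dict = field(default_factory=dict)  # variable -> word in A* appended
    add: dict = field(default_factory=dict)  # counter  -> increment in N

def accepts(run, q0, final, variables, counters):
    """Replay a run (a list of edges) from (q0, eps, 0) and test acceptance.

    `final` is a list of pairs (state, predicate); each predicate stands in
    for an existential Presburger formula over the counters.
    """
    q = q0
    alpha = {x: "" for x in variables}
    mu = {c: 0 for c in counters}
    for e in run:
        if e.src != q:
            return False                      # the edge sequence is not a run
        for x, w in e.out.items():
            alpha[x] += w                     # alpha' = alpha . beta
        for c, k in e.add.items():
            mu[c] += k                        # mu' = mu + nu
        q = e.dst
    return any(qf == q and phi(mu) for qf, phi in final)

# Toy automaton over V = {x}: loop on q0 appending "a" while counting,
# accepting in q0 when the counter is even.
loop = Edge("q0", "q0", out={"x": "a"}, add={"c": 1})
final = [("q0", lambda mu: mu["c"] % 2 == 0)]
assert accepts([loop, loop], "q0", final, ["x"], ["c"])
assert not accepts([loop], "q0", final, ["x"], ["c"])
```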
Suppose V_0, V_1 are sets of variables with |V_0 ∩ V_1| ≤ 1. Let A_i = (Q_i, A, C_i, E_i, q_{0,i}, F_i) be a counter automaton over V_i for i = 0, 1 such that C_0 ∩ C_1 = ∅. Then a simple product construction yields a counter automaton A_0 ⊗ A_1 = (Q_0 × Q_1, A, C_0 ∪ C_1, E, (q_{0,0}, q_{0,1}), F) over V_0 ∪ V_1 such that ((p_0, p_1), α, µ) →*_{A_0⊗A_1} ((p′_0, p′_1), α′, µ′) iff (p_i, α|_{V_i}, µ|_{C_i}) →*_{A_i} (p′_i, α′|_{V_i}, µ′|_{C_i}) for i = 0, 1, and F = {((p_0, p_1), ϕ_0 ∧ ϕ_1) | (p_i, ϕ_i) ∈ F_i for i = 0, 1}.
Proposition 4.8. Given a tree-shaped constraint system T, one can construct in exponential time a counter automaton A with L(A) = ⟦T⟧.
Proof. First, observe that it suffices to consider the case where every constraint in T involves two variables: the other constraints have the form x = w for some w ∈ A* or x ∈ (a_1* ⋯ a_n*)^ℓ for some ℓ ∈ N and can easily be imposed afterwards in the counter automaton.
We construct the automaton inductively. The statement is trivial if T involves only one variable, so assume |V | ≥ 2 from now on.
Since Γ(T) is a forest, it contains a vertex x ∈ V with at most one neighbor. Let T′ be the constraint system obtained from T by removing all constraints involving x, and suppose we have already constructed a counter automaton A′ with L(A′) = ⟦T′⟧.

Now if x has no neighbor, it is easy to construct the automaton for T. So suppose x has a unique neighbor y. Then the additional constraints imposed in T are all (x, y)-constraints. Let T′′ be the set of all (x, y)-constraints in T. Now note that if A′′ is a counter automaton with L(A′′) = ⟦T′′⟧, then we have

L(A′ ⊗ A′′) = ⟦T′ ∪ T′′⟧ = ⟦T⟧.

Therefore, it suffices to construct in polynomial time a counter automaton A′′ with L(A′′) = ⟦T′′⟧.
Observe that any set of (x, y)-constraints can be written as a disjunction of some of the following constraints:

(i) x = y, (ii) x ⊏ y, (iii) y ⊏ x, (iv) x ⊥ y.
Since it is easy to construct a counter automaton for the union of two relations accepted by counter automata, it suffices to construct a counter automaton for the set of solutions to each of the constraints (i)-(iv). This is obvious in all cases but the last. In that last case, one can notice that x ⊥ y holds if either (1) |x| < |y| and x ⋢ y, or (2) |y| < |x| and y ⋢ x, or (3) |x| = |y| and x ≠ y. Note that each of these cases is easily realized in a counter automaton since we can use the counters to guarantee the length constraints. Moreover, the resulting counter automaton can clearly be constructed in polynomial time, which completes the proof.
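This case split is easy to render directly (the helper names are ours); a counter automaton realizes the same decomposition by tracking |x| and |y| in two counters.

```python
def is_subword(u, v):
    it = iter(v)
    return all(c in it for c in u)

def incomparable(x, y):
    """x and y are incomparable under the subword ordering."""
    if len(x) < len(y):
        return not is_subword(x, y)   # y cannot embed into the shorter x
    if len(y) < len(x):
        return not is_subword(y, x)   # x cannot embed into the shorter y
    return x != y                     # equal lengths: x embeds in y iff x = y

assert incomparable("ab", "ba")
assert not incomparable("a", "ab")
```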
We can now prove Theorem 4.6 by taking the constraint system provided by Proposition 4.7 and constructing a counter automaton just for T using Proposition 4.8. Then, we can impose the constraints in S by using additional counters. Note that since all variables in S are alternation bounded, we can store these words, in the form of their occurring exponents, in counters. We can then install the polynomial-size Presburger formulas from Proposition 4.2 in the counter automaton to impose the binary constraints required by S. This results in an exponential-size counter automaton that accepts the satisfying assignments of ϕ. The NEXP upper bound then follows from the fact that non-emptiness for counter automata is NP-complete.
Since counter automata are only a slight extension of reversal-bounded counter automata, the following is well-known.
Lemma 4.9. The non-emptiness problem for counter automata is NP-complete.
Proof. Given a counter automaton A = (Q, A, C, E, q_0, F) and a state q ∈ Q, we can construct an existential Presburger formula θ_q, with a free variable for each edge in A, that is satisfied for an assignment ν ∈ N^E iff there is a run from q_0 to q where each edge e ∈ E occurs exactly ν(e) times. This is just the fact that we can construct in polynomial time an existential Presburger formula for the Parikh image of a finite automaton [36].

For each edge e = (p, α, µ, p′), define µ_e = µ. Then the formula

⋁_{(q,ϕ)∈F} ( θ_q ∧ ⋀_{c∈C} c = Σ_{e∈E} µ_e(c) · e ∧ ϕ )

expresses precisely that there is an accepting run. The fact that the satisfiability problem for existential Presburger arithmetic is NP-complete [32] now gives us the upper bound as well as the lower bound.
We are now ready to prove Theorem 4.6.
Proof. First we use Proposition 4.7 to turn the formula ϕ into constraint systems T and S such that T is tree-shaped, S is alternation bounded, and ⟦T ∪ S⟧_{fv(ϕ)} = ⟦ϕ⟧. Then we use Proposition 4.8 to obtain in exponential time a counter automaton A with L(A) = ⟦T⟧. Let U = {x_1, . . . , x_m} be the set of variables occurring in S. It remains to impose the constraints in S. We do this by first building the product with one automaton A_i for each x_i. This automaton imposes the alternation constraint on x_i and stores the word read into x_i in a set of counters. Note that this is possible because the word is alternation bounded. After taking the product with all these automata, we impose the remaining constraints from S (which are binary constraints or of the form x = w with x ∈ U, w ∈ A*) by adding existential Presburger formulas that express subword constraints as provided by Proposition 4.2.
We may clearly assume that whenever there is a variable x, an alternation constraint x ∈ (a_1* ⋯ a_n*)^ℓ, and a constraint x = w, then w ∈ (a_1* ⋯ a_n*)^ℓ: otherwise, the system is not satisfiable and clearly has an equivalent counter automaton.
Let A = {a_1, . . . , a_n}. Since S is alternation bounded, S contains an alternation constraint x_i ∈ (a_1* ⋯ a_n*)^{ℓ_i} for each i ∈ [1, m]. Let ℓ be the maximum of all these ℓ_i. We will use the counter variables c_{i,j,k} for each i ∈ [1, m], j ∈ [1, ℓ], and k ∈ [1, n]. We set up the automaton A_i over V_i = {x_i} such that it has an initial state q_0, a state q_1, and satisfies

(q_0, ε, 0) →*_{A_i} (q_1, α, µ)

if and only if α maps x_i to the word

a_1^{µ(c_{i,1,1})} ⋯ a_n^{µ(c_{i,1,n})} ⋯ a_1^{µ(c_{i,ℓ_i,1})} ⋯ a_n^{µ(c_{i,ℓ_i,n})}.   (6)
This can clearly be done with n · ℓ states. Moreover, let F_i = {(q_1, ⊤)}. Note that A_i has the counters c_{i,j,k} even for j > ℓ_i, although it never adds to them. The reason we have the variables c_{i,j,k} for j > ℓ_i is that this way, the formulas from Proposition 4.2 are applicable. Note that since each V_i is a singleton, the automaton B = A ⊗ A_1 ⊗ ⋯ ⊗ A_m is defined. It satisfies L(B) = ⟦T ∪ S′⟧, where S′ is the set of alternation constraints in S.
It remains to impose the remaining constraints from S, namely the binary constraints and those of the form x = w with x ∈ U, w ∈ A*. Let R ⊆ S be the set of these remaining constraints. For each i and for µ ∈ N^C, let w_{µ,i} be the word in Eq. (6). According to Proposition 4.2, for each constraint r ∈ R, we can construct a polynomial-size existential Presburger formula κ_r such that µ ⊨ κ_r if and only if the constraint is satisfied for the assignment α with α(x_i) = w_{µ,i}. Moreover, let κ = ⋀_{r∈R} κ_r.
Suppose B = (Q, A, C, E, q 0 , F ). In the last step, we construct the counter automaton B ′ = (Q, A, C, E, q 0 , F ′ ), where
F ′ = {(q, ψ ∧ κ) | (q, ψ) ∈ F }.
Now B′ clearly satisfies L(B′) = ⟦T ∪ S⟧. Thus, if we obtain B′′ from B′ by projecting the input to those variables that occur freely in ϕ, then L(B′′) = ⟦T ∪ S⟧_{fv(ϕ)}. Moreover, B′′ can clearly be constructed in exponential time.
The membership of the truth problem in NEXP follows from the fact that non-emptiness of counter automata is in NP (Lemma 4.9).
Expressiveness
In this section, we shed some light on which predicates or languages are definable in our fragments Σ_{i,j}.

5.1. Expressiveness of the Σ_{1,0} fragment. A language L definable in Σ_{1,0} always satisfies L ⊆ (a_1* ⋯ a_n*)^ℓ for some ℓ ∈ N. Hence, it can be described by the set of vectors that contain the occurring exponents. As can be derived from the results in Section 4, these sets are always semilinear. In this section, we provide a decidable characterization of the semilinear sets that are expressible in this way. Stating the characterization requires some terminology.
Let V be a set of variables. By N^V, we denote the set of mappings V → N. By a partition of V, we mean a set P = {V_1, . . . , V_n} of subsets V_1, . . . , V_n ⊆ V such that V_i ∩ V_j = ∅ for i ≠ j and V_1 ∪ ⋯ ∪ V_n = V. If U ∩ V = ∅ and α ∈ N^U, β ∈ N^V, we write α × β for the map γ ∈ N^{U∪V} such that γ|_U = α and γ|_V = β. Furthermore, if S ⊆ N^U, T ⊆ N^V, then S × T = {α × β | α ∈ S, β ∈ T}. A semilinear set S ⊆ N^V is P-compatible if it has a semilinear representation where each occurring period vector belongs to N^{V_i} for some i ∈ [1, n].
Theorem 5.1. Suppose L ⊆ (a_1* ⋯ a_n*)^ℓ. Let V = {x_{i,j} | i ∈ [1, ℓ], j ∈ [1, n]} and consider the partition P = {V_1, . . . , V_n} where V_j = {x_{i,j} | i ∈ [1, ℓ]} for j ∈ [1, n]. The language L is definable in Σ_{1,0} if, and only if, the set

{α ∈ N^V | a_1^{α(x_{1,1})} ⋯ a_n^{α(x_{1,n})} ⋯ a_1^{α(x_{ℓ,1})} ⋯ a_n^{α(x_{ℓ,n})} ∈ L}

is a P-compatible semilinear set.
For example, this means we can define {a n ba n | n ∈ N}, but not {a n b n | n ∈ N}: A semilinear representation for the latter requires a period that produces both a's and b's.
The proof of Theorem 5.1 employs a characterization of P -compatible sets in terms of Presburger arithmetic. Let V be a set of variables and ϕ be a Presburger formula whose variables are in V . Let P = {V 1 , . . . , V n } be a partition of V . We say ϕ is P -compatible if there is a set of variables V ′ ⊇ V and a partition P ′ = {V ′ 1 , . . . , V ′ n } of V ′ such that (1) V j ⊆ V ′ j for each j ∈ [1, n] and (2) in each literal in ϕ, all variables belong to the same set V ′ j for some j ∈ [1, n]. The following is a simple observation.
Theorem 5.2. Let P = {V 1 , . . . , V n } be a partition of V . For sets S ⊆ N V , the following conditions are equivalent:
(1) S is a P -compatible semilinear set.
(2) S = ⟦ϕ⟧ for some P-compatible existential Presburger formula ϕ.
(3) S is a finite union of sets of the form A 1 × · · · × A n where each A j is a semilinear subset of N Vj .
Proof. The directions "3⇒1" and "1⇒2" are easy to see, so we show "2⇒3". If a set satisfies the condition of 3, then projecting to a subset of the coordinates yields again a set of this form. Therefore, it suffices to consider the case where in ϕ, there are no quantifiers. Now, bring ϕ into disjunctive normal form. Since each literal in ϕ only mentions variables from V_j for some j ∈ [1, n], we can sort the literals of each co-clause of the DNF according to the subset V_j they mention. Hence, we arrive at the form

ϕ ≡ ⋁_{i=1}^{k} ⋀_{j=1}^{n} ϕ_{i,j},

where ϕ_{i,j} only mentions variables from V_j. This implies ⟦ϕ⟧ = ⋃_{i=1}^{k} ⟦ϕ_{i,1}⟧ × ⋯ × ⟦ϕ_{i,n}⟧, which is the form required in 3.
We are now ready to prove Theorem 5.1.
Proof. If L is definable in Σ 1,0 , we can write down a Presburger formula that defines S. Here, in order to express the subword ordering (and its negation), we use the formulas from Proposition 4.2. Observe that these formulas are P -compatible. This means that S is P -compatible.
For the converse, suppose S is P-compatible. According to Theorem 5.2, S is defined by a P-compatible existential Presburger formula ϕ. Hence, ϕ has free variables V = {x_{i,j} | i ∈ [1, ℓ], j ∈ [1, n]} and uses variables V′ ⊇ V that are partitioned as V′ = ⋃_{j=1}^{n} V′_j so that in each literal, all occurring variables belong to the same V′_j. In the first step, we turn ϕ into a Σ_{1,0} formula ϕ̄ with the same number of free variables.

For each x ∈ V′, we take a fresh variable x̄, which will hold words in a_j*. More precisely, we have our new variables V̄ = {x̄ | x ∈ V′} and a mapping ι : N^{V′} → (A*)^{V̄} defined by ι(α)(x̄) = a_j^{α(x)}, where j is the unique index with x ∈ V′_j.

We want to construct ϕ̄ so that ⟦ϕ̄⟧ = ι(⟦ϕ⟧). We obtain ϕ̄ from ϕ as follows. For each literal x = y + z, we know that there is a j ∈ [1, n] with x, y, z ∈ V′_j, so we can replace the literal with |x̄|_{a_j} = |ȳ|_{a_j} + |z̄|_{a_j}, which is expressible in Σ_{1,0} according to Item 11 in the proof of Theorem 3.3 (note that in this case, we actually are in Σ_{1,0} because the variables x̄, ȳ, and z̄ range over a_j* and are thus alternation bounded). Since we can clearly also express |x̄|_{a_j} = |ȳ|_{a_j} in Σ_{1,0}, we use this to implement literals x = y with x, y ∈ V′_j. Literals of the form x = k with k ∈ N and x ∈ V′_j can just be replaced by x̄ = a_j^k. Then we clearly have ⟦ϕ̄⟧ = ι(⟦ϕ⟧).

In the second step, we construct the words a_1^{α(x_{1,1})} ⋯ a_n^{α(x_{1,n})} ⋯ a_1^{α(x_{ℓ,1})} ⋯ a_n^{α(x_{ℓ,n})} for α ∈ ⟦ϕ⟧. This is possible thanks to Item 13 of the proof of Theorem 3.3. We can express u = x̄_{1,1} ⋯ x̄_{1,n} ⋯ x̄_{ℓ,1} ⋯ x̄_{ℓ,n} by applying Item 13 exactly ℓ · n − 1 times, once to append each x̄_{i,j} to the word defined so far, using ℓ · n − 1 additional variables. Of course, all these variables can be restricted to (a_1* ⋯ a_n*)^ℓ, which means the resulting formula belongs to Σ_{1,0}. Moreover, it clearly defines L.
Our characterization of Σ_{1,0} is decidable. We use a technique from [14], where it is shown that recognizability is decidable for semilinear sets. The idea is to characterize P-compatibility as the finiteness of the index of certain equivalence relations, which can be expressed in Presburger arithmetic.

Theorem 5.3. Given a semilinear subset S ⊆ N^V and a partition P of V, it is decidable whether S is P-compatible.
Proof. For α ∈ N^{V_i} and γ ∈ N^V, we write γ[i/α] for the element of N^V with

γ[i/α](v) = α(v) if v ∈ V_i, and γ[i/α](v) = γ(v) otherwise.

For α, β ∈ N^{V_i}, we write α ∼_i β if for every γ ∈ N^V, we have γ[i/α] ∈ S if and only if γ[i/β] ∈ S. Moreover, for γ ∈ N^V, we will use the norm ‖·‖ defined by ‖γ‖ = Σ_{v∈V} γ(v).
We claim that S is P-compatible if and only if

∃k ∈ N : ⋀_{i=1}^{n} ∀α ∈ N^{V_i} : ∃β ∈ N^{V_i} : ‖β‖ ≤ k, α ∼_i β.   (7)

Suppose Eq. (7) holds. For each β ∈ N^{V_i}, we define S_{i,β} = {α ∈ N^{V_i} | α ∼_i β}. Then S_{i,β} ⊆ N^{V_i} is semilinear and we have

S = ⋃ { S_{1,β_1} × ⋯ × S_{n,β_n} | β_i ∈ N^{V_i} with ‖β_i‖ ≤ k for all i, and β_1 × ⋯ × β_n ∈ S }.
Hence, S is P-compatible. Now assume S is P-compatible. Then we can write S = ⋃_{j=1}^{ℓ} A_{j,1} × ⋯ × A_{j,n}, where each A_{j,i} ⊆ N^{V_i} is semilinear. For each i ∈ [1, n], consider the function κ_i : N^{V_i} → 2^{{1,...,ℓ}} with

κ_i(α) = {j ∈ [1, ℓ] | α ∈ A_{j,i}}.
Observe that if κ i (α) = κ i (β), then α ∼ i β. Since κ i has a finite codomain, this means the equivalence relation ∼ i on N Vi has finite index. This immediately implies Eq. (7).
Since we can clearly formulate the condition Eq. (7) in Presburger arithmetic, P -compatibility is decidable.
In fact, it is not hard to see that if P consists only of singletons, a semilinear set is P-compatible iff it is recognizable. Hence, Theorem 5.3 generalizes the decidability of recognizability. Let M be a monoid. A subset S ⊆ M is called recognizable if there is a finite monoid F and a morphism ϕ : M → F such that S = ϕ^{−1}(ϕ(S)).

Theorem 5.4. Suppose P consists only of singletons. Then S ⊆ N^V is P-compatible if and only if it is recognizable.

Proof. Mezei's Theorem [4] states that if M_1, . . . , M_n are monoids, then a subset of M_1 × ⋯ × M_n is recognizable if and only if it is a finite union of sets S_1 × ⋯ × S_n such that S_i ⊆ M_i is recognizable for i ∈ {1, . . . , n}.
Combined with the fact that a subset of N is semilinear if and only if it is recognizable, the condition 3 of Theorem 5.2 yields the result.
5.2. Expressiveness of Σ_{1,0} vs. Σ_{1,1}. It is obvious that Σ_{1,1} is strictly more expressive than Σ_{1,0}, because it permits the definition of languages with unbounded alternations, such as {a, b}*. But is this the only difference between the two fragments? In other words: restricted to alternation bounded languages, is Σ_{1,1} more expressive? The answer is no.
Theorem 5.5. If L ⊆ (a * 1 · · · a * n ) ℓ , then L is definable in Σ 1,1 if and only if it is definable in Σ 1,0 .
Proof. Let ϕ be a Σ_{1,1} formula where the free variable is alternation bounded and the variable t is not alternation bounded. We can transform ϕ into a disjunction ⋁_{i=1}^{k} ϕ_i, where each ϕ_i belongs to Σ_{1,1} and consists of a block of existential quantifiers followed by a conjunction of literals. Then, the proof of Theorem 4.3 yields for each ϕ_i a (polynomial) bound p_i so that if we replace the quantifier ∃t in ϕ_i by ∃t ∈ (a_1* ⋯ a_n*)^{p_i}, the resulting Σ_{1,0} formula is equivalent.
This allows us to reason beyond alternation bounded languages. We have seen in the proof of Theorem 3.3 that one can express "|u|_a = |v|_b" in Σ_{1,3}, which required significantly more steps, and two more alternation unbounded variables, than the ostensibly similar "|u|_a = |v|_a". This raises the question: can we define the former in Σ_{1,1}? We cannot:
Corollary 5.6. The predicate "|u|_a = |u|_b" is not definable in Σ_{1,1}.
Proof. Otherwise, we could define the set {a n b n | n ≥ 0} in Σ 1,1 , hence in Σ 1,0 , contradicting Theorem 5.1.
5.3. Expressiveness of Σ_{1,2} vs. Σ_{1,3}. We have seen in Theorem 3.3 that Σ_{1,3} can express all recursively enumerable unary languages. Moreover, Theorem 4.6 tells us that the languages definable in Σ_{1,2} are always counter languages. How do the fragments compare with respect to natural (binary) predicates on words? We already know from Item 14 in Theorem 3.3 that over two letters, the prefix relation is expressible in Σ_{1,3}. Note that the following theorem does not follow directly from the fact that for any Σ_{1,2} formula ϕ, the set ⟦ϕ⟧ is a counter relation, as shown in Section 4. This is because the prefix relation is a counter relation (and even rational).
Theorem 5.7. In Σ 1,2 , "u is a prefix of v" is not expressible.
Proof. Suppose the prefix relation were expressible using a Σ_{1,2} formula ϕ. Then, by reversing all constants in ϕ, we obtain a formula expressing the suffix relation. Let ⊑_p denote the prefix relation and ⊑_s the suffix relation. We can now express ∃v ∈ {a, b}* : v ⊑_p u ∧ v ⊑_s u ∧ |u|_a = 2 · |v|_a ∧ |u|_b = 2 · |v|_b, which is equivalent to u ∈ S, where S = {vv | v ∈ {a, b}*}. Note that |u|_a = 2 · |v|_a can be expressed as |u|_a = |v|_a + |v|_a, which can be done in Σ_{1,0} according to Item 11 in Theorem 3.3.
However, S is not a counter language. This is due to the fact that the class of recursively enumerable languages is the smallest language class that contains S and is closed under rational transductions, union, and intersection (this can be shown as in the case of the set of palindromes [3]). However, the class of counter languages also has these closure properties and is properly contained in the recursively enumerable languages. Hence, S is indeed not a counter language. This is in contradiction with the fact that for any Σ_{1,2} formula ϕ, the set ⟦ϕ⟧ is a counter relation, as shown in Section 4.
Conclusion
We have shown that the Σ 1 theory of the subword ordering is undecidable (already for two letters), if all words are available as constants. This implies that the Σ 2 theory is undecidable already for two letters, even without constants.
In order to shed light on decidable fragments of first-order logic over the structure (A * , ⊑, w 1 , . . .), we introduced the fragments Σ i,j . We have completely settled their decidability status. In terms of complexity, the only open case is the Σ 1,2 fragment. We have an NP lower bound and an NEXP upper bound.
This aligns with the situation for expressiveness. We have a decidable characterization for the expressiveness of Σ 1,0 and, obvious exceptions aside, Σ 1,1 is as expressive as Σ 1,0 . However, we do not know whether Σ 1,1 and Σ 1,2 differ significantly: Of course, Σ 1,2 can have two alternation unbounded free variables, but it is conceivable that Σ 1,1 and Σ 1,2 define the same languages.
Footnotes. (1) This is a common situation, shared with, e.g., FO(A*, ·) and FO(N, <). (2) That any language of the form ra*s is PT is easy to prove, e.g., using the characterization of [27].
References

[1] P. A. Abdulla, M. F. Atig, Y. Chen, L. Holík, A. Rezine, P. Rümmer, and J. Stenman. "String Constraints for Verification". In: Proc. CAV 2014, LNCS 8559. Springer, 2014, pp. 150-166.
[2] P. A. Abdulla, A. Collomb-Annichini, A. Bouajjani, and B. Jonsson. "Using Forward Reachability Analysis for Verification of Lossy Channel Systems". In: Form. Methods Sys. Des. 25.1 (2004), pp. 39-65.
[3] B. S. Baker and R. V. Book. "Reversal-bounded multipushdown machines". In: J. Comput. System Sci. 8.3 (1974), pp. 315-332.
[4] J. Berstel. Transductions and Context-Free Languages. Stuttgart: B. G. Teubner, 1979.
[5] A. Boudet and H. Comon. "About the Theory of Tree Embedding". In: Proc. TAPSOFT '93, LNCS 668. Springer, 1993, pp. 376-390.
[6] P. Bouyer, N. Markey, J. Ouaknine, Ph. Schnoebelen, and J. Worrell. "On Termination and Invariance for Faulty Channel Machines". In: Form. Asp. Comp. 24.4-6 (2012), pp. 595-607.
[7] J. R. Büchi and S. Senger. "Definability in the Existential Theory of Concatenation and Undecidable Extensions of this Theory". In: Z. Math. Logik Grundlag. Math. 34.4 (1988), pp. 337-342.
[8] M. Cadilhac, A. Finkel, and P. McKenzie. "Affine Parikh automata". In: RAIRO Theor. Inf. Appl. 46.4 (2012), pp. 511-545.
[9] H. Comon and R. Treinen. "Ordering Constraints on Trees". In: Proc. CAAP '94, LNCS 787. Springer, 1994, pp. 1-14.
[10] V. Diekert, P. Gastin, and M. Kufleitner. "A Survey on Small Fragments of First-Order Logic over Finite Words". In: Int. J. Found. Comput. Sci. 19.3 (2008), pp. 513-548.
[11] V. G. Durnev. "Undecidability of the positive ∀∃³-theory of a free semigroup". In: Sib. Math. J. 36.5 (1995), pp. 917-929.
[12] J. Ferrante and C. W. Rackoff. The computational complexity of logical theories. Vol. 718. Lecture Notes in Mathematics. Springer, 1979.
[13] V. Ganesh, M. Minnes, A. Solar-Lezama, and M. C. Rinard. "Word Equations with Length Constraints: What's Decidable?" In: Proc. HVC 2012, LNCS 7857. Springer, 2013, pp. 209-226.
[14] S. Ginsburg and E. H. Spanier. "Bounded Regular Sets". In: Proc. Amer. Math. Soc. 17.5 (1966), pp. 1043-1049.
[15] G. Gottlob, N. Leone, and H. Veith. "Second Order Logic and the Weak Exponential Hierarchies". In: Proc. MFCS '95, LNCS 969. Springer, 1995, pp. 66-81.
[16] Ch. Haase, S. Schmitz, and Ph. Schnoebelen. "The Power of Priority Channel Systems". In: Log. Methods Comput. Sci. 10.4:4 (2014).
[17] C. Haase. "Subclasses of Presburger Arithmetic and the Weak EXP Hierarchy". In: Proc. CSL-LICS 2014. ACM, 2014, 47:1-47:10.
[18] L. Hemachandra. "The strong exponential hierarchy collapses". In: J. Comput. System Sci. 39.3 (1989), pp. 299-322.
[19] P. Hooimeijer and W. Weimer. "StrSolve: solving string constraints lazily". In: Autom. Softw. Eng. 19.4 (2012), pp. 531-559.
[20] O. H. Ibarra. "Reversal-bounded multicounter machines and their decision problems". In: J. ACM 25.1 (1978), pp. 116-133.
[21] A. Jeż. "Recompression: A Simple and Powerful Technique for Word Equations". In: J. ACM 63.1 (2016), 4:1-4:51.
[22] P. Karandikar, M. Niewerth, and Ph. Schnoebelen. "On the state complexity of closures and interiors of regular languages with subwords and superwords". In: Theoret. Comput. Sci. 610 (2016), pp. 91-107.
[23] P. Karandikar and Ph. Schnoebelen. "Decidability in the Logic of Subsequences and Supersequences". In: Proc. FST&TCS 2015, LIPIcs 45. Leibniz-Zentrum für Informatik, 2015, pp. 84-97.
[24] P. Karandikar and Ph. Schnoebelen. "Generalized Post Embedding Problems". In: Theory of Computing Systems 56.4 (2015), pp. 697-716.
[25] P. Karandikar and Ph. Schnoebelen. "The height of piecewise-testable languages with applications in logical complexity". In: Proc. CSL 2016, LIPIcs 62. Leibniz-Zentrum für Informatik, 2016, 37:1-37:22.
[26] F. Klaedtke and H. Rueß. "Monadic Second-Order Logics with Cardinalities". In: Proc. ICALP 2003, LNCS 2719. Springer, 2003, pp. 681-696.
[27] O. Klíma and L. Polák. "Alternative Automata Characterization of Piecewise Testable Languages". In: Proc. DLT 2013, LNCS 7907. Springer, 2013, pp. 289-300.
[28] D. C. Kozen. "Lower bounds for natural proof systems". In: Proc. FOCS '77. IEEE, 1977, pp. 254-266.
[29] O. V. Kudinov, V. L. Selivanov, and L. V. Yartseva. "Definability in the Subword Order". In: Proc. CiE 2010, LNCS 6158. Springer, 2010, pp. 246-255.
[30] D. Kuske. "Theories of orders on the set of words". In: RAIRO Theor. Inf. Appl. 40.1 (2006), pp. 53-74.
[31] Y. V. Matiyasevich. Hilbert's Tenth Problem. Cambridge, Massachusetts: MIT Press, 1993.
[32] D. C. Oppen. "A 2^{2^{2^{pn}}} upper bound on the complexity of Presburger Arithmetic". In: J. Comput. System Sci. 16.3 (1978), pp. 323-332.
[33] W. Plandowski. "An efficient algorithm for solving word equations". In: Proc. STOC 2006. ACM Press, 2006, pp. 467-476.
[34] W. V. Quine. "Concatenation as a basis for arithmetic". In: J. Symb. Logic 11.4 (Dec. 1946), pp. 105-114.
[35] L. J. Stockmeyer. "The complexity of decision problems in automata theory and logic". PhD thesis. Department of Electrical Engineering, MIT, July 1974. Available as Report MAC-TR-133.
[36] K. N. Verma, H. Seidl, and T. Schwentick. "On the Complexity of Equational Horn Clauses". In: Proc. CADE 2005, LNCS 3632. Springer, 2005, pp. 337-352.
| []
|
[
"MULTISPECTRAL IMAGE FUSION BASED ON SUPER PIXEL SEGMENTATION",
"MULTISPECTRAL IMAGE FUSION BASED ON SUPER PIXEL SEGMENTATION"
]
| [
"Nati Ofir \nKingston University London\n\n",
"Jean-Christophe Nebel \nKingston University London\n\n"
]
| [
"Kingston University London\n",
"Kingston University London\n"
]
| []
| Multispectral image fusion is a computer vision process that is essential to remote sensing. For applications such as dehazing and object detection, there is a need to offer solutions that can perform in real-time on any type of scene. Unfortunately, current state-of-the-art approaches do not meet these criteria as they need to be trained on domain-specific data and have high computational complexity. This paper focuses on the task of fusing color (RGB) and near-infrared (NIR) images as this the typical RGBT sensors, as in multispectral cameras for detection, fusion, and dehazing. Indeed, the NIR channel has the ability to capture details not visible in RGB and see beyond haze, fog, and clouds. To combine this information, a novel approach based on superpixel segmentation is designed so that multispectral image fusion is performed according to the specific local content of the images to be fused. Therefore, the proposed method produces a fusion that contains the most relevant content of each spectrum. The experiments reported in this manuscript show that the novel approach better preserve details than alternative fusion methods. | 10.1109/icassp49357.2023.10095874 | [
"https://export.arxiv.org/pdf/2112.11329v3.pdf"
]
| 255,394,085 | 2112.11329 | 0ef1545409e95e786179d7a347f00590b16c14bd |
MULTISPECTRAL IMAGE FUSION BASED ON SUPER PIXEL SEGMENTATION
Nati Ofir
Kingston University London
Jean-Christophe Nebel
Kingston University London
MULTISPECTRAL IMAGE FUSION BASED ON SUPER PIXEL SEGMENTATION
Index Terms - Multispectral Images, Image Fusion, Near-Infrared, Superpixel Segmentation
Multispectral image fusion is a computer vision process that is essential to remote sensing. For applications such as dehazing and object detection, there is a need to offer solutions that can perform in real-time on any type of scene. Unfortunately, current state-of-the-art approaches do not meet these criteria as they need to be trained on domain-specific data and have high computational complexity. This paper focuses on the task of fusing color (RGB) and near-infrared (NIR) images, as this is typical of RGBT sensors, such as the multispectral cameras used for detection, fusion, and dehazing. Indeed, the NIR channel has the ability to capture details not visible in RGB and to see beyond haze, fog, and clouds. To combine this information, a novel approach based on superpixel segmentation is designed so that multispectral image fusion is performed according to the specific local content of the images to be fused. Therefore, the proposed method produces a fusion that contains the most relevant content of each spectrum. The experiments reported in this manuscript show that the novel approach better preserves details than alternative fusion methods.
INTRODUCTION
Image fusion is an important task of image processing that has numerous applications such as detection, dehazing, and visualization. It is relied upon by modern and defense cameras and has challenged the research communities for a couple of decades. This manuscript introduces a new approach to fusing multispectral images based on the content of regions defined through superpixel segmentation. As this research is focused on multispectral images for object detection [23], dehazing [8], and fusion visualization as in satellite imaging [29], we specifically address the data of the RGB-NIR dataset of [3]; however, the method can be extended easily to other modalities. Therefore, this work focuses on the fusion of visible color (RGB, 0.4-0.7 µm) and NIR (0.7-2.5 µm) images as described in [17]. Since each spectrum captures different information about a scene, their fusion is challenging and informative. While the RGB channel captures the visible color, as seen by human eyes, the NIR channel can see beyond haze, fog, and clouds, and thus can in particular reveal far-distance details in a scene. Figure 1 illustrates scene enhancements that the proposed multispectral image fusion delivers. Indeed, the fused image contains both color information and the structure of the far mountains that were invisible in the original RGB image. As more and more sensors are integrated into modern cameras, the ability to fuse their inputs effectively in real-time is essential to produce the most informative pictures. A non-trivial preprocessing step [15], which is out of the scope of this work, is the alignment of the sensor data based on multispectral image registration [18, 17].
Once aligned, the input images can be fused using a variety of approaches, including α-blending, Principal Component Analysis (PCA) blending [9], and spectral blending such as wavelet fusion [7]. Although deep neural network approaches have recently been proposed [6], they require large databases and heavy computational resources that are often unavailable in real-life projects with unique spectra, such as object detection for autonomous driving or the fusion of multispectral images captured from a satellite. In this paper, the focus is on real-time and generic approaches for multispectral image fusion; the algorithms behind PCA and spectral fusion are detailed as they have inspired the proposed solution. The paper then describes our approach, which is spatial and based on superpixel segmentation of the input images.
We apply a soft mask fusion such that the blending weight changes from pixel to pixel. This soft mask emphasizes the details in the fused images.
This manuscript is organized as follows. After a review of relevant work in Section 2, Section 3 presents further details on existing approaches that have inspired the proposed solution and are used for its evaluation. Then Section 4 describes the new spatial approach for image fusion that relies on superpixel content. Next, Section 5 reports both qualitative and quantitative results demonstrating the added value of the proposed fusion approach. Finally, conclusions and future work are presented in Section 6.
PREVIOUS WORK
Multispectral image registration and fusion is a challenging task that has been studied in the last decades. It has been addressed by both traditional [18] and deep learning (DL)-based [17] computer vision approaches. Whereas DL solutions tend to deliver much better performance in the specific domain for which they were trained, traditional algorithms are generally rooted in stronger theoretical frameworks, have lower computational complexity, are more generic, and do not rely on the availability of suitable training data [16, 19]. Image fusion has applications to multispectral images [28] and to medical imaging [11]. Image fusion is also important when processing images from a single modality, as in multi-focus fusion [30]. This work addresses specifically the problem of multispectral image fusion.

Image fusion can be addressed through various technical approaches. Toet et al. [24] use a hierarchical approach to fusion. A group of works introduced fusion based on Principal Component Analysis (PCA) [13, 22]. Early methods relied on global statistics [9] or on spectral properties, as in wavelet fusion [20]. Advanced methods use guided filtering for fusion [14]. Following the DL revolution in computer vision, such approaches have become the main sources of investigation for the fusion of images, features, and data. For example, Chen et al. [6] exploited deep fusion to improve classification accuracy, while convolutional neural networks were successfully used to detect pedestrians [25]. As these DL methods rely on large datasets and heavy training processes, they are not suitable for the applications targeted in this work. Still, it is worth mentioning that a recent DL solution took advantage of the concept of superpixels for hyperspectral fusion [12], incorporating texture information.

The method proposed in this paper incorporates low-level statistics of image superpixels. Due to its low computational complexity, it can run in real-time on simple devices and embedded systems, and it produces informative fused multispectral images. In contrast to previous works, we focus on the multispectral case for multi-sensor cameras, astrophysics satellites, and defense systems, which typically operate in frequencies near visible light, such as UV and IR, and which, in addition, do not have heavy computational resources since their systems are embedded.
CONCEPTS BEHIND RELEVANT FUSION METHODS
In this section, further details are provided about approaches that have inspired the proposed solution. They comprise global α blending, PCA, and spectral fusions.
The α Blending Fusion
Global α-blending is a simple, standard image fusion technique. Given a multispectral image pair NIR, RGB, where NIR and RGB are the near-infrared and visible color images respectively, we first convert them to grayscale: I_1 = NIR, I_2 = gray(RGB). Then, given a constant α ∈ [0, 1], the α-blending gray fusion is:
F_gray = α · I_1 + (1 − α) · I_2.

Finally, the colored fusion image is produced using the following color-preservation formula: F = (F_gray / I_2) · RGB.
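A NumPy sketch of this global blending, including the color-preservation step; the grayscale conversion by channel averaging and the guard against division by zero are our choices, not prescribed by the text.

```python
import numpy as np

def alpha_fusion(nir, rgb, alpha=0.5):
    """Global alpha-blending fusion of an NIR image with an RGB image.

    nir: HxW array in [0, 1]; rgb: HxWx3 array in [0, 1].
    """
    i1 = nir.astype(np.float64)
    i2 = rgb.astype(np.float64).mean(axis=2)   # simple stand-in for gray(RGB)
    f_gray = alpha * i1 + (1.0 - alpha) * i2
    ratio = f_gray / np.maximum(i2, 1e-6)      # F_gray / I_2, guarded
    return np.clip(rgb * ratio[..., None], 0.0, 1.0)
```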
PCA Fusion
The PCA fusion computes α according to the joint statistics of the fused images as follows. Given the gray images I_1, I_2, their individual variances V_j = Var(I_j(x)) and their joint covariance C = cov(I_1(x), I_2(x)) are calculated. PCA decomposition is then applied by computing the eigenvalues λ and eigenvectors of their covariance matrix:
det( λI − [ V_1  C ; C  V_2 ] ) = (λ − V_1)(λ − V_2) − C².   (1)
Subsequently, by setting this determinant equal to zero, λ² − λ(V_1 + V_2) + V_1 V_2 − C² = 0, its discriminant Σ can be computed: Σ = (V_1 + V_2)² − 4(V_1 V_2 − C²), i.e., Σ = V_1² + V_2² − 2 V_1 V_2 + 4C². Next, the eigenvalues λ_{1,2} are evaluated as λ_{1,2} = (V_1 + V_2 ± √Σ) / 2, leading to the eigenvector

v_{λ_1} = [1, (λ_1 − V_1)/C]^T.   (2)
Note that as it is assumed that the first eigenvalue is the larger principal component, the second, smaller eigenvalue is neglected. Finally, the blending factor is set as α_PCA = ‖v_{λ_1}‖^{−1}.
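The computation of α_PCA takes a few lines of NumPy; the sketch follows the derivation above literally and assumes C ≠ 0.

```python
import numpy as np

def pca_alpha(i1, i2):
    """Blending weight alpha_PCA from the joint statistics of I_1 and I_2."""
    v1, v2 = i1.var(), i2.var()
    c = np.cov(i1.ravel(), i2.ravel())[0, 1]     # joint covariance C
    disc = (v1 + v2) ** 2 - 4.0 * (v1 * v2 - c ** 2)
    lam1 = (v1 + v2 + np.sqrt(disc)) / 2.0       # larger eigenvalue lambda_1
    v = np.array([1.0, (lam1 - v1) / c])         # eigenvector v_{lambda_1}
    return 1.0 / np.linalg.norm(v)               # alpha_PCA = ||v||^{-1}
```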
To conclude, this process allows performing multispectral image fusion based on global PCA coefficients. In the experiments of Section 5, this approach is used as a baseline to evaluate the proposed fusion method based on region content.
Spectral Fusion
Spectral methods have been an approach of choice to perform local image fusion [16, 20]. These methods rely on the computation of low-pass and high-pass filters. The low-pass filtering of an image I_j is computed by LP_j = I_j * Gaussian(x, y), where * denotes convolution and x, y are the spatial filter coordinates.
The high-pass component is then calculated as HP_j = I_j − LP_j, and each band is processed using a different fusion mechanism.
For the low frequencies, α blending is applied: F LP = α · LP 1 + (1 − α) · LP 2 . where α is typically set to 0.5 when no additional information is known about the images.
For the high frequencies, a maximum criterion is used: F HP (x) = max(HP 1 (x), HP 2 (x)).
Finally, the overall fusion is derived by adding the different frequencies: F gray = F LP + F HP .
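A compact sketch of this band-wise fusion using SciPy's Gaussian filter; the filter width sigma is a free parameter of our choosing, and the high-pass band follows the text's plain maximum criterion.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spectral_fusion(i1, i2, alpha=0.5, sigma=2.0):
    """Blend low frequencies, take the maximum of the high frequencies."""
    lp1, lp2 = gaussian_filter(i1, sigma), gaussian_filter(i2, sigma)
    hp1, hp2 = i1 - lp1, i2 - lp2                 # high-pass components
    f_lp = alpha * lp1 + (1.0 - alpha) * lp2      # F_LP
    f_hp = np.maximum(hp1, hp2)                   # F_HP, maximum criterion
    return f_lp + f_hp                            # F_gray
```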
Spectral fusion forms the basis for the proposed content-based fusion. Although, similar to spectral fusion, the region-content-based fusion puts emphasis on high frequencies, it calculates the gain multiplier according to the content of the local superpixels using pixel-wise soft maps.
THE PROPOSED METHOD: SUPER PIXEL REGION-BASED FUSION
In this section, we introduce the proposed approach to multispectral image fusion by region content analysis. It extends the concepts behind PCA and spectral fusion by relying on the local content of the fused images, expressed using high-order statistics, to compute fusion weights for each pixel. Note that although the presented approach addresses the challenge of fusing two images, it could easily be extended to process any number of images, as in hyperspectral imaging [28]. The fusion of the two input images uses a soft, smooth pixel map with pixel-wise fusion weights:
F_gray(x) = mask(x) · I_1(x) + (1 − mask(x)) · I_2(x).   (3)
This fusion is performed in a hierarchical manner using Laplacian pyramids [4] to apply visually informative masking with mask(x). In the next steps, the construction of a smooth mask(x) is detailed. Note that mask(x) is a function of the content of the fused images, based on the superpixel region content segmentation. Given an input image I comprised of pixels {x_1, . . . , x_N}, the proposed method performs a superpixel decomposition [1] of the pixels such that X = {x_1, . . . , x_N} = ⋃_{j=1}^{k} X_j.
As the standard deviation of a superpixel has a significant correlation with its content level [1], for each superpixel X j of the input image as follows, its standard deviation is calculated by the expected mean E:
$$std(X_j) = \sqrt{E_{x \in X_j}[I^2(x)] - E^2_{x \in X_j}[I(x)]} \quad (4)$$
Here the grade of a superpixel $X_j$ is defined by its standard deviation: $grade(X_j) = std(X_j)$.
Eventually, the grade of a single pixel in each input image, i.e., its statistical score, is set to the value of the superpixel it belongs to: $\forall x \in X_j: grade(x) = grade(X_j)$.
The final mask(x) should maximize the grades of the fused images while maintaining global smoothness. Inspired by optical-flow regularization [10], this can be achieved by finding the argmax of the following functional: $$mask \cdot grade(I_1) + (1 - mask) \cdot grade(I_2) - \beta \|\nabla mask\|^2 \quad (5)$$ According to the Euler-Lagrange equation [2], and the general approximation suggested by [10], a solution can be found using an iterative process:
$$mask_{j+1} = \overline{mask}_j + \frac{grade(I_1) - grade(I_2)}{4\beta}, \quad (6)$$
where $\overline{mask}$ denotes the four-pixel-neighbor average. It can be shown that a heuristic solution mask is given by the difference of grades, together with a smoothing operation on top of it:
$$mask(x) = \sigma[grade(I_1(x)) - grade(I_2(x))] \quad (7)$$
The sigmoid $\sigma(x) = \frac{e^x}{1 + e^x}$ is used to emphasize the level information in the fused image pixels. Finally, mask(x) is normalized linearly so that it lies in the range [0, 1], and a Gaussian smoothing is applied to its values. See Figure 2 for an example of a fusion mask produced by the proposed superpixel fusion, where black and white pixels correspond to scores of 0 and 1, respectively. Having described the concepts and algorithmic steps behind the proposed approach, the next section provides its evaluation on multispectral image fusion.
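The following sketch illustrates a single-scale version of this pipeline, assuming scikit-image's SLIC for the superpixel decomposition; the paper applies the mask within a Laplacian pyramid, which is omitted here, and n_segments, smooth_sigma, and the channel_axis keyword (recent scikit-image versions) are our assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.segmentation import slic

def grade_map(img, n_segments=400):
    """Per-pixel grade: the standard deviation (Eq. (4)) of the superpixel
    each pixel belongs to."""
    labels = slic(img, n_segments=n_segments, channel_axis=None)
    grades = np.zeros(img.shape, dtype=float)
    for lbl in np.unique(labels):
        m = labels == lbl
        grades[m] = img[m].std()
    return grades

def superpixel_fuse(i1, i2, smooth_sigma=5.0):
    """Eq. (7): sigmoid of the grade difference, normalized to [0, 1] and
    Gaussian-smoothed, then used as the blending mask of Eq. (3)."""
    d = grade_map(i1) - grade_map(i2)
    mask = 1.0 / (1.0 + np.exp(-d))
    mask = (mask - mask.min()) / (mask.max() - mask.min() + 1e-12)
    mask = gaussian_filter(mask, smooth_sigma)
    return mask * i1 + (1.0 - mask) * i2
```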
RESULTS
The proposed approach is evaluated on the NIR-RGB multispectral dataset of [3], since it contains natural images that can be captured by any typical camera, both in the wild and indoors. This dataset contains 954 pairs of aligned multispectral images and is divided into content types, such as country (104 images), mountain (110 images), and urban (116 images).
To assess that the produced fused images are meaningful, i.e., that they contain the color information of the RGB together with the far details of the NIR images, they are evaluated both quantitatively, by edge-map preservation and SSIM scores [27, 21], and qualitatively, through comparison with the outcomes of the well-known PCA fusion [26] and spectral fusion [18]. Note that all the compared methods are of linear complexity, delivering runtime results within a few milliseconds. Quantitative evaluation is performed by measuring edge preservation in the multispectral image fusion, as this estimates how much of the edge content and details of the multispectral images remain in the fused images.
By defining $C_1, C_2$ as the Canny [5] edge maps of the input images, and $C_F$ as the edge map of the fused image, the edge preservation metric is expressed by $\frac{1}{2}\sum_{i} \frac{C_i \cdot C_F}{C_i}$. Figure 3 shows examples of multispectral images processed by the proposed fusion approach from the country, mountain, and urban image categories. It reveals that the fused images contain details invisible in the RGB channels due to obstacles formed by haze, fog, or clouds. Moreover, they do not show high-frequency artifacts. It is important to note that although the fusion masks are based on local patches, they cannot be detected in the fused images. Figure 4 compares the local approach to global PCA fusion and spectral fusion. The proposed fusion is more informative and combines information from both spectral channels. The far mountains can be seen only with this method, and neither with the PCA approach nor with spectral fusion. Although the spectral approach also emphasizes details, it does so globally rather than locally like the proposed method. This qualitative assessment suggests that local weighting of pixel fusion enhances fusion quality. Table 1 shows that the proposed method achieves the highest percentage of Canny [5] edge preservation in the fused images, whatever their category. This provides evidence that the superpixel approach better maintains details from the input multispectral images. Finally, Table 2 reports the corresponding SSIM [27, 21] scores, which are known to correlate with the human visual system. It shows that the superpixel segmentation approach achieves the top SSIM.
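As an illustration, the edge preservation score can be computed as below, assuming OpenCV's Canny detector; the thresholds and the intersection-based normalization are our reading of the metric:

```python
import cv2
import numpy as np

def edge_preservation(i1, i2, fused, lo=100, hi=200):
    """Average fraction of each input's Canny edge pixels that also appear in
    the fused image's edge map."""
    c1, c2, cf = (cv2.Canny(np.uint8(np.clip(im, 0, 255)), lo, hi) > 0
                  for im in (i1, i2, fused))
    keep = lambda c: (c & cf).sum() / max(int(c.sum()), 1)
    return 0.5 * (keep(c1) + keep(c2))
```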
Both qualitative and quantitative results support the claim that the proposed method produces multispectral fused images that enhance visual information.
CONCLUSIONS
We introduced a new method for multispectral fusion that applies a spatial soft map based on superpixel segmentation of the input images. The method shows advantages over existing approaches, in that details of the input images are better preserved in the fusion result while the color information remains valid. In addition, we explained the theory behind the PCA and spectral fusion techniques and compared their principles to the proposed approach. As a whole, this paper presents a study of the problem of multispectral image fusion in the color RGB to NIR domain.
Fig. 1. Example of the proposed fusion method results. From left to right: input RGB image, the proposed fusion result, input NIR image. The proposed fusion contains both the color information of the RGB and the far details captured by the NIR image.
Fig. 2. Mask of pixel blending computed by the proposed method for the images shown in Figure 1. Darker shades indicate higher weights for the pixels of the RGB image.
Fig. 3. Outcomes of the proposed multispectral image fusion. From left to right: input RGB, fused, and input NIR images.
Fig. 4. Comparison of fused images produced by the proposed approach (left), global PCA fusion [9] (middle), and spectral fusion (right).
Table 1. Percentage of initial Canny edges remaining in the fused images. The proposed superpixel method better preserves edges than the other classic approaches to image fusion, whatever the image category [3].

Category    SuperPixel   PCA    Spectral
Country         54.7     53.1     51.3
Mountain        58.4     56.6     54.4
Urban           76.4     76.4     74.5
Street          59.3     59.7     55.9

Table 2. Structural similarity (SSIM) scores of the fused images.

Category    SuperPixel   PCA    Spectral
Country         81.5     74.1     78.9
Mountain        89.9     89.6     88.1
Urban           93.8     93.9     92.7
Street          87.5     87.3     85.5
[1] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(11):2274-2282, 2012.
[2] O. P. Agrawal. Formulation of Euler-Lagrange equations for fractional variational problems. Journal of Mathematical Analysis and Applications, 272(1):368-379, 2002.
[3] M. Brown and S. Süsstrunk. Multispectral SIFT for scene category recognition. In Computer Vision and Pattern Recognition (CVPR 2011), pages 177-184, Colorado Springs, June 2011.
[4] P. J. Burt and E. H. Adelson. The Laplacian pyramid as a compact image code. In Readings in Computer Vision, pages 671-679. Elsevier, 1987.
[5] J. Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, (6):679-698, 1986.
[6] Y. Chen, C. Li, P. Ghamisi, X. Jia, and Y. Gu. Deep fusion of remote sensing data for accurate classification. IEEE Geoscience and Remote Sensing Letters, 14(8):1253-1257, 2017.
[7] L. J. Chipman, T. M. Orr, and L. N. Graham. Wavelets and image fusion. In Proceedings, International Conference on Image Processing, volume 3, pages 248-251. IEEE, 1995.
[8] J. Guo, J. Yang, H. Yue, H. Tan, C. Hou, and K. Li. RSDehazeNet: Dehazing network with channel refinement for multispectral remote sensing images. IEEE Transactions on Geoscience and Remote Sensing, 59(3):2535-2549, 2020.
[9] C. He, Q. Liu, H. Li, and H. Wang. Multimodal medical image fusion based on IHS and PCA. Procedia Engineering, 7:280-285, 2010.
[10] B. K. Horn and B. G. Schunck. Determining optical flow. Artificial Intelligence, 17(1-3):185-203, 1981.
[11] A. P. James and B. V. Dasarathy. Medical image fusion: A survey of the state of the art. Information Fusion, 19:4-19, 2014.
[12] S. Jia, X. Deng, J. Zhu, M. Xu, J. Zhou, and X. Jia. Collaborative representation-based multiscale superpixel fusion for hyperspectral image classification. IEEE Transactions on Geoscience and Remote Sensing, 57(10):7770-7784, 2019.
[13] S. S. Kumar and S. Muttan. PCA-based image fusion. In Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, volume 6233, page 62331T. International Society for Optics and Photonics, 2006.
[14] S. Li, X. Kang, and J. Hu. Image fusion with guided filtering. IEEE Transactions on Image Processing, 22(7):2864-2875, 2013.
[15] D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91-110, 2004.
[16] N. Ofir and J.-C. Nebel. Classic versus deep approaches to address computer vision challenges. arXiv preprint arXiv:2101.09744, 2021.
[17] N. Ofir, S. Silberstein, H. Levi, D. Rozenbaum, Y. Keller, and S. D. Bar. Deep multi-spectral registration using invariant descriptor learning. In 2018 25th IEEE International Conference on Image Processing (ICIP), pages 1238-1242. IEEE, 2018.
[18] N. Ofir, S. Silberstein, D. Rozenbaum, Y. Keller, and S. D. Bar. Registration and fusion of multi-spectral images using a novel edge descriptor. In 2018 25th IEEE International Conference on Image Processing (ICIP), pages 1857-1861. IEEE, 2018.
[19] Y. N. Ofir. Classic versus deep learning approaches to address computer vision challenges: a study of faint edge detection and multispectral image registration. PhD thesis, Kingston University, 2021.
[20] G. Pajares and J. M. De La Cruz. A wavelet-based image fusion tutorial. Pattern Recognition, 37(9):1855-1872, 2004.
[21] U. Sara, M. Akter, and M. S. Uddin. Image quality assessment through FSIM, SSIM, MSE and PSNR - a comparative study. Journal of Computer and Communications, 7(3):8-18, 2019.
[22] A. Srivastava, V. Bhateja, and A. Moin. Combination of PCA and contourlets for multispectral image fusion. In Proceedings of the International Conference on Data Engineering and Communication Technology, pages 577-585. Springer, 2017.
[23] K. Takumi, K. Watanabe, Q. Ha, A. Tejero-De-Pablos, Y. Ushiku, and T. Harada. Multispectral object detection for autonomous vehicles. In Proceedings of the Thematic Workshops of ACM Multimedia 2017, pages 35-43, 2017.
[24] A. Toet. Hierarchical image fusion. Machine Vision and Applications, 3(1):1-11, 1990.
[25] J. Wagner, V. Fischer, M. Herman, S. Behnke, et al. Multispectral pedestrian detection using deep fusion convolutional neural networks. In ESANN, volume 587, pages 509-514, 2016.
[26] Z. Wang and A. C. Bovik. Mean squared error: Love it or leave it? A new look at signal fidelity measures. IEEE Signal Processing Magazine, 26(1):98-117, 2009.
[27] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600-612, 2004.
[28] Q. Wei, J. Bioucas-Dias, N. Dobigeon, and J.-Y. Tourneret. Hyperspectral and multispectral image fusion based on a sparse representation. IEEE Transactions on Geoscience and Remote Sensing, 53(7):3658-3668, 2015.
[29] Y. Zeng, W. Huang, M. Liu, H. Zhang, and B. Zou. Fusion of satellite images in urban area: Assessing the quality of resulting images. In 2010 18th International Conference on Geoinformatics, pages 1-4. IEEE, 2010.
[30] W. Zhao, D. Wang, and H. Lu. Multi-focus image fusion with a natural enhancement via a joint multi-level deeply supervised convolutional neural network. IEEE Transactions on Circuits and Systems for Video Technology, 29(4):1102-1115, 2018.
| []
|
[
"Statistical Dynamics of Religions and Adherents",
"Statistical Dynamics of Religions and Adherents"
]
| [
"M Ausloos \nSUPRATECS\nSart TilmanB5, B-4000LiègeBelgium\n",
"F Petroni \nSUPRATECS\nSart TilmanB5, B-4000LiègeBelgium\n"
]
| [
"SUPRATECS\nSart TilmanB5, B-4000LiègeBelgium",
"SUPRATECS\nSart TilmanB5, B-4000LiègeBelgium"
]
| []
| Religiosity is one of the most important sociological aspects of populations. All religions may evolve in their beliefs and adapt to the society developments. A religion is a social variable, like a language or wealth, to be studied like any other organizational parameter. Several questions can be raised, as considered in this study: e.g. (i) from a "macroscopic" point of view: How many religions exist at a given time? (ii) from a "microscopic" view point: How many adherents belong to one religion? Does the number of adherents increase or not, and how? No need to say that if quantitative answers and mathematical laws are found, agent based models can be imagined to describe such nonequilibrium processes. It is found that empirical laws can be deduced and related to preferential attachment processes, like on an evolving network; we propose two different algorithmic models reproducing as well the data. Moreover, a population growth-death equation is shown to be a plausible modeling of evolution dynamics in a continuous time framework. Differences with language dynamic competition are emphasized. | 10.1209/0295-5075/77/38002 | [
"https://arxiv.org/pdf/physics/0612032v1.pdf"
]
| 2,098,960 | physics/0612032 | 1a45ecddc71e5231983466119a56974d6f25b4e1 |
Statistical Dynamics of Religions and Adherents
4 Dec 2006 October 20, 2018
M Ausloos
SUPRATECS
Sart TilmanB5, B-4000LiègeBelgium
F Petroni
SUPRATECS
Sart TilmanB5, B-4000LiègeBelgium
Statistical Dynamics of Religions and Adherents
4 Dec 2006; October 20, 2018. arXiv:physics/0612032v1 [physics.soc-ph]
Religiosity is one of the most important sociological aspects of populations. All religions may evolve in their beliefs and adapt to the society developments. A religion is a social variable, like a language or wealth, to be studied like any other organizational parameter. Several questions can be raised, as considered in this study: e.g. (i) from a "macroscopic" point of view: How many religions exist at a given time? (ii) from a "microscopic" view point: How many adherents belong to one religion? Does the number of adherents increase or not, and how? No need to say that if quantitative answers and mathematical laws are found, agent based models can be imagined to describe such nonequilibrium processes. It is found that empirical laws can be deduced and related to preferential attachment processes, like on an evolving network; we propose two different algorithmic models reproducing as well the data. Moreover, a population growth-death equation is shown to be a plausible modeling of evolution dynamics in a continuous time framework. Differences with language dynamic competition are emphasized.
Introduction
All features of societies (beliefs, attitudes, behaviors, languages, wealth, etc.) are due to competition [1]. Recently, the dynamics of the world's languages, and especially their disappearance due to competition with other languages [2], has been of interest. It is fair to examine whether such considerations can be applied to religions.
We do not enter into any discussion of the definition of a religion; we recognize that there are various denominations, which can impair data gathering and subsequent analysis; like many, we admit to putting religions, philosophies, sects and rituals on the same footing. Idem for adherents or adepts; there are also agnostics, atheists or "not concerned". In fact, a similar set of considerations exists when discussing languages and dialects, slangs, etc. Indeed it is expected that there are many similarities, although also many differences 1 , between the diffusion, relaxation and distribution of languages and religions. What is their geographical distribution? What is their life time? How do they evolve, from monotheism to polytheism and "backwards"? How long does an adept/adherent remain in one religion? Moreover, even though many societies are thought to form a hierarchy due to a competition between individual concerns, as explained by Bonabeau et al. [3] or discussed by Sousa and Stauffer [4], such considerations for religion should be left for further investigation. These questions need much more reliable data than seems available and practical at this time. Thus, let us claim that we are not interested here in religion's origin, activity, history or hierarchy, but rather in statistical physics aspects of a non-equilibrium agent based system. We will then consider as parameters the numbers of adherents of each religion, and only these numbers will be treated as physics objects (and not the religions themselves).
To address these issues, we have followed classical scientific steps as in physics investigations. We have analyzed "empirical" data on the number of adherents of religions taken from two different freely available data sets.
The next scientific step is to analyze the data along modern statistical physics lines. Zipf and Pareto-like plots will be given. After deducing empirical laws, a theoretical modeling is in order. In view of the observed features, and following standard intuition, one thinks at once about two algorithmic "agent based" models describing preferential attachment on a network, as in the Potts model [5] of magnetism or the Axelrod model [1] in sociology, already applied in opinion formation studies [6].
Thereafter, studying the time evolution of several "main" religions, we observe that a microscopic interpretation is plausible along the lines of a growth Avrami equation in a continuous time framework. This equation seems more plausible than the generalized Verhulst equation for modeling the dynamics of language deaths [2], because the former allows better handling of "internal and/or external fields" (as those mentioned above, most of them missing in language dynamics) as well as (microscopic) homogeneous and/or heterogeneous fluctuations at an early stage of evolution, while the Verhulst equation of Abrams and Strogatz [2] is grossly empirical. Notice that languages were simulated also with other than Lotka-Verhulst-Volterra mechanisms [2]; see, e.g., Ref. [7].
Data
The first data set is taken from The International Data Base (IDB) [8]. Data on religions are included in table 58, which contains information on the population of 103 nations worldwide. The surveys were carried out between 1960 and 1992. The dataset records the number of adherents of 150 religions, taking into account about 2 billion people (1/3 of the present world population).
The second data set was taken from the World Christian Encyclopedia (WCE) [9]; it gives information on the number of adherents of the world's main religions and their main denominations (56 religions overall), considering the whole world population. From this data set we also have information on changes in the number of adherents of each religion during one century, from 1900 till 2000, measured over 5 year spans, with forecasts for 2025 and 2050. No need to say that further work should go back in history: the number of "religions" is highly time dependent, the more so when one distinguishes them at the level of denominations and sects; the number of adherents of a given religion is not fixed either. History is full of examples of individuals or entire groups of people changing their religion, for various reasons: following the "leader" (e.g. Constantinus, ...), "external pressure" (e.g. inquisition, ...), "internal pressure", or so-called adaptation under proselytism action...
One should also be aware that such surveys are biased and are hardly snapshots of a situation as in a laboratory experiment. Yet, beside these caveats, the main difference between the two data sets is in the information they give on religions with a small number of adherents. While this information is present (even if not for all considered nations, and only partially) in the first data set, the second data set does not consider small religious groups. It is also unclear how much distinction was made in the IDB and WCE surveys concerning denominations and sects adstrated to the main religions.
Zipf's and Pareto's distributions
The Zipf and Pareto distributions are shown in Fig. 1 for both data sets. Recall that the Zipf distribution results from a hierarchical ranking (of the considered religions according to their number of adherents). The Pareto distribution shows instead the number of religions with a number of adherents n greater than N, as a function of N. In Figs. 1(a) and 1(b), the Zipf and Pareto distributions are shown, respectively, for the first data set, while Figs. 1(c) and 1(d) show results for the second data set in different (so-called) years. It can be noticed that the Zipf distribution for both data sets can be fitted by a straight line, with different slopes, except for the tails, i.e. where religions with a very small or very high number of adherents are found (see caveats above). However, it is remarkable that a different behavior between the two data sets is found in the case of the Pareto distribution: for the IDB data set, Fig. 1(b), the Pareto distribution roughly follows a power law, $f(N) \propto N^{-0.4}$, at least for $N > 10^5$; this is not the case for the WCE data set, Fig. 1(d), where linearity is present only in a log-linear plot. Notice that the former exponent of the Pareto distribution is similar to that found in language studies [7]. Such an empirical non-trivial power law is consistent with a preferential attachment process [10] on a network, i.e. it is more likely that one has the religion of one's mother or neighbor....
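As an illustration of how such plots are built from a vector of adherent counts (a hypothetical helper, not code from the paper):

```python
import numpy as np
import matplotlib.pyplot as plt

def zipf_pareto_plots(adherents):
    """Rank-size (Zipf) and survival-count (Pareto) plots for a vector of
    religion sizes."""
    sizes = np.sort(np.asarray(adherents, dtype=float))[::-1]
    ranks = np.arange(1, sizes.size + 1)
    fig, (ax1, ax2) = plt.subplots(1, 2)
    # Zipf: number of adherents versus rank
    ax1.loglog(ranks, sizes, "o")
    ax1.set_xlabel("rank"); ax1.set_ylabel("number of adherents")
    # Pareto: number of religions with more than N adherents, versus N
    counts = np.array([(sizes > n).sum() for n in sizes])
    ax2.loglog(sizes, counts, "o")
    ax2.set_xlabel("N"); ax2.set_ylabel("religions with n > N")
    plt.show()
```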
Partial distribution functions
In order to compare the two data sets and their meaning or bias, and to observe the time evolution of adherence (or attachment), we have divided the population interval $[1, 10^9]$ into 18 bins of exponentially increasing size and filled each bin with the number of religions having that number of adherents (normalized to have the distribution area equal to 1). The result is a partial distribution function (pdf), Fig. 2, that can be fitted (i) with a Weibull distribution (symbol +), much used in life time (or failure) studies,
$$f(x) = \frac{1}{\beta}\, e^{-\frac{x-\mu}{\beta}}\, e^{-e^{-\frac{x-\mu}{\beta}}} \quad (1)$$
where $x = \log_{10}(n)$ and n is the number of adherents, and/or (ii) with a log-normal distribution (symbol x); both fits are quite similar, with a slight difference in the upper tail. For comparison, the best corresponding Gaussian distribution (continuous line) is shown in the same plot. This leads to considering that two empirical functions are possible, based on different concepts at this level of data acquisition: (i) birth-death processes 2 , (ii) multiplicative production with many independent factors. The same procedure can be applied to the WCE data set, whence obtaining the pdf's shown in Fig. 3 for different (so-called) years. To eliminate the effect of the increasing world population over the "years" considered, all pdf's for the different "years" were normalized to the same population number, taking 1900 as the reference population. A fit of these distributions with Eq. (1) is shown in Fig. 3. In order to plot all the pdf's on the same graph, each pdf has been successively displaced by 0.6; the apparent flatness of the pdf is due to the vertical rescaling. From this figure, a critical view of the data is implied: notice the break at $10^7$, indicating in our view an overestimation of adepts/adherents in the most prominent religions, or a lack of distinction between denominations, as can easily be understood [11]. This "emphasis" on the "winner takes all" in the WCE data, i.e. the fact that the empirical data result from summing up adherents of the most important sort of religion and of smaller related denominations into a single number corresponding to (the main) religion, hints at explaining the difference between the Pareto plots in Figs. 1(b) and 1(d).
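A sketch of the binning and the fit of Eq. (1), assuming scipy's curve_fit; the 18 exponential bins over $[1, 10^9]$ become uniform bins in the log10 variable:

```python
import numpy as np
from scipy.optimize import curve_fit

def eq1(x, mu, beta):
    """Eq. (1), with x = log10(n)."""
    z = (x - mu) / beta
    return np.exp(-z) * np.exp(-np.exp(-z)) / beta

def fit_pdf(adherents, n_bins=18):
    """Histogram the log10 sizes on n_bins exponentially growing bins over
    [1, 1e9] (uniform in log10) and fit Eq. (1) to the resulting pdf."""
    x = np.log10(np.asarray(adherents, dtype=float))
    edges = np.linspace(0.0, 9.0, n_bins + 1)
    pdf, _ = np.histogram(x, bins=edges, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    (mu, beta), _ = curve_fit(eq1, centers, pdf,
                              p0=(centers[np.argmax(pdf)], 1.0))
    return mu, beta
```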
Time evolution
Finally, it is easily accepted that the percentages of adherents are not fixed over time. Therefore a nucleation-growth-death process can be proposed, in analogy with crystal growth studies [12]. We consider that a microscopic-like, continuous time differential equation can be written for the evolution of the number of adherents (in terms of percentage with respect to the world population) of the world's main religions, as for competing entities, of the type [13]
$$\frac{d}{dt}\, g(t) = S\, k(t)\, [1 - g(t)]\, \frac{dV_n}{dt} \quad (2)$$
where, adapting this Avrami-Kolmogorov equation to our case, g(t) counts the fraction of adherents of a given religion, $V_n$ is connected with the total world population, S is a parameter to be determined, and $k(t) \propto t^{-h}$, where h is a parameter to be deduced in each case, measuring the attachment-growth (or death) process in this continuous time approximation. This should be contrasted with the Lotka-Volterra-Verhulst mechanistic approach (for languages), which hardly allows for nucleation, dissipation and/or time-delayed correlations of different entities, in contrast to generalizations of Eq. (2) using such physical features.
A few examples of religions for which the number of adherents is increasing (e.g., Islam), decaying (e.g., ethnoreligions and Buddhism) or rather stable (e.g., Christianity) are shown in Fig. 4. The data can be well fitted by the solution of the Avrami-Kolmogorov growth-death equation, Eq. (2). The values of h for the considered religions, as obtained by a least-squares best fit, are reported in the plot. The parameter h values and their meaning deserve a short explanation and discussion. The parameter can be thought of as a reproduction rate, as in the Verhulst logistic equation, or a true attachment rate, as in sexual networks [14] or in molecular processes [15]. It is interesting to observe that h can be positive or negative, also indicating the possibility of detachment. Other parametrizations of k(t) can be imagined and are possible. Our theoretical law, derived elsewhere from first principles [13], concludes the present scientific analysis in showing that a level of predictability can be reached on the evolutions.
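For illustration, the special case of Eq. (2) in which $dV_n/dt$ is absorbed into the constant can be integrated in closed form and fitted; the sketch below uses synthetic data, and the parameter values are arbitrary:

```python
import numpy as np
from scipy.optimize import curve_fit

def avrami(t, g0, s, h, t0=1900.0):
    """Closed-form solution of dg/dt = s * t**(-h) * (1 - g), a reduced form
    of Eq. (2) with dV_n/dt absorbed into the constant s (h != 1 assumed)."""
    tau = (t ** (1.0 - h) - t0 ** (1.0 - h)) / (1.0 - h)
    return 1.0 - (1.0 - g0) * np.exp(-s * tau)

years = np.arange(1900.0, 2001.0, 5.0)
rng = np.random.default_rng(0)
# Synthetic demo data: a religion growing from 12% of the world population
share = avrami(years, 0.12, 0.5, 0.8) + 0.002 * rng.normal(size=years.size)
(g0, s, h), _ = curve_fit(avrami, years, share, p0=(0.1, 0.3, 0.5))
print("fitted attachment exponent h =", h)
```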
Conclusions
In conclusion, as for languages or wealth, one can recognize religions as a signature of population dynamics. Even though the characteristic time scales are different, and religion dynamics is more complex than language dynamics because of the presence of external fields and spontaneous nucleations, the empirical ranking laws look similar. Therefore similar growth-death agent based models can be thought of. Yet, there are useful differences to be expected (and found) which lead to models different from those describing language death and appearance. We propose an algorithmic approach based on attachment processes for the macroscopic point of view, not deciding on the statistical alternative, i.e. Weibull or log-normal law, and a diffusion growth rate based equation for modeling the data at the microscopic level. There are open problems for ongoing or further investigations, taking into account the data available/reliable at this time, such as looking for (time dependent) geographical effects, like clustering, or working with other definitions, like normalizing with respect to some population size or country surface, or GDP, or another socio-economic index, allowing one to build correlation matrices and search for socio-economic field influence.
Figure Captions
Figure 1 - Zipf's and Pareto's distributions. Subplots (a) and (c) show the Zipf distribution for the IDB and WCE data sets, respectively. On the y axis is the number of adherents; on the x axis, the ranked religions. Subplots (b) and (d) show the Pareto distributions for these data sets; these plots show the number of religions (y axis) with a number of adherents n > N as a function of N. The axis scales have been chosen to highlight linear regions.
Figure 2 - Partial distribution function (pdf) of adherents. The distribution of the number of adherents of religions from the IDB data set is shown (squares); an exponentially increasing bin size is used for the x axis. The pdf is fitted with Weibull (+) or log-normal (x) distributions and compared with the best Gaussian fit (continuous line).
Figure 3 - Time evolution of partial distribution functions of religion sizes. The distribution of the number of adherents of religions from the WCE data set is shown, according to an exponentially increasing bin size on the x axis. Results for different "years" are vertically displaced by 0.6 in order to have them on the same plot. The fit is done using a Weibull distribution (continuous lines).
Figure 4 - Time evolution of adherents from the WCE data set. The plot shows the percentage of adherents for 4 typical world religions as a function of time. Each value of the attachment parameter h, as given by the best fit, is reported in the plots.
We realize that x in Eq. (1) is the size of the population, while the variable of the Weibull distribution is rather the strength of to-be-broken bonds in a "time to failure" analysis. If there is a one-to-one correspondence between the x and y axes in cause-effect relations, such a change in meaning is only a change in notations. Otherwise, hysteresis effects are to be considered. This goes beyond our present study.
Acknowledgments
The work by FP has been supported by European Commission Project E2C2 FP6-2003-NEST-Path-012975, Extreme Events: Causes and Consequences. Critical and encouraging comments by A. Morelli have been very valuable. The referees should be thanked, moreover, for their warning and for putting pressure on us to emphasize that we merely treat religions and adherents as physics variables, so that our results and their interpretation never have the intention of vilifying any religion, sect, person, etc.
[1] R. Axelrod, J. Confl. Res. 41 (1997) 203.
[2] D. M. Abrams, S. H. Strogatz, Nature 424 (2003) 900.
[3] E. Bonabeau, G. Theraulaz, J. L. Deneubourg, Physica A 217 (1995) 373.
[4] A. O. Sousa, D. Stauffer, Int. J. Mod. Phys. C 11 (2000) 1063.
[5] C. Tsallis, A. C. N. de Magalhaes, Phys. Reports 268 (1996) 305; A. R. R. Papa, C. Tsallis, Phys. Rev. E 57 (1998) 3923.
[6] J. Holyst, K. Kacperski, F. Schweitzer, Physica A 285 (2000) 199.
[7] V. M. de Oliveira, M. A. F. Gomes, I. R. Tsang, Physica A 361 (2006) 361.
[8] The International Data Base (IDB) is a computerized source of demographic and socioeconomic statistics for 227 countries and areas of the world. The IDB provides quick access to specialized information, with emphasis on demographic measures, for individual countries or selected groups of countries. The major types of data available in the IDB include: population by age and sex; vital rates, infant mortality, and life tables; fertility and child survivorship; migration; marital status; family planning; ethnicity, religion, and language; literacy; labor force, employment, and income; households. Sources of the data include: U.S. Census Bureau estimates and projections, national statistics offices, and United Nations specialized agencies (ILO, UNESCO, WHO).
[9] D. Barrett, G. Kurian, T. Johnson, World Christian Encyclopedia (2nd edition), Oxford University Press, New York (2001).
[10] R. Albert, A.-L. Barabási, Phys. Rev. Lett. 85 (2000) 5234.
[11] A. Morelli, private communication.
[12] R. Cloots, N. Vandewalle, M. Ausloos, J. Cryst. Growth 166 (1996) 816.
[13] A. Gadomski, J. Phys. II France 6 (1996) 1537.
[14] J. H. Jones, M. S. Handcock, Proc. R. Soc. Lond. B 270 (2003) 1123.
[15] M. Ausloos, N. Vandewalle, R. Cloots, Phil. Mag. Lett. 73 (1996) 101.
| []
|
[
"Bell's inequality for n spin-s particles",
"Bell's inequality for n spin-s particles",
"Bell's inequality for n spin-s particles",
"Bell's inequality for n spin-s particles"
]
| [
"Adán Cabello \nDepartamento de Física Aplicada II\nUniversidad de Sevilla\n41012SevillaSpain\n",
"Adán Cabello \nDepartamento de Física Aplicada II\nUniversidad de Sevilla\n41012SevillaSpain\n"
]
| [
"Departamento de Física Aplicada II\nUniversidad de Sevilla\n41012SevillaSpain",
"Departamento de Física Aplicada II\nUniversidad de Sevilla\n41012SevillaSpain"
]
| []
| The Mermin-Klyshko inequality for n spin-1/2 particles and two dichotomic observables is generalized to n spin-s particles and two maximal observables. It is shown that some multiparty multilevel Greenberger-Horne-Zeilinger states [A. Cabello, Phys. Rev. A 63, 022104 (2001)] maximally violate this inequality for any s. For a fixed n, the magnitude of the violation is constant for any s, which provides a simple demonstration and generalizes the conclusion reached by Gisin and Peres for two spin-s particles in the singlet state [Phys. Lett. A 162, 15 (1992)]. For a fixed s, the violation grows exponentially with n, which provides a generalization to any s of Mermin's conclusion for n spin-1/2 particles [Phys. Rev. Lett. 65, 1838 (1990)]. | 10.1103/physreva.65.062105 | [
"https://arxiv.org/pdf/quant-ph/0202126v3.pdf"
]
| 118,935,028 | quant-ph/0202126 | d153a6a08b9bd7df89309ea92a3c8533735f584c |
Bell's inequality for n spin-s particles
7 Jun 2002
Adán Cabello
Departamento de Física Aplicada II
Universidad de Sevilla
41012SevillaSpain
Bell's inequality for n spin-s particles
7 Jun 2002 (Dated: October 27, 2018). PACS numbers: 03.65.Ud, 03.65.Ta
The Mermin-Klyshko inequality for n spin-1/2 particles and two dichotomic observables is generalized to n spin-s particles and two maximal observables. It is shown that some multiparty multilevel Greenberger-Horne-Zeilinger states [A. Cabello, Phys. Rev. A 63, 022104 (2001)] maximally violate this inequality for any s. For a fixed n, the magnitude of the violation is constant for any s, which provides a simple demonstration and generalizes the conclusion reached by Gisin and Peres for two spin-s particles in the singlet state [Phys. Lett. A 162, 15 (1992)]. For a fixed s, the violation grows exponentially with n, which provides a generalization to any s of Mermin's conclusion for n spin-1/2 particles [Phys. Rev. Lett. 65, 1838 (1990)].
I. INTRODUCTION
Einstein, Podolsky, and Rosen (EPR) [1] believed that the results of experiments on a local subsystem of a composite physical system which can be predicted with certainty from the results of local experiments in other regions would be determined by the local properties of the subsystem. However, the violation of Bell's inequality by quantum mechanics [2] meant a spectacular departure from EPR's point of view. According to quantum mechanics, the results of local experiments cannot be described in terms of classical local properties.
On the other hand, it was commonly accepted that classical properties would emerge for large quantum systems. The adjective "large" usually means either systems composed of many particles or systems with a high number of internal degrees of freedom. Early violations of Bell's inequalities [2, 3] involved pairs of spin-1/2 particles in the singlet state [4]. However, the EPR argument is also applicable to pairs of spin-s particles in the singlet state or to systems of n spin-1/2 particles in Greenberger, Horne, and Zeilinger (GHZ) states [5]. Violations of Bell's inequalities for the two spin-s singlet state have been extensively discussed [6, 7, 8, 9, 10, 11, 12, 13, 14, 15] and have stimulated some recent experiments for s = 1 [16, 17]. On the other hand, violations of Bell's inequalities for n spin-1/2 particles have attracted much attention [18, 19, 20, 21, 22, 23, 24, 25, 26, 27]. However, a study of Bell's inequalities for systems of n spin-s particles, and of the limit of both n → ∞ and s → ∞, was still missing.
In order to place our discussion in a suitable context, we shall review some of the earlier violations of Bell's inequalities for two spin-s particles and for n spin-1/2 particles.
Mermin [6] showed that a pair of spin-s particles in the singlet state violates a particular Bell's inequality involving four local spin component observables, $S_1 \cdot \hat a$, $S_1 \cdot \hat b$, $S_2 \cdot \hat b$, and $S_2 \cdot \hat c$. He found that the range of settings for which the violation occurs vanishes as 1/s when s → ∞. Subsequently, however, Mermin and Schwarz [7] found evidence that this vanishing might be peculiar to the chosen inequality (see also [12, 13]).
Ögren [10] studied the original Bell's inequality [2] for three different ways of defining dichotomic observables from $S_1 \cdot \hat a$, $S_1 \cdot \hat b$, $S_2 \cdot \hat b$, and $S_2 \cdot \hat c$. He found that the range of settings for which the singlet state of two spin-s particles violates Bell's inequality is of the same magnitude, at least for small s, and larger than those obtained in Ref. [6].
Peres [14] and Gisin and Peres [15] found dichotomic operators such that two spin-s particles in the singlet state violate the Clauser-Horne-Shimony-Holt (CHSH) inequality [3], and such that the magnitude of the violation (that is, the ratio of the quantum correlation to the maximal classical one) tends to a constant [14] or is constant [15] for any s.
An experimental violation of Bell's inequalities for an optical analog of the singlet state of two spin-1 particles has been recently reported in Ref. [17].
On the other hand, Mermin [18] has shown that the correlations found by n spacelike separated observers who share n spin-1/2 particles in a GHZ state maximally violate a Bell's inequality involving two local spin component observables per particle, by a factor that increases exponentially with n. Mermin's inequality for n spin-1/2 particles distinguishes between the n even and odd cases. Ardehali [21] derived a similar inequality that leads to a higher violation for even n. Finally, Belinsky and Klyshko [22] proposed an elegant single inequality that leads to a maximal violation for arbitrary n. This inequality is mostly referred to as the Mermin-Klyshko inequality.
The structure of this paper is as follows: In Sec. II we introduce a generalization for any spin of the Mermin-Klyshko inequality using two maximal observables (i.e., represented by nondegenerated operators) per particle. In Sec. III we show that maximally entangled states of two spin-s particles and some multiparticle multilevel GHZ states defined in Ref. [28] maximally violate the inequality presented in Sec. II.
In Sec. IV we present the conclusions of our research: On one hand, we reach Gisin and Peres's conclusion in Ref. [15], namely that for two particles in a maximally entangled state the ratio of the quantum correlation to the maximal classical one is constant as s grows. Moreover, we extend Gisin and Peres's conclusion to systems of three or more particles. On the other hand, we generalize to any s Mermin's conclusion in Ref. [18] that the ratio of the quantum correlation to the maximal classical one grows exponentially with the number of particles. In addition, the inequality presented in Sec. II would allow us to translate the proofs of Bell's theorem without inequalities for multiparticle multilevel GHZ states introduced in Ref. [28] into feasible experimental tests.
II. THE MERMIN-KLYSHKO INEQUALITY FOR N SPIN-S PARTICLES

Let us consider a system with n ≥ 2 distant spin-s particles, 1, . . . , n, shared by n distant observers who perform spacelike separated local experiments, chosen between $A_j^{(s)}$ and $B_j^{(s)}$, physical observables on particle j taking values −s, −s + 1, . . . , or s. The correlation $\langle A_1^{(s)} \cdots A_n^{(s)} \rangle$ is the average of the products of the outcomes $m_1 \cdots m_n$ obtained when $A_j^{(s)} = m_j$ is measured. Let us consider the linear combination of $2^{2f(n/2)}$ correlations, where f(x) is the greatest integer less than or equal to x, defined recursively by
$$M_n^{(s)} = M_{n-1}^{(s)} \left( A_n^{(s)} + B_n^{(s)} \right) + K_{n-1}^{(s)} \left( A_n^{(s)} - B_n^{(s)} \right), \quad (2)$$
where $K_n^{(s)}$ is obtained from $M_n^{(s)}$ by interchanging all the A's and B's, letting
$$M_2^{(s)} = A_1^{(s)} A_2^{(s)} + A_1^{(s)} B_2^{(s)} + B_1^{(s)} A_2^{(s)} - B_1^{(s)} B_2^{(s)} \quad (3)$$
and
$$M_3^{(s)} = 2 \left( A_1^{(s)} B_2^{(s)} B_3^{(s)} + B_1^{(s)} A_2^{(s)} B_3^{(s)} + B_1^{(s)} B_2^{(s)} A_3^{(s)} - A_1^{(s)} A_2^{(s)} A_3^{(s)} \right). \quad (4)$$
In any theory in which local variables of particle j determine the results of the local observables $A_j^{(s)}$ and $B_j^{(s)}$, the following bound holds:
$$-2^{n-1} s^n \le M_n^{(s)} \le 2^{n-1} s^n. \quad (5)$$
This is the generalization to spin-s of the Mermin-Klyshko inequality. If $A_j$ and $B_j$ are observables taking values −1 or 1 (i.e., for s = 1), or for s = 1/2 and choosing units in which 2ℏ = 1, then we obtain the Mermin-Klyshko inequality [22]. If, in addition, n is odd and greater than 3, then (up to a factor $2^{f((n-1)/2)}$) we obtain Mermin's inequality [18]. If n = 2 we obtain the CHSH inequality [3].
The bounds in inequality (5) can be easily derived as follows. In any local-realistic theory, for any individual system, the observables $A_j$ and $B_j$ have predefined values $a_j$ and $b_j$, respectively. Each of these values is constrained to lie between −s and s. Since $M_n^{(s)}$ is linear in each local observable (fixing the values of the other 2n − 1 local observables), $M_n^{(s)}$ takes its extremal values when the local observables take their extremal values, −s or s. The various combinations of $a_j = \pm s$ and $b_j = \pm s$ always give $\pm 2^{n-1} s^n$, Q.E.D.
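This enumeration argument is easy to verify numerically. The sketch below seeds the recursion with $M_1 = A_1$ and $K_1 = B_1$, which reproduces Eq. (3) (an A ↔ B relabeling yields the form printed in Eq. (4), with the same bounds); the seed is our assumption, since the base case is lost in this extraction:

```python
import itertools

def mermin_klyshko(a, b):
    """M_n evaluated on deterministic local values a[j], b[j], via the
    recursion of Eq. (2) with seed M_1 = a_1, K_1 = b_1; K is M with the
    roles of A and B interchanged."""
    m, k = a[0], b[0]
    for aj, bj in zip(a[1:], b[1:]):
        m, k = m * (aj + bj) + k * (aj - bj), k * (aj + bj) - m * (aj - bj)
    return m

def classical_bound(n, s):
    """Since M_n is multilinear, its local-realistic extrema are attained at
    a_j, b_j = +/-s; exhaustive enumeration returns 2**(n-1) * s**n."""
    vals = (-s, s)
    return max(abs(mermin_klyshko(a, b))
               for a in itertools.product(vals, repeat=n)
               for b in itertools.product(vals, repeat=n))

for n in (2, 3, 4):
    assert classical_bound(n, 1) == 2 ** (n - 1)   # bound of inequality (5), s = 1
```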
III. VIOLATIONS OF THE GENERALIZED MERMIN-KLYSHKO INEQUALITY
For an n spin-s particle system in a quantum pure state |ψ⟩, the quantum correlation of $A_1, \ldots, A_n$ is defined as $\langle \psi | \hat A_1^{(s)} \otimes \cdots \otimes \hat A_n^{(s)} | \psi \rangle$. Let us consider the following local operators on particle j:
$$\hat A_j^{(s)} = \mathrm{diag}\,(s, \, s-1, \, \ldots, \, -s+1, \, -s), \quad (6)$$
$$\hat B_j^{(s)} = \mathrm{antidiag}\,(s, \, s-1, \, \ldots, \, s-1, \, s), \quad (7)$$
where the entries of $\hat B_j^{(s)}$ run along the anti-diagonal. $\hat A_j^{(s)}$ and $\hat B_j^{(s)}$ are $(2s+1) \times (2s+1)$ matrices with nondegenerate eigenvalues $-s, -s+1, \ldots, s-1, s$.
In addition, let us recursively define the following operator on the composite system consisting of n ≥ 2 spin-s particles:
$$\hat M_n^{(s)} = \hat M_{n-1}^{(s)} \otimes \left( \hat A_n^{(s)} + \hat B_n^{(s)} \right) + \hat K_{n-1}^{(s)} \otimes \left( \hat A_n^{(s)} - \hat B_n^{(s)} \right). \quad (8)$$
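A numerical sketch of this construction follows; the anti-diagonal reading of $\hat B$ in Eq. (7) and the seed $\hat M_1 = \hat A$, $\hat K_1 = \hat B$ are our assumptions from the garbled source. The largest eigenvalue of $\hat M_n^{(s)}$ can then be compared with the value $2^{3(n-1)/2} s^n$ claimed below:

```python
import numpy as np

def local_ops(s):
    """A-hat (Eq. (6)): diagonal with entries s, s-1, ..., -s.
    B-hat (Eq. (7)): anti-diagonal with the absolute values of the same
    entries, which has the nondegenerate spectrum -s, ..., s."""
    d = np.arange(s, -s - 0.5, -1.0)       # s, s-1, ..., -s (half-integer s too)
    return np.diag(d), np.fliplr(np.diag(np.abs(d)))

def bell_operator(n, s):
    """Recursive Bell operator of Eq. (8); K-hat is M-hat with A and B
    interchanged."""
    A, B = local_ops(s)
    M, K = A, B
    for _ in range(n - 1):
        M, K = (np.kron(M, A + B) + np.kron(K, A - B),
                np.kron(K, A + B) - np.kron(M, A - B))
    return M

for n in (2, 3):
    for s in (0.5, 1.0, 1.5):
        q_max = np.linalg.eigvalsh(bell_operator(n, s)).max()
        claim = 2.0 ** (1.5 * (n - 1)) * s ** n   # 2^{3(n-1)/2} s^n, Eq. (10)
        print(f"n={n}, s={s}: max quantum value {q_max:.4f} vs claimed {claim:.4f}")
```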
For n = 2, $\mu_n^{(s)}$ is a maximally entangled state of two spin-s particles. For n ≥ 3, $\mu_n^{(s)}$ is a generalized GHZ state, as defined in Ref. [28], and allows us to develop an EPR-like argument for the observables $A_j$ and $B_j$. For n odd (even), $\mu_n^{(s)}$ can be used to develop a GHZ-like proof without inequalities of Bell's theorem [29] (see [28] for the details).
In this paper, however, we are interested in violations of inequality (5). For that purpose, let us take a look at the prediction of quantum mechanics for the state $\mu_n^{(s)}$. The quantity $M_n^{(s)}$ is represented in quantum mechanics by the self-adjoint operator $\hat M_n^{(s)}$; therefore, as can be immediately seen from Eq. (9), according to quantum mechanics the expected value of $M_n^{(s)}$ in the state $\mu_n^{(s)}$ is
$$\langle \mu_n^{(s)} | \hat M_n^{(s)} | \mu_n^{(s)} \rangle = 2^{3(n-1)/2} s^n. \quad (10)$$
This value violates inequality (5). Indeed, it can be proved that this is the maximum allowed violation of inequality (5). The proof is simple for n odd: then $\hat M_n^{(s)}$ is a linear combination, with coefficients $\pm 2^{(n-1)/2}$, of $2^{n-1}$ operators of the type $\hat O_1^{(s)} \otimes \cdots \otimes \hat O_n^{(s)}$ with $\hat O_j \in \{\hat A_j, \hat B_j\}$, and each of the corresponding correlations is bounded by $\pm s^n$. Therefore, for n odd, the maximum value that $M_n^{(s)}$ can reach is, by definition, $2^{3(n-1)/2} s^n$, Q.E.D.
If n is even, the proof is more difficult (for n = 2 and s = 1, or for s = 1/2 and choosing units in which 2ℏ = 1, proofs can be found in Refs. [30, 31]).
IV. CONCLUSIONS
The ratio between the quantum correlation given by Eq. (10) and the maximal classical one, which appears in Eq. (5), is
$$\frac{\langle \mu_n^{(s)} | \hat M_n^{(s)} | \mu_n^{(s)} \rangle}{\max M_n^{(s)}} = 2^{(n-1)/2} \quad \forall s. \quad (11)$$
That is, for a fixed n ≥ 2 the contradiction between quantum mechanics and local realism is constant as the spin s increases. For n = 2 the same conclusion was reached by Gisin and Peres in Ref. [15]. Therefore, our analysis is in agreement with Gisin and Peres's and generalizes it to systems of n ≥ 2 particles. On the other hand, ratio (11) shows that for a fixed s, the correlations found by n distant observers violate the classical bound by a factor that increases exponentially with the number n of particles. For s = 1/2 the same conclusion was reached by Mermin in Ref. [18]. Thus our analysis generalizes Mermin's to systems of spin s ≥ 1/2. Therefore, the approach presented in this paper unifies and generalizes some previous results, in particular those in Refs. [15, 18, 22], and unifies the conclusions reached in Refs. [14, 15, 18]: Neither a large spin nor a large number of particles nor a large number of large spin particles guarantees classical behavior.
In addition, this approach allows us to translate the proofs without inequalities of Bell's theorem for multiparty multilevel GHZ states introduced in Ref. [28] into Bell's inequalities that can be tested in real experiments.
[1] A. Einstein, B. Podolsky, and N. Rosen, Phys. Rev. 47, 777 (1935).
[2] J. S. Bell, Physics (Long Island City, N.Y.) 1, 195 (1964).
[3] J. F. Clauser, M. A. Horne, A. Shimony, and R. A. Holt, Phys. Rev. Lett. 23, 880 (1969).
[4] D. Bohm, Quantum Theory (Prentice-Hall, Englewood Cliffs, New Jersey, 1951).
[5] D. M. Greenberger, M. A. Horne, and A. Zeilinger, in Bell's Theorem, Quantum Theory, and Conceptions of the Universe, edited by M. Kafatos (Kluwer, Dordrecht, 1989), p. 69.
[6] N. D. Mermin, Phys. Rev. D 22, 356 (1980).
[7] N. D. Mermin and G. M. Schwarz, Found. Phys. 12, 101 (1982).
[8] A. Garg and N. D. Mermin, Phys. Rev. Lett. 49, 901 (1982); Phys. Rev. Lett. 49, 1294 (1982).
[9] A. Garg and N. D. Mermin, Phys. Rev. D 27, 339 (1983).
[10] M. Ögren, Phys. Rev. D 27, 1766 (1983).
[11] S. L. Braunstein and C. M. Caves, Phys. Rev. Lett. 61, 662 (1988).
[12] A. L. Sanz and J. L. Sánchez Gómez, An. Fis., Ser. A 86, 77 (1990).
[13] M. Ardehali, Phys. Rev. D 44, 3336 (1991).
[14] A. Peres, Phys. Rev. A 46, 4413 (1992).
[15] N. Gisin and A. Peres, Phys. Lett. A 162, 15 (1992).
[16] A. Lamas-Linares, J. C. Howell, and D. Bouwmeester, Nature (London) 412, 887 (2001).
[17] J. C. Howell, A. Lamas-Linares, and D. Bouwmeester, Phys. Rev. Lett. 88, 030401 (2002).
[18] N. D. Mermin, Phys. Rev. Lett. 65, 1838 (1990).
[19] S. M. Roy and V. Singh, Phys. Rev. Lett. 67, 2761 (1991).
[20] R. K. Clifton, M. L. G. Redhead, and J. N. Butterfield, Found. Phys. 21, 149 (1991).
[21] M. Ardehali, Phys. Rev. A 46, 5375 (1992).
[22] A. V. Belinsky and D. N. Klyshko, Usp. Fiz. Nauk 163, 1 (1993) [Phys. Usp. 36, 653 (1993)].
[23] S. L. Braunstein and A. Mann, Phys. Rev. A 47, R2427 (1993).
[24] M. Żukowski and D. Kaszlikowski, Phys. Rev. A 56, R1682 (1997).
[25] N. Gisin and H. Bechmann-Pasquinucci, Phys. Lett. A 246, 1 (1998).
[26] R. F. Werner and M. M. Wolf, Phys. Rev. A 61, 062102 (2000).
[27] R. F. Werner and M. M. Wolf, Phys. Rev. A 64, 032112 (2001).
[28] A. Cabello, Phys. Rev. A 63, 022104 (2001).
[29] For n odd and even, the state can be used to develop a GHZ-like proof of Bell's theorem without inequalities in both cases.
[30] B. S. Cirel'son, Lett. Math. Phys. 4, 93 (1980).
[31] L. J. Landau, Phys. Lett. A 120, 54 (1987).
| []
|
[
"Polynomial Approximation of Value Functions and Nonlinear Controller Design with Performance Bounds",
"Polynomial Approximation of Value Functions and Nonlinear Controller Design with Performance Bounds"
]
| [
"Morgan Jones ",
"Matthew M Peet "
]
| []
| []
| For any suitable Optimal Control Problem (OCP) there exists a value function, defined as the unique viscosity solution to the Hamilton-Jacobi-Bellman (HJB) Partial Differential Equation (PDE), which can be used to design an optimal feedback controller for the given OCP. In this paper we approximately solve the HJB PDE by proposing a sequence of Sum-of-Squares (SOS) problems, each of which yields a polynomial subsolution to the HJB PDE. We show that the resulting sequence of polynomial subsolutions converges to the value function of the OCP in the $L^1$ norm. Furthermore, for each polynomial subsolution in this sequence we show that the associated sequence of sublevel sets converges to the sublevel set of the value function of the OCP in the volume metric. Next, for any approximate value function, obtained from an SOS program or any other method (e.g. discretization), we construct an associated feedback controller, and show that the sub-optimality of this controller as applied to the OCP is bounded by the distance between the approximate and true value functions of the OCP in the $W^{1,\infty}$ (Sobolev) norm. Finally, we demonstrate numerically that by solving our proposed SOS problem we are able to accurately approximate value functions, design controllers and estimate reachable sets. | null | [
"https://export.arxiv.org/pdf/2010.06828v4.pdf"
]
| 222,341,274 | 2010.06828 | ad8aac76df52cf29ada0273416839ee58b7506e9 |
Polynomial Approximation of Value Functions and Nonlinear Controller Design with Performance Bounds
Morgan Jones
Matthew M Peet
Polynomial Approximation of Value Functions and Nonlinear Controller Design with Performance Bounds
I. INTRODUCTION

Consider a nested family of Optimal Control Problems (OCPs), each initialized by (x_0, t_0) ∈ R^n × [0, T], and each an optimization problem of the form

inf_{u,x} ∫_{t_0}^{T} c(x(t), u(t), t) dt + g(x(T))  (1)
subject to: ẋ(t) = f(x(t), u(t)) for all t ∈ [t_0, T], (x(t), u(t)) ∈ Ω × U for all t ∈ [t_0, T], x(t_0) = x_0.

The problem of solving OCPs (1) plays a central role in many practical applications, for instance in the design of non-pharmaceutical interventions in epidemics [1], optimal train operation [2], optimal maintenance strategies for manufacturing systems [3], etc.

Solving OCPs directly can be challenging. Fortunately, the problem of solving a family of OCPs (1) can be reduced to the problem of solving a Partial Differential Equation (PDE) [4]. From the principle of optimality, if (u*, x*) solve the OCP for (x_0, t_0), then (τ_t u*, x*(t)) (where τ_t u*(s) = u*(t + s) for all s ≥ 0) solves the OCP for (x*(t), t) for any t ∈ [t_0, T]. This can be used to show that if a function, V, satisfies the Hamilton-Jacobi-Bellman (HJB) PDE, defined as

∇_t V(x,t) + inf_{u∈U} {c(x,u,t) + ∇_x V(x,t)^T f(x,u)} = 0 for all (x,t) ∈ R^n × (0, T),  (2)
V(x,T) = g(x) for all x ∈ R^n,

then necessary and sufficient conditions for (u*, x*) to solve OCP (1) initialized by (x_0, t_0) are u*(t) = k(x*(t), t), ẋ*(t) = f(x*(t), u*(t)), and x*(t_0) = x_0, where

k(x,t) ∈ arg inf_{u∈U} {c(x,u,t) + ∇_x V(x,t)^T f(x,u)}.  (3)

(M. Jones: [email protected].)
For a given family of OCPs (1), if V satisfies Eq. (2), then V is called the Value Function (VF) of the OCP. If V is the VF, then for any (x,t), the value V (x,t) determines the optimal objective value of OCP (1) initialized by (x,t). Furthermore, the VF yields a solution to the OCP (1) initialized by (x 0 ,t 0 ) through application of Eq. (3). We call any k : Ω × [0, T ] → U that satisfies Eq. (3) a controller and we say this controller is the optimal controller for the OCP when V is the VF of the OCP.
Thus knowledge of the VF allows us to solve the nested family of OCPs in (1). Unfortunately, to find the VF, we must solve the HJB PDE, given in Eq. (2), and this PDE has no analytic solution. In the absence of an analytic solution, we often parameterize a family of candidate VFs and search for one which satisfies the HJB PDE. However, this is a nonconvex optimization problem since the HJB PDE is nonlinear. In this paper we view the search for a VF through the lens of convex optimization. Moreover, given an OCP, we are particularly interested in computing a sub-VF, a function that is uniformly less than or equal to the VF of the OCP (i.e., a function Ṽ such that Ṽ(x,t) ≤ V(x,t) for all (x,t) ∈ R^n × [0, T], where V is the VF of the OCP). We consider what happens when we relax the nonlinear equality constraints imposed by the HJB PDE to linear inequality constraints and tighten the optimization problem's feasible set to polynomials. In this context, given an OCP, we consider the following questions. Q1: Can we pose a sequence of convex optimization problems, each yielding a polynomial sub-VF that can be made arbitrarily "close" to the VF of the OCP? Q2: Can we bound the sub-optimality in performance of a controller constructed from some function V by the "distance" between V and the VF of the OCP?
A. Q1: Optimal Polynomial Sub-Value Functions
Over the years, many numerical methods have been proposed for solving the HJB PDE (2) for a given OCP. Within this literature, a substantial number of the algorithms are based on a finite-dimensional projection of the spatial domain (griding/meshing/discretization of the state space). In this class of algorithms we include (mixed) finite elements methods -an important example of which is [5]. Specifically, the approach in [5] yields an approximate VF with an error bound on the first order mixed L 2 norm -a bound which converges as the number of elements is increased (assuming the Cordes condition holds). Other examples of this class of methods include the discretization approaches in [6], [7]. For example, in [6], we find an algorithm which yields an approximate VF with an L ∞ error bound which converges as the level of discretization increases. Alternative non-grid based algorithms include the method of characteristics [4], which can be used to compute evaluations of VF at fixed (x,t) ∈ R n , and maxplus methods [8]. The result in [8] considers an OCP with linear dynamics and a cost function which is the point-wise maximum of quadratic functions. This max-plus approach yields an approximate VF with a converging error bound which holds on x ∈ R n , but increases with |x|.
While all of these numerical methods yield approximate VFs with associated approximation error bounds, the use of these functions for controller synthesis (see Q2) and reachable set estimation has been more limited (the connection between VFs, the HJB and reachable sets was made in [9]). This is due to the fact that the approximate VFs obtained from such discretization methods are difficult to manipulate and apart from being close to the true VF, have relatively few provable properties (such as being uniformly less than or greater to the true VF ie being sub or super-VFs). Being a sub or super-VF is an important property of any approximate VF. As shown in Cor. 1, sub/super-VFs can yield outer bounds on reachable sets that can be used to certify that the underlying system does not transition into regions of the state space deemed unsafe; a useful tool in the safety analysis of dynamical systems.
To address these issues, in this paper we focus on obtaining approximate VFs which are both polynomial and sub-VFs. Specifically, the use of polynomials ensures that the derivative of the approximated VFs can be efficiently computed (a useful property for solving the controller synthesis Eq. (3)), while the use of sub-VFs ensures that sublevel sets of the VF are guaranteed to contain the sublevel set of the true VF (see Cor. 1), and hence provide provable guarantees on the boundary of the reachable set (a useful property for safety analysis).
Substantial work on SOS relaxations of the HJB PDE for reachable set estimation and safety set analysis includes the carefully constructed optimization problems in [10], [11], [12], [13] and includes, of course, our work in [14], [15]. Such SOS relaxations of the HJB PDE can yield approximate VFs. However, there seems to be no prior work on using approximation theory to prove bounds on the sub-optimality of either controllers (see Q2) or corresponding reachable sets constructed from such approximated VFs. We note, however, that [12] did establish the existence of a polynomial subsolution to the HJB arbitrarily close to the true solution of the HJB in the framework of reachable sets. Treatments of the moment-based alternatives to the SOS approach includes [16], [17], [18], [19]. Another duality-based approach, found in [20], considers a density-based dual to the VF and uses finite elements method to iteratively approximate the density and VF.
In this paper we answer Q1 by considering "sub-solutions" to the HJB PDE (2). Specifically, a "sub-solution",Ṽ , to the HJB PDE (2) satisfies the relaxed inequality constraint
∇_t Ṽ(x,t) + c(x,u,t) + ∇_x Ṽ(x,t)^T f(x,u) ≥ 0  (4)
for all u ∈ U and (x,t) ∈ R^n × [0, T], which implies that if V is a VF, then Ṽ(x,t) ≤ V(x,t), i.e., Ṽ is a sub-VF. Then, given an OCP (1) and based on this relaxed version of the HJB PDE (4), we propose a sequence of SOS programming problems, indexed by the degree d ∈ N of the polynomial variables, and given in Eq. (61). The solution to each instance of the proposed sequence of optimization problems yields a polynomial P_d that is a sub-solution to the HJB PDE (2) (or sub-VF). We then show in Prop. 5 that for any VF V associated with the given OCP we have

lim_{d→∞} ||P_d − V||_{L^1} = 0.
Furthermore, in Prop. 6 we show that this implies that the sublevel sets of {P_d}_{d∈N} converge to the sublevel sets of any VF, V, of the OCP (with respect to the volume metric). Our proposed method of approximately solving the HJB PDE by solving an SOS programming problem is implemented via Semi-Definite Programming (SDP). SDP problems can be solved to arbitrary accuracy in polynomial time using interior point methods [21]. However, the number of variables in the SDP problem associated with an n-dimensional and d-degree SOS problem is of the order n^d [22], and therefore exponentially increases as d → ∞. Fortunately there exist several methods that improve the scalability of SOS [22], [23], but we do not discuss such methods in this paper.
B. Q2: Performance bounds for controllers constructed from approximate VFs
The use of approximate VFs to construct controllers has been well-treated in the literature, although such controllers often: apply only to OCPs with specific structure (typically dynamics are affine in the input variable, see [24] for linearization techniques that approximate non-input affine dynamics by input affine dynamics); do not have associated performance bounds; and/or assume differentiability of the VF. For example, in [25], [26], [27], [28], [29] policy iteration methods are proposed that alternate between finding approximations of the VF based on a controller and using the approximate VF to synthesizing controller. Also in [26] it was shown that the proposed policy iteration method converges under the rather restrictive assumption that the true VF is differentiable. Alternatively, grid based approaches that synthesize controllers can be found in [30], [31]. However, the method in [30] is only shown to yield a function that converges to the VF but no performance bound is given for the controller. In [31], convergence to the optimal controller is demonstrated numerically in certain cases, but no provable performance bound is given.
There are also results within the SOS framework for optimization of polynomials that use approximate VFs to construct controllers. For example, in [32] it was shown that the objective value of a specific class of OCP's using a controller constructed from a given approximate VF was bounded from above by the approximated VF. However, this bound was conservative and no method was given for refinement of the bound. In [33] a method for approximating VFs by sub and super-VFs that are also SOS polynomials is given, however, no VF approximation error bounds or resulting controller synthesis performance bound is given. Alternatively [34] proposes a bilinear SOS optimization framework which iterates between finding a Lyapunov function and finding a controller to maximize the region of attraction. However, this work does not consider OCPs or VFs per se.
Despite this extensive literature, to the best of our knowledge, there exists no way of constructing approximate VFs for which the performance of the associated controller can be proven to be arbitrarily close to optimal (although such bounds exist for discrete time systems over infinite time horizons [35]). For such a result to exist in continuous-time over finite time horizons, then, we need some way of bounding sub-optimality of the performance of the controller based on distance of the approximated VF to the true VF.
To address this need, in Sec. VIII we answer Q2 by showing that for any V , we can construct a candidate solution to the OCP (1), u(t) = k(x(t),t), given by the controller defined in Eq. (3). We then show in Thm. 4 that the corresponding objective value of the OCP (1) evaluated at u is within C V * −V W 1,∞ of the optimal objective, where V * is the true VF of the OCP and C > 0 is given in Eq. (70).This result implies approximation of value functions in the W 1,∞ norm results in feedback controllers with performance that can be made arbitrarily close to optimality. Note, this result may be of broad interest since it does not require V to be a solution to our proposed SOS Problem (61) and hence provides a bound on the sub-optimality of controllers constructed from any approximate VF.
II. NOTATION
A. Standard Notation
We define sign : R → {−1, 1} and, for A ⊂ R^n, 1_A : R^n → R by

sign(x) := 1 if x ≥ 0, −1 otherwise; 1_A(x) := 1 if x ∈ A, 0 otherwise.

For two sets A, B ⊂ R^n we denote A/B = {x ∈ A : x ∉ B}. For B ⊆ R^n, µ(B) := ∫_{R^n} 1_B(x) dx is the Lebesgue measure of B, and for X ⊆ R^n and a function f : X → R we denote the essential infimum by ess inf_{x∈X} f(x) := sup{a ∈ R : µ({x ∈ X : f(x) < a}) = 0}. Similarly we denote the essential supremum by ess sup_{x∈X} f(x) := inf{a ∈ R : µ({x ∈ X : f(x) > a}) = 0}.
For x ∈ R^n we denote the Euclidean norm by ||x||_2 = (∑_{i=1}^n x_i^2)^{1/2}. For r > 0 and x ∈ R^n we denote the ball B(x, r) := {y ∈ R^n : ||x − y||_2 < r}. For an open set Ω ⊂ R^n we denote the boundary of the set by ∂Ω and denote the closure of the set by Ω̄. Let C(Ω, Θ) be the set of continuous functions with domain Ω ⊂ R^n and image Θ ⊂ R^m. For an open set Ω ⊂ R^n and p ∈ [1, ∞) we denote the set of p-integrable functions by L^p(Ω, R) := {f : Ω → R measurable : ∫_Ω |f|^p < ∞}; in the case p = ∞ we denote L^∞(Ω, R) := {f : Ω → R measurable : ess sup_{x∈Ω} |f(x)| < ∞}. For α ∈ N^n we denote the partial derivative D^α f(x) := Π_{i=1}^n (∂^{α_i} f / ∂x_i^{α_i})(x), where by convention if α = [0, ..., 0]^T we denote D^α f(x) := f(x). We denote the set of i-times continuously differentiable functions by C^i(Ω, Θ) := {f ∈ C(Ω, Θ) : D^α f ∈ C(Ω, Θ) for all α ∈ N^n such that ∑_{j=1}^n α_j ≤ i}. For k ∈ N and 1 ≤ p ≤ ∞ we denote the Sobolev space of functions with weak derivatives (Defn. 9) by W^{k,p}(Ω, R) := {u ∈ L^p(Ω, R) : D^α u ∈ L^p(Ω, R) for all |α| ≤ k}. For u ∈ W^{k,p}(Ω, R) we denote the Sobolev norm

||u||_{W^{k,p}(Ω,R)} := (∑_{|α|≤k} ∫_Ω (D^α u(x))^p dx)^{1/p} if 1 ≤ p < ∞, and ∑_{|α|≤k} ess sup_{x∈Ω} |D^α u(x)| if p = ∞.

In the case k = 0 we have W^{0,p}(Ω, R) = L^p(Ω, R) and thus we use the notation ||·||_{L^p(Ω,R)} := ||·||_{W^{0,p}(Ω,R)}. We denote the shift operator τ_s : L^2([0,T], R^m) → L^2([0, T−s], R^m), where s ∈ [0, T], defined by (τ_s u)(t) := u(s + t) for all t ∈ [0, T − s].
B. Non-Standard Notation
We denote the set of locally and uniformly Lipschitz continuous functions on Θ_1 and Θ_2 (Defn. 1) by LocLip(Θ_1, Θ_2) and Lip(Θ_1, Θ_2) respectively. Let us denote the bounded subsets of R^n by B := {B ⊂ R^n : µ(B) < ∞}. If M is a subspace of a vector space X we denote the equivalence relation ∼_M for x, y ∈ X by x ∼_M y if x − y ∈ M. We denote the quotient space by X (mod M) := {{y ∈ X : y ∼_M x} : x ∈ X}. For an open set Ω ⊂ R^n and σ > 0 we denote <Ω>_σ := {x ∈ Ω : B(x, σ) ⊂ Ω}. For V ∈ C^1(R^n × R, R) we denote ∇_x V := (∂V/∂x_1, ..., ∂V/∂x_n)^T and ∇_t V := ∂V/∂x_{n+1}. We denote the space of polynomials p : Ω → Θ by P(Ω, Θ) and polynomials of degree at most d ∈ N by P_d(Ω, Θ). We say p ∈ P_d(R^n, R) is Sum-of-Squares (SOS) if there exist p_i ∈ P_d(R^n, R) such that p(x) = ∑_{i=1}^k (p_i(x))^2. We denote by ∑_{SOS}^d the set of SOS polynomials of degree at most d ∈ N and the set of all SOS polynomials by ∑_{SOS}. We denote by Z_d : R^n × R → R^{N_d} the vector of monomials of degree d ∈ N or less, of size N_d := binom(d+n, d).
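As a quick aside on problem size: the length N_d of the monomial vector Z_d grows combinatorially in n and d, which is what drives the exponential growth of the associated SDPs noted in the introduction. A minimal script (standard library only; the function name is ours, not the paper's) tabulates this growth:

```python
from math import comb

def monomial_basis_size(n: int, d: int) -> int:
    """Size N_d = C(n + d, d) of the monomial vector Z_d in n
    variables of degree at most d."""
    return comb(n + d, d)

# Growth of the SOS decision-variable count with degree, for n = 4 states:
for d in (2, 4, 6, 8):
    print(d, monomial_basis_size(4, d))
```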
III. OPTIMAL CONTROL PROBLEMS
The nested family of finite-time Optimal Control Problems (OCPs), each initialized by (x 0 ,t 0 ) ∈ R n × [0, T ], are defined as:
(u*, x*) ∈ arg inf_{u,x} ∫_{t_0}^{T} c(x(t), u(t), t) dt + g(x(T))  (5)
subject to: ẋ(t) = f(x(t), u(t)) for all t ∈ [t_0, T],
(x(t), u(t)) ∈ Ω × U for all t ∈ [t_0, T], x(t_0) = x_0,
where c : R n × R m × R → R is referred to as the running cost; g : R n → R is the terminal cost; f : R n × R m → R n is the vector field; Ω ⊂ R n is the state constraint set; U ⊂ R m is the input constraint set; and T is the final time. For a given family of OCPs of Form (5) we associate the tuple {c, g, f , Ω,U, T }.
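To make the objects in (5) concrete, the following sketch (our own illustrative code, not from the paper) numerically evaluates the objective of an OCP for a given feedback law by integrating the running cost alongside the dynamics; state constraints are not enforced here, and all callables are user-supplied assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def ocp_objective(f, c, g, k, x0, t0, T):
    """Evaluate the cost of OCP (5) under a feedback law u = k(x, t)
    by augmenting the state with the accumulated running cost."""
    def aug(t, y):
        x = y[:-1]          # current state
        u = k(x, t)         # feedback input
        return np.append(f(x, u), c(x, u, t))
    sol = solve_ivp(aug, (t0, T), np.append(x0, 0.0), rtol=1e-8, atol=1e-8)
    xT, running = sol.y[:-1, -1], sol.y[-1, -1]
    return running + g(xT)  # integral cost plus terminal cost
```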
In this paper we consider a special class of OCPs of Form (5), where U is compact and c, g, f are locally Lipschitz continuous. We next recall the definition of local Lipschitz continuity. Definition 1. Consider sets Θ 1 ⊂ R n and Θ 2 ⊂ R m . We say the function F : Θ 1 → Θ 2 is locally Lipschitz continuous on Θ 1 and Θ 2 , denoted F ∈ LocLip(Θ 1 , Θ 2 ), if for every compact set X ⊆ Θ 1 there exists K X > 0 such that for all x, y ∈ X
||F(x) − F(y)||_2 ≤ K_X ||x − y||_2.  (6)
If there exists K > 0 such that Eq. (6) holds for all x, y ∈ Θ 1 we say F is uniformly Lipschitz continuous, denoted F ∈ Lip(Θ 1 , Θ 2 ).
Definition 2. We say {c, g, f, Ω, U, T} ∈ M_Lip if the following hold:
1) c ∈ LocLip(Ω × U × [0, T], R);
2) g ∈ LocLip(Ω, R);
3) f ∈ LocLip(Ω × U, R^n);
4) U ⊂ R^m is compact.
For {c, g, f, Ω, U, T} ∈ M_Lip, if Ω = R^n we say the family of associated OCPs is state unconstrained, and if Ω ≠ R^n we say the associated family of OCPs is state constrained.
IV. PROPERTIES AND IMPORTANCE OF VALUE FUNCTIONS
We recall several important properties of Value Functions (VFs) that we use to prove the main result of the paper, given in Sec. VII. In the following subsections we recall that for every family of Lipschitz OCPs, as defined in Section III, there exists a function, called the Value Function (VF), which: (A) is determined by the solution map (Eq. (10)); (B) is a viscosity solution of the HJB PDE (Theorem 1); (C) can be used to construct optimal controllers (Theorem 2).

Consider the Ordinary Differential Equation (ODE)

ẋ(t) = f(x(t), u(t)), x(0) = x_0,  (7)

where f : R^n × R^m → R^n, u : R → R^m, and x_0 ∈ R^n.

Definition 3. We say the function φ_f is a solution map of the ODE given in Eq. (7) on [0, T] ⊂ R if for all t ∈ [0, T]

∂φ_f(x_0, t, u)/∂t = f(φ_f(x_0, t, u), u(t)), and φ_f(x_0, 0, u) = x_0.

Definition 4. For (x_0, t_0) ∈ Ω × [0, T] we denote by U_{Ω,U,f,T}(x_0, t_0) the set of admissible inputs u, with u(t) ∈ U for all t ∈ [t_0, T], whose associated solution map satisfies

∂φ_f(x_0, t − t_0, τ_{t_0} u)/∂t = f(φ_f(x_0, t − t_0, τ_{t_0} u), u(t)) for t ∈ [t_0, T],
φ_f(x_0, t − t_0, τ_{t_0} u) ∈ Ω for t ∈ [t_0, T], and φ_f(x_0, 0, τ_{t_0} u) = x_0.  (9)
For a given family of OCPs of Form (5), we now define the associated VF using the solution map, φ f . Lemma 1 then shows that VFs are locally Lipschitz continuous.
Definition 5. For given {c, g, f, Ω, U, T} ∈ M_Lip we say V* : R^n × R → R is a Value Function (VF) of the associated family of OCPs if for (x,t) ∈ Ω × [0, T] the following holds:

V*(x,t) = inf_{u ∈ U_{Ω,U,f,T}(x,t)} { ∫_t^T c(φ_f(x, s − t, τ_t u), u(s), s) ds + g(φ_f(x, T − t, τ_t u)) },  (10)

where φ_f is as in Eq. (9). By convention, if U_{Ω,U,f,T}(x,t) = ∅ then V*(x,t) = ∞.
B. Value Functions are Solutions to the HJB PDE
Consider the family of OCPs associated with {c, g, f , Ω,U, T } ∈ M Lip . As shown in [37], a sufficient condition for a function V * to be a VF, is for V * to satisfy the Hamilton Jacobi Bellman (HJB) PDE, given in Eq. (12). However, for a general family of OCPs of form {c, g, f , Ω,U, T } ∈ M Lip , solutions to the HJB PDE may not be differentiable, and hence classical solutions to the HJB PDE may not exist. For this reason, one typically uses a generalized notion of a solution to the HJB PDE called a viscosity solution, which is defined in [38] as follows.
Definition 6. Consider the first order PDE
F(x, y(x), ∇y(x)) = 0 for all x ∈ Ω,  (11)

where Ω ⊂ R^n and F ∈ C(Ω × R × R^n, R). We say y ∈ C(Ω) is a viscosity sub-solution of (11) if

F(x, y(x), p) ≤ 0 for all x ∈ Ω and p ∈ D^+ y(x),

where D^+ y(x) := {p ∈ R^n : ∃Φ ∈ C^1(Ω, R) such that ∇Φ(x) = p and y − Φ attains a local max at x}. Similarly, y ∈ C(Ω) is a viscosity super-solution of (11) if

F(x, y(x), p) ≥ 0 for all x ∈ Ω and p ∈ D^− y(x),

where D^− y(x) := {p ∈ R^n : ∃Φ ∈ C^1(Ω, R) such that ∇Φ(x) = p and y − Φ attains a local min at x}. We say y ∈ C(Ω) is a viscosity solution of (11) if it is both a viscosity sub- and super-solution.

Theorem 1. For given {c, g, f, R^n, U, T} ∈ M_Lip, the associated VF, V*, given in Eq. (10), is the unique viscosity solution of the HJB PDE:

∇_t V(x,t) + inf_{u∈U} {c(x,u,t) + ∇_x V(x,t)^T f(x,u)} = 0 for all (x,t) ∈ R^n × [0, T],  (12)
V(x,T) = g(x) for all x ∈ R^n.
Note that Lemma 1 and Theorem 1 are only valid in the absence of state constraints (Ω = R n ). However, as we will show in Lemma 3, if the state constraints are sufficiently "loose", then the unconstrained and constrained solutions coincide.
C. VFs Can Construct Optimal Controllers
Given an OCP, we next show that if a "classical" differentiable solution to the HJB PDE (12) associated with the OCP is known, then a solution to the OCP can be constructed using Eqs. (13) and (14). We will refer to any k : Ω × [0, T] → U that satisfies Eqs. (13) and (14) for some V as a controller, and say this is the optimal controller of the OCP if V is the VF of the OCP.

Theorem 2 ([4]). Consider the family of OCPs associated with tuple {c, g, f, R^n, U, T} ∈ M_Lip. Suppose V ∈ C^1(R^n × R, R) solves the HJB PDE (12). Then u* : [t_0, T] → U solves the OCP associated with {c, g, f, R^n, U, T} initialized at (x_0, t_0) ∈ R^n × [0, T] if and only if

u*(t) = k(φ_f(x_0, t, u*), t) for all t ∈ [t_0, T],  (13)
where k(x,t) ∈ arg inf_{u∈U} {c(x,u,t) + ∇_x V(x,t)^T f(x,u)}.  (14)
If the function V in Eq. (14) is not a VF the resulting controller may no longer construct a solution to the OCP. In Section VIII we will provide a bound on the performance of a constructed controller from a candidate VF based on how "close" the candidate VF is to the true VF under the Sobolev norm.
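For intuition, the controller of Eq. (14) can be realized numerically from any candidate VF by minimizing over a finite sample of the compact set U; the sketch below (our own illustrative code with finite-difference gradients, not the paper's implementation) does exactly that. Sampling U only approximates the arg inf.

```python
import numpy as np

def grad_x(J, x, t, h=1e-6):
    """Central finite-difference approximation of the spatial gradient of J."""
    g = np.zeros(len(x))
    for i in range(len(x)):
        e = np.zeros(len(x)); e[i] = h
        g[i] = (J(x + e, t) - J(x - e, t)) / (2 * h)
    return g

def controller_from_vf(J, c, f, U_samples):
    """Realize k(x,t) in arg min_{u in U} {c(x,u,t) + grad_x J(x,t)^T f(x,u)}
    over a finite sample of U."""
    def k(x, t):
        gJ = grad_x(J, x, t)
        vals = [c(x, u, t) + gJ @ f(x, u) for u in U_samples]
        return U_samples[int(np.argmin(vals))]
    return k
```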
V. THE FEASIBILITY PROBLEM OF FINDING VFS
Consider a family of OCPs associated with some {c, g, f , Ω,U, T } ∈ M Lip . Previously it was shown in Theorem 2 that if V ∈ C 1 (R n × R, R) is a solution to the HJB PDE (12) then V may be used to solve the family of OCPs using Eqs. (13) and (14). The question, now, is how to find such a V .
Let us consider the problem of finding a value function as an optimization problem subject to constraints imposed by the HJB PDE (12). This yields the following feasibility problem:
Find V ∈ C^1(R^n × R, R)  (15)
such that V satisfies (12).
Note that our optimization problem of Form (15) is nonconvex and may not even have a solution with sufficient regularity. For these reasons, we next propose a convex relaxation of Problem (15). We first define sub-VFs and super-VFs that uniformly bound VFs either from above or below.

Definition 7. We say the function J : R^n × R → R is a sub-VF to the family of OCPs associated with {c, g, f, Ω, U, T} ∈ M_Lip if

J(x,t) ≤ V*(x,t) for all t ∈ [0, T] and x ∈ Ω,

for any V* satisfying Eq. (10). Moreover, if

J(x,t) ≥ V*(x,t) for all t ∈ [0, T] and x ∈ Ω,

for any V* satisfying Eq. (10), we say J is a super-VF.
A. A Sufficient Condition For A Function To Be A Sub-VF
We now propose "dissipation" inequalities and show that if a differentiable function satisfies such inequalities then it must be a sub-value function.
Proposition 1. For given {c, g, f, Ω, U, T} ∈ M_Lip suppose J ∈ C^1(R^n × R, R) satisfies, for all (x, u, t) ∈ Ω × U × (0, T),

∇_t J(x,t) + c(x,u,t) + ∇_x J(x,t)^T f(x,u) ≥ 0,  (16)
J(x,T) ≤ g(x).  (17)
Then J is a sub-value function of the family of OCPs associated with {c, g, f , Ω,U, T }.
Proof. Suppose J ∈ C^1(R^n × R, R) satisfies Eqs. (16) and (17). Consider an arbitrary (x_0, t_0) ∈ Ω × [0, T]. If U_{Ω,U,f,T}(x_0, t_0) = ∅ then V*(x_0, t_0) = ∞. Clearly in this case J(x_0, t_0) < V*(x_0, t_0), as J is continuous and therefore finite over the compact region Ω × [0, T]. Alternatively, if U_{Ω,U,f,T}(x_0, t_0) ≠ ∅, then for any ũ ∈ U_{Ω,U,f,T}(x_0, t_0) we have the following by Defn. 4:

φ_f(x_0, t − t_0, τ_{t_0} ũ) ∈ Ω for all t ∈ [t_0, T], ũ(t) ∈ U for all t ∈ [t_0, T].

Therefore (using the shorthand x̃(t) := φ_f(x_0, t − t_0, τ_{t_0} ũ)), by Eq. (16) we have for all t ∈ [t_0, T]

∇_t J(x̃(t), t) + c(x̃(t), ũ(t), t) + ∇_x J(x̃(t), t)^T f(x̃(t), ũ(t)) ≥ 0.

Now, using the chain rule we deduce

(d/dt) J(x̃(t), t) + c(x̃(t), ũ(t), t) ≥ 0 for all t ∈ [t_0, T].

Then, integrating over t ∈ [t_0, T], and since J(x̃(T), T) ≤ g(x̃(T)) by Eq. (17), we have

J(x_0, t_0) ≤ ∫_{t_0}^T c(x̃(t), ũ(t), t) dt + g(x̃(T)).  (18)

Since Eq. (18) holds for all ũ ∈ U_{Ω,U,f,T}(x_0, t_0), we may take the infimum over U_{Ω,U,f,T}(x_0, t_0) to show that J(x_0, t_0) ≤ V*(x_0, t_0). As this argument can be used for any (x_0, t_0) ∈ Ω × [0, T], it follows that J is a sub-value function.

Definition 8. For given {c, g, f, Ω, U, T} ∈ M_Lip we say a function J ∈ C^1(R^n × R, R) is dissipative if it satisfies Inequalities (16) and (17).
Dissipative functions are viscosity sub-solutions (as per Defn. 6) to the HJB PDE (12). Moreover, by Prop. 1 a dissipative function is a sub-VF. However, a sub-VF need not be dissipative or a viscosity sub-solution to the HJB PDE.
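Because dissipativity is a pointwise condition, a candidate J can at least be screened numerically before attempting an SOS certificate: sample Ω × U × (0, T) and test Inequalities (16)-(17). The sketch below (our own helper; sampling can falsify but never certify dissipativity) illustrates this.

```python
import numpy as np

def screen_dissipativity(dJdt, gradJ, J, c, f, g, X, U, Ts, tol=0.0):
    """Return False if Inequalities (16)-(17) fail at any sampled point.

    X, U, Ts are iterables of sampled states, inputs, and times; dJdt and
    gradJ are user-supplied derivatives of the candidate J."""
    T_end = Ts[-1]
    for x in X:
        if J(x, T_end) > g(x) + tol:                 # terminal condition (17)
            return False
        for t in Ts:
            for u in U:
                lhs = dJdt(x, t) + c(x, u, t) + gradJ(x, t) @ f(x, u)
                if lhs < -tol:                        # dissipation (16)
                    return False
    return True
```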
B. A Convex Relaxation Of The Problem Of Finding VFs
The set of functions satisfying Eqs. (16) and (17) is convex as Eqs. (16) and (17) are linear in terms of the unknown variable/function J. Furthermore, for given {c, g, f , Ω,U, T } ∈ M Lip , any function which satisfies the HJB PDE (12) also satisfies Eqs. (16) and (17). This allows us to propose the following convex relaxation of the problem of finding a VF (Problem (15)):
Find J ∈ C^1(R^n × R, R)  (19)
such that J satisfies (16) and (17).
C. A Polynomial Tightening Of The Problem Of Finding VFs
Problem (19) is convex. However, a function J, feasible for Problem (19) (and hence dissipative), may be arbitrarily far from the VF. For instance, in the case c(x, u, t) ≥ 0 and 0 ≤ g(x) < M, the constant function J(x,t) ≡ −C is dissipative for any C > M. Thus, by selecting a sufficiently large C > M, we can make ||J − V|| arbitrarily large, regardless of the chosen norm, ||·||.
To address this issue, we propose a modification of Problem (19), wherein we include an objective of Form Λ×[0,T ] w(x,t)J(x,t)dxdt, parameterized by a compact domain of interest Λ ⊂ R n and weight w ∈ L 1 (Λ × [0, T ], R + ) (we use the weight, w, in Prop. 6). Specifically, for given {c, g, f , Ω,U, T } ∈ M Lip and d ∈ N, consider the optimization problem:
J_d ∈ arg max_{J ∈ P_d(R^n × R, R)} ∫_{Λ×[0,T]} w(x,t) J(x,t) dx dt  (20)

subject to:
∇_t J(x,t) + c(x,u,t) + ∇_x J(x,t)^T f(x,u) > 0 for all x ∈ Ω, t ∈ (0, T), u ∈ U,
J(x,T) < g(x) for all x ∈ Ω.

Since any feasible J is a sub-VF, maximizing ∫_{Λ×[0,T]} w(x,t) J(x,t) dx dt minimizes the weighted L^1 norm ∫_{Λ×[0,T]} w(x,t) |V(x,t) − J(x,t)| dx dt. The restriction to polynomial solutions J ∈ P_d(R^n × R, R) makes the problem finite-dimensional.
VI. A SEQUENCE OF DISSIPATIVE POLYNOMIALS THAT CONVERGE TO THE VF IN SOBOLEV SPACE
For a given {c, g, f, Ω, U, T} ∈ M_Lip, in Eq. (20) we proposed a sequence of optimization problems, indexed by d ∈ N, each instance of which yields a dissipative function J_d ∈ P_d(R^n × R, R). In this section, we prove that

lim_{d→∞} ||J_d − V||_{L^1(Λ×(0,T),R)} = 0,

where V is the VF associated with the OCP {c, g, f, Ω, U, T} ∈ M_Lip.
To accomplish this proof, we divide the section into three subsections, wherein we find the following.
(A) In Prop. 3 we show that for any V ∈ Lip(Ω × [0, T], R) that satisfies the dissipation-type inequality in Eq. (23) and any ε > 0 there exists a dissipative function J_ε ∈ C^∞(Ω × [0, T], R) such that ||J_ε − V||_{W^{1,p}(Ω×[0,T],R)} < ε.
(B) In Theorem 3 we show that for every ε > 0 there exists d ∈ N and a dissipative P_ε ∈ P_d(R^n × R, R) such that ||P_ε − V||_{W^{1,p}(Ω×[0,T],R)} < ε, for any value function, V, associated with {c, g, f, Ω, U, T} ∈ M_Lip.
(C) For any positive weight w, Prop. 4 shows that if J_d solves (20) for d ∈ N, then lim_{d→∞} ||w(J_d − V)||_{L^1(Λ×(0,T),R)} = 0 for any VF, V, associated with {c, g, f, Ω, U, T} ∈ M_Lip.
A. Existence Of Smooth Dissipative Functions That Approximate The VF Arbitrarily Well Under The W 1,p Norm
In this section we create a sequence of smooth (elements of C^∞(R^n × R, R)) functions that converges, with respect to the W^{1,p} norm, to any Lipschitz function, V, satisfying the dissipation-type inequality in Eq. (23). This subsection uses some aspects of mollification theory. For an overview of this field, we refer to [39].

a) Mollifiers: The standard mollifier, η ∈ C^∞(R^n × R, R), is defined as

η(x,t) := C exp(1 / (||(x,t)||_2^2 − 1)) when ||(x,t)||_2 < 1, and η(x,t) := 0 when ||(x,t)||_2 ≥ 1,  (21)

where C > 0 is chosen such that ∫_{R^n×R} η(x,t) dx dt = 1. For σ > 0 we denote the scaled standard mollifier by η_σ ∈ C^∞(R^n × R, R), where η_σ(x,t) := (1/σ^{n+1}) η(x/σ, t/σ). Note that η_σ(x,t) = 0 for all (x,t) ∉ B(0, σ).

b) Mollification of a Function (Smooth Approximation): Recall from Section II-B that for open sets Ω ⊂ R^n, (0,T) ⊂ R, and σ > 0 we denote <Ω × (0,T)>_σ := {x ∈ Ω × (0,T) : B(x, σ) ⊂ Ω × (0,T)}. Now, for each σ > 0 and function V ∈ L^1(Ω × (0,T), R) we denote the σ-mollification of V by [V]_σ : <Ω × (0,T)>_σ → R, where

[V]_σ(x,t) := ∫_{R^n×R} η_σ(x − z_1, t − z_2) V(z_1, z_2) dz_1 dz_2 = ∫_{B(0,σ)} η_σ(z_1, z_2) V(x − z_1, t − z_2) dz_1 dz_2.  (22)
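To see the mollification (22) in action, the one-dimensional sketch below (our own discretization choices, not from the paper) convolves grid samples of V with the scaled bump η_σ; the kernel is renormalized so it has unit mass on the grid.

```python
import numpy as np

def eta(r):
    """Standard mollifier profile, supported on |r| < 1."""
    out = np.zeros_like(r, dtype=float)
    m = np.abs(r) < 1.0
    out[m] = np.exp(1.0 / (r[m] ** 2 - 1.0))
    return out

def mollify_1d(V, xs, sigma):
    """sigma-mollification of V sampled on the uniform grid xs."""
    dx = xs[1] - xs[0]
    half = int(np.ceil(sigma / dx))
    r = (np.arange(-half, half + 1) * dx) / sigma   # odd-length stencil
    ker = eta(r)
    ker /= ker.sum() * dx                            # unit mass on the grid
    Vp = np.pad(V(xs), half, mode="edge")            # extend V past the ends
    return np.convolve(Vp, ker * dx, mode="valid")   # same length as xs
```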
To calculate the derivative of a mollification we next introduce the concept of weak derivatives.
Definition 9.
For Ω ⊂ R^n and F ∈ L^1(Ω, R) we say any H ∈ L^1(Ω, R) is the weak i ∈ {1, ..., n}-partial derivative of F if

∫_Ω F(x) (∂/∂x_i) α(x) dx = − ∫_Ω H(x) α(x) dx, for α ∈ C^∞(R^n, R).
Weak derivatives are "essentially unique". That is if H 1 and H 2 are both weak derivatives of a function F then the set of points where H 1 (x) = H 2 (x) has measure zero. If a function is differentiable then its weak derivative is equal to its derivative in the "classical" sense. We will use the same notation for the derivative in the "classical" sense and in the weak sense.
In the next proposition we state some useful properties about Sobolev spaces and mollifications taken from [39].
Proposition 2 ([39]). For 1 ≤ p < ∞ and k ∈ N/{0} we consider V ∈ W^{k,p}(E, R), where E ⊂ R^{n+1} is an open bounded set, and its σ-mollification [V]_σ. Recalling from Section II-B that for an open set Ω ⊂ R^n and σ > 0 we denote <Ω>_σ := {x ∈ Ω : B(x, σ) ⊂ Ω}, the following holds:
1) For all σ > 0 we have [V]_σ ∈ C^∞(<E>_σ, R).
2) For all σ > 0 we have ∇_t [V]_σ(x,t) = [∇_t V]_σ(x,t) and ∇_x [V]_σ(x,t) = [∇_x V]_σ(x,t) for (x,t) ∈ <E>_σ, where ∇_t V and ∇_x V are weak derivatives.
3) If V ∈ C(E, R) then for any compact set K ⊂ E we have lim_{σ→0} sup_{(x,t)∈K} |V(x,t) − [V]_σ(x,t)| = 0.
4) (Meyers-Serrin Local Approximation) For any compact set K ⊂ E we have lim_{σ→0} ||[V]_σ − V||_{W^{k,p}(K,R)} = 0.
c) Approximation of Lipschitz functions satisfying a dissipation-type inequality: We now show that for any Lipschitz function, V , satisfying the dissipation-type inequality in Eq. (23), V can be approximated arbitrarily well by a smooth function, J ε , that also satisfies the dissipation-type inequality in Eq. (23). We use a similar proof strategy first appearing in [40] and also later appearing in [41], [42], [43].
Lemma 2. Let E ⊂ R^{n+1} be an open bounded set, Ω ⊂ R^n be such that Ω × (0,T) ⊆ E, where T > 0, U ⊂ R^m be a compact set, f ∈ Lip(Ω × U, R^n), c ∈ Lip(Ω × U × [0,T], R), and V ∈ Lip(E, R) be such that, for all u ∈ U,

ess inf_{(x,t)∈Ω×(0,T)} {∇_t V(x,t) + ∇_x V(x,t)^T f(x,u) + c(x,u,t)} ≥ 0,  (23)

where the derivatives ∇_t V and ∇_x V are weak derivatives. Then for any compact set K ⊂ E, any 1 ≤ p < ∞, and all ε > 0 there exists J_ε ∈ C^∞(K, R) such that

||V − J_ε||_{W^{1,p}(K,R)} < ε and sup_{(x,t)∈K} |V(x,t) − J_ε(x,t)| < ε,  (24)

and for all (x,t) ∈ K ∩ (Ω × (0,T)) and u ∈ U

∇_t J_ε(x,t) + ∇_x J_ε(x,t)^T f(x,u) + c(x,u,t) ≥ −ε.  (25)
Proof. Suppose V satisfies Eq. (23), K ⊂ E is a compact set, 1 ≤ p < ∞, and ε > 0. By Rademacher's Theorem (Theorem 7), V is weakly differentiable with essentially bounded derivative. Therefore V ∈ W^{1,∞}(E, R) and hence V ∈ W^{1,p}(E, R). Now Prop. 2 (Statements 3 and 4) can be used to show that there exists σ_1 > 0 such that for any 0 ≤ σ < σ_1 we have

||V − [V]_{σ_1}||_{W^{1,p}(K,R)} < ε and sup_{(x,t)∈K} |V(x,t) − [V]_{σ_1}(x,t)| < ε.  (26)

Select σ_2 > 0 small enough so K ⊂ <E>_{σ_2} (which can be done as E is open). Select 0 < σ_3 < ε/(L_V L_f + 2L_c), where L_V, L_f, L_c > 0 are the Lipschitz constants of the functions V, f, and c respectively. We now have the following for all σ_4 < min{σ_3, σ_2}, u ∈ U, and (x,t) ∈ K ∩ (Ω × (0,T)):

∇_t [V]_{σ_4}(x,t) + ∇_x [V]_{σ_4}(x,t)^T f(x,u) + c(x,u,t)  (27)
= [∇_t V]_{σ_4}(x,t) + [∇_x V]_{σ_4}(x,t)^T f(x,u) + c(x,u,t)
= ∫_{B(0,σ_4)} η_{σ_4}(z_1,z_2) [∇_t V(x−z_1, t−z_2) + ∇_x V(x−z_1, t−z_2)^T f(x−z_1, u) + c(x−z_1, u, t−z_2)] dz_1 dz_2
  − ∫_{B(0,σ_4)} η_{σ_4}(z_1,z_2) ∇_x V(x−z_1, t−z_2)^T [f(x−z_1, u) − f(x,u)] dz_1 dz_2
  − ∫_{B(0,σ_4)} η_{σ_4}(z_1,z_2) [c(x−z_1, u, t−z_2) − c(x,u,t)] dz_1 dz_2
≥ ess inf_{(z_1,z_2)∈B(0,σ_4)} {∇_t V(x−z_1, t−z_2) + ∇_x V(x−z_1, t−z_2)^T f(x−z_1, u) + c(x−z_1, u, t−z_2)}
  − ess sup_{(z_1,z_2)∈B(0,σ_4)} ||∇_x V(x−z_1, t−z_2)||_2 · ess sup_{(z_1,z_2)∈B(0,σ_4)} ||f(x−z_1, u) − f(x,u)||_2
  − ess sup_{(z_1,z_2)∈B(0,σ_4)} |c(x−z_1, u, t−z_2) − c(x,u,t)|
≥ −L_V ess sup_{(z_1,z_2)∈B(0,σ_4)} ||f(x−z_1, u) − f(x,u)||_2 − ess sup_{(z_1,z_2)∈B(0,σ_4)} |c(x−z_1, u, t−z_2) − c(x,u,t)|
≥ −L_V L_f ess sup_{(z_1,z_2)∈B(0,σ_4)} ||z_1||_2 − L_c ess sup_{(z_1,z_2)∈B(0,σ_4)} (||z_1||_2 + |z_2|)
≥ −(L_V L_f + 2L_c) σ_4 ≥ −ε.
The first equality of Eq. (27) follows since ∇_t [V]_{σ_4}(x,t) = [∇_t V]_{σ_4}(x,t) and ∇_x [V]_{σ_4}(x,t) = [∇_x V]_{σ_4}(x,t) for all (x,t) ∈ K ⊂ <E>_{σ_4} by Prop. 2 (Statement 2). The first inequality follows by the monotonicity property of integration and the Cauchy-Schwarz inequality. Since V is Lipschitz, ess sup_{(x,t)∈E} ||∇_x V(x,t)||_2 < L_V by Rademacher's Theorem (Theorem 7). Now the second inequality follows by using (23) together with ess sup_{(x,t)∈E} ||∇_x V(x,t)||_2 < L_V. The third inequality follows by the Lipschitz continuity of f and c. Finally, the fourth inequality follows from the fact that σ_4 < σ_3 < ε/(L_V L_f + 2L_c). Now define J_ε(x,t) := [V]_σ(x,t), where 0 < σ < min{σ_1, σ_4}. It follows that J_ε ∈ C^∞(K, R) by Prop. 2 (Statement 1). Moreover, J_ε satisfies Eqs. (24) and (25) by Eqs. (26) and (27).
In Lemma 2 we showed that for any given function, V ∈ Lip(E, R), any compact subset K ⊂ E, any ε > 0, and any 1 ≤ p < ∞, there exists a smooth function, J_ε, satisfying Eq. (25), such that ||V − J_ε||_{W^{1,p}(K,R)} < ε. We next show this "local" result over compact subsets, K, can be extended to a "global" result over the entire domain, E. To do this we use Theorem 9, stated in Section XIII. Given an open cover of E, Theorem 9 states that there exists a family of functions, called a partition of unity. In the next proposition we use partitions of unity together with the "local" approximates of the Lipschitz function, V, to construct a smooth "global" approximation of V over the entire domain E.

Proposition 3. Let E ⊂ R^{n+1} be an open bounded set, Ω ⊂ R^n be such that Ω × (0,T) ⊆ E, where T > 0, U ⊂ R^m be a compact set, f ∈ Lip(Ω × U, R^n), c ∈ Lip(Ω × U × [0,T], R), and suppose V ∈ Lip(E, R) satisfies Eq. (23). Then for all 1 ≤ p < ∞ and ε > 0 there exists J ∈ C^∞(E, R) such that

||V − J||_{W^{1,p}(E,R)} < ε and sup_{(x,t)∈E} |V(x,t) − J(x,t)| < ε,  (28)

and for all (x, u, t) ∈ Ω × U × (0, T)

∇_t J(x,t) + ∇_x J(x,t)^T f(x,u) + c(x,u,t) ≥ −ε.  (29)
Proof. Let us consider the family of sets E_i = {x ∈ E : sup_{y∈∂E} ||x − y||_2 < 1/i} for i ∈ N. It follows that {E_i}_{i=1}^∞ is an open cover (Defn. 14) for E, and thus by Theorem 9 there exists a smooth partition of unity, {ψ_i}_{i=1}^∞ ⊂ C^∞(E, R), that satisfies Statements 1 to 4 of Theorem 9.

For ε > 0, Lemma 2 shows that for each i ∈ N there exists a function J_i ∈ C^∞(E_i, R) such that

sup_{(x,t)∈E_i} |V(x,t) − J_i(x,t)| < ε / (2^{i+1}(1 + τ_i + θ_i)),  (30)
||V − J_i||_{W^{1,p}(E_i,R)} < ε / (2^{i+1}(1 + τ_i + θ_i)),  (31)
∇_t J_i(x,t) + ∇_x J_i(x,t)^T f(x,u) + c(x,u,t) ≥ −ε / (2^{i+1}(1 + τ_i + θ_i)) for all (x,t) ∈ E_i ∩ (Ω × (0,T)), u ∈ U,  (32)

where we denote τ_i := sup_{(x,u,t)∈Ω×U×(0,T)} {|∇_t ψ_i(x,t) + ∇_x ψ_i(x,t)^T f(x,u)|} ≥ 0 and θ_i := max_{|α|≤1} (sup_{(x,t)∈E} |D^α ψ_i(x,t)|^p)^{1/p} ≥ 0; these are well defined and finite as Ω × U × (0,T) is bounded and ψ_i is smooth. Now let us define J(x,t) := ∑_{i=1}^∞ ψ_i(x,t) J_i(x,t); we will show J ∈ C^∞(E, R) and that J satisfies Eqs. (28) and (29).
It follows J ∈ C ∞ (E, R) by Theorem 9. To see this we note for each i ∈ N we have ψ i ∈ C ∞ (E, R) and ψ i (x,t) = 0 outside E i implying ψ i J i ∈ C ∞ (E, R). Moreover, for each (x,t) ∈ E there exists an open set, S ⊆ E, where only a finite number of ψ i are nonzero. Therefore it follows that the function J is a finite sum of infinitely differentiable functions and thus J is also infinitely differentiable.
We now show J satisfies Eq. (28). We first show ||V − J||_{W^{1,p}(E,R)} < ε:

||V − J||_{W^{1,p}(E,R)} = ||V − ∑_{i=1}^∞ ψ_i J_i||_{W^{1,p}(E,R)}  (33)
= ||∑_{i=1}^∞ ψ_i (V − J_i)||_{W^{1,p}(E,R)}
≤ ∑_{i=1}^∞ ||ψ_i (V − J_i)||_{W^{1,p}(E,R)}
= ∑_{i=1}^∞ ||ψ_i (V − J_i)||_{W^{1,p}(Ē_i,R)}
≤ ∑_{i=1}^∞ θ_i ||V − J_i||_{W^{1,p}(Ē_i,R)}
< ∑_{i=1}^∞ ε(1 + θ_i) / (2^{i+1}(1 + τ_i + θ_i))
< ε.

The second equality of Eq. (33) follows since partitions of unity satisfy ∑_{i=1}^∞ ψ_i(x,t) ≡ 1 by Theorem 9. The first inequality follows by the triangle inequality. The third equality follows since partitions of unity satisfy ψ_i(x,t) = 0 outside of E_i for all i ∈ N by Theorem 9. The third inequality follows by Eq. (31). The fourth inequality follows as ∑_{i=1}^∞ 2^{-(i+1)} < 1.

Next we will show J satisfies Eq. (29). Before doing this we first prove a preliminary identity. Specifically,
∑_{i=1}^∞ [∇_t ψ_i(x,t) + ∇_x ψ_i(x,t)^T f(x,u)] = 0,  (34)

for all (x,t) ∈ Ω × (0,T) ⊆ E and u ∈ U. This follows because only finitely many ψ_i are non-zero for each (x,t) ∈ E, and thus ∑_{i=1}^∞ ψ_i(x,t) is locally a finite sum of infinitely differentiable functions. Therefore we can interchange derivatives and summations; since ∑_{i=1}^∞ ψ_i(x,t) ≡ 1 it follows that ∇_t ∑_{i=1}^∞ ψ_i(x,t) = ∑_{i=1}^∞ ∇_t ψ_i(x,t) = 0. Similarly, for each j ∈ {1, ..., n} we have ∑_{i=1}^∞ ∂ψ_i(x,t)/∂x_j = 0, which implies ∑_{i=1}^∞ ∇_x ψ_i(x,t) = 0 ∈ R^n. Now, it follows that J satisfies Eq. (29) since

∇_t J(x,t) + ∇_x J(x,t)^T f(x,u) + c(x,u,t)  (35)
= ∑_{i=1}^∞ ψ_i(x,t) [∇_t J_i(x,t) + ∇_x J_i(x,t)^T f(x,u) + c(x,u,t)] + ∑_{i=1}^∞ J_i(x,t) [∇_t ψ_i(x,t) + ∇_x ψ_i(x,t)^T f(x,u)]
≥ −ε/2 + ∑_{i=1}^∞ [J_i(x,t) − V(x,t)] [∇_t ψ_i(x,t) + ∇_x ψ_i(x,t)^T f(x,u)]
≥ −ε,

for all (x,t) ∈ Ω × (0,T) ⊆ E and u ∈ U. The first equality of Eq. (35) follows by the chain rule and the fact that ∑_{i=1}^∞ ψ_i(x,t) ≡ 1. The first inequality follows by Eqs. (32) and (34). The second inequality follows by Eq. (30), Eq. (34), and the bound ∑_{i=1}^∞ τ_i/(2^{i+1}(1 + τ_i + θ_i)) ≤ ∑_{i=1}^∞ 2^{-(i+1)} < 1/2.

We now use Prop. 3 to show that for any VF, associated with some family of OCPs {c, g, f, Ω, U, T} ∈ M_Lip, there exists a dissipative polynomial, V_l, that approximates the VF arbitrarily well with respect to the Sobolev norm. Our proof uses Theorem 6, found in Appendix XIII, which shows that differentiable functions, such as J, can be approximated up to their first order derivatives over compact sets arbitrarily well by polynomials. Prop. 3 only gives the existence of a smooth approximation, J, when the VF is Lipschitz continuous. Lemma 1 shows that the VF, associated with a family of OCPs, is locally Lipschitz when Ω = R^n (which is not a compact set). Unfortunately, Theorem 6 can only be used for polynomial approximation over compact sets. Thus, before proceeding we first give a sufficient condition for a VF, associated with a family of OCPs with compact state constraints, to be Lipschitz continuous over some set Λ ⊂ Ω.
a) Lipschitz continuity of VFs associated with a family of state constrained OCPs: Consider the family of OCPs {c, g, f , Ω,U, T } ∈ M Lip . If the state is constrained (Ω = R n ), the associated VF can be discontinuous and is no longer uniquely defined as the viscosity solution of the HJB PDE. Next, in Lemma 3, we give a sufficient condition that when satisfied implies VFs, associated with a family of state constrained OCPs, are equal to the unique locally Lipschitz continuous VF of the state unconstrained OCP over some subset Λ ⊆ Ω, and hence are Lipschitz continuous over Λ.
To state Lemma 3 we first define the forward reachable set.
Definition 10. For X_0 ⊂ R^n, Ω ⊆ R^n, U ⊂ R^m, f : R^n × R^m → R^n, and S ⊂ R^+, we define the forward reachable set FR_f(X_0, Ω, U, S) as the set of all states reachable from X_0 at some time t ∈ S along a solution map that remains in Ω under some input taking values in U; that is,

FR_f(X_0, Ω, U, S) := {φ_f(x_0, t, u) : x_0 ∈ X_0, t ∈ S, u with u(s) ∈ U and φ_f(x_0, s, u) ∈ Ω for all s ∈ [0, t]}.

Lemma 3. For given {c, g, f, Ω, U, T} ∈ M_Lip, let V_1 be a VF of the state constrained family of OCPs associated with {c, g, f, Ω, U, T}, and let V_2 be a VF of the state unconstrained family associated with {c, g, f, R^n, U, T}. If Λ ⊆ Ω is such that

FR_f(Λ, R^n, U, [0, T]) ⊆ Ω,  (36)

then V_1(x,t) = V_2(x,t) for all (x,t) ∈ Λ × [0, T].

Proof. To show V_1(x,t) = V_2(x,t) for all (x,t) ∈ Λ × [0,T] we must prove U_{Ω,U,f,T}(x,t) = U_{R^n,U,f,T}(x,t) for all (x,t) ∈ Λ × [0,T]. For any (x,t) ∈ Λ × [0,T], if u ∈ U_{Ω,U,f,T}(x,t) then clearly u ∈ U_{R^n,U,f,T}(x,t); thus U_{Ω,U,f,T}(x,t) ⊆ U_{R^n,U,f,T}(x,t). On the other hand, if u ∈ U_{R^n,U,f,T}(x,t), then by Defn. 4 it follows that u(s) ∈ U for all s ∈ [t,T] and that there exists a unique map, denoted by φ_f(x, s, u), satisfying the following for all s ∈ [t, T]:

∂φ_f(x, s − t, τ_t u)/∂s = f(φ_f(x, s − t, τ_t u), u(s)), φ_f(x, 0, τ_t u) = x.

To show u ∈ U_{Ω,U,f,T}(x,t) we need φ_f(x, s − t, τ_t u) ∈ Ω for all s ∈ [t, T], which is equivalent to

φ_f(x, s, ũ) ∈ Ω for all s ∈ [0, T − t],  (37)

where ũ = τ_t u ∈ U_{R^n,U,f,T−t}(x, 0). Eq. (37) then follows trivially from Eq. (36).
Alternative sufficient conditions that imply a VF, associated with some family of state constrained OCPs, is Lipschitz continuous and the unique viscosity solution of the HJB PDE include: the Inward Pointing Constraint Qualification (IPCQ) [44] [45], the Outward Pointing Constraint Qualification (OPCQ) [46], and epigraph characterization of VF's [47].
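In practice, condition (36) can be stress-tested by simulation before being assumed: sample initial states in Λ and inputs, and check that no trajectory exits Ω over [0, T]. The sketch below (Ω taken as a ball of radius R; the samplers are user-supplied assumptions) can falsify (36) but, being sampling-based, cannot prove it.

```python
import numpy as np
from scipy.integrate import solve_ivp

def stress_test_containment(f, sample_x0, sample_u, R, T, trials=200, seed=0):
    """Return False if some sampled trajectory leaves the ball B(0, R)."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x0 = sample_x0(rng)          # draw an initial state in Lambda
        u = sample_u(rng)            # draw an input signal, u(t) in U
        sol = solve_ivp(lambda t, x: f(x, u(t)), (0.0, T), x0,
                        max_step=T / 100)
        if np.any(np.linalg.norm(sol.y, axis=0) > R):
            return False
    return True
```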
b) Approximation of VFs by dissipative polynomials: Considering a family of OCPs {c, g, f , Ω,U, T } ∈ M Lip , and assuming there exists a set Λ ⊆ Ω that satisfies Eq. (36), we now prove the existence of dissipative polynomial functions that can approximate the any VF of {c, g, f , Ω,U, T } ∈ M Lip arbitrarily well under the Sobolev norm.
Theorem 3. For given {c, g, f, Ω, U, T} ∈ M_Lip suppose Λ ⊆ Ω is a bounded set that satisfies (36). Then for any function V satisfying Eq. (10), any 1 ≤ p < ∞, and any ε > 0 there exists V_l ∈ P(R^n × R, R) such that

||V − V_l||_{W^{1,p}(Λ×[0,T],R)} < ε,  (38)
sup_{(x,t)∈Λ×[0,T]} |V(x,t) − V_l(x,t)| < ε,  (39)
V_l(x,t) ≤ V(x,t) for all t ∈ [0,T] and x ∈ Ω,  (40)
∇_t V_l(x,t) + c(x,u,t) + ∇_x V_l(x,t)^T f(x,u) > 0 for all x ∈ Ω, t ∈ (0,T), u ∈ U,  (41)
V_l(x,T) < g(x) for all x ∈ Ω.  (42)
Proof. Let ε > 0. Suppose V satisfies Eq. (10). Rather than approximating V , defined for a family of OCPs on the compact set Ω, we instead approximate the unique VF, denoted by V * , associated with the family of OCPs where Ω = R n . It is easier to approximate V * compared to V as V * has the following useful properties: By Lemma 1, V * is locally Lipschitz continuous; and by Theorem 1, V * is the unique viscosity solution of the HJB PDE (12). Furthermore, as Λ satisfies Eq. (36), Lemma 3 implies
V*(x,t) = V(x,t) for all (x,t) ∈ Λ × [0, T].  (43)
This proof is structured as follows. We first use Prop. 3 to approximate V * by an infinitely differentiable function denoted as J δ . Then using Theorem 6, found in Appendix XIII, we approximate J δ by a polynomial P δ . Finally, to ensure Inequalities (41) and (42) are satisfied, a correction term ρ is subtracted from P δ , creating the function V l (x,t) := P δ (x,t) − ρ(t) that we show satisfies Eqs. (38) to (42).
Since Ω is compact, there exists some open bounded set E ⊂ R^{n+1} of finite measure which contains Ω × (0,T). Since V* ∈ LocLip(R^n × R, R) (by Lemma 1) and E is bounded, it follows that V* ∈ Lip(E, R). Then, by Rademacher's theorem (see Theorem 7 in Section XIII), V* is differentiable almost everywhere in E. Moreover, as V* is the unique viscosity solution to the HJB PDE, the following holds for all u ∈ U and almost everywhere in (x,t) ∈ Ω × (0,T) ⊂ E:

∇_t V*(x,t) + c(x,u,t) + ∇_x V*(x,t)^T f(x,u) ≥ ∇_t V*(x,t) + inf_{u∈U} {c(x,u,t) + ∇_x V*(x,t)^T f(x,u)} = 0.

This implies that the following holds for all u ∈ U:

ess inf_{(x,t)∈Ω×(0,T)} {∇_t V*(x,t) + ∇_x V*(x,t)^T f(x,u) + c(x,u,t)} ≥ 0.

Therefore, we conclude that V* satisfies Eq. (23). Thus, by Prop. 3, for any δ > 0 there exists J_δ ∈ C^∞(E, R) such that

||V* − J_δ||_{W^{1,p}(E,R)} < δ,  (44)
∇_t J_δ(x,t) + ∇_x J_δ(x,t)^T f(x,u) + c(x,u,t) ≥ −δ for all (x,t) ∈ Ω × (0,T) and u ∈ U.  (45)
In particular, let us choose δ > 0 such that

δ < ε / (2 + (2 + 4T + 2MT)(T µ(Λ))^{1/p}),  (46)

where M := sup_{(x,u)∈Ω×U} ||f(x,u)||_2 < ∞ and µ(Λ) < ∞ is the Lebesgue measure of Λ.

We now approximate J_δ ∈ C^∞(E, R) by a polynomial function. Theorem 6, found in Section XIII, shows there exists P_δ ∈ P(R^n × R, R) such that for all (x,t) ∈ E

|J_δ(x,t) − P_δ(x,t)| < δ,  (47)
|∇_t J_δ(x,t) − ∇_t P_δ(x,t)| < δ,  (48)
||∇_x J_δ(x,t) − ∇_x P_δ(x,t)||_2 < δ,  (49)
||J_δ − P_δ||_{W^{1,p}(E,R)} < δ.  (50)

Now,

||V* − P_δ||_{W^{1,p}(E,R)} = ||V* − J_δ + J_δ − P_δ||_{W^{1,p}(E,R)} ≤ ||V* − J_δ||_{W^{1,p}(E,R)} + ||J_δ − P_δ||_{W^{1,p}(E,R)} < 2δ,  (51)
where the first inequality follows by the triangle inequality, and the second inequality follows from Eq. (44) and Eq. (50). By a similar argument to Inequality (51) we deduce

sup_{(x,t)∈E} |V*(x,t) − P_δ(x,t)| < 2δ.  (52)

Furthermore,

∇_t P_δ(x,t) + ∇_x P_δ(x,t)^T f(x,u) + c(x,u,t)
≥ ∇_t P_δ(x,t) + ∇_x P_δ(x,t)^T f(x,u) + c(x,u,t) − δ − [∇_t J_δ(x,t) + ∇_x J_δ(x,t)^T f(x,u) + c(x,u,t)]
= −δ + [∇_t P_δ(x,t) − ∇_t J_δ(x,t)] − [∇_x J_δ(x,t) − ∇_x P_δ(x,t)]^T f(x,u)
> −δ − δ − ||∇_x J_δ(x,t) − ∇_x P_δ(x,t)||_2 ||f(x,u)||_2
> −(2 + M)δ for all (x,t) ∈ Ω × (0,T),  (53)

where the first inequality of Eq. (53) follows by Eq. (45), the second inequality follows by Eq. (48) and the Cauchy-Schwarz inequality, and the third inequality follows by Eq. (49). Moreover, we have that

P_δ(x,T) = P_δ(x,T) − V*(x,T) + V*(x,T) < g(x) + 2δ for all x ∈ Ω.  (54)

This inequality follows from the fact that V*(x,T) = g(x), since V* satisfies the boundary condition in the HJB PDE (12), and from Eq. (52). We now construct V_l from P_δ. Let us denote the correction function ρ(t) := (2 + M)(T − t)δ + 2δ, where M = sup_{(x,u)∈Ω×U} ||f(x,u)||_2. We define V_l as

V_l(x,t) := P_δ(x,t) − ρ(t).  (55)

We now find that V_l satisfies Inequality (41), since

∇_t V_l(x,t) + c(x,u,t) + ∇_x V_l(x,t)^T f(x,u) = ∇_t P_δ(x,t) + ∇_x P_δ(x,t)^T f(x,u) + c(x,u,t) + (2 + M)δ > 0 for all (x,t) ∈ Ω × (0,T),

where the above inequality follows from Eq. (53). We next show V_l satisfies Inequality (42):

V_l(x,T) = P_δ(x,T) − 2δ < g(x) for all x ∈ Ω,

where the above inequality follows by Eq. (54). Now, since V_l satisfies Eqs. (41) and (42), it follows that V_l satisfies Eq. (40) by Prop. 1.
To show that V_l satisfies Inequality (38), we first derive a bound on the norm of the correction function ρ:

||ρ||_{W^{1,p}(Λ×[0,T],R)} = (∫_{Λ×[0,T]} |(2+M)(T−t)δ + 2δ|^p dx dt)^{1/p} + (∫_{Λ×[0,T]} |(2+M)δ|^p dx dt)^{1/p} ≤ (2 + 4T + 2MT)(T µ(Λ))^{1/p} δ.

Combining this bound with Eqs. (43), (51), and (52), the triangle inequality, and the choice of δ in Eq. (46), it follows that V_l satisfies Inequalities (38) and (39), completing the proof.

Proposition 4. For given {c, g, f, Ω, U, T} ∈ M_Lip, suppose Λ ⊆ Ω satisfies Eq. (36) and w ∈ L^1(Λ × [0,T], R^+). Then

lim_{d→∞} ∫_{Λ×[0,T]} w(x,t) |V(x,t) − J_d(x,t)| dx dt = 0,  (57)

where V is any function satisfying Eq. (10), and J_d ∈ P_d(R^n × R, R) is any solution to Optimization Problem (20) for d ∈ N.
Proof. Suppose V satisfies the theorem statement. To show Eq. (57) we must show that for any ε > 0 there exists N ∈ N such that
∫_{Λ×[0,T]} w(x,t) |V(x,t) − J_d(x,t)| dx dt < ε for all d ≥ N.
Since by assumption Λ satisfies Eq. (36), we can use Theorem 3 (from Section VI-B) to show that for any δ > 0 there exists dissipative V l ∈ P(R n × R, R) feasible to Optimization Problem (20) and is such that
ess sup (x,t)∈Λ×[0,T ] |V (x,t) −V l (x,t)| < δ .
For our given ε > 0, by selecting δ < ε / ∫_{Λ×[0,T]} w(x,t) dx dt (note that if ∫_{Λ×[0,T]} w(x,t) dx dt = 0 then Eq. (57) already holds and the proof is complete), we obtain a V_l such that

∫_{Λ×[0,T]} w(x,t) |V(x,t) − V_l(x,t)| dx dt ≤ (∫_{Λ×[0,T]} w(x,t) dx dt) ess sup_{(x,t)∈Λ×[0,T]} |V(x,t) − V_l(x,t)| < δ ∫_{Λ×[0,T]} w(x,t) dx dt < ε.  (58)

Now define N := deg(V_l) and denote the solution to Problem (20) for d ≥ N as J_d ∈ P_d(R^n × R, R). As V_l is feasible for Problem (20) for all d ≥ N, it follows that the objective function evaluated at J_d is greater than or equal to the objective function evaluated at V_l; that is,

∫_{Λ×[0,T]} w(x,t) J_d(x,t) dx dt ≥ ∫_{Λ×[0,T]} w(x,t) V_l(x,t) dx dt for d ≥ N.  (59)

Now,

∫_{Λ×[0,T]} w(x,t) |V(x,t) − J_d(x,t)| dx dt = ∫_{Λ×[0,T]} [w(x,t) V(x,t) − w(x,t) J_d(x,t)] dx dt ≤ ∫_{Λ×[0,T]} w(x,t) |V(x,t) − V_l(x,t)| dx dt < ε for all d ≥ N.  (60)

For our SOS implementation we consider a special class of OCPs, given next in Defn. 11.

Definition 11. We say the six-tuple {c, g, f, Ω, U, T} is a polynomial optimal control problem, or {c, g, f, Ω, U, T} ∈ M_Poly, if the following hold:
1) c ∈ P(R^n × R^m × R, R) and g ∈ P(R^n, R);
2) f ∈ P(R^n × R^m, R^n);
3) there exists h_Ω ∈ P(R^n, R) such that Ω = {x ∈ R^n : h_Ω(x) ≥ 0};
4) there exists h_U ∈ P(R^m, R) such that U = {u ∈ R^m : h_U(u) ≥ 0}.
Note polynomials are locally Lipschitz continuous, that is P(R n ×R, R) ⊂ LocLip(R n ×R, R). Therefore M Poly ⊂ M Lip .
For given {c, g, f, Ω, U, T} ∈ M_Poly, d ∈ N, Λ ⊂ R^n, and w ∈ L^1(Λ × [0,T], R^+), we thus propose an SOS tightening of Optimization Problem (20) as follows:

P_d ∈ arg max_{P ∈ P_d(R^n×R,R)} c^T α  (61)

subject to:
k_0, k_1 ∈ ∑_{SOS}^d, s_i ∈ ∑_{SOS}^d for i = 0, 1, 2, 3,
P(x,t) = c^T Z_d(x,t),
k_0(x) = g(x) − P(x,T) − s_0(x) h_Ω(x),
k_1(x,u,t) = ∇_t P(x,t) + c(x,u,t) + ∇_x P(x,t)^T f(x,u) − s_1(x,u,t) h_Ω(x) − s_2(x,u,t) h_U(u) − s_3(x,u,t) · t(T − t),

where α_i = ∫_{Λ×[0,T]} w(x,t) Z_{d,i}(x,t) dx dt for i ∈ {1, ..., N_d}. Note that solutions to Problem (61) are not necessarily feasible for Problem (20) due to the strict inequalities of the latter problem.
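The SOS constraints in (61) are what make the problem an SDP: a polynomial is declared SOS by writing it as z^T Q z with Q ⪰ 0 and matching coefficients. The toy cvxpy script below (our own example, not the paper's solver setup; in practice tools such as SOSTOOLS or YALMIP automate this step) certifies a single univariate polynomial this way.

```python
import cvxpy as cp

# Certify p(x) = x^4 + 2x^2 + 1 as SOS: find Q >= 0 with p = z^T Q z,
# where z = [1, x, x^2]. Constraints match coefficients of 1, x, ..., x^4.
Q = cp.Variable((3, 3), PSD=True)
constraints = [
    Q[0, 0] == 1,                 # constant term
    2 * Q[0, 1] == 0,             # x
    2 * Q[0, 2] + Q[1, 1] == 2,   # x^2
    2 * Q[1, 2] == 0,             # x^3
    Q[2, 2] == 1,                 # x^4
]
prob = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility SDP
prob.solve()
print(prob.status)                # "optimal" => an SOS certificate exists
```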
A. We Can Numerically Construct A Sequence Of Polynomials That Converges To The VF
For a given family of OCPs, we now show that the sequence of solutions to the SOS Problems (61) converges locally to the VF of the associated OCPs with respect to the L^1 norm.

Proposition 5. For given {c, g, f, Ω, U, T} ∈ M_Poly, suppose Λ ⊆ Ω is a bounded set satisfying Eq. (36) and w ∈ L^1(Λ × [0,T], R^+). Then

lim_{d→∞} ∫_{Λ×[0,T]} w(x,t) |V(x,t) − P_d(x,t)| dx dt = 0,  (62)

where V is any function satisfying Eq. (10) and P_d ∈ P_d(R^n × R, R) is any solution to Problem (61) for d ∈ N.
Proof. To show Eq. (62) we show that for any ε > 0 there exists N ∈ N such that for all d ≥ N
∫_{Λ×[0,T]} w(x,t) |V(x,t) − P_d(x,t)| dx dt < ε.
As it is assumed Λ satisfies Eq. (36) we are able to use Prop. 4 that shows for any ε > 0 there exists N 1 ∈ N such that for all d ≥ N 1
∫_{Λ×[0,T]} w(x,t) |V(x,t) − J_d(x,t)| dx dt < ε,  (63)
where J d is a solution to Optimization Problem (20) for d ∈ N.
In particular let us fix some d 1 ≥ N 1 . Since J d 1 solves Problem (20) it must satisfy the constraints of Problem (20). Thus we have
k_0(x) := g(x) − J_{d_1}(x,T) > 0 for all x ∈ Ω,
k_1(x,u,t) := ∇_t J_{d_1}(x,t) + c(x,u,t) + ∇_x J_{d_1}(x,t)^T f(x,u) > 0 for all (x,u,t) ∈ Ω × U × [0,T].

Since k_0 and k_1 are strictly positive functions over the compact semialgebraic set Ω × U × [0,T] = {(x,u,t) ∈ R^{n+m+1} : h_Ω(x) ≥ 0, h_U(u) ≥ 0, t(T−t) ≥ 0}, Putinar's Positivstellensatz (stated in Theorem 8, Appendix XIII) shows that there exist s_0, s_1, s_2, s_3, s_4, s_5 ∈ ∑_{SOS} such that

k_0 − h_Ω s_0 = s_1,  (64)
k_1 − h_Ω s_2 − h_U s_3 − h_T s_4 = s_5, where h_T(t) := t(T − t).

Let N_2 := max_{i∈{0,1,2,3,4,5}} deg(s_i). By Eq. (64) it follows that J_{d_1} is feasible for Problem (61) for d ≥ max{d_1, N_2}. Therefore, for d ≥ max{d_1, N_2}, the objective function evaluated at the solution to Problem (61) must be greater than or equal to the objective function evaluated at J_{d_1}. That is, writing the solution to Problem (61) as P_d(x,t) = c_d^T Z_d(x,t) and writing J_{d_1} as J_{d_1}(x,t) = b_{d_1}^T Z_{d_1}(x,t), we get that for d ≥ max{d_1, N_2}

c_d^T α ≥ b_{d_1}^T α.  (65)
Now, using Eqs. (65) and (63), it follows that for all d ≥ max{d_1, N_2}

∫_{Λ×[0,T]} w(x,t) |V(x,t) − P_d(x,t)| dx dt = ∫_{Λ×[0,T]} w(x,t) V(x,t) dx dt − ∫_{Λ×[0,T]} w(x,t) P_d(x,t) dx dt
= ∫_{Λ×[0,T]} w(x,t) V(x,t) dx dt − c_d^T α
≤ ∫_{Λ×[0,T]} w(x,t) V(x,t) dx dt − b_{d_1}^T α
= ∫_{Λ×[0,T]} w(x,t) |V(x,t) − J_{d_1}(x,t)| dx dt < ε,

where the above equalities follow using Prop. 1, which shows that P_d(x,t) ≤ V(x,t) and J_d(x,t) ≤ V(x,t) for all (x,t) ∈ Ω × [0,T], as P_d and J_d satisfy Inequalities (16) and (17).
B. We Can Numerically Construct A Sequence Of Sublevel Sets That Converge To The VF's Sublevel Set
For a given family of OCPs, Prop. 5 shows the SOS optimization problem, given in Eq. (61), yields a sequence of polynomials, {P d } d∈N , a sequence that converges to the VF (denoted by V ), where convergence is with respect to the L 1 norm, and where the VF is associated with the given family of OCPs. We next extend this convergence result by showing that, for any γ ∈ R, the sequence {P d } d∈N yields a sequence of γ-sublevel sets, where the sequence of γ-sublevel sets converges to the γ-sublevel set of the value function, V , where convergence is with respect to the volume metric.
For sets A, B ⊂ R^n, we denote the volume metric as D_V(A, B), where

D_V(A, B) := µ((A/B) ∪ (B/A)),  (66)

and where we recall µ(A) := ∫_{R^n} 1_A(x) dx is the Lebesgue measure. Note that Lemma 4 (Appendix XI) shows that D_V is a metric.

Proposition 6. For given {c, g, f, Ω, U, T} ∈ M_Poly, suppose Λ ⊆ Ω is a bounded set satisfying Eq. (36). Then for any γ ∈ R and t ∈ [0, T],

lim_{d→∞} D_V({x ∈ Λ : P_d(x,t) ≤ γ}, {x ∈ Λ : V(x,t) ≤ γ}) = 0,  (67)

where V is any function satisfying Eq. (10) and P_d ∈ P_d(R^n × R, R) is any solution to Problem (61) for d ∈ N.

Proof. From the definition of Problem (61), we have that P_d satisfies the inequalities in (16) and (17). Therefore, by Prop. 1, we have that P_d(x,t) ≤ V(x,t) for all (x,t) ∈ Ω × [0,T]. We now apply Prop. 7 (Section XI), together with the convergence established in Prop. 5, to deduce Eq. (67).
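Although (66) is defined through Lebesgue measure, in examples the value of D_V between two sublevel sets is easy to estimate by Monte Carlo over a box containing both sets, as in the sketch below (our own estimator, not from the paper; the error shrinks like the inverse square root of the sample count).

```python
import numpy as np

def volume_metric_mc(F, G, gamma, lo, hi, n=100_000, seed=0):
    """Monte Carlo estimate of D_V({F <= gamma}, {G <= gamma}) over the
    box [lo, hi], which is assumed to contain both sublevel sets."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    X = rng.uniform(lo, hi, size=(n, len(lo)))
    inA = np.array([F(x) <= gamma for x in X])
    inB = np.array([G(x) <= gamma for x in X])
    box_volume = np.prod(hi - lo)
    return box_volume * np.mean(inA ^ inB)   # measure of symmetric difference
```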
VIII. A PERFORMANCE BOUND ON CONTROLLERS CONSTRUCTED USING APPROXIMATION TO THE VF
Given an OCP, if an associated differentiable VF is known then a solution to the OCP can be constructed using Thm. 2. However, in general, it is challenging to find a VF analytically. Rather than computing a true VF, we consider a candidate VF which is "close" to a true VF under some norm. This motivates us to ask the question: how well will a controller constructed from a candidate VF perform? To answer this question we next define the loss/performance of an input. For state unconstrained OCPs, {c, g, f , R n ,U, T } ∈ M Lip , we denote the loss/performance function as,
L(x_0, u) := ∫_0^T c(φ_f(x_0, s, u), u(s), s) ds + g(φ_f(x_0, T, u)) − inf_{u∈U_{R^n,U,f,T}(x_0,0)} { ∫_0^T c(φ_f(x_0, s, u), u(s), s) ds + g(φ_f(x_0, T, u)) }.  (68)

Clearly, L(x_0, u) ≥ 0 for all (x_0, u) ∈ Ω × U_{R^n,U,f,T}(x_0, 0).

Theorem 4. Consider the family of OCPs associated with {c, g, f, R^n, U, T} ∈ M_Lip, a compact set Ω ⊂ R^n, and x_0 ∈ R^n such that FR_f({x_0}, R^n, U, [0, T]) ⊆ Ω. Then for any function J ∈ C^2(R^n × R, R) we have that

L(x_0, u_J) ≤ C ||J − V*||_{W^{1,∞}(Ω×[0,T],R)},  (69)

where

C := 2 max{1, T, T max_{1≤i≤n} sup_{(x,u)∈Ω×U} |f_i(x,u)|},  (70)
u_J(t) = k_J(φ_f(x_0, t, u_J), t),  (71)
and k_J(x,t) ∈ arg inf_{u∈U} {c(x,u,t) + ∇_x J(x,t)^T f(x,u)}.  (72)
Proof. For any J ∈ C^2(R^n × R, R) ⊂ LocLip(R^n × R, R), we wish to show that Eq. (69) holds. To do this, we will show that J is the true VF for some modified OCP. Before constructing this modified OCP, for any F ∈ LocLip(R^n × R, R), let us define

H_F(x,t,u) := ∇_t F(x,t) + c(x,u,t) + ∇_x F(x,t)^T f(x,u),
H̄_F(x,t) := inf_{u∈U} H_F(x,t,u),

where ∇_t F and ∇_x F are weak derivatives, known to exist by Rademacher's Theorem (Thm. 7). Then, by construction, J satisfies the following PDE:

∇_t J(x,t) + inf_{u∈U} {c(x,u,t) − H̄_J(x,t) + ∇_x J(x,t)^T f(x,u)} = 0 for all (x,t) ∈ R^n × [0, T].  (73)

That is, J satisfies the HJB PDE of the modified family of OCPs associated with {c̃, g̃, f, R^n, U, T}, where c̃(x,u,t) := c(x,u,t) − H̄_J(x,t) and g̃(x) := J(x,T). Since H̄_J is independent of u ∈ U, we have that

arg inf_{u∈U} {c̃(x,u,t) + ∇_x J(x,t)^T f(x,u)} = arg inf_{u∈U} {c(x,u,t) + ∇_x J(x,t)^T f(x,u)},

and therefore we are able to deduce by Theorem 2 that u_J (given in Eq. (71)) solves the modified OCP associated with {c̃, g̃, f, R^n, U, T} with initial condition x_0 ∈ R^n. Thus for all
u ∈ U R n ,U, f ,T (x 0 , 0) we have that T 0c (φ f (x 0 , s, u J ), u J (s), s)ds +g(φ f (x 0 , T, u J ))(74)
≤ T 0c (φ f (x 0 , s, u), u(s), s)ds +g(φ f (x 0 , T, u)).
By substituting c̃(x, u, t) = c(x, u, t) − H̃_J(x, t) and g̃(x) = J(x, T) into Inequality (74) and noting that V*(x, T) = g(x), we have the following for all u ∈ U_{R^n,U,f,T}(x_0, 0):

∫_0^T c(φ_f(x_0, s, u_J), u_J(s), s) ds + g(φ_f(x_0, T, u_J))
 − ∫_0^T c(φ_f(x_0, s, u), u(s), s) ds − g(φ_f(x_0, T, u))   (75)
 ≤ ∫_0^T [H̃_J(φ_f(x_0, s, u_J), s) − H̃_J(φ_f(x_0, s, u), s)] ds
  + V*(φ_f(x_0, T, u_J), T) − J(φ_f(x_0, T, u_J), T)
  + J(φ_f(x_0, T, u), T) − V*(φ_f(x_0, T, u), T)
 < T ess sup_{s∈[0,T]} {H̃_J(φ_f(x_0, s, u_J), s) − H̃_J(φ_f(x_0, s, u), s)}
  + 2 sup_{(y,s)∈Ω×[0,T]} {|V*(y, s) − J(y, s)|}.

The second and third inequalities of Eq. (75) follow because φ_f(x_0, t, u) ∈ Ω for all (t, u) ∈ [0, T] × U_{R^n,U,f,T}(x_0, 0) (since it is assumed FR_f({x_0}, R^n, U, [0, T]) ⊆ Ω), and because sup_{y∈Ω} {|V*(y, T) − J(y, T)|} = ess sup_{y∈Ω} {|V*(y, T) − J(y, T)|} holds by Lemma 9 (since V* and J are both continuous, and Ω is open). We now split the remainder of the proof into three parts. In Part 1, we derive an upper bound for ess sup_{(y,s)∈Ω×[0,T]} {H̃_J(y, s)}. In Part 2, we find a lower bound for ess inf_{(y,s)∈Ω×[0,T]} {H̃_J(y, s)}. In Part 3 we use these two bounds, combined with Inequality (75), to verify Eq. (69) and complete the proof.
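The feedback law in Eqs. (71)-(72) only requires pointwise minimization over U. A minimal numerical sketch follows, assuming a finite input grid, a finite-difference gradient, and toy problem data (the candidate VF J, the cost c, and the dynamics f are all placeholders we chose for illustration; none of this comes from the paper).

```python
import numpy as np

def grad_x(J, x, t, eps=1e-6):
    """Central finite-difference approximation of grad_x J(x, t)."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (J(x + e, t) - J(x - e, t)) / (2 * eps)
    return g

def k_J(J, c, f, U_grid, x, t):
    """Pick u minimizing c(x,u,t) + grad_x J(x,t)^T f(x,u) over U_grid (Eq. (72))."""
    g = grad_x(J, x, t)
    vals = [c(x, u, t) + g @ f(x, u) for u in U_grid]
    return U_grid[int(np.argmin(vals))]

# Toy instance: scalar dynamics f(x,u) = x*u, zero running cost, and a
# hand-picked candidate VF J -- illustrative assumptions only.
J = lambda x, t: float(np.exp(t - 1) * x[0])
c = lambda x, u, t: 0.0
f = lambda x, u: np.array([x[0] * u[0]])
U_grid = [np.array([u]) for u in np.linspace(-1, 1, 41)]
print(k_J(J, c, f, U_grid, np.array([0.5]), 0.0))  # expect u = -1 here
```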
IX. NUMERICAL EXAMPLES: USING OUR SOS ALGORITHM TO APPROXIMATE VFS
In this section we use the SOS programming problem as defined in Eq. (61) to numerically approximate the VFs associated with several different OCPs. We first approximate a known VF. Then, in Subsection IX-A, we approximate an unknown VF and use this approximation to construct a controller and analyze its performance. Then, in Subsection IX-B, we approximate another unknown VF for reachable set estimation.
Example 1. Let us consider the tuple {c, g, f, Ω, U, T} ∈ M_Poly, where c(x, u, t) ≡ 0, g(x) = x, f(x, u) = xu, Ω = (−R, R) = {x ∈ R : x^2 < R^2}, U = (−1, 1) = {u ∈ R : u^2 < 1}, and T = 1. It was shown in [4] that the VF associated with {c, g, f, R^n, U, T} can be analytically found as
V*(x, t) =
 exp(t − 1)x, if x > 0,
 exp(1 − t)x, if x < 0,   (85)
 0, if x = 0.
We note that V* is not differentiable at x = 0. However, V* satisfies the HJB PDE away from x = 0. This problem shows that the VF can be non-smooth even for simple OCPs with polynomial vector fields and cost functions.
In Fig. 1 we have plotted the point-wise error, e(x,t) := V*(x,t) − P_d(x,t), where P_d is the solution to the SOS Optimization Problem (61) for d = 16, T = 1, Λ = [−0.5, 0.5], w(x,t) ≡ 1, h_Ω(x) = 2.4^2 − x^2 and h_U(u) = 1 − u^2. The figure shows e(x,t) ≥ 0 for all (x,t) ∈ [−0.5, 0.5] × [0, 1], verifying that, as expected by Prop. 1, P_d is a sub-VF. Moreover, 0 < e(x,t) < 0.1125 for all (x,t) ∈ [−0.5, 0.5] × [0, 1], implying ||V* − P_d||_∞ < 0.1125, showing that we get a tight VF approximation in the L^∞ norm (even though we optimize for the L^1 norm). In Fig. 2 we have plotted the function F(d) := ||V* − P_d||_{L^1(Λ×[0,T],R)}, where V* is given in Eq. (85) and P_d is the solution to the SOS Optimization Problem (61) for d = 4 to 24, where Λ = [−0.5, 0.5], w(x,t) ≡ 1, h_Ω(x) = 2.4^2 − x^2 and h_U(u) = 1 − u^2. All solutions, P_d, of Problem (61) were sub-value functions as expected. Moreover, the figure shows that by increasing the degree d ∈ N the resulting sub-VF, P_d, better approximates V*; however, convergence slows after d = 5.
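For reference, the closed-form VF of Eq. (85) is trivial to evaluate. The sketch below (our addition) tabulates V* on the same domain used in Figures 1 and 2 and checks the sub-VF inequality e(x, t) ≥ 0 for a candidate P, where P here is a crude hand-picked lower bound rather than an actual solution of (61).

```python
import numpy as np

def V_star(x, t):
    """Analytic VF of Example 1, Eq. (85)."""
    return np.where(x > 0, np.exp(t - 1) * x,
           np.where(x < 0, np.exp(1 - t) * x, 0.0))

# Placeholder candidate sub-VF (NOT a solution of (61); chosen to lie below
# V_star on the whole domain so the check passes).
P = lambda x, t: x - 2.0

xs = np.linspace(-0.5, 0.5, 101)
ts = np.linspace(0.0, 1.0, 101)
X, T = np.meshgrid(xs, ts)
e = V_star(X, T) - P(X, T)                 # point-wise error e(x, t)
print("min e =", e.min(), "(sub-VF check: should be >= 0)")
print("approx L1 error =", e.mean() * 1.0)  # mean times domain area (1 x 1)
```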
A. Using SOS Programming To Construct Polynomial Sub-Value Functions For Controller Synthesis
Given an OCP, in Theorem 4 we showed that the performance of a controller constructed from a candidate VF is bounded by the W^{1,∞} norm between the true VF of the OCP and the candidate VF. We next demonstrate through numerical examples that the performance of a controller constructed from a typical solution to the SOS Problem (61) is significantly better than that predicted by this bound.
Consider a tuple {c, g, f, R^n, U, T} ∈ M_Poly, where the cost function is of the form c(x, u, t) = c_0(x, t) + ∑_{i=1}^m c_i(x, t)u_i, the dynamics are of the form f(x, u) = f_0(x) + ∑_{i=1}^m f_i(x)u_i, and the input constraints are of the form U = [a_1, b_1] × ... × [a_m, b_m]. Since any rectangular set can be represented as U = [−1, 1]^m using the substitution ũ_i = (2u_i − a_i − b_i)/(b_i − a_i) for i ∈ {1, ..., m}, without loss of generality we assume U = [−1, 1]^m. Now, given an OCP associated with {c, g, f, R^n, U, T} ∈ M_Poly, suppose V ∈ C^1(R^n × R, R) solves the HJB PDE (12); then by Theorem 2 a solution to the OCP initialized at x_0 ∈ R^n can be found as
u*(t) := k(φ_f(x_0, t, u*), t), where   (86)

k(x, t) ∈ arg inf_{u∈[−1,1]^m} ∑_{i=1}^m [c_i(x, t)u_i + ∇_x V(x, t)^T f_i(x)u_i].   (87)
Since the objective function in Eq. (87) is linear in the decision variables u ∈ R^m, and since the constraints have the form u_i ∈ [−1, 1], it follows that Eqs. (86) and (87) can be reformulated as
u*(t) := k(φ_f(x_0, t, u*), t), where   (88)

k_i(x, t) = −sign(c_i(x, t) + ∇_x V(x, t)^T f_i(x)).   (89)
In the following numerical examples we approximately solve OCPs of this form (with cost functions and dynamics affine in the input variable) by constructing a controller from the solution P to the SOS Problem (61). We construct such controllers by replacing V with P in Eqs. (88) and (89). We will consider OCPs with no state constraints and initial conditions inside some set Λ ⊆ R^n. We select Ω = B(0, R) with R > 0 sufficiently large that Eq. (36) is satisfied. That is, no matter what control we use, the solution map starting from any x_0 ∈ Λ will not be able to leave the state constraint set Ω. In this case the solution to the state constrained problem, {c, g, f, Ω, U, T}, is equivalent to the solution of the state unconstrained problem, {c, g, f, R^n, U, T}.
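Concretely, Eqs. (88)-(89) close the loop with a sign evaluation along the trajectory. The sketch below is our illustration using scipy; the gradient ∇_x P, the cost terms, and the double-integrator data are placeholder assumptions, not outputs of the SOS program.

```python
import numpy as np
from scipy.integrate import solve_ivp

def make_closed_loop(P_grad, c_list, f0, f_list):
    """Closed-loop RHS with k_i(x,t) = -sign(c_i(x,t) + grad_x P(x,t)^T f_i(x)),
    i.e. Eq. (89) with a sub-VF P in place of V."""
    def rhs(t, x):
        g = P_grad(x, t)                        # grad_x P(x, t)
        u = [-np.sign(ci(x, t) + g @ fi(x)) for ci, fi in zip(c_list, f_list)]
        return f0(x) + sum(ui * fi(x) for ui, fi in zip(u, f_list))
    return rhs

# Placeholder double-integrator data: f0 = [x2, 0], f1 = [0, 1], c1 = 0, and a
# hand-picked quadratic P = (x1^2 + x2^2)/2 -- illustrative assumptions only.
f0 = lambda x: np.array([x[1], 0.0])
f1 = lambda x: np.array([0.0, 1.0])
c1 = lambda x, t: 0.0
P_grad = lambda x, t: np.array([x[0], x[1]])
rhs = make_closed_loop(P_grad, [c1], f0, [f1])
sol = solve_ivp(rhs, (0.0, 5.0), [0.0, 1.0], max_step=0.01)
print(sol.y[:, -1])   # terminal state under the bang-bang feedback
```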
To evaluate the performance of our constructed controller, u, we approximate the objective/cost function of the OCP evaluated at u (i.e., the cost associated with u) using the Riemann sum:

∫_0^T c(φ_f(x_0, t, u), u(t), t) dt + g(φ_f(x_0, T, u))   (90)
 ≈ ∑_{i=1}^{N−1} c(φ_f(x_0, t_i, u), u(t_i), t_i) Δt_i + g(φ_f(x_0, t_N, u)),

where 0 = t_0 < ... < t_N = T, Δt_i = t_{i+1} − t_i for all i ∈ {1, ..., N − 1}, and {φ_f(x_0, t_i, u)}_{i=0}^N
can be found using Matlab's ode45 function.
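A direct implementation of the Riemann sum in Eq. (90) might look as follows; this is our sketch, assuming a precomputed state/input trajectory on a time grid, and the sanity check at the end uses hand-picked data rather than any of the paper's examples.

```python
import numpy as np

def riemann_cost(c, g, traj_x, traj_u, ts):
    """Approximate the cost in Eq. (90): states traj_x[i] and inputs traj_u[i]
    are sampled at the grid times ts[i]."""
    total = 0.0
    for i in range(len(ts) - 1):
        dt = ts[i + 1] - ts[i]
        total += c(traj_x[i], traj_u[i], ts[i]) * dt
    return total + g(traj_x[-1])

# Sanity check: int_0^5 x1 dt with x(t) = (t, 1), u = 0; exact value is 12.5.
ts = np.linspace(0.0, 5.0, 100_001)
traj_x = np.stack([ts, np.ones_like(ts)], axis=1)
traj_u = np.zeros_like(ts)
print(riemann_cost(lambda x, u, t: x[0], lambda x: 0.0, traj_x, traj_u, ts))
```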
Example 2. Let us consider the following OCP from [48]:
min_u ∫_0^5 x_1(t) dt   (91)

subject to:

(ẋ_1(t), ẋ_2(t))^T = (x_2(t), u(t))^T, u(t) ∈ [−1, 1] for all t ∈ [0, 5].
We associate this problem with the tuple {c, g, f, Ω, U, T} ∈ M_Poly, where c(x, t) = x_1, g(x) ≡ 0, f(x, u) = [x_2, u]^T, U = [−1, 1], and T = 5. By solving the SOS Optimization Problem (61) for d = 3, Λ = [−0.6, 0.6] × [−1, 1], w(x,t) ≡ 1, h_Ω(x) = 10^2 − x_1^2 − x_2^2 and h_U(u) = 1 − u^2, it is possible to obtain a polynomial sub-value function P. By replacing V with P in Eqs. (88) and (89) it is then possible to construct a controller, k_P, that yields a candidate solution to the OCP as ũ(t) = k_P(x(t), t).
For initial condition x_0 = [0, 1]^T we use Matlab's ode45 to find the set {φ_f(x_0, t, ũ) ∈ R^2 : t ∈ [0, T]} (recalling φ_f denotes the solution map (Defn. 4)), which is shown in the phase plot in Figure 3. For N = 10^8, Eq. (90) was used to find the cost associated with a fixed input, u(t) ≡ 1, as 354.17, whereas the cost of using u(t) ≡ −1 was found to be 41.67. The cost of using our derived input ũ was found to be 0.2721, an improvement when compared to the cost 0.2771 found in [48]. Note, it may be possible that the results of the algorithm proposed in [48] may be improved by selecting different tuning parameters of the algorithm. We have assumed that the authors of [48] have selected the tuning parameters for which their algorithm performs best.

Example 3. Consider an OCP found in [48] and [49] which has the same dynamics as Eq. (91) but a different cost function. The associated tuple is {c, g, f, Ω, U, T} ∈ M_Poly, where c(x, t) = x_1^2 + x_2^2, g(x) ≡ 0, f(x, u) = [x_2, u]^T, U = [−1, 1], and T = 5. By solving the SOS Optimization Problem (61) for d = 4, Λ = [−0.5, 1.1] × [−1.1, 0.5], w(x,t) ≡ 1, h_Ω(x) = 10^2 − x_1^2 − x_2^2 and h_U(u) = 1 − u^2, we obtain the polynomial sub-VF P. Similarly to Example 2 we construct a controller from the polynomial sub-VF P using Eqs. (88) and (89). Using Eq. (90), the fixed input u(t) ≡ +1 was found to have cost 446.03. The fixed input u(t) ≡ −1 cost was found to be 67.48. The controller derived from P was found to have cost 0.7255, an improvement compared to a cost of 0.75041 found in [49] and 0.8285 found in [48]. Note that in this numerical example the dynamics are linear and the cost function is quadratic. However, due to the input constraints this is not a classical LQR problem and hence cannot be trivially solved. Also note, it may be possible that the results of the algorithms proposed in [48], [49] may be improved by selecting different tuning parameters of the algorithms. We have assumed that the authors of [48], [49] have selected the tuning parameters for which their algorithms perform best.
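For the double-integrator dynamics shared by Examples 2 and 3, the whole workflow (closed-loop integration of Eq. (88) plus the cost estimate of Eq. (90)) fits in a few lines. In the sketch below the sub-VF gradient is a hand-picked placeholder rather than an actual solution of (61), so the printed costs only illustrate the procedure and should not be expected to match the values reported above.

```python
import numpy as np
from scipy.integrate import solve_ivp

def simulate_cost(u_of_xt, c, g, x0, T, n=5_000):
    """Integrate xdot = [x2, u] under feedback u_of_xt; return the Eq. (90) cost."""
    def rhs(t, x):
        return [x[1], u_of_xt(x, t)]
    ts = np.linspace(0.0, T, n)
    sol = solve_ivp(rhs, (0.0, T), x0, t_eval=ts, max_step=T / n)
    cost = 0.0
    for i in range(n - 1):
        x = sol.y[:, i]
        cost += c(x, u_of_xt(x, ts[i]), ts[i]) * (ts[i + 1] - ts[i])
    return cost + g(sol.y[:, -1])

c2 = lambda x, u, t: x[0]      # Example 2 running cost
g0 = lambda x: 0.0
x0, T = [0.0, 1.0], 5.0
# Toy feedback from Eq. (89) with placeholder grad_x P = (x1, x2) and c1 = 0,
# so that u = -sign(grad_x P . f1) = -sign(x2).
u_fb = lambda x, t: -np.sign(x[1])
for name, u in [("+1", lambda x, t: 1.0), ("-1", lambda x, t: -1.0), ("fb", u_fb)]:
    print(name, simulate_cost(u, c2, g0, x0, T))
```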
Example 4. As in [50] let us consider the (scaled) Van der Pol oscillator:
ẋ_1(t) = 2x_2(t),   (92)
ẋ_2(t) = 10x_2(t)(0.21 − 1.2^2 x_1(t)^2) − 0.8x_1(t) + u(t),
where u(t) ∈ [−1, 1]. Let us consider OCPs of Form (5) governed by the dynamics given in Eq. (92) with Ω = R^n, U = [−1, 1], and cost functions of the form c(x, u, t) = ||x − q||_2^2 and g(x) = ||x − q||_2^2, where q = [−0.4, 0] or q = [0, 0]. Clearly any solution to the OCP is an input u that forces the system's trajectories towards the point q ∈ R^2. By solving the SOS Optimization Problem (61) twice, for q = [−0.4, 0] and q = [0, 0], with d = 14, T = 10, f, c, g as defined previously, Λ = [−1, 1]^2, w(x,t) ≡ 1, h_Ω(x) = 2.1 − x_1^2 − x_2^2, and h_U(u) = 1 − u^2, we obtain polynomial sub-value functions P_1 and P_2 respectively. By replacing V with P_i, for i ∈ {1, 2}, in Eqs. (88) and (89) we then construct controllers, k_{P_i}, that yield candidate solutions to the OCPs, ũ_i(t) = k_{P_i}(x(t), t), i ∈ {1, 2}.
For initial condition x_0 = [0.75, 0.75]^T and terminal time T = 10 we use Matlab's ode45 to find the curves {φ_f(x_0, t, ũ_i) ∈ R^2 : t ∈ [0, T]} for i = 1, 2 (recalling φ_f denotes the solution map (Defn. 4)), which are shown as the blue and red curves respectively in the phase plot in Figure 4. Moreover, for comparison we have also plotted the solution trajectory under the fixed input u(t) ≡ 0 as the green curve, which demonstrates the shape of the Van der Pol limit cycle. As expected, the input ũ_1 drives the system to the point q = [−0.4, 0], with terminal state, shown as a black dot in Figure 4, of [−0.430, 0.112]. Moreover, the input ũ_2 drives the system to the point q = [0, 0] with terminal state [−0.012, 0.007]. Table I shows the T = 10 cost of using various inputs when q = [−0.4, 0] or q = [0, 0]. All costs were calculated using Eq. (90) for initial condition [0.75, 0.75]. The costs of using ũ_1 and ũ_2 are shown in the u_SOS row under columns q = [−0.4, 0] and q = [0, 0] respectively. As expected, the inputs derived using SOS outperform (have lower cost) the constant inputs.
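The uncontrolled (u ≡ 0) trajectory shown as the green curve in Figure 4 can be reproduced qualitatively from Eq. (92). The sketch below is our illustration; in particular, the squared x_1 in the damping term reflects the Van der Pol form assumed in our reconstruction of Eq. (92), and should be adjusted if the intended nonlinearity differs.

```python
import numpy as np
from scipy.integrate import solve_ivp

def vdp_rhs(t, x, u=0.0):
    """Scaled Van der Pol dynamics of Eq. (92) (x1^2 damping term assumed)."""
    dx1 = 2.0 * x[1]
    dx2 = 10.0 * x[1] * (0.21 - 1.2**2 * x[0]**2) - 0.8 * x[0] + u
    return [dx1, dx2]

sol = solve_ivp(vdp_rhs, (0.0, 10.0), [0.75, 0.75], max_step=0.01)
print("terminal state under u = 0:", sol.y[:, -1])
```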
B. Using SOS Programming to Construct Polynomial Sub-Value Functions For Reachable Sets Estimation
Appendix XII shows the sublevel sets of VFs characterize reachable sets. We now numerically solve the SOS programming problem in Eq. (61) obtaining an approximate VF that can be used to estimate the reachable set of the Lorenz system. The problem of estimating the Lorenz attractor has previously been studied in [51], [15], [52], [53], [54].
Example 5. Let us consider the Lorenz system defined by the three dimensional second order nonlinear ODE:
ẋ_1(t) = σ(x_2(t) − x_1(t)),
ẋ_2(t) = x_1(t)(ρ − x_3(t)) − x_2(t),   (93)
ẋ_3(t) = x_1(t)x_2(t) − βx_3(t),
where σ = 10, β = 8/3, ρ = 28. We make a coordinate change so the Lorenz attractor is located in a unit box by defining x̄ through

x_1 := 50x̄_1, x_2 := 50x̄_2, x_3 := 50x̄_3 + 25.   (94)
The ODE (93) can then be written in the form ẋ̄(t) = f̄(x̄(t), u(t)) using

f̄(x̄) = [50σ(x̄_2 − x̄_1), 50x̄_1(ρ − 50x̄_3 − 25) − 50x̄_2, 50^2 x̄_1x̄_2 − 50βx̄_3 − 25β]^T.
Note, as f̄ is independent of any input u ∈ U, without loss of generality we will set U = ∅. The problem of estimating the Lorenz attractor is then equivalent to the problem of estimating FR_f̄(R^n, R^n, U, {∞}). In this section we estimate FR_f̄(R^n, R^n, U, {∞}) by estimating FR_f̄(X_0, Λ, U, {T}) for some T < ∞, Λ ⊂ R^3, X_0 := {x ∈ R^3 : g(x) < 0}, and g ∈ P(R^n, R). Figure 5 shows the set {x ∈ R^3 : P(x, 0) < 0}, where P is the solution to the SOS Optimization Problem (61) for d = 10, T = 0.5, f(x) = −f̄(x) for all x ∈ Ω := {x ∈ R^n : h_Ω(x) ≥ 0} and f(x) = 0 for all x ∈ ∂Ω (freezing the dynamics on ∂Ω helps to ensure Eq. (36) is satisfied, improving numerical performance), h_U ≡ 0, h_Ω(x) = 2^2 − x_1^2 − x_2^2 − x_3^2, c ≡ 0, g(x) = (x_1 + 0.6)^2 + (x_2 − 0.6)^2 + (x_3 − 0.2)^2 − 0.1^2, Λ = [−0.4, 0.4] × [−0.5, 0.5] × [−0.4, 0.6], and w(x,t) = δ(t), where δ is the Dirac delta function. Prop. 1 shows P is a sub-VF. Then Cor. 1 shows BR_f(X_0, Λ, U, {T}) ⊆ {x ∈ R^3 : P(x, 0) < 0}, and hence FR_f̄(X_0, Λ, U, {T}) = BR_f(X_0, Λ, U, {T}) ⊆ {x ∈ R^3 : P(x, 0) < 0} by Lem. 6. Thus the 0-sublevel set of P contains the forward reachable set. Moreover, Figure 5 provides numerical evidence that the 0-sublevel set of P approximates the Lorenz attractor accurately.
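The experiment behind Figure 5 amounts to sampling initial conditions in Λ, propagating them for t = 0.5 under the scaled dynamics, and testing membership in the 0-sublevel set of P. The sketch below is ours; since the actual degree-10 SOS solution is not reproduced here, P(x, 0) is replaced by a placeholder ball constraint purely to show the mechanics.

```python
import numpy as np
from scipy.integrate import solve_ivp

sigma, beta, rho = 10.0, 8.0 / 3.0, 28.0

def f_bar(t, x):
    """Scaled Lorenz dynamics: Eq. (93) after the change of variables (94)."""
    return [50.0 * sigma * (x[1] - x[0]),
            50.0 * x[0] * (rho - 50.0 * x[2] - 25.0) - 50.0 * x[1],
            50.0**2 * x[0] * x[1] - 50.0 * beta * x[2] - 25.0 * beta]

# Placeholder sublevel test; a real run would evaluate the SOS solution P(x, 0).
P0 = lambda x: (x ** 2).sum() - 2.0**2      # stand-in: the ball h_Omega >= 0

rng = np.random.default_rng(1)
x0s = rng.uniform([-0.4, -0.5, -0.4], [0.4, 0.5, 0.6], size=(100, 3))
ends = [solve_ivp(f_bar, (0.0, 0.5), x0, rtol=1e-8).y[:, -1] for x0 in x0s]
print("fraction of propagated points with P(x,0) < 0:",
      np.mean([P0(x) < 0 for x in ends]))
```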
Note, given an OCP with VF denoted by V*, Prop. 5 shows that the sequence of polynomial solutions to the SOS Problem (61), indexed by d ∈ N, converges to V* with respect to the L^1 norm as d → ∞. Moreover, Prop. 6 shows that this sequence of polynomial solutions yields a sequence of sublevel sets that converges to {x ∈ R^n : V*(x, 0) ≤ 0} with respect to the volume metric as d → ∞. However, Theorem 5 shows reachable sets are characterized by the "strict" sublevel sets of VFs, {x ∈ R^n : V*(x, 0) < 0}. Counterexample 1 (Appendix XI) shows that a sequence of functions that converges to some function V with respect to the L^1 norm may not yield a sequence of "strict" sublevel sets that converges to the "strict" sublevel set of V. Therefore we conclude that the sequence of "strict" sublevel sets obtained by solving the SOS Problem (61) may in general not converge to the desired reachable set. However, in practice there is often little difference between the sets {x ∈ R^n : V*(x, 0) ≤ 0} and {x ∈ R^n : V*(x, 0) < 0}. Example 5 shows how accurate estimates of reachable sets can be obtained by solving the SOS Problem (61). Moreover, these reachable set estimations are guaranteed to contain the true reachable set by Cor. 1, a property useful in safety analysis [11].
X. CONCLUSION

For a given optimal control problem, we have proposed a sequence of SOS programming problems, each instance of which yields a polynomial, and where the polynomials become increasingly tight approximations to the true value function of the optimal control problem with respect to the L^1 norm. Moreover, the sublevel sets of these polynomials become increasingly tight approximations to the sublevel sets of the true value function with respect to the volume metric. Furthermore, we have also shown that a controller can be constructed from a candidate value function that performs arbitrarily close to optimality when the candidate value function approximates the true value function arbitrarily well with respect to the W^{1,∞} norm. We would like to emphasize that our performance bound, for controllers constructed from candidate value functions, can be applied independently of our proposed SOS algorithm for value function approximation, and therefore may be of broader interest.
XI. APPENDIX A: SUBLEVEL SET APPROXIMATION

In this appendix we show that the volume metric (D_V in Eq. (66)) is indeed a metric. Moreover, in Prop. 7 we show that if lim_{d→∞} ||J_d − V||_{L^1} = 0 then for any γ ∈ R we have lim_{d→∞} D_V({x ∈ Λ : V(x) ≤ γ}, {x ∈ Λ : J_d(x) ≤ γ}) = 0. The sublevel approximation results presented in this appendix are required in the proof of Prop. 6. Inspired by an argument used in [55], we now show that if two functions are close in the L^1 norm then their sublevel sets are close with respect to the volume metric. In order to do this we first denote the following family of sets for each n ∈ N:

A_n := {x ∈ Λ : V(x) ≤ γ + 1/n}.
To see this we next consider a counterexample where {J_d}_{d∈N} is a family of functions that can uniformly approximate some given V ∈ Lip((0, 1), R) but {x ∈ Λ : J_d(x) < γ} does not converge to {x ∈ Λ : V(x) < γ}.

XII. APPENDIX B: VALUE FUNCTIONS CHARACTERIZE REACHABLE SETS

In this appendix we present several reachable set results required in our numerical approximation of the Lorenz attractor (Example 5). Similarly to forward reachable sets (Defn. 10) we now define backward reachable sets.
Definition 13. For X_0 ⊂ R^n, Ω ⊆ R^n, U ⊂ R^m, f : R^n × R^m → R^n and S ⊂ R^+, let

BR_f(X_0, Ω, U, S) := {y ∈ R^n : there exists x ∈ X_0, T ∈ S, and u ∈ U_{Ω,U,f,T}(y, 0) such that φ_f(y, T, u) = x}.
Theorem 5 (VFs characterize backward reachable sets [14]). Given {0, g, f, Ω, U, T} ∈ M_Lip, define X_0 := {x ∈ R^n : g(x) < 0}. Then

BR_f(X_0, Ω, U, {T}) = {x ∈ Ω : V*(x, 0) < 0},

where V* : R^n × R → R is any function that satisfies Eq. (10).
Corollary 1 (Sub-VFs contain reachable sets). Given {0, g, f, Ω, U, T} ∈ M_Lip, and suppose V_l : R^n × R → R is a sub-VF (Defn. 7); then

BR_f(X_0, Ω, U, {T}) ⊆ {x ∈ Ω : V_l(x, 0) < 0},

where X_0 := {x ∈ R^n : g(x) < 0}.
Lemma 6 (Equivalence of computation of backward and forward reachable sets [14]). Suppose X_0 ⊂ R^n, Ω ⊂ R^n, U ⊂ R^m, f : R^n × R^m → R^n, and T ∈ R^+. Then FR_{−f}(X_0, Ω, U, {T}) = BR_f(X_0, Ω, U, {T}).
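Lemma 6 admits a quick numerical sanity check: a point reached by flowing forward under −f for time T can be steered back to the start by flowing forward under f for time T. The sketch below (our illustration, on an arbitrary linear field) does exactly this.

```python
import numpy as np
from scipy.integrate import solve_ivp

f = lambda t, x: [2.0 * x[1], -0.8 * x[0]]   # a simple autonomous field
neg_f = lambda t, x: [-v for v in f(t, x)]

x0, T = np.array([0.3, -0.2]), 1.5
# y lies in FR_{-f}({x0}, ...): flow x0 forward under -f for time T.
y = solve_ivp(neg_f, (0.0, T), x0, rtol=1e-10, atol=1e-12).y[:, -1]
# Flow y forward under f for time T; recovering x0 shows y is in BR_f({x0}, ...).
x_back = solve_ivp(f, (0.0, T), y, rtol=1e-10, atol=1e-12).y[:, -1]
print(np.allclose(x_back, x0, atol=1e-6))
```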
XIII. APPENDIX C

In this appendix we present several miscellaneous results required in various places throughout the paper and not previously found in any of the other appendices.
Theorem 6 (Polynomial Approximation [56]). Let E ⊂ R^n be an open set and f ∈ C^1(E, R). For any compact set K ⊆ E and ε > 0 there exists g ∈ P(R^n, R) such that sup_{x∈K} |D^α f(x) − D^α g(x)| < ε for all |α| ≤ 1.
Theorem 7 (Rademacher's Theorem [57], [39]). If Ω ⊂ R^n is an open subset and V ∈ Lip(Ω, R), then V is differentiable almost everywhere in Ω with point-wise derivative corresponding to the weak derivative almost everywhere; that is, the set of points in Ω where V is not differentiable has Lebesgue measure zero. Moreover,

ess sup_{x∈Ω} |∂V(x)/∂x_i| ≤ L_V for all 1 ≤ i ≤ n,

where L_V > 0 is the Lipschitz constant of V and ∂V(x)/∂x_i is the weak derivative of V.
Lemma 7 (Infimum of family of Lipschitz functions is Lipschitz [58]). Suppose {h α } α∈I ⊂ LocLip(R n , R) is a family of locally Lipschitz continuous functions. Then h : R n → R defined as h(x) := inf α∈I h α (x) is such that h ∈ LocLip(R n , R) provided there exists x ∈ R n such that h(x) < ∞.
Theorem 8 (Putinar's Positivstellensatz [59]). Consider the semialgebraic set X = {x ∈ R^n : g_i(x) ≥ 0 for i = 1, ..., k}. Further suppose {x ∈ R^n : g_i(x) ≥ 0} is compact for some i ∈ {1, ..., k}. If the polynomial f : R^n → R satisfies f(x) > 0 for all x ∈ X, then there exist SOS polynomials {s_i}_{i∈{1,...,k}} ⊂ Σ_SOS such that

f − ∑_{i=1}^k s_i g_i ∈ Σ_SOS.
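As a concrete, hand-checked instance of Theorem 8 (our illustration, not from the paper): take k = 1 with g_1(x) = x(1 − x), so that X = [0, 1] is compact, and f(x) = x + 1 > 0 on X. Choosing the SOS multiplier s_1 ≡ 1 gives the certificate

```latex
f(x) - s_1(x)\, g_1(x) = (x + 1) - x(1 - x) = x^2 + 1 \in \Sigma_{SOS}.
```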
Definition 14. Let Ω ⊂ R^n. We say {U_i}_{i=1}^∞ is an open cover for Ω if U_i ⊂ R^n is an open set for each i ∈ N and Ω ⊆ ∪_{i=1}^∞ U_i.

Theorem 9 (Existence of Partitions of Unity [60]). Let E ⊆ R^n and let {E_i}_{i=1}^∞ be an open cover of E. Then there exists a collection of C^∞(E, R) functions, denoted by {ψ_i}_{i=1}^∞, with the following properties:

1) For all x ∈ E and i ∈ N we have 0 ≤ ψ_i(x) ≤ 1.
2) For all x ∈ E there exists an open set S ⊆ E containing x such that all but finitely many ψ_i are 0 on S.
3) For each x ∈ E we have ∑_{i=1}^∞ ψ_i(x) = 1.
4) For each i ∈ N we have {x ∈ E : ψ_i(x) ≠ 0} ⊆ E_i.
Lemma 8 (Chebyshev's Inequality). Let (X, Σ, µ) be a measurable space and f ∈ L^1(X, R). For any ε > 0 and 0 < p < ∞,

µ({x ∈ X : |f(x)| > ε}) ≤ (1/ε^p) ∫_X |f(x)|^p dx.
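As a quick numerical illustration of Lemma 8 (ours): on X = [0, 1] with Lebesgue measure, f(x) = x, p = 2 and ε = 0.5, the left-hand side is µ({x : x > 1/2}) = 1/2 while the right-hand side is (1/ε^2)∫_0^1 x^2 dx = 4/3.

```python
import numpy as np

eps, p = 0.5, 2
x = np.linspace(0.0, 1.0, 1_000_001)
lhs = np.mean(np.abs(x) > eps)               # ~ mu({|f| > eps}) on [0, 1]
rhs = (np.abs(x) ** p).mean() / eps ** p     # (1/eps^p) int_0^1 |f|^p dx
print(lhs, "<=", rhs)                        # 0.5 <= 1.333...
```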
Lemma 9 (Equivalence of essential supremum and supremum [61]). Let E ⊂ R n be an open set and f ∈ C(E, R). Then ess sup x∈E | f (x)| = sup x∈E | f (x)|.
Definition 2. We say the six tuple {c, g, f, Ω, U, T} is a Family of Lipschitz OCPs of Form (5), or {c, g, f, Ω, U, T} ∈ M_Lip, if:
(10). (B) Solves the Hamilton-Jacobi-Bellman (HJB) Partial Differential Equation (PDE), Eq. (12). (C) Can be used to construct a solution to the OCP.

A. Value Functions Are Determined By The Solution Map

Consider a nonlinear Ordinary Differential Equation (ODE) of the form ẋ
Definition of Admissible Inputs: Given {c, g, f, Ω, U, T} ∈ M_Lip and an associated family of OCPs of Form (5), we now use the solution map to define the set of admissible input signals for the OCP initialized at (x_0, t_0) ∈ Ω × [0, T]. For this we use the shift operator, denoted τ_s : L^2([0, T], R^m) → L^2([0, T − s], R^m), where s ∈ [0, T], and defined by

(τ_s u)(t) := u(s + t) for all t ∈ [0, T − s].   (8)

Definition 4. For any (x_0, t_0) ∈ R^n × [0, T], we say u is admissible, denoted u ∈ U_{Ω,U,f,T}(x_0, t_0), if u : [t_0, T] → U and there exists a unique solution map, φ_f, such that
Lemma 1 ([36], Local Lipschitz continuity of VF). Consider some {c, g, f , R n ,U, T } ∈ M Lip . Then if V * satisfies Eq. (10), we have that V * ∈ LocLip(R n × [0, T ], R).
Theorem 1 ([36], Uniqueness of VF). Consider the family of OCPs associated with the tuple {c, g, f, R^n, U, T} ∈ M_Lip. Any function satisfying Eq. (10) is the unique viscosity solution of the HJB PDE
= 1. Now, by a similar argument to Eq. (33), using Eq. (30) rather than Eq. (31), it also follows that sup_{(x,t)∈E} |V(x,t) − J(x,t)| < ε, and thus J satisfies Eq. (28).
Existence Of Dissipative Polynomials That Can Approximate The VF Arbitrarily Well Under The W^{1,p} Norm

Previously, in Prop. 3, we showed for any V ∈ Lip(Ω × [0, T], R) satisfying Eq. (23) there exists a smooth function J that also satisfies Eq. (23) and approximates V with arbitrary accuracy under the Sobolev norm. We now use this result
FR_f(X_0, Ω, U, S) := {y ∈ R^n : there exists x ∈ X_0, T ∈ S, and u ∈ U_{Ω,U,f,T}(x, 0) such that φ_f(x, T, u) = y}.
Lemma 3. Consider {c, g, f, Ω, U, T} ∈ M_Lip and any function V_1 : Ω × [0, T] → R that satisfies Eq. (10). Let V_2 : R^n × [0, T] → R be the VF for the unconstrained problem {c, g, f, R^n, U, T}. If Λ ⊆ Ω is such that
Now, by Eqs. (43), (46) and (51),

||V − V_l||_{W^{1,p}(Λ×[0,T],R)} = ||V* − V_l||_{W^{1,p}(Λ×[0,T],R)}   (56)
 = ||V* − P_δ − η||_{W^{1,p}(Λ×[0,T],R)}
 ≤ ||V* − P_δ||_{W^{1,p}(E,R)} + ||η||_{W^{1,p}(Λ×[0,T],R)}
 ≤ 2δ + (2 + 4T + 2MT)(T µ(Λ))^{1/p} δ < ε.

By a similar argument to Eq. (56) we deduce V_l satisfies Eq. (39). We conclude that V_l, defined in Eq. (55), satisfies Eqs. (39), (40), (41), and (42), thus completing the proof.

C. Our Family Of Optimization Problems Yields A Sequence Of Polynomials That Converges To A VF Under The L^1 Norm

Consider some {c, g, f, Ω, U, T} ∈ M_Lip and suppose the sequence {J_d}_{d∈N} solves each instance of the optimization problem given in Eq. (20) for d ∈ N. We next use Theorem 3 to show that the sequence {J_d}_{d∈N} converges to any VF associated with the family of OCPs {c, g, f, Ω, U, T} ∈ M_Lip with respect to the weighted L^1 norm as d → ∞.
Proposition 4. For given {c, g, f, Ω, U, T} ∈ M_Lip and positive integrable function w ∈ L^1(Ω × [0, T], R^+), suppose Λ ⊆ Ω satisfies Eq. (36); then

lim_{d→∞} ∫_{Λ×[0,T]} w(x,t)|V(x,t) − J_d(x,t)| dx dt = 0,
The equality in Eq. (60) follows since J_d(x,t) ≤ V(x,t) for all (x,t) ∈ Ω × [0, T] (Prop. 1). The first inequality follows by a combination of Eq. (59) and the inequality V_l(x,t) ≤ V(x,t) for all (x,t) ∈ Ω × [0, T]. Finally, the second inequality follows by Eq. (58).

VII. A FAMILY OF SOS PROBLEMS THAT YIELD POLYNOMIALS THAT CONVERGE TO THE VF

Consider some {c, g, f, Ω, U, T} ∈ M_Lip and denote {J_d}_{d∈N} as the sequence of solutions to the optimization problem found in Eq. (20). We have shown in Prop. 4 that the sequence of functions, {J_d}_{d∈N}, converges to any VF associated with the family of OCPs {c, g, f, Ω, U, T} ∈ M_Lip with respect to the L^1 norm. The indexed polynomial optimization problems in Eq. (20) may now be readily tightened to more tractable SOS optimization problems. Specifically, for each d ∈ N, we tighten the polynomial optimization problem in Eq. (20) to the SOS optimization problem given in Eq. (61). We later show in Prop. 5 that the sequence of solutions to the SOS problem given in Eq. (61) yields polynomials, {P_d}_{d∈N}, indexed by degree d ∈ N, that converge to the VF (with respect to the L^1 norm) as d → ∞.
Proposition 5. For given {c, g, f, Ω, U, T} ∈ M_Poly and positive integrable function w ∈ L^1(Ω × [0, T], R^+), suppose Λ ⊆ Ω satisfies Eq. (36); then

lim_{d→∞} ∫_{Λ×[0,T]} w(x,t)|V(x,t) − P_d(x,t)| dx dt = 0,
Proposition 6. Consider {c, g, f, Ω, U, T} ∈ M_Poly and w(x,t) = δ(t − s), where s ∈ [0, T] and δ is the Dirac delta function. Suppose Λ ⊆ Ω satisfies Eq. (36). Then we have the following for all γ ∈ R:

lim_{d→∞} D_V({x ∈ Λ : V(x, s) ≤ γ}, {x ∈ Λ : P_d(x, s) ≤ γ}) = 0,   (67)

where V is any function satisfying Eq. (10), and P_d is any solution to Problem (61) for d ∈ N.

Proof. To show Eq. (67) we use Prop. 7, found in Appendix XI. Let us consider the family of functions, {P_d ∈ P_d(R^n × R, R) : d ∈ N}, where P_d solves the optimization problem given in Eq. (61) for d ∈ N and w(x,t) = δ(t − s).
where V is any function satisfying Eq. (10). Since Λ ⊆ Ω satisfies Eq. (36), and although the Dirac delta function is not a member of L^1(Ω × [0, T], R), a similar argument to Prop. 5 implies that

lim_{d→∞} ∫_Λ |V(x, s) − P_d(x, s)| dx = lim_{d→∞} ∫_{Λ×[0,T]} δ(t − s)|V(x,t) − P_d(x,t)| dx dt = 0.
Figure 1. Plot associated with Example 1 showing the point-wise error, e(x,t) := V*(x,t) − P_d(x,t), where V* is given in Eq. (85) and P_d solves the SOS Problem (61) for d = 16.
Figure 2. Scatter plot associated with Example 1 showing the L^1 norm error ||V* − P_d||_{L^1(Λ×[0,T],R)}, where V* is given in Eq. (85) and P_d solves the SOS Problem (61) for d = 4 to 24. The smallest L^1 norm error occurred at d = 24, with a value of 0.020316.
Figure 3. The phase plot of Example 2, found by constructing the controller given in Eq. (88) using the solution to the SOS Problem (61).
Figure 4. Graph showing the phase plot of Example 4, found by constructing controllers given by Eq. (88) using the solution to the SOS Problem (61). The blue curve shows the T = 10 solution trajectory initialized at [0.75, 0.75] of the ODE (92) driven by the controller found by considering costs c(x, u, t) = ||x − q||_2^2 and g(x) = ||x − q||_2^2, where q = [−0.4, 0]. The red curve shows the T = 10 solution trajectory initialized at [0.75, 0.75] of the ODE (92) driven by the controller found by considering the same costs but with q = [0, 0]. The green curve is the T = 10 solution trajectory initialized at [0.75, 0.75] of the ODE (92) under the input u(t) ≡ 0. Terminal states for each trajectory are given by the black dots. Costs associated with each trajectory can be found in Table I.
Figure 5. Forward reachable set estimation from Example 5. The transparent cyan set represents the 0-sublevel set of the solution to the SOS Problem (61), the 20^3 green points represent initial conditions, the 20^3 red points represent where initial conditions transition to after t = 0.5 under the scaled dynamics from the ODE (93) (found using Matlab's ODE45 function), and the three blue curves represent three sample trajectories terminated at t = 0.5 and initialized at three randomly selected green initial conditions.
Definition 12. D : X × X → R is a metric if the following is satisfied for all x, y, z ∈ X:
• D(x, y) ≥ 0,
• D(x, y) = 0 iff x = y,
• D(x, y) = D(y, x),
• D(x, z) ≤ D(x, y) + D(y, z).

Lemma 4 ([51]). Consider the quotient space, X := B (mod {X ⊂ R^n : X ≠ ∅, µ(X) = 0}), recalling B := {B ⊂ R^n : µ(B) < ∞} is the set of all bounded sets. Then D_V : X × X → R, defined in Eq. (66), is a metric.

Lemma 5 ([51]). If A, B ∈ B and B ⊆ A, then D_V(A, B) = µ(A/B) = µ(A) − µ(B).
Proposition 7. Consider a set Λ ∈ B, a function V ∈ L^1(Λ, R), and a family of functions {J_d ∈ L^1(Λ, R) : d ∈ N} that satisfies the following properties:
1) For any d ∈ N we have J_d(x) ≤ V(x) for all x ∈ Λ.
2) lim_{d→∞} ||V − J_d||_{L^1(Λ,R)} = 0.
Then for all γ ∈ R,

lim_{d→∞} D_V({x ∈ Λ : V(x) ≤ γ}, {x ∈ Λ : J_d(x) ≤ γ}) = 0.   (95)

Proof. To prove Eq. (95) we show for all ε > 0 there exists N ∈ N such that for all d ≥ N,

D_V({x ∈ Λ : V(x) ≤ γ}, {x ∈ Λ : J_d(x) ≤ γ}) < ε.   (96)
Since J_d(x) ≤ V(x) for all x ∈ Λ and d ∈ N, we have

{x ∈ Λ : V(x) ≤ γ} ⊆ {x ∈ Λ : J_d(x) ≤ γ} for all d ∈ N.   (97)

Moreover, since {x ∈ Λ : V(x) ≤ γ} ⊆ Λ, {x ∈ Λ : J_d(x) ≤ γ} ⊆ Λ and Λ ∈ B, it follows {x ∈ Λ : V(x) ≤ γ} ∈ B and {x ∈ Λ : J_d(x) ≤ γ} ∈ B. Now for d ∈ N,

D_V({x ∈ Λ : V(x) ≤ γ}, {x ∈ Λ : J_d(x) ≤ γ})   (98)
 = µ({x ∈ Λ : J_d(x) ≤ γ}) − µ({x ∈ Λ : V(x) ≤ γ})
 = µ({x ∈ Λ : J_d(x) ≤ γ}) − µ(A_n ∩ {x ∈ Λ : J_d(x) ≤ γ})
  + µ(A_n ∩ {x ∈ Λ : J_d(x) ≤ γ}) − µ({x ∈ Λ : V(x) ≤ γ})
 ≤ µ({x ∈ Λ : J_d(x) ≤ γ}) − µ(A_n ∩ {x ∈ Λ : J_d(x) ≤ γ})
  + µ(A_n) − µ({x ∈ Λ : V(x) ≤ γ})
Counterexample 1. We show there exists γ ∈ R, Λ ⊂ R, V ∈ Lip(Λ, R) and {J_d}_{d∈N} ⊂ Lip(Λ, R) such that J_d(x) ≤ V(x) for all x ∈ Λ and lim_{d→∞} ∫_Λ |V(x) − J_d(x)| dx = 0, but lim_{d→∞} D_V({x ∈ Λ : V(x) < γ}, {x ∈ Λ : J_d(x) < γ}) ≠ 0. Now for all d ∈ N it is clear that we have J_d(x) ≤ V(x) and V(x) − J_d(x) < 1/d for all x ∈ Λ. This implies lim_{d→∞} ∫_Λ (V(x) − J_d(x)) dx ≤ lim_{d→∞} sup_{x∈Λ} (V(x) − J_d(x)) ≤ lim_{d→∞} 1/d = 0. However, {x ∈ Λ : V(x) < γ} = (0, 0.75) and for all d ∈ N, {x ∈ Λ : J_d(x) < γ} = (0, 1). Therefore D_V({x ∈ Λ : V(x) < γ}, {x ∈ Λ : J_d(x) < γ}) = D_V((0, 0.75), (0, 1)) = 0.25 for all d ∈ N. Hence, lim_{d→∞} D_V({x ∈ Λ : V(x) < γ}, {x ∈ Λ : J_d(x) < γ}) = 0.25 ≠ 0.
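The counterexample's explicit construction did not survive extraction above, but one choice consistent with every stated property (our reconstruction, so treat it as illustrative) is Λ = (0, 1), γ = 0, V(x) = min(x − 0.75, 0), and J_d = V − 1/d; then {V < 0} = (0, 0.75) while {J_d < 0} = (0, 1) for every d. The sketch below verifies the claimed quantities numerically.

```python
import numpy as np

V = lambda x: np.minimum(x - 0.75, 0.0)      # V < 0 exactly on (0, 0.75)
x = np.linspace(1e-6, 1 - 1e-6, 1_000_000)   # sample Lambda = (0, 1)
for d in (1, 10, 100):
    Jd = V(x) - 1.0 / d                      # J_d <= V, ||V - J_d||_L1 = 1/d
    strict_V = V(x) < 0.0
    strict_J = Jd < 0.0                      # all of (0, 1), since V - 1/d < 0
    D_V = np.mean(strict_V ^ strict_J)       # volume of symmetric difference
    print(d, np.mean(np.abs(V(x) - Jd)), D_V)  # L1 gap -> 0, D_V stays 0.25
```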
Recall that Z_d : R^n × R → R^{N_d} is the vector of monomials of degree d ∈ N. Note that solutions to Opt. (61) may not be optimal for Opt. (20), but their objective values can be compared with those of the solutions to Opt. (20) (since they both satisfy the constraints of Optimization Problem (20)).

Note, the proof of Prop. 5, which shows we can approximate VFs in the L^1 norm over compact sets, uses the fact that the VF is Lipschitz continuous when Eq. (36) is satisfied. For cases where Eq. (36) is not satisfied it may still be possible to show convergence if the VF of the state constrained OCP {c, g, f, Ω, U, T} ∈ M_Poly is Lipschitz continuous.
Table I. This table shows the corresponding costs of various inputs for the OCPs of Form (5) given in Example 4.

Input u      | Cost for q = [−0.4, 0] | Cost for q = [0, 0]
u_SOS        | 0.21473                | 0.078919
u(t) ≡ 0     | 0.84466                | 1.0037
u(t) ≡ +1    | 1.1824                 | 2.444
u(t) ≡ −1    | 4.5615                 | 2.4681
Before proceeding with Parts 1 to 3 we introduce some notation, S_V*, for the set of points where the VF is differentiable. Lemma 1 shows that V* ∈ Lip(Ω × [0, T], R) ⊂ LocLip(R^n × R, R), and Rademacher's Theorem (Thm. 7) states that Lipschitz functions are differentiable almost everywhere. It follows, therefore, that µ((Ω × [0, T])/S_V*) = 0, where µ is the Lebesgue measure.

Part 1 of Proof: For each (y, s) ∈ S_V* let us consider some family of points k*_{y,s} ∈ U such that

k*_{y,s} ∈ arg inf_{u∈U} {c(y, u, s) + ∇_x V*(y, s)^T f(y, u)}.

Note, k*_{y,s} exists for each fixed (y, s) ∈ S_V* by the extreme value theorem, since U ⊂ R^m is compact, c and f are continuous, and ∇_x V* is independent of u ∈ U and bounded by Rademacher's Theorem (Thm. 7). Now for all (y, s) ∈ S_V* it follows that Eq. (76) holds. Moreover, since V* is the viscosity solution to the HJB PDE by Theorem 1, we have that Eq. (77) holds. Combining Eqs. (76) and (77) it follows that Eq. (78) holds for all (y, s) ∈ S_V*. As Eq. (78) is satisfied for all (y, s) ∈ S_V* and µ((Ω × [0, T])/S_V*) = 0, it follows Eq. (78) holds almost everywhere. Therefore the upper bound on ess sup_{(y,s)∈Ω×[0,T]} {H̃_J(y, s)} in Eq. (79) follows.

Part 2 of Proof: Moreover, since V* is a viscosity solution to the HJB PDE (12), we have by Theorem 1 that Eq. (81) holds. Combining Eqs. (80) and (81) it follows that the lower bound on ess inf_{(y,s)∈Ω×[0,T]} {H̃_J(y, s)} in Eq. (83) holds.

Part 3 of Proof: Combining Inequalities (75), (79) and (83) it follows that Inequality (84) holds. Now, as Inequality (84) holds for all u ∈ U_{R^n,U,f,T}(x_0, 0), we can take the infimum and deduce Inequality (69).

Note, the condition in Thm. 4 that FR_f({x_0}, R^n, U, [0, T]) ⊆ Ω is trivially satisfied by Ω = R^n. However, since our SOS method only approximates VFs over compact sets (Prop. 5), we would need this condition to be satisfied by a compact Ω if we were to use Thm. 4 to derive performance bounds for controllers synthesized from our SOS derived approximate VFs.

To show that Eq. (96) holds for any ε > 0 we will split the remainder of the proof into two parts. In Part 1 we show that there exists N_1 ∈ N such that µ(A_n/{x ∈ Λ : V(x) ≤ γ}) < ε/2 for all n ≥ N_1.

The first equality of Eq. (98) follows by Lemma 5 (since the sublevel sets of V and J_d are bounded and satisfy Eq. (97)). The first inequality follows as A_n ∩ {x ∈ Λ : J_d(x) ≤ γ} ⊆ A_n, which implies µ(A_n ∩ {x ∈ Λ : J_d(x) ≤ γ}) ≤ µ(A_n). The third equality follows using Lemma 5.

Part 1 of proof: In this part of the proof we show that there exists N_1 ∈ N such that µ(A_n/{x ∈ Λ : V(x) ≤ γ}) < ε/2 for all n > N_1. Since ∩_{n=1}^∞ A_n = {x ∈ Λ : V(x) ≤ γ} and A_{n+1} ⊆ A_n for all n ∈ N, we have that µ({x ∈ Λ : V(x) ≤ γ}) = µ(∩_{n=1}^∞ A_n) = lim_{n→∞} µ(A_n) (using the "continuity from above" property of measures). Thus there exists N_1 ∈ N for which this holds.

Part 2 of proof: For fixed n > N_1, the set containment in Eq. (99) follows since if y ∈ {x ∈ Λ : J_d(x) ≤ γ}/A_n then y ∈ Λ, J_d(y) ≤ γ and y ∉ A_n. Since y ∉ A_n we have V(y) > γ + 1/n.

Prop. 7 shows that if a sequence of functions {J_d}_{d∈N} converges from below to some function V with respect to the L^1 norm, then the sequence of sublevel sets {x ∈ Λ : J_d(x) ≤ γ} converges to {x ∈ Λ : V(x) ≤ γ} with respect to the volume metric. However, this does not imply the sequence of "strict" sublevel sets {x ∈ Λ : J_d(x) < γ} converges to {x ∈ Λ : V(x) < γ} (even if {J_d}_{d∈N} converges from below to V with respect to the L^∞ norm).
Beyond just "flattening the curve": Optimal control of epidemics with purely non-pharmaceutical interventions. M Kantner, T Koprucki, Journal of Mathematics in Industry. 101M. Kantner and T. Koprucki, "Beyond just "flattening the curve": Opti- mal control of epidemics with purely non-pharmaceutical interventions," Journal of Mathematics in Industry, vol. 10, no. 1, pp. 1-23, 2020.
On an optimal control problem of train operation. E Khmelnitsky, IEEE transactions on automatic control. 457E. Khmelnitsky, "On an optimal control problem of train operation," IEEE transactions on automatic control, vol. 45, no. 7, pp. 1257-1266, 2000.
A real-time maintenance policy for multi-stage manufacturing systems considering imperfect maintenance effects. J Huang, Q Chang, J Zou, J Arinez, IEEE Access. 6J. Huang, Q. Chang, J. Zou, and J. Arinez, "A real-time maintenance policy for multi-stage manufacturing systems considering imperfect maintenance effects," IEEE Access, vol. 6, pp. 62174-62183, 2018.
Calculus of variations and optimal control theory: a concise introduction. D Liberzon, Princeton University PressD. Liberzon, Calculus of variations and optimal control theory: a concise introduction. Princeton University Press, 2011.
Mixed finite element approximation of periodic Hamilton-Jacobi-Bellman problems with application to numerical homogenization. D Gallistl, T Sprekeler, E Süli, arXiv:2010.01647arXiv preprintD. Gallistl, T. Sprekeler, and E. Süli, "Mixed finite element approxima- tion of periodic Hamilton-Jacobi-Bellman problems with application to numerical homogenization," arXiv preprint arXiv:2010.01647, 2020.
Homogenization of Hamilton-Jacobi equations: numerical methods. Y Achdou, F Camilli, I Capuzzo Dolcetta, 18Mathematical models and methods in applied sciencesY. Achdou, F. Camilli, and I. Capuzzo Dolcetta, "Homogenization of Hamilton-Jacobi equations: numerical methods," Mathematical models and methods in applied sciences, vol. 18, no. 07, pp. 1115-1143, 2008.
Polynomial approximation of highdimensional Hamilton-Jacobi-Bellman equations and applications to feedback control of semilinear parabolic PDEs. D Kalise, K Kunisch, SIAM Journal on Scientific Computing. 402D. Kalise and K. Kunisch, "Polynomial approximation of high- dimensional Hamilton-Jacobi-Bellman equations and applications to feedback control of semilinear parabolic PDEs," SIAM Journal on Scientific Computing, vol. 40, no. 2, pp. A629-A652, 2018.
A curse-of-dimensionality-free numerical method for solution of certain HJB PDEs. W M Mceneaney, SIAM journal on Control and Optimization. 464W. M. McEneaney, "A curse-of-dimensionality-free numerical method for solution of certain HJB PDEs," SIAM journal on Control and Optimization, vol. 46, no. 4, pp. 1239-1276, 2007.
A time-dependent Hamilton-Jacobi formulation of reachable sets for continuous dynamic games. I M Mitchell, A M Bayen, C J Tomlin, IEEE Transactions on automatic control. 507I. M. Mitchell, A. M. Bayen, and C. J. Tomlin, "A time-dependent Hamilton-Jacobi formulation of reachable sets for continuous dynamic games," IEEE Transactions on automatic control, vol. 50, no. 7, pp. 947- 957, 2005.
Quantitative local L2-gain and reachability analysis for nonlinear systems. E Summers, A Chakraborty, W Tan, U Topcu, P Seiler, G Balas, A Packard, International Journal of Robust and Nonlinear Control. 2310E. Summers, A. Chakraborty, W. Tan, U. Topcu, P. Seiler, G. Balas, and A. Packard, "Quantitative local L2-gain and reachability analysis for nonlinear systems," International Journal of Robust and Nonlinear Control, vol. 23, no. 10, pp. 1115-1135, 2013.
Reachability analysis using dissipation inequalities for nonlinear dynamical systems. H Yin, A Packard, M Arcak, P Seiler, arXiv:1808.02585arXiv preprintH. Yin, A. Packard, M. Arcak, and P. Seiler, "Reachability analysis using dissipation inequalities for nonlinear dynamical systems," arXiv preprint arXiv:1808.02585, 2018.
Inner-approximating reachable sets for polynomial systems with time-varying uncertainties. B Xue, M Fränzle, N Zhan, IEEE Transactions on Automatic Control. B. Xue, M. Fränzle, and N. Zhan, "Inner-approximating reachable sets for polynomial systems with time-varying uncertainties," IEEE Transactions on Automatic Control, 2019.
Set theorybased safety supervisory control for wind turbines to ensure adequate frequency response. Y Zhang, M E Raoufat, K Tomsovic, S M Djouadi, IEEE Transactions on Power Systems. 341Y. Zhang, M. E. Raoufat, K. Tomsovic, and S. M. Djouadi, "Set theory- based safety supervisory control for wind turbines to ensure adequate frequency response," IEEE Transactions on Power Systems, vol. 34, no. 1, pp. 680-692, 2019.
Relaxing the Hamilton Jacobi Bellman equation to construct inner and outer bounds on reachable sets. M Jones, M M Peet, arXiv:1903.07274arXiv preprintM. Jones and M. M. Peet, "Relaxing the Hamilton Jacobi Bellman equation to construct inner and outer bounds on reachable sets," arXiv preprint arXiv:1903.07274, 2019.
Using SOS and sublevel set volume minimization for estimation of forward reachable sets. M Jones, M M Peet, arXiv:1901.11174arXiv preprintM. Jones and M. M. Peet, "Using SOS and sublevel set volume minimization for estimation of forward reachable sets," arXiv preprint arXiv:1901.11174, 2019.
On infinite linear programming and the moment approach to deterministic infinite horizon discounted optimal control problems. A Kamoutsi, T Sutter, P M Esfahani, J Lygeros, IEEE control systems letters. 11A. Kamoutsi, T. Sutter, P. M. Esfahani, and J. Lygeros, "On infinite linear programming and the moment approach to deterministic infinite horizon discounted optimal control problems," IEEE control systems letters, vol. 1, no. 1, pp. 134-139, 2017.
A convex duality approach to optimal control of killed markov processes. A Pakniyat, R Vasudevan, CDC. A. Pakniyat and R. Vasudevan, "A convex duality approach to optimal control of killed markov processes," CDC, 2019.
Controller design and value function approximation for nonlinear dynamical systems. M Korda, D Henrion, C N Jones, Automatica. 67M. Korda, D. Henrion, and C. N. Jones, "Controller design and value function approximation for nonlinear dynamical systems," Automatica, vol. 67, pp. 54-66, 2016.
Control synthesis for nonlinear optimal control via convex relaxations. P Zhao, S Mohan, R Vasudevan, 2017 American Control Conference (ACC). IEEEP. Zhao, S. Mohan, and R. Vasudevan, "Control synthesis for nonlinear optimal control via convex relaxations," in 2017 American Control Conference (ACC), pp. 2654-2661, IEEE, 2017.
Duality between density function and value function with applications in constrained optimal control and markov decision process. Y Chen, A D Ames, arXiv:1902.09583arXiv preprintY. Chen and A. D. Ames, "Duality between density function and value function with applications in constrained optimal control and markov decision process," arXiv preprint arXiv:1902.09583, 2019.
Semidefinite programming. L Vandenberghe, S Boyd, SIAM review. 381L. Vandenberghe and S. Boyd, "Semidefinite programming," SIAM review, vol. 38, no. 1, pp. 49-95, 1996.
DSOS and SDSOS optimization: more tractable alternatives to sum of squares and semidefinite optimization. A A Ahmadi, A Majumdar, SIAM Journal on Applied Algebra and Geometry. 32A. A. Ahmadi and A. Majumdar, "DSOS and SDSOS optimization: more tractable alternatives to sum of squares and semidefinite optimization," SIAM Journal on Applied Algebra and Geometry, vol. 3, no. 2, pp. 193- 230, 2019.
Block factor-widthtwo matrices and their applications to semidefinite and sum-of-squares optimization. Y Zheng, A Sootla, A Papachristodoulou, arXiv:1909.11076arXiv preprintY. Zheng, A. Sootla, and A. Papachristodoulou, "Block factor-width- two matrices and their applications to semidefinite and sum-of-squares optimization," arXiv preprint arXiv:1909.11076, 2019.
Control design based on sum of squares programming for non-affine in input systems. A M Ribeiro, A R Fioravanti, A Moutinho, E C De Paiva, 2020 IEEE 6th International Conference on Control Science and Systems Engineering (ICCSSE). IEEEA. M. Ribeiro, A. R. Fioravanti, A. Moutinho, and E. C. de Paiva, "Control design based on sum of squares programming for non-affine in input systems," in 2020 IEEE 6th International Conference on Control Science and Systems Engineering (ICCSSE), pp. 130-135, IEEE, 2020.
Global adaptive dynamic programming for continuous-time nonlinear systems. Y Jiang, Z.-P Jiang, IEEE Transactions on Automatic Control. 6011Y. Jiang and Z.-P. Jiang, "Global adaptive dynamic programming for continuous-time nonlinear systems," IEEE Transactions on Automatic Control, vol. 60, no. 11, pp. 2917-2929, 2015.
Nearly optimal control laws for nonlinear systems with saturating actuators using a neural network HJB approach. M Abu-Khalaf, F L Lewis, Automatica. 415M. Abu-Khalaf and F. L. Lewis, "Nearly optimal control laws for nonlinear systems with saturating actuators using a neural network HJB approach," Automatica, vol. 41, no. 5, pp. 779-791, 2005.
A scalable iterative convex design for nonlinear systems. S Baldi, P A Ioannou, E B Kosmatopoulos, 2012 American Control Conference (ACC). IEEES. Baldi, P. A. Ioannou, and E. B. Kosmatopoulos, "A scalable itera- tive convex design for nonlinear systems," in 2012 American Control Conference (ACC), pp. 979-984, IEEE, 2012.
Piecewise polynomial policy iterations for synthesis of optimal control laws in input-saturated systems. S Baldi, G Valmorbida, A Papachristodoulou, E , 2015 American Control Conference (ACC). IEEEKosmatopoulosS. Baldi, G. Valmorbida, A. Papachristodoulou, and E. B. Kosmatopou- los, "Piecewise polynomial policy iterations for synthesis of optimal control laws in input-saturated systems," in 2015 American Control Conference (ACC), pp. 2850-2855, IEEE, 2015.
Policy iteration for H ∞ optimal control of polynomial nonlinear systems via sum of squares programming. Y Zhu, D Zhao, X Yang, Q Zhang, IEEE transactions on cybernetics. 482Y. Zhu, D. Zhao, X. Yang, and Q. Zhang, "Policy iteration for H ∞ optimal control of polynomial nonlinear systems via sum of squares pro- gramming," IEEE transactions on cybernetics, vol. 48, no. 2, pp. 500- 509, 2017.
Mitigating the curse of dimensionality: sparse grid characteristics method for optimal feedback control and HJB equations. W Kang, L C Wilcox, Computational Optimization and Applications. 682W. Kang and L. C. Wilcox, "Mitigating the curse of dimensionality: sparse grid characteristics method for optimal feedback control and HJB equations," Computational Optimization and Applications, vol. 68, no. 2, pp. 289-315, 2017.
HJB-POD-based feedback design for the optimal control of evolution problems. K Kunisch, S Volkwein, L Xie, SIAM Journal on Applied Dynamical Systems. 34K. Kunisch, S. Volkwein, and L. Xie, "HJB-POD-based feedback design for the optimal control of evolution problems," SIAM Journal on Applied Dynamical Systems, vol. 3, no. 4, pp. 701-722, 2004.
Optimal controller synthesis for nonlinear dynamical systems. Y P Leong, M B Horowitz, J W Burdick, arXiv:1410.0405arXiv preprintY. P. Leong, M. B. Horowitz, and J. W. Burdick, "Optimal controller synthesis for nonlinear dynamical systems," arXiv preprint arXiv: 1410.0405, 2014.
Performance bounds for optimal control of polynomial systems: A convex optimization approach. T Jennawasin, M Kawanishi, T Narikiyo, SICE Journal of Control, Measurement, and System Integration. 46T. Jennawasin, M. Kawanishi, and T. Narikiyo, "Performance bounds for optimal control of polynomial systems: A convex optimization ap- proach," SICE Journal of Control, Measurement, and System Integration, vol. 4, no. 6, pp. 423-429, 2011.
Sum-of-squares flight control synthesis for deep-stall recovery. T Cunis, J.-P Condomines, L Burlion, Journal of Guidance, Control, and Dynamics. T. Cunis, J.-P. Condomines, and L. Burlion, "Sum-of-squares flight control synthesis for deep-stall recovery," Journal of Guidance, Control, and Dynamics, pp. 1-14, 2020.
Neuro-dynamic programming: an overview. D P Bertsekas, J N Tsitsiklis, Proceedings of 1995 34th IEEE Conference on Decision and Control. 1995 34th IEEE Conference on Decision and ControlIEEE1D. P. Bertsekas and J. N. Tsitsiklis, "Neuro-dynamic programming: an overview," in Proceedings of 1995 34th IEEE Conference on Decision and Control, vol. 1, pp. 560-564, IEEE, 1995.
Viscosity solutions of Hamilton-Jacobi equations and optimal control problems. A Bressan, A. Bressan, "Viscosity solutions of Hamilton-Jacobi equations and optimal control problems,"
D P Bertsekas, Dynamic programming and optimal control. MA1D. P. Bertsekas, Dynamic programming and optimal control, vol. 1. Athena scientific Belmont, MA, 2005.
Viscosity solutions: a primer. M G Crandall, Viscosity solutions and applications. SpringerM. G. Crandall, "Viscosity solutions: a primer," in Viscosity solutions and applications, pp. 1-43, Springer, 1997.
L C Evans, Partial differential equations. American Mathematical Society19L. C. Evans, Partial differential equations, vol. 19. American Mathe- matical Society, 2010.
On the inversion of ljapunov's second theorem on stability of motion. J Kurzwel, AMS Translations Series. 2J. Kurzwel, "On the inversion of ljapunov's second theorem on stability of motion," AMS Translations Series 2, vol. 24, pp. 19-77, 1963.
Smoothing derivatives of functions and applications. F W Wilson, Transactions of the American Mathematical Society. 139F. W. Wilson, "Smoothing derivatives of functions and applications," Transactions of the American Mathematical Society, vol. 139, pp. 413- 428, 1969.
A smooth converse Lyapunov theorem for robust stability. Y Lin, E D Sontag, Y Wang, SIAM Journal on Control and Optimization. 341Y. Lin, E. D. Sontag, and Y. Wang, "A smooth converse Lyapunov theorem for robust stability," SIAM Journal on Control and Optimization, vol. 34, no. 1, pp. 124-160, 1996.
A smooth lyapunov function from a classestimate involving two positive semidefinite functions. A R Teel, L Praly, 5ESAIM: Control, Optimisation and Calculus of VariationsA. R. Teel and L. Praly, "A smooth lyapunov function from a class- estimate involving two positive semidefinite functions," ESAIM: Control, Optimisation and Calculus of Variations, vol. 5, pp. 313-367, 2000.
Optimal control with state-space constraint i. H M Soner, SIAM Journal on Control and Optimization. 243H. M. Soner, "Optimal control with state-space constraint i," SIAM Journal on Control and Optimization, vol. 24, no. 3, pp. 552-561, 1986.
Discontinuous solutions of Hamilton-Jacobi-Bellman equation under state constraints. H Frankowska, M Mazzola, Calculus of Variations and Partial Differential Equations. 463-4H. Frankowska and M. Mazzola, "Discontinuous solutions of Hamilton- Jacobi-Bellman equation under state constraints," Calculus of Variations and Partial Differential Equations, vol. 46, no. 3-4, pp. 725-747, 2013.
Existence of neighboring feasible trajectories: applications to dynamic programming for state-constrained optimal control problems. H Frankowska, R Vinter, Journal of Optimization Theory and Applications. 1041H. Frankowska and R. Vinter, "Existence of neighboring feasible tra- jectories: applications to dynamic programming for state-constrained optimal control problems," Journal of Optimization Theory and Appli- cations, vol. 104, no. 1, pp. 20-40, 2000.
A general Hamilton-Jacobi framework for non-linear state-constrained control problems. A Altarovici, O Bokanowski, H Zidani, 19ESAIM: Control, Optimisation and Calculus of VariationsA. Altarovici, O. Bokanowski, and H. Zidani, "A general Hamilton- Jacobi framework for non-linear state-constrained control problems," ESAIM: Control, Optimisation and Calculus of Variations, vol. 19, no. 2, pp. 337-357, 2013.
Computation of optimal singular controls. D Jacobson, S Gershwin, M Lele, IEEE Transactions on Automatic Control. 151D. Jacobson, S. Gershwin, and M. Lele, "Computation of optimal singular controls," IEEE Transactions on Automatic Control, vol. 15, no. 1, pp. 67-73, 1970.
On the computation of optimal singular and bang-bang controls. S Dadebo, K Mcauley, P Mclellan, Optimal Control Applications and Methods. 194S. Dadebo, K. McAuley, and P. McLellan, "On the computation of optimal singular and bang-bang controls," Optimal Control Applications and Methods, vol. 19, no. 4, pp. 287-297, 1998.
Sum of squares based convex approach for optimal control synthesis. J Moyalan, H Choi, Y Chen, U Vaidya, 2021 29th Mediterranean Conference on Control and Automation (MED). J. Moyalan, H. Choi, Y. Chen, and U. Vaidya, "Sum of squares based convex approach for optimal control synthesis," in 2021 29th Mediter- ranean Conference on Control and Automation (MED), pp. 1270-1275, 2021.
Using SOS for optimal semialgebraic representation of sets: Finding minimal representations of limit cycles, chaotic attractors and unions. M Jones, M M Peet, arXiv:1809.10308arXiv preprintM. Jones and M. M. Peet, "Using SOS for optimal semialgebraic representation of sets: Finding minimal representations of limit cycles, chaotic attractors and unions," arXiv preprint arXiv:1809.10308, 2018.
Estimating the bounds for the Lorenz family of chaotic systems. D Li, J Lu, X Wu, G Chen, Chaos, Solitons and Fractals. 23D. Li, J. Lu, X. Wu, and G. Chen, "Estimating the bounds for the Lorenz family of chaotic systems," Chaos, Solitons and Fractals, vol. 23, pp. 529-534, 2005.
Polynomial level-set method for attractor estimation. T Wang, S Lall, M West, Journal of The Franklin Institute. 349T. Wang, S. Lall, and M. West, "Polynomial level-set method for attrac- tor estimation," Journal of The Franklin Institute, vol. 349, pp. 2783- 2798, 2012.
Bounding extreme values on attractors using sum-ofsquares optimization, with application to the Lorenz attractor. D Goluskin, arXivD. Goluskin, "Bounding extreme values on attractors using sum-of- squares optimization, with application to the Lorenz attractor," arXiv, 2018.
Tractable approximations of sets defined with quantifiers. J B Lasserre, Mathematical Programming. 1512J. B. Lasserre, "Tractable approximations of sets defined with quanti- fiers," Mathematical Programming, vol. 151, no. 2, pp. 507-527, 2015.
Exponentially stable nonlinear systems have polynomial Lyapunov functions on bounded regions. M Peet, IEEE Transactions on Automatic Control. M. Peet, "Exponentially stable nonlinear systems have polynomial Lya- punov functions on bounded regions," IEEE Transactions on Automatic Control, 2009.
Fine regularity of solutions of elliptic partial differential equations. J Malỳ, W P Ziemer, American Mathematical SocJ. Malỳ and W. P. Ziemer, Fine regularity of solutions of elliptic partial differential equations. No. 51, American Mathematical Soc., 1997.
Geometric analysis. P H Lasz, P. H. Lasz, "Geometric analysis," 2014.
Positive polynomials on compact semialgebriac sets. M Putinar, Math J. M. Putinar, "Positive polynomials on compact semialgebriac sets.," Math J, 1993.
M Spivak, Calculus on Manifolds. Addison-WesleyM. Spivak, Calculus on Manifolds. Addison-Wesley, 1965.
Essential supremum with the continuous function?. D Fischer, D. Fischer, "Essential supremum with the continuous function?," 2015.
Morgan Jones received the MMath degree in mathematics from The University of Oxford, England in 2016 and the PhD degree from Arizona State University, USA in 2021. Since 2022 he has been a lecturer in the Department of Automatic Control and Systems Engineering at the University of Sheffield. His research primarily focuses on the estimation of reachable sets, attractors and regions of attraction for nonlinear ODEs. Furthermore, he has an interest in extensions of the dynamic programming framework to non-separable cost functions.
Matthew M. Peet received the B.S. degree in physics and in aerospace engineering from the University of Texas, Austin, TX, USA, in 1999 and the M.S. and Ph.D. degrees in aeronautics and astronautics from Stanford University, Stanford, CA, in 2001 and 2006, respectively. He was a Postdoctoral Fellow at INRIA, Paris, France from 2006 to 2008. He was an Assistant Professor of Aerospace Engineering at the Illinois Institute of Technology, Chicago, IL, USA, from 2008 to 2012. Currently, he is an Associate Professor of Aerospace Engineering at Arizona State University, Tempe, AZ, USA. Dr. Peet received a National Science Foundation CAREER award in 2011.
Introduction
Sequence-to-sequence neural solutions (Parnow et al., 2020) have been quite successful in comparison to their statistical counterparts (Sutskever et al., 2014), but these approaches suffer from a couple of key problems, which has given rise to sequence labeling approaches for GEC (Omelianchuk et al., 2020). Such approaches task models with generating a list of labels to classify the grammatical errors in a sentence before correcting these errors.
Sequence labeling approaches have recently gained popularity in GEC and are currently state-of-the-art. One typical aspect of sequence labeling approaches (He et al., 2018; Li et al., 2018b) is labeling and correcting sentences through an iterative process. As successive edits will depend on how other errors are corrected in a sentence, using an iterative process and correcting only the most salient errors in each round allows models to achieve better performance; however, because of this process, models are tasked with handling sentences with varying rates of errors, as during each round of inference for a given sentence, a model encounters a sentence with progressively fewer errors. This of course causes an exposure bias problem, as the training data does not match the test data, and suggests that providing the model with training data with varying error rates will lead to better performance.
To combat this exposure bias, we propose a new approach for training a sequence labeling GEC model that draws from GANs (Goodfellow et al., 2014), which consist of a generator that generates increasingly realistic fake inputs and a discriminator that is tasked with differentiating these fake inputs from real inputs. Other GEC works like (Raheja and Alikaniotis, 2020) directly used GANs to produce grammatically correct sentences given grammatically incorrect ones. This contrasts our work, which uses aspects of a GAN to enhance the training process rather than using a GAN itself as the correcting model. Our model consists of three components: an encoder, a Grammatical Error Detector, and a Grammatical Error Labeler. By sampling from the error distribution in the error labeler, our model can synthesize sentences with new errors, creating new sentence pairs for further training data. As a result, our Detector continually improves its ability to detect errors and essentially acts as a discriminator of errors, and our Labeler continually improves the authenticity of its error distribution and becomes a better generator of errors. This process allows us to counter the exposure bias problem sequence labeling GEC models face because in addition to allowing us to generate new errorful sentences whose errors are increasingly representative of those in real data, we can also use control parameters to set the error rates of these sentences and accommodate our iterative inference process.
Figure 1: An overview of our model.
Our Approach
We formulate the GEC task as a problem of sequence labeling and create a neural sequence labeling model based on a deep pre-trained Transformer encoder to deal with this problem. Inspired by the work of (Omelianchuk et al., 2020), our full model's overall architecture is shown in Figure 1. There are three main components in our basic neural GEC model: a deep pre-trained Transformer Encoder, a Grammatical Error Detector, and a Grammatical Error Labeler. To accommodate our new GAN-like training process, we add a Gumbel-Softmax sampling component to the basic GEC model.
Background and Notation
First, in training, given an incorrect input sentence X = x_1, x_2, ..., x_n and its corrected version X_c = y_1, y_2, ..., y_m, the model predicts a corrective label sequence T = t_1, t_2, ..., t_n by minimizing the token-level Levenshtein distance on the span-based alignments of X and X_c. The corrective label set is given as T = {$KEP, $DEL, $APP, $REP} ∪ {$CAS, $MRG, $SPL, $NNUM, $VFORM}, in which the first set consists of the basic text editing transformation operations and the second consists of g-transformations as defined by (Omelianchuk et al., 2020) for GEC¹. Aligning sentences using these transformations in preprocessing reduces what would be a sequence generation task that handles unequal source-target lengths to a set of label classification problems. In this formulation, the neural sequence labeling model trains to optimize the negative log-likelihood loss for an input sequence:
J(θ) = − ∑_{i=1}^{n} log p(t_i | x, θ),
where p is the conditional probability that the model outputs at each position i.
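To make the label formulation and the loss concrete, here is a minimal numpy sketch; the tokens, the $VFORM correction and the random probability rows are purely hypothetical illustrations, not the paper's data or model:

import numpy as np

# Hypothetical aligned example: each source token gets one corrective label.
tokens = ["She", "go", "to", "school", "yesterday"]
labels = ["$KEP", "$VFORM", "$KEP", "$KEP", "$KEP"]  # "go" -> "went"

label_set = ["$KEP", "$DEL", "$APP", "$REP", "$CAS",
             "$MRG", "$SPL", "$NNUM", "$VFORM"]
y = np.array([label_set.index(l) for l in labels])

# Toy model output: one probability row per token (rows sum to 1).
rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(len(label_set)), size=len(tokens))

# Token-level negative log-likelihood J(theta) = -sum_i log p(t_i | x, theta).
nll = -np.log(p[np.arange(len(tokens)), y]).sum()
print(nll)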
Deep Pre-trained Transformer Encoder
As in most neural sequence labeling models (Ma and Hovy, 2016), a neural encoder such as a BiLSTM (Hochreiter and Schmidhuber, 1997) or a Transformer (Vaswani et al., 2017; Li et al., 2021) is used to extract context-aware features from the input sequence. Deep pre-trained language models such as BERT (Devlin et al., 2019; Zhang et al., 2020b), RoBERTa (Liu et al., 2019), and XLNet (Yang et al., 2019) have recently demonstrated the efficacy of Transformer models trained on large-scale unlabeled data in various NLP tasks. We leveraged these very beneficial models by using a pre-trained language model as our encoder. We define the contextualized features captured by the neural encoder as:
h_i = [Enc(X)]_i,
where Enc represents the encoder, and [·] i represents the output of the i-th position after encoding.
Grammatical Error Detector and Labeler
Next, we adopt a Grammatical Error Detector (GED) to detect the presence of errors and a Grammatical Error Labeler (GEL) to predict detailed error labels. With these labels, corrections are applied to sentences, and this process is typically iterative, as some corrections may depend on others, and applying corrections only once may not be enough to fully correct the sentence. During iterative correction, the model needs to assess at each round whether more correction is required. To this end, we use the GED to determine the degree of error for an entire sentence and control the iterative correction process.
Specifically, we use a binarization Y_b of the corrective labels Y as the training target of the GED and use Y as the training target of the GEL. To obtain label probabilities for grammatical error detection and labeling, two linear layers with softmax layers are appended to the encoder:
P_i^GED = softmax(MLP_GED(h_i)),  P_i^GEL = softmax(MLP_GEL(h_i)).

The binary classification probabilities in the GED output do not directly control the inference process's iterations. Rather, in addition to using the GEL error label probabilities as thresholds for sentence positions, we also use the sum of the GED error probabilities as a threshold for attempting another round of correction on the whole sentence. The model continues correcting the sentence until either it reaches a preset maximum number of iterations or no longer satisfies the following condition:
∑_i [P_i^GED]_{err=1} > γ,
where γ is the minimum error probability threshold for a sentence.
Additionally, since GEC usually corrects only a small portion of a sentence (and there are therefore no errors in most of the input), the corrective label prediction task is an imbalanced classification problem. We alleviate this class imbalance issue by taking advantage of this prior knowledge and adding a fixed, preset confidence β to the label $KEP to keep a position unchanged when applying corrections:
[P_i^GEL]_{$KEP} = [P_i^GEL]_{$KEP} + β.
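Putting the two heads and the two control knobs together, the following numpy sketch shows the mechanics; the shapes, the random weights, the position of $KEP at index 0 and the values of γ and β are all illustrative assumptions:

import numpy as np

rng = np.random.default_rng(1)
n_tokens, hidden, n_labels = 6, 8, 9

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

h = rng.normal(size=(n_tokens, hidden))      # encoder outputs h_i (faked here)
W_ged = rng.normal(size=(hidden, 2))         # GED head: no-error / error
W_gel = rng.normal(size=(hidden, n_labels))  # GEL head: corrective labels

p_ged = softmax(h @ W_ged)                   # P_i^GED
p_gel = softmax(h @ W_gel)                   # P_i^GEL

gamma, beta, KEP = 0.5, 0.2, 0               # KEP assumed at index 0
p_gel[:, KEP] += beta                        # additive bias toward keeping tokens

# Attempt another correction round only while the summed error probability
# exceeds gamma, as in the stopping condition above.
another_round = p_ged[:, 1].sum() > gamma
print(another_round, p_gel.argmax(axis=-1))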
GAN-like Sequence Labeling Training
While we adopt sequence labeling instead of sequence-to-sequence modeling in this paper and therefore avoid the exposure bias problem caused by left-to-right sequence generation, our model still faces exposure bias because of the iterative correction process, which tasks the model with handling much more varied error rates in inference than in training, where it handles static data and does not use multiple-round corrections. To address this issue, we borrow the idea of a GAN (Goodfellow et al., 2014) and propose a GAN-like iterative training approach for a sequence labeling GEC model. GANs, whose training objective can be formulated as a minimax game between a generator that creates increasingly realistic fake outputs and a discriminator that must differentiate these outputs from their real counterparts, have been suggested for sequence-to-sequence text generation (Zhang et al., 2020a; Li et al., 2018a), as they do not suffer from exposure bias.

In our model, the GED module can be considered a discriminator, as it must differentiate whether tokens are erroneous, and by adding a sampling module to the GEL module, we can create a generator that outputs grammatical errors (rather than corrections) that are increasingly realistic. We can then pair these sampled outputs with their golden sequences in the training dataset to create new training samples. This trains the model with more samples and more varied errors and alleviates the exposure bias issue. Separate cross-entropy losses are calculated for the Grammatical Error Detector and Labeler, and we detail the whole training procedure in Algorithm 1.
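The staged procedure can be condensed into a short Python skeleton of the control flow; synthesize and train_one_epoch are assumed stand-ins for the model-specific parts, so this is a sketch of Algorithm 1's loop structure, not the authors' implementation:

def gan_like_training(D, N, M, synthesize, train_one_epoch):
    """Sketch of Algorithm 1's control flow.

    D: list of (X, Y) pairs; synthesize(X, Y) -> (X_syn, Y) samples an
    errorful input from the labeler's distribution; train_one_epoch
    updates the model on a dataset. Both callables are assumptions here.
    """
    D_syn = []
    for stage in range(N):
        for epoch in range(M):
            train_one_epoch(D + D_syn)   # joint GED + GEL cross-entropy
        # Regenerate synthetic pairs from the current error distribution.
        D_syn = [synthesize(X, Y) for (X, Y) in D]
    return D_syn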
Detailed Training Process
To synthesize new errors based on a genuine grammatical error distribution, we add a sampling module to the trained GEL module. Specifically, we use Gumbel-Softmax sampling, a simple and efficient way to draw samples z from a categorical distribution with class probabilities P_GEL using the Gumbel-Max trick (Gumbel, 1954; Maddison et al., 2014):
z = one_hot( argmax_j ( g_j + log [P_i^GEL]_j ) ),    (1)

where g_1, ..., g_|C| are i.i.d. samples drawn from Gumbel(0, 1).² We use the softmax function as a continuous, differentiable approximation to argmax:
[y_i]_k = exp((log([P_i^GEL]_k) + g_k)/τ) / ∑_{j=1}^{|C|} exp((log([P_i^GEL]_j) + g_j)/τ),    (2)

where |C| is the number of classes and τ is the softmax temperature. Altering γ and β allows us to synthesize input samples of different error rates.
² The Gumbel(0, 1) distribution can be sampled using inverse transform sampling by drawing u ∼ Uniform(0, 1) and computing g = −log(−log(u)).
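A minimal numpy sketch of equations (1)-(2), using the footnote's inverse-transform trick; the toy label distribution and temperature are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(2)

def gumbel_softmax(p, tau=1.0):
    """Draw a relaxed one-hot sample from class probabilities p (eq. 2).

    Gumbel(0,1) noise via inverse transform: g = -log(-log(u)).
    """
    u = rng.uniform(size=p.shape)
    g = -np.log(-np.log(u))
    z = (np.log(p) + g) / tau
    z = z - z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

p_gel = np.array([0.7, 0.1, 0.1, 0.05, 0.05])   # toy label distribution
y = gumbel_softmax(p_gel, tau=0.5)
print(y, y.argmax())                             # hard sample = argmax (eq. 1)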
Experiments
Setup
To isolate our GAN-like Sequence Labeling Training (GST) approach, we use the same model setting and training details as in (Omelianchuk et al., 2020). The training data includes PIE's synthetic data (Awasthi et al., 2019), NUCLE (Dahlmeier et al., 2013), Lang-8 (Tajiri et al., 2012), FCE (Yannakoudakis et al., 2011), the Cambridge Learner Corpus (the publicly available portion) (Nicholls, 2003), and WI+LOCNESS (Bryant et al., 2019). Our models are evaluated on the test sets of CoNLL-2014 (Ng et al., 2014), BEA-2019 (Bryant et al., 2019), and JFLEG (Napoles et al., 2017) with the official M² (Dahlmeier and Ng, 2012), ERRANT (Bryant et al., 2017), and GLEU (Napoles et al., 2015) scorers, respectively.
Results and Analysis
Our results on the three test datasets are listed in Table 1. Our baseline model achieves the best single-model CoNLL-2014 F0.5, BEA-2019 F0.5, and JFLEG GLEU scores, showing that the baseline we use is very strong. The results on the three benchmarks are further improved using the GST approach, which demonstrates that the GST approach can effectively alleviate the exposure bias issue. With GST, we achieved new best results on the CoNLL-2014 test dataset, surpassing ensemble methods while only using a single model. In order to illustrate the benefits of sampling using Gumbel-Softmax, we replaced it with random sampling and Multinomial sampling. The comparison is shown in Table 2. Random sampling actually hampers performance, which shows that synthetic sentences not based on a genuine error distribution do not alleviate exposure bias. Both Gumbel-Softmax and Multinomial sampling, which use a genuine error distribution, improve the model, though Gumbel-Softmax appears to be more suitable for sampling in sequence labeling modeling.
In Figure 2, we show how the performance changes with increasing rounds of GST training. In the first few rounds, due to the model's readaptation to new errors, there was a drop in performance on the test datasets; however, as the number of training rounds increased, performance on the test set gradually improved and finally stabilized.
Intermediate Outputs and Longer Training
In this experiment, we explored using intermediate outputs from our iterative inference process as additional training data to highlight the impact of generating new erroneous sentences by sampling from the real error distribution with our GST approach. For this experiment, we use our baseline architecture. As seen in the results in Table 3, whereas GST leads to a 0.6 F0.5 gain over the baseline, using intermediate training outputs paired with golden sentences for additional training actually leads to worse performance, yielding a 0.3 F0.5 loss in comparison to the baseline.
To confirm that GST's performance gain is not due to the added training time, we also train the baseline for a commensurate amount of additional steps but find that this does not have any effect on model performance. This experiment demonstrates that our model does bring improvement to the baseline without relying on additional training steps. We also note that as our model is not significantly different in size from our baseline, our improvement is also not brought about by simply using a larger model.
Performance without Pre-trained Language Models

We additionally explored the performance of our system in the absence of contextualized pre-trained language models. As we expected, these models make our model much more resilient to the exposure bias problem, and as seen in Table 4, the improvement brought about by GST is therefore much more evident. In comparison to the baseline, using GST brings an improvement of 1.5 F0.5 points.
Conclusion
In this paper, we studied the exposure bias problem GEC sequence labeling models face. To alleviate this issue, we proposed a novel GAN-like training method for the GEC sequence labeling model. Through evaluation on three GEC benchmarks, we demonstrate that our novel training approach further improves a strong baseline model, illustrating the effectiveness of our training approach. Notably, with the help of pre-trained language models and our training approach, we achieved state-of-the-art results on the CoNLL-2014 benchmark.
Algorithm 1 GAN-like Sequence Labeling Training
Require: Genuine GEC parallel dataset D = {(X, Y)}; synthesized GEC parallel dataset D_SYN = {}; number of training stages N; number of training epochs M; sentence error probability threshold γ; additional confidence β for label $KEP
1: for i in 1, ..., N do
2:     Initialize model parameters from the previous training stage: θ_i ← θ_{i-1} when i > 1
3:     for j in 1, ..., M do
4:         for k in 1, ..., |D ∪ D_SYN| do
5:             Encode each sentence X_k as H_k
6:             P_k^GED = Softmax(MLP_GED(H_k))
7:             P_k^GEL = Softmax(MLP_GEL(H_k))
8:             loss_GED = CrossEntropy(P_k^GED, Y_k^err)
9:             loss_GEL = CrossEntropy(P_k^GEL, Y_k^label)
10:            loss = loss_GED + loss_GEL
11:            Update the model parameters θ_i with loss
12:        end for
13:    end for
14:    D_SYN = {}
15:    for k in 1, ..., |D| do
16:        Encode each sentence X_k as H_k
17:        P_k^GED = Softmax(MLP_GED(H_k))
18:        P_k^GED = [P_k^GED]_{err=1} > γ
19:        P_k^GEL = Softmax(MLP_GEL(H_k))
20:        [P_k^GEL]_{$KEP} = [P_k^GEL]_{$KEP} + β
21:        P_k^GEL = GumbelSoftmax(P_k^GEL)
22:        Use P_k^GED and P_k^GEL to produce the sampled sequence X_k^SYN
23:        D_SYN = D_SYN ∪ {(X_k^SYN, Y_k)}
24:    end for
25: end for
Figure 2: The GEC performance versus the GST rounds on the CoNLL-2014 test set.
Table 2: Comparing the effects of different sampling distributions.
Table 1: Comparison of GEC models. The baseline comes from the model released by (Omelianchuk et al., 2020).
Table 3: Comparing GST training with additional baselines.
Table 4: Evaluating GST without pre-trained language models.
¹ The label set here only presents the transformations' basic names. Some transformations require additional parameters because they are context-specific and thus have many different versions.
Abhijeet Awasthi, Sunita Sarawagi, Rasna Goyal, Sabyasachi Ghosh, and Vihari Piratla. 2019. Parallel iterative edit models for local sequence transduction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP-IJCNLP), pages 4260-4270, Hong Kong, China. Association for Computational Linguistics.
Christopher Bryant, Mariano Felice, Øistein E. Andersen, and Ted Briscoe. 2019. The BEA-2019 shared task on grammatical error correction. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 52-75, Florence, Italy. Association for Computational Linguistics.
Christopher Bryant, Mariano Felice, and Ted Briscoe. 2017. Automatic annotation and evaluation of error types for grammatical error correction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 793-805, Vancouver, Canada. Association for Computational Linguistics.
Daniel Dahlmeier and Hwee Tou Ng. 2012. Better evaluation for grammatical error correction. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 568-572, Montréal, Canada. Association for Computational Linguistics.
Daniel Dahlmeier, Hwee Tou Ng, and Siew Mei Wu. 2013. Building a large annotated corpus of learner English: The NUS corpus of learner English. In Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications, pages 22-31, Atlanta, Georgia. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 2672-2680.
Emil Julius Gumbel. 1954. Statistical theory of extreme values and some practical applications: a series of lectures, volume 33. US Government Printing Office.
Shexia He, Zuchao Li, Hai Zhao, and Hongxiao Bai. 2018. Syntax for semantic role labeling, to be, or not to be. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2061-2071, Melbourne, Australia. Association for Computational Linguistics.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.
Masahiro Kaneko, Masato Mita, Shun Kiyono, Jun Suzuki, and Kentaro Inui. 2020. Encoder-decoder models can benefit from pre-trained masked language models in grammatical error correction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4248-4254, Online. Association for Computational Linguistics.
Yoav Kantor, Yoav Katz, Leshem Choshen, Edo Cohen-Karlik, Naftali Liberman, Assaf Toledo, Amir Menczel, and Noam Slonim. 2019. Learning to combine grammatical error corrections. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 139-148, Florence, Italy. Association for Computational Linguistics.
Shun Kiyono, Jun Suzuki, Masato Mita, Tomoya Mizumoto, and Kentaro Inui. 2019. An empirical study of incorporating pseudo data into grammatical error correction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP-IJCNLP), pages 1236-1242, Hong Kong, China. Association for Computational Linguistics.
Zuchao Li, Jiaxun Cai, Shexia He, and Hai Zhao. 2018a. Seq2seq dependency parsing. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3203-3214, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Zuchao Li, Shexia He, Jiaxun Cai, Zhuosheng Zhang, Hai Zhao, Gongshen Liu, Linlin Li, and Luo Si. 2018b. A unified syntax-aware framework for semantic role labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2401-2411, Brussels, Belgium. Association for Computational Linguistics.
Zuchao Li, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Zhuosheng Zhang, and Hai Zhao. 2020. Data-dependent gaussian prior objective for language generation. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Zuchao Li, Zhuosheng Zhang, Hai Zhao, Rui Wang, Kehai Chen, Masao Utiyama, and Eiichiro Sumita. 2021. Text compression-aided transformer encoding. CoRR, abs/2102.05951.
Jared Lichtarge, Chris Alberti, Shankar Kumar, Noam Shazeer, Niki Parmar, and Simon Tong. 2019. Corpora generation for grammatical error correction. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3291-3301, Minneapolis, Minnesota. Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064-1074, Berlin, Germany. Association for Computational Linguistics.
Chris J. Maddison, Daniel Tarlow, and Tom Minka. 2014. A* sampling. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3086-3094.
Courtney Napoles, Keisuke Sakaguchi, Matt Post, and Joel Tetreault. 2015. Ground truth for grammatical error correction metrics. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 588-593, Beijing, China. Association for Computational Linguistics.
Courtney Napoles, Keisuke Sakaguchi, and Joel Tetreault. 2017. JFLEG: A fluency corpus and benchmark for grammatical error correction. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 229-234, Valencia, Spain. Association for Computational Linguistics.
Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christopher Bryant. 2014. The CoNLL-2014 shared task on grammatical error correction. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task, pages 1-14, Baltimore, Maryland. Association for Computational Linguistics.
Diane Nicholls. 2003. The Cambridge Learner Corpus: Error coding and analysis for lexicography and ELT. In Proceedings of the Corpus Linguistics 2003 conference, volume 16, pages 572-581.
Kostiantyn Omelianchuk, Vitaliy Atrasevych, Artem Chernodub, and Oleksandr Skurzhanskyi. 2020. GECToR - grammatical error correction: Tag, not rewrite. In Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 163-170, Seattle, WA, USA, Online. Association for Computational Linguistics.
Kevin Parnow, Zuchao Li, and Hai Zhao. 2020. Grammatical error correction: More data with more context. In International Conference on Asian Language Processing, IALP 2020, Kuala Lumpur, Malaysia, December 4-6, 2020, pages 24-29. IEEE.
Vipul Raheja and Dimitris Alikaniotis. 2020. Adversarial Grammatical Error Correction. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3075-3087, Online. Association for Computational Linguistics.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3104-3112.
Toshikazu Tajiri, Mamoru Komachi, and Yuji Matsumoto. 2012. Tense and aspect error correction for ESL learners using global context. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 198-202, Jeju Island, Korea. Association for Computational Linguistics.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 5754-5764.
Helen Yannakoudakis, Ted Briscoe, and Ben Medlock. 2011. A new dataset and method for automatically grading ESOL texts. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 180-189, Portland, Oregon, USA. Association for Computational Linguistics.
Zhuosheng Zhang, Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita, Zuchao Li, and Hai Zhao. 2020a. Neural machine translation with universal visual representation. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Zhuosheng Zhang, Yuwei Wu, Hai Zhao, Zuchao Li, Shuailiang Zhang, Xi Zhou, and Xiang Zhou. 2020b. Semantics-aware BERT for language understanding. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, New York, NY, USA, February 7-12, 2020, pages 9628-9635. AAAI Press.
Wei Zhao, Liang Wang, Kewei Shen, Ruoyu Jia, and Jingming Liu. 2019. Improving grammatical error correction via pre-training a copy-augmented architecture with unlabeled data. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 156-165, Minneapolis, Minnesota. Association for Computational Linguistics.
| []
|
[
"Approximate Selection with Unreliable Comparisons in Optimal Expected Time",
"Approximate Selection with Unreliable Comparisons in Optimal Expected Time"
]
| [
"Shengyu Huang \nDepartment of Computer Science\nDepartment of Computer Science\nETH Zürich\nZürichSwitzerland\n",
"Chih-Hung Liu \nDepartment of Computer Science\nETH Zürich\nZürichSwitzerland\n",
"Daniel Rutschman \nETH Zürich\nZürichSwitzerland\n"
]
| [
"Department of Computer Science\nDepartment of Computer Science\nETH Zürich\nZürichSwitzerland",
"Department of Computer Science\nETH Zürich\nZürichSwitzerland",
"ETH Zürich\nZürichSwitzerland"
]
| []
| Given n elements, an integer k and a parameter ε, we study to select an element with rank in (k − nε, k + nε] using unreliable comparisons where the outcome of each comparison is incorrect independently with a constant error probability, and multiple comparisons between the same pair of elements are independent. In this fault model, the fundamental problems of finding the minimum, selecting the k-th smallest element and sorting have been shown to require Θ(n log(1/Q)), Θ(n log(min{k, n−k}/Q)) and Θ(n log(n/Q)) comparisons, respectively, to achieve success probability 1 − Q [10]. Although finding the minimum and selecting the k-th smallest element have different complexities, to attain the high probability guarantee (Q = 1/n), both of them require Θ(n log n) comparisons. Recently, Leucci and Liu [23] proved that the approximate minimum selection problem (k = 0) requires expected Θ(ε^{-1} log(1/Q)) comparisons. Therefore, it is interesting to study if there exists a clear distinction between the two problems in the approximation scenario. We develop a randomized algorithm that performs expected O((k/n)ε^{-2} log(1/Q)) comparisons to achieve success probability at least 1 − Q. We also prove that any randomized algorithm with success probability at least 1 − Q performs expected Ω((k/n)ε^{-2} log(1/Q)) comparisons. Our results indicate a clear distinction between approximating the minimum and approximating the k-th smallest element, which holds even for the high probability guarantee, e.g., if k = n/2 and Q = 1/n, Θ(ε^{-1} log n) versus Θ(ε^{-2} log n). Moreover, if ε = n^{-α} for α ∈ (0, 1/2), the asymptotic difference is almost quadratic, i.e., Θ̃(n^α) versus Θ̃(n^{2α}). As a by-product, we give an algorithm using deterministic O((k/n)ε^{-2} log(1/Q) + (log(1/Q))(log log(1/Q))^2) comparisons, which is optimal as long as (k/n)ε^{-2} = Ω((log log(1/Q))^2). ACM Subject Classification: Theory of computation → Design and analysis of algorithms | 10.48550/arxiv.2205.01448 | [
"https://arxiv.org/pdf/2205.01448v1.pdf"
]
| 248,506,123 | 2205.01448 | 9386c1541a9da922f724171120cff8a99330073c |
Approximate Selection with Unreliable Comparisons in Optimal Expected Time
Shengyu Huang
Department of Computer Science
Department of Computer Science
ETH Zürich
ZürichSwitzerland
Chih-Hung Liu
Department of Computer Science
ETH Zürich
ZürichSwitzerland
Daniel Rutschman
ETH Zürich
ZürichSwitzerland
Approximate Selection with Unreliable Comparisons in Optimal Expected Time
10.4230/LIPIcs. Keywords and phrases: Approximate Selection, Unreliable Comparisons, Independent Faults
Given n elements, an integer k and a parameter ε, we study to select an element with rank in (k − nε, k + nε] using unreliable comparisons where the outcome of each comparison is incorrect independently with a constant error probability, and multiple comparisons between the same pair of elements are independent. In this fault model, the fundamental problems of finding the minimum, selecting the k-th smallest element and sorting have been shown to require Θ(n log(1/Q)), Θ(n log(min{k, n−k}/Q)) and Θ(n log(n/Q)) comparisons, respectively, to achieve success probability 1 − Q [10]. Although finding the minimum and selecting the k-th smallest element have different complexities, to attain the high probability guarantee (Q = 1/n), both of them require Θ(n log n) comparisons. Recently, Leucci and Liu [23] proved that the approximate minimum selection problem (k = 0) requires expected Θ(ε^{-1} log(1/Q)) comparisons. Therefore, it is interesting to study if there exists a clear distinction between the two problems in the approximation scenario. We develop a randomized algorithm that performs expected O((k/n)ε^{-2} log(1/Q)) comparisons to achieve success probability at least 1 − Q. We also prove that any randomized algorithm with success probability at least 1 − Q performs expected Ω((k/n)ε^{-2} log(1/Q)) comparisons. Our results indicate a clear distinction between approximating the minimum and approximating the k-th smallest element, which holds even for the high probability guarantee, e.g., if k = n/2 and Q = 1/n, Θ(ε^{-1} log n) versus Θ(ε^{-2} log n). Moreover, if ε = n^{-α} for α ∈ (0, 1/2), the asymptotic difference is almost quadratic, i.e., Θ̃(n^α) versus Θ̃(n^{2α}). As a by-product, we give an algorithm using deterministic O((k/n)ε^{-2} log(1/Q) + (log(1/Q))(log log(1/Q))^2) comparisons, which is optimal as long as (k/n)ε^{-2} = Ω((log log(1/Q))^2).

ACM Subject Classification: Theory of computation → Design and analysis of algorithms
Introduction
We study a generalization of the fundamental problem of selecting the k-th smallest element in terms of approximation and fault tolerance. Given a set S of n elements, an integer k and a parameter ε, the fault-tolerant ε-approximate k-selection problem, FT-APX(k, ε) for short, is to return an element with rank in (k − nε, k + nε] only using unreliable comparisons whose outcome can be incorrect. Due to these comparison faults, it is impossible to guarantee a correct solution, so the number of comparisons performed by an algorithm should depend on the failure probability Q of the algorithm, where Q < 1/2. Without loss of generality, we assume that n is even and k ≤ n/2; if k > n/2, the problem becomes to approximate the (n − k)-th largest element, which is symmetric. The elements with rank in (0, k − nε], (k − nε, k + nε] and (k + nε, n] of S are called small, relevant and large, respectively.
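As a quick reference for these definitions, a tiny Python helper (ranks are 1-based; the helper and its parameters are purely illustrative):

# Pin down the rank classes from the problem definition.
def rank_class(rank, k, n, eps):
    if rank <= k - n * eps:
        return "small"
    if rank <= k + n * eps:
        return "relevant"
    return "large"

# With k = 50, n = 100, eps = 0.1: small <= 40, relevant in (40, 60], large > 60.
print([rank_class(r, k=50, n=100, eps=0.1) for r in (30, 40, 41, 60, 61, 90)])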
We consider independent random comparison faults: There is a strict ordering relation among S, but algorithms can only gather information via unreliable comparisons between two elements. The outcome of each comparison is wrong with a known constant probability p < 1/2. When comparing the same pair of elements multiple times, each outcome is independent of the previous outcomes; comparisons involving different pairs of elements are also independent.
The above fault model has been widely studied for various fundamental problems such as finding the minimum, selecting the k-th smallest element and sorting a sequence [10,29,30]. Feige et al [10] proved that to achieve success probability 1 − Q, the aforementioned three problems require Θ(n log(1/Q)), Θ(n log(min{k, n−k}/Q)) and Θ(n log(n/Q)) comparisons, respectively, both in expectation and in the worst case. In the sequel, their selection algorithm is denoted by Select(k, Q), and its performance is summarized as follows.
▶ Theorem 1 ([10]). Select(k, Q) performs O(n log(min{k, n−k}/Q)) comparisons to select the k-th smallest element among n elements with probability at least 1 − Q.
Due to the increasing complexity of modern computing, error detection and correction require enormous computing resources. Emerging technologies enable the tolerance of computation errors for saving computing resources [28,17,8,19,32]. Meanwhile, many practical applications do not require an optimal answer but good enough ones. Therefore, fault-tolerant approximation algorithms are well-motivated.
An intuitive approach to the FT-APX(k, ε) problem is first to pick m = Θ((k/n)ε^{-2} log(1/Q)) elements randomly so that the underlying ⌈k·m/n⌉-th smallest element is relevant with probability at least 1 − Q/2, and then to apply Select(⌈k·m/n⌉, Q/2) on the m elements. By Theorem 1, this approach requires Θ((k/n)ε^{-2}(log^2(1/Q) + (log(1/Q))(log((k/n)ε^{-2})))) comparisons. Recently, Leucci and Liu [23] studied the approximate minimum selection problem, which asks for one element with rank in (0, nε] and thus is equivalent to FT-APX(0, ε). They developed an algorithm using expected O(ε^{-1} log(1/Q)) comparisons and also proved a matching lower bound. It is of great interest to study if the FT-APX(k, ε) problem can be solved with probability 1 − Q using O((k/n)ε^{-2} log(1/Q)) comparisons. Moreover, although finding the minimum and finding the k-th smallest element require different numbers of comparisons, i.e., Θ(n log(1/Q)) versus Θ(n log(min{k, n−k}/Q)), to attain the so-called high probability guarantee, i.e., Q = 1/n, both problems require Θ(n log n) comparisons. Thus, it is also desirable to investigate if there is a stronger distinction between these two problems in the approximation scenario.
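A sketch of this intuitive approach in Python; noisy_select is a placeholder for the Select(k, Q) routine of [10] (faked here with exact comparisons), and the constant c is a hypothetical scaling factor:

import math, random

def noisy_select(items, k, q):
    """Placeholder for Select(k, Q) from [10]; exact here for illustration."""
    return sorted(items)[k - 1]

def intuitive_ft_apx(S, k, eps, Q, c=4):
    n = len(S)
    # m = Theta((k/n) * eps^-2 * log(1/Q)); c is an assumed constant.
    m = max(1, math.ceil(c * (k / n) * eps**-2 * math.log(1 / Q)))
    sample = random.choices(S, k=m)          # sampling with replacement
    k_prime = math.ceil(k * m / n)           # target rank inside the sample
    return noisy_select(sample, min(k_prime, m), Q / 2)

S = list(range(1, 1001))                     # ranks double as element values
print(intuitive_ft_apx(S, k=500, eps=0.1, Q=0.1))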
▶ Remark 2. Similar to many randomized algorithms, a bound with a log^2(1/Q) + (log(1/Q))·(log((k/n)ε^{-2})) term can be easily attained as in the above intuitive approach, but improving such a term to exactly log(1/Q) would be nontrivial. For example, Section 4 will discuss how a variant of Quickselect fails to attain the log(1/Q) bound.
Our Contributions
We develop a randomized algorithm that performs expected O((k/n)ε^{-2} log(1/Q)) comparisons to solve the FT-APX(k, ε) problem with probability at least 1 − Q. We also prove that any algorithm with success probability 1 − Q requires expected Ω((k/n)ε^{-2} log(1/Q)) comparisons, implying the optimality of our algorithm. As a by-product, we give a randomized algorithm using deterministic O((k/n)ε^{-2} log(1/Q) + (log(1/Q))(log log(1/Q))^2) comparisons, which is optimal as long as (k/n)ε^{-2} = Ω((log log(1/Q))^2). Our results indicate that there is a distinction between the approximate minimum selection problem and the general approximate k-th element selection problem in terms of the expected number of comparisons, i.e., Θ(ε^{-1} log(1/Q)) [23] versus Θ((k/n)ε^{-2} log(1/Q)). This distinction even holds for the high probability guarantee (Q = 1/n), in contrast to the fact that the two problems have the same complexity Θ(n log n) in exact selection [10]. For example, if k = n/2 and Q = 1/n, the two approximate selection problems require expected Θ(ε^{-1} log n) and Θ(ε^{-2} log n) comparisons, respectively. Moreover, if ε = n^{-α} for a constant α ∈ (0, 1/2), the asymptotic difference is almost quadratic, i.e., Θ̃(n^α) versus Θ̃(n^{2α}).
▶ Remark 3. The (k/n)ε^{-2} term in those complexities is actually max{ε^{-1}, (k/n)ε^{-2}}. If k ≤ nε, by which ε^{-1} ≥ (k/n)ε^{-2}, a correct answer to FT-APX(0, ε) is also correct for FT-APX(k, ε), indicating that this case is essentially approximate minimum selection and can be solved optimally by Leucci and Liu's algorithms [23]. Therefore, to simplify the description, we assume that k > nε throughout the paper if not specified otherwise.
As noted in Remark 2, our technical advance is to improve the log^2(1/Q) + (log(1/Q))(log((k/n)ε^{-2})) term to log(1/Q). To some extent, compared with Leucci and Liu's algorithms, our algorithms cover the entire range of k instead of only the case when k is trivially small. In addition, our algorithm has the elegant feature that it only exploits simple sampling techniques, e.g., selecting the median of three samples and selecting the minimum of two samples.
The top level of our algorithm, inspired by Leucci and Liu [23], reduces the FT-APX(k, ε) problem on n elements to the FT-APX(m/2, 3/8) problem on m = Θ(log(1/Q)) elements. More precisely, if a relevant element can be selected with probability 8/9, we can generate a sequence of Θ(log(1/Q)) elements in which 3/4 of the elements around the middle, with probability 1 − Q/2, are all relevant. For such a "dense" sequence, we design a delicate trial-and-error method to select a relevant element with probability 1 − Q/2 using expected Θ(log(1/Q)) comparisons. The main challenge is to obtain a relevant element with probability 8/9 using only O((k/n)ε^{-2}) comparisons. For the approximate minimum (k = 0), Leucci and Liu [23] applied Select(1, 1/10) on Θ(ε^{-1}) randomly picked elements and attained O(ε^{-1}) comparisons. However, for general k, this method requires Θ((k/n)ε^{-2} log((k/n)ε^{-2})) comparisons, with an extra logarithmic factor. We first work on the special case k = n/2, i.e., approximate median selection. Based on the symmetry property of the median, we observe that the median of three randomly picked elements is more likely to be relevant than a randomly picked element. We exploit this observation to iteratively increase the ratio of relevant elements while keeping the underlying median relevant. Once the ratio becomes a constant fraction, we apply a straightforward method.
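The median-of-three observation is easy to check empirically; the following sketch ignores comparison faults and only isolates the rank-boosting effect, with illustrative parameters:

import random

def relevant(rank, n, eps):
    # "Relevant" band around the median for k = n/2.
    return n / 2 - n * eps < rank <= n / 2 + n * eps

n, eps, trials = 10_000, 0.05, 100_000
rng = random.Random(0)

single = sum(relevant(rng.randint(1, n), n, eps) for _ in range(trials))
med3 = sum(
    relevant(sorted(rng.randint(1, n) for _ in range(3))[1], n, eps)
    for _ in range(trials)
)
# Median-of-three lands in the band visibly more often (about 0.15 vs 0.10 here).
print(single / trials, med3 / trials)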
For general k, we design a "purifying" process that iteratively increases the ratio of relevant elements while keeping elements around a "controlled" position relevant. Despite the lack of a symmetry property, we still observe that under certain conditions, the minimum of two randomly picked elements is more likely to be relevant than a randomly picked one. We then derive feasible parameters to control the relative position of k, i.e., the middle of the remaining relevant elements, during the purifying process. Once the relative position becomes a constant fraction of the remaining elements, we add dummy smallest elements and apply our approximate median selection.
For some range of (k, ε), our bounds are not tight. If k n ε −2 log 1 Q = Ω(n), the lower bound is Ω(max{n, ε −1 log (k+nε)/(2nε) Q }) (Theorem 23). For this range, a trivial upper bound of O(n log k Q ) follows from Theorem 1, indicating a gap between Ω(max{n, ε −1 log( k+nε 2nε · 1 Q )}) and O(n log k Q ) for some range of (k, ε). The rest of the paper is organized as follows. Section 1.2 gives a brief literature review. Section 2 provides a few preliminary remarks. Section 3 presents the top-level algorithm. Section 4 and Section 5 describe sub-algorithms to approximate the median and the k-th element with constant probability, respectively. Section 6 sketches the lower bound analysis. Interested readers are referred to the appendix for detailed technical proofs.
Brief Literature
Dating back to the 1987, Ravikumar et al. [31] already studied a variant of the problem of finding the exact minimum using unreliable comparisons when at most f comparisons are allowed. They proved that Θ(f n) comparisons are necessary in the worst case. Later, Aigner [1] considered a prefix-bounded error model: for a fraction parameter γ < 1 2 , at most an γ-fraction of the past comparisons failed at any point during the execution of an algorithm. He proved that Θ( 1 1−p ) n comparisons is necessary to find the minimum in the worst case. Furthermore, he proved that if p > 1 n−1 , no algorithm can succeed with certainty [1]. When errors occur independently, as already discussed, Feige et al. [10] showed that the required number of comparisons for selecting the exact k-th smallest element with probability at least 1 − Q is Θ(n log max{k,n−k} Q ). Recently, Braverman et al. [5] investigated the round complexity and the number of comparisons required by partition and selection algorithms. They proved that for any constant error probability, Θ(n log n) comparisons are necessary for any algorithm that selects the minimum with high probability. Also, Chen et al. [7] studied the problem of computing the smallest k elements using r given independent noisy comparisons between each pair of elements. In a very general error model called strong stochastic model, they gave a linear-time algorithm with competitive ratio ofÕ( √ n), and also proved that this competitive ratio is tight.
The related problem of sorting with faults has also received considerable attention. When there are at most f comparison faults, Θ(n log n + f n) comparisons are necessary and sufficient to correctly sort n elements [21,25,3]. For the prefix-bounded model, although Aigner's result on the minimum selection [1] implies that ( 1 1−p ) O(n log n) are sufficient to sort n elements, Borgstrom and Kosaraju [4] showed that checking whether the input elements are sorted already requires Ω ( 1 1−p ) n comparisons. When comparison faults are permanent, or equivalently, when a pair of elements can only be compared once, the underlying sorting problem has also been extensively studied especially because it can be connected to both the minimum feedback arc set problem and the rank aggregation problem [26,18,5,6,20,22,15,12,14,13]. There are also sorting algorithms for memory faults [11,24].
For more knowledge about fault-tolerant search algorithms, we refer the interested readers to a survey by Pelc [30] and a monograph by Cicalese [9].
Preliminary
As explained in remark 3, we assume that k > nϵ throughout the paper if no further specification. For ease of exposition, we use β to denote k n in some analyses and sometimes abuse the name x of an element to denote its rank, e.g., we might write "x ∈ [l, r]" to denote that the rank of x lies in the range [l, r]. Comparing two elements, x and y, yields an outcome of either x < y or y > x. A typical subroutine in our algorithms is to draw elements using sampling with replacement, so multiple copies of an element may appear in a set. When two copies of the same element are compared, the tie is broken using any arbitrary (but consistent) ordering among the copies. In our fault model, there is a standard strategy called majority vote for reducing the "error probability" of comparing two elements. We state this strategy as follows.
2c p · t + 1 i (1 − p) i p 2cp·t+1−i .
Top Level of Algorithm
The high-level idea is to reduce solving FT-APX(k, ε) on n elements with probability at least 1 − Q to solving FT-APX( m 2 , 3 8 ) on m = Θ(log 1 Q ) elements with probability at least 1 − Q 2 . Specifically, if a relevant element can be selected with probability at least 8 9 , then m selected elements, for some m = Θ(log 1 Q ), contain at least 7 8 m relevant elements with probability at least 1 − Q 2 ; see Lemma 21 in Appendix B. In this situation, at least 2 · ( 7 8 − 1 2 ) · m = 2 · 3 8 m elements around the median, i.e., the range ( 1 8 m, 7 8 m], are relevant. Therefore, solving the FT-APX( m 2 , 3 8 ) problem on these m elements with probability at least 1 − Q 2 yields a relevant element with probability at least 1 − 2 · Q 2 = 1 − Q. Section 5 will present an approach that uses O( k n ε −2 ) comparisons to select a relevant element with probability at least 8 9 , by which the above reduction takes O( k n ε −2 log 1 Q ) comparisons. In the remaining of this section, we will explain how to solve FT-APX( m 2 , 3 8 ) with probability 1 − Q 2 efficiently both in expectation and in determination cases. We first design a simple trial-and-error method that uses expected O(log 1 Q ) comparisons to select an element from ( 1 8 m, 7 8 m] with probability at least 1 − Q 2 :
Repeatedly pick an element randomly and verify if its rank lies in ( 1 8 m, 7 8 m] until one element passes the verification.
Since ( 1 8 m, 7 8 m] contains 3 4 m elements, the expected number of repetitions before encountering a correct element is only O (1). Therefore, the key is to implement the verification step such that the method returns a correct element with probability at least 1 − Q 2 and the expected number of comparisons is O(log 1 Q ). We implement the verification step for an element x based on a simple experiment that randomly picks three other elements, and checks if x is neither the smallest nor the largest among the four elements. The probability that the if-condition holds is
1 − ( rx m ) 3 − (1 − rx m ) 3
where r x is the rank of x among the m elements. Also, the check can be conducted with success probability at least 17 18 using O(1) comparisons (by plugging in n = 4, Q = 1 36 into Theorem 1 twice with k = 1 and k = 4.) Therefore, if x ∈ ( 2 8 m, 6 8 m], the experiment succeeds with probability at least 9 7 8 m] since returning an element in these two ranges is safe and the considered range ( 2 8 m, 6 8 m] contains enough elements. Based on the above calculated probabilities, we can conceptually treat the above simple experiment as an unreliable comparison with error probability 15 32 . By Lemma 4, if the verification step conducts this simple experiment 2 · c 15/32 ln 2 Q + 1 times and takes the majority result, its success probability is at least 1 − Q 2 , Now, we are ready to analyze the expected number of comparisons and the success probability of our trial-and-error method. First, a single round returns an element in ( 2 8 m, 6 8 m] with probability at least 1 2 1 4 , and thus the probability to conduct the i-th round is at most ( 3 4 ) i−1 . Therefore, the expected number of comparisons is at most
· (1 − Q 2 ) ≥i≥1 ( 3 4 ) i−1 · (2 · c 15/32 ln 2 Q + 1) = O(log 1 Q ). Moreover, a single round returns an element in [1, 1 8 m] or ( 7 8 m, m] with probability at most 1 4 · Q 2 = Q 8 , so the failure probability is at most i≥1 ( 3 4 ) i−1 · Q 8 = Q 2 ,
concluding the following theorem:
▶ Theorem 5. It takes expected O( k n ε −2 log 1 Q ) comparisons to solve the FT-APX(k, ε) problem with probability at least 1 − Q.
Finally, to derive a deterministic bound, we note that the simple experiment in the verification step may be viewed as a biased coin toss. From this viewpoint, we are able to turn the FT-APX( m 2 , 3 8 ) problem into finding a coin with bias bigger than 15 32 , given that at least half of the coins have bias at least 17 32 . Grossman and Moshkovitz [16] provided an algorithm that solves the new problem with probability 1 − Q 2 using O(log 1 Q · (log log 1 Q ) 2 ) coin tosses, leading to the following theorem.
▶ Theorem 6. It takes O( k n ε −2 log 1 Q +log 1 Q (log log 1 Q ) 2 )
comparisons to solve the FT-APX(k, ε) problem with probability at least 1 − Q.
Approximate Median Selection
We attempt to select an element in ( n 2 − nε, n 2 + nε], i.e., k = n 2 , with probability at least 1 − 1 18 using only O(ε −2 ) comparisons. This algorithm will then be applied in Section 5 as a subroutine. A straightforward method, denoted by ST-Median(ε), picks m = Θ(ε −2 ) elements randomly to make their median relevant with probability at least 1 − 1 72 and applies the Select( m 2 , 1 72 ) algorithm (Theorem 1), resulting in a failure probability of at most 1 36 .
However, the Select( m 2 , 1 72 ) algorithm takes O(m log m 1/72 ) = O(ε −2 log ε −1 )
comparisons with an extra logarithmic factor. To achieve O(ε −2 ) comparisons, we will "purify" the input elements in a way that the ratio of relevant elements is increasing while the underlying median is still relevant. Once the ratio of relevant elements becomes a constant fraction, i.e., from 2ε to O(1), we can afford to apply the ST-Median algorithm. We assume that ε < 1
6 since if ε ≥ 1 6 , the ST-Median(ε) algorithm takes only O(ε −2 log ε −1 ) = O(1) comparisons.
A major difficulty to overcome in the purifying process is the following: if we consider three elements that are each relevant with probability ρ, then, even in the absence of comparison faults, their median is relevant with probability at most 3 2 ρ + O(ρ 2 ), which is a lot less than 3ρ. Thus, one risks running out of elements long before the ratio of relevant elements becomes a constant. This issue remains if we replace three by a larger constant, and it applies to any algorithm that works in a non-constant number of phases, including algorithms that more closely resemble Quickselect. Those algorithms would need to start with Ω(ε −(2+δ) ) elements for some δ > 0 and hence cannot achieve the O(ε −2 ) bound.
To settle the above issue, we maintain a multiset of elements and re-sample from this multiset at every phase. Our re-sampling method allows us to decrease the number of elements by less than a factor of 3 2 , so we can avoid running out of elements. The algorithm is sketched as follows:
1. For 1 ≤ i ≤ L,Initially, M 0 = S, n 0 = n, ε 0 = ε. M i is called good if all elements in the range ( ni 2 − n i ε i , ni 2 + n i ε i ] are relevant. Moreover, n i is decreasing with i while ε i is increasing with i, and L = min{i | ε i ≥ 1 6 }, i.e.
, the minimum of number of rounds such that at least 2 · 1 6 = 1 3 of the elements around the middle is relevant. The rest of this section illustrates the idea behind this process and implements these parameters n i and ε i . The above algorithm returns the median with probability at least 1 − 1 13 , and returns the minimum and the maximum with the same probability, i.e., at most 1 26 .
The purifying process is inspired by a simple observation: a randomly picked element is relevant with probability 2ε, while the median of three randomly picked elements is relevant with probability much greater than 2ε. Let E S denote the event that the median of three randomly picked elements is small. Then, 13 9 ε. By Lemma 7, the median selection returns the median with probability at least 1 − 1 13 , and returns the minimum (resp. the maximum) with probability at most 1 26 . A simple calculation, together with the above arguments, gives the following lemma:
Pr[E S ] = 3 1 2 − ε 2 1 2 + ε + 1 2 − ε 3 = 1 2 − 3 2 ε + 2ε 3 . If ε < 1 6 , then Pr[E S ] ≤ 1 2 − 3 2 ε + 2( 1 6 ) 2 ε = 1 −▶ Lemma 8. If M i−1 is good, then each element in M i is small (resp. large) with probability at most 1 2 − 4 3 ε i−1 .
By Lemma 8, it is feasible to set ε i = ( 5 4 ) i · ε, i.e., growing slightly slower than 4 3 . The size n i is set as ⌈2000 · i · ( 4 5 ) 2i · ε −2 ⌉ to limit the number of comparisons and the failure probability. First, n i is linear in ε −2 since the minimum number of elements to be looked at is Ω(ε −2 ) (Section 6). Second, to bound the total number of comparisons, n i should shrink exponentially with i. Third, to bound the failure probability of the algorithm, the failure probability of the i-th round should also shrink exponentially with i. From the above three aspects, since the Chernoff bound (Lemma 20 in Appendix A) will be applied for the probabilitic analysis, n i should be linear in i, and the shrink factor of n i should be at least ( 4 5 ) 2 to cancel out the square of the growth factor 5 4 of ε i . Because the ST-Median(ε L ) algorithm fails with probability at most 1 36 , it is sufficient to
prove that Pr[M L is good ] ≥ 1 − 1 36 . Let E i denote the event that M i is good. By definition, Pr[E 0 ] = 1.
With the Chernoff bound, we can prove the following lemma:
▶ Lemma 9. For 1 ≤ i ≤ L Pr[M i is NOT good | M i−1 is good ] ≤ 2 · e −5i .
By Lemma 9, we can lower bound Pr[E L ] as (1) comparisons, concluding the following theorem:
Pr[E L ] = 1 − Pr[ L i=1 E i | E i−1 ] ≥ 1 − L i=1 2 · e −5i ≥ 1 − 4 · e −5 ≥ 1 − 1 36 . By Lemma 7, each median selection takes O(1) comparisons, so the purifying process takes O( L i=1 n i ) = O ε −2 L i=1 i·( 4 5 ) 2i = O(ε −2 ) comparisons. Since ε L ≥ 1 6 , the ST-Median(ε L ) algorithm takes O▶ Theorem 10. It takes O(ε −2 ) comparisons to select an element in ( n 2 − nε, n 2 + nε] with probability at least 1 − 1 18 .
Approximate k-th Element Selection
We attempt to select an element in (k − nε, k + nε] with probability at least 1 − 1 9 using only O( k n ε −2 ) comparisons. Recall that k > nε as assumed in Remark 3. If nε < k ≤ 2nε, we halve the value of ε so that k > 2nε, which does not increase the asymptotic complexity. Therefore, we can safely assume k > 2nε afterwards. In this scenario, the straightforward approach mentioned in Section 1 requires O k n ε −2 log(kε −1 ) comparisons with an extra log(kε −1 ) factor. Another approach is to add n − 2k dummy smallest elements (so that the relevant elements lie in the middle) and to apply the algorithm in Section 4 with ε 2 , leading to O(ε −2 ) comparisons. As a result, both approaches are more expensive than O( k n ε −2 ). At a high level, our breakthrough is an iterative "purifying" process that increases both the ratio of relevant elements and the relative position of k, i.e., the middle position of relevant elements, while "controlling" the relative position. Once the relative position becomes a constant fraction of the remaining elements, e.g., 1 8 , we add dummy smallest elements and apply the approximate median selection algorithm in Section 4. As the ratio of relevant elements increases at the same time, the resulting number of comparisons will be O( k
n ε −2 ) instead of O(ε −2 ).
The algorithm is sketched as follows:
1. For 1 ≤ i ≤ L, generate a set S i of n i elements by repeatedly picking two elements from S i−1 randomly and selecting the minimum of the two using 6c p + 1 comparisons (Lemma 4).
2.
Add n L − 2k l + 2ε L dummy smallest elements to M L and apply the approximate median selection algorithm in Section 4 on M L with respect to ε L .
Initially, S 0 = S, n 0 = n, k 0 = k, ε 0 = ε. S i is called good if all elements in the range (k i − n i ε i , k i + n i ε i ] are relevant.
For ease of exposition, let β i denote ki ni . Both β i and ε i increase with i while n i decreases with i, and we set L = min{i | β i ≥ 1 8 }. Recall that β = k n . We assume that β < 1 8 ; otherwise, we conduct the second step directly, i.e., L = 0.
The purifying process is based on a simple observation that the minimum of two randomly picked element is small with probability
(β − ε) 2 two small + 2 (β − ε) (1 − (β − ε)) one small & one non-small = 2 (β − ε) − (β − ε) 2 ,
while a randomly picked element is small with probability merely β − ε. By a similar calculation, the minimum of two randomly picked elements is relevant with 4ε − β · 4ε. Since k is exactly the number of small elements plus half the number of relevant elements, the above derivation suggests the following formulation of β i :
β i := 2 (β i−1 − ε i−1 ) − (β i−1 − ε i−1 ) 2 Pr[ small ] + (2ε i−1 − β i−1 · 2ε) Pr[ relevant ]÷2
.
These derivations need to adapt to the failure probability q of selecting the minimum using 6c p + 1 comparisons. By Lemma 4, q ≤ e −3 < 1 20 and
q = 3cp i=1 (1 − p) i p 6cp+1−i .
Then, a selected element in the first round is relevant with probability
4ε 2 two relevant +q · 2 · (β − ε) 2ε one small & one relevant +(1 − q) · 2 · (1 − (β + ε)) 2ε one large & one relevant , which is equal to 4ε · (1 − q) − (1 − 2q) · β .
Since β < 1 8 and q < 1 20 , the above probability is larger than 67 40 · 2ε. Therefore, it is feasible to set ε i = ( 3 2 ) i · ε, i.e., growing slower than 67 20 . To fit the formulation of β i to the above failure probability q, a similar calculation yields that each selected element in the first round is small with probability
(β − ε) 2 + (1 − q) · 2 (β − ε) (1 − (β − ε)) .
Since the relative position is the number of small elements plus half the number of relevant elements, it is feasible to set the value of β i as follows (after arrangement):
β i := 2β i−1 − β 2 i−1 − ε 2 i−1 − 2q β i−1 − β 2 i−1 − ε 2 i−1 .
Moreover, we can prove by induction important properties of β i as stated below:
▶ Lemma 11. For 0 ≤ i ≤ L, β i > 2ε i and β i ≤ 2 i · β. Thus, k i n i ≤ 2 i · k n for 0 ≤ i ≤ L.
The size n i of S i is set as ⌈960 · i · ( 8 9 ) i · k n ε −2 ⌉ to control the number of comparisons and the failure probability. Similar to Section 4, n i should shrink exponentially with i and should also be linear in both k n ε −2 and i. The major difference lies in that the existence of k i changes the shrink factor of n i . Since ki ni ≤ 2 i · k n and ε i = ( 3 2 ) i · ε, the shrink factor of n i should be at least 8 9 . This is based on the fact that 2 −i · ( 3 2 ) 2i · ( 8 9 ) i = 1, which will be much clearer in the probability analysis.
To sum up,
n i = ⌈960 · i · ( 8 9 ) i · k n ε −2 ⌉, ε i = ( 3 2 ) i · ε, β i = (2β i−1 − β 2 i−1 − ε 2 i−1 ) − 2q · (β i−1 − β 2 i−1 − ε 2 i−1 ) with q = 3cp i=1 (1 − p) i p 6cp·+1−i , and L = min{i | β i ≥ 1 8 }.
To attain the success probability 1− 1 9 , it is sufficient to prove that Pr[S L is good] ≥ 1− 1 18 (Theorem 10) since the approximate median selection in Section 4 fails with probability at most 1 18 . Let E i denote the event that S i is good. By definition, Pr[E 0 ] = 1. Applying the Chernoff bound with the above parameters gives the following lemma:
▶ Lemma 12. For 1 ≤ i ≤ L Pr[S i is NOT good | S i−1 is good ] ≤ 2 · e −4i .
By Lemma 12, we can lower bound Pr[E L ] as
Pr[E L ] = 1 − Pr[ L i=1 E i | E i−1 ] ≥ 1 − L i=1 2 · e −4i ≥ 1 − 4 · e −4 ≥ 1 − 1 9 .
For the number of comparisons, since each selection takes 6c P +1 = O(1) comparisons, the purifying process takes
L i=1 O(n i ) = k n ε −2 · L i=1 O i · ( 8 9 ) i = O( k n ε −2 ) comparisons. By Theorem 10, the approximate median selection takes O(ε −2 L ) = O ( 2 3 ) 2L ε −2 = O(2 −L · ε −2 ) comparisons. Since k L n L ≤ 2 L · k n (Lemma 11) and k L n L > 1 8 , we have 2 −L = O( k n ) and O(2 −L · ε −2 ) = O( k n ε −2 )
, implying the following main theorem:
▶ Theorem 13. It takes O( k n ε −2 ) comparisons to select an element in (k − nε, k + ε] with probability at least 1 − 1 9 .
Lower Bound
We sketch the derivation of an Ω(min{n, k n ε −2 log 1 Q }) lower bound for the expected number of comparisons. The lower bound is based on a sampling lemma (Corollary 17 in Section E) about elements with a certain rank among all sampled elements. We assume that 4nε ≤ k. If k ≤ nε, the Ω(ε −1 log 1 Q ) lower bound for the approximate minimum selection problem [23] applies, and if nε < k < 4nε, we multiply the value of ε by 4 so that k ≤ nε and the former argument still works, which does not change the lower bound asymptotically.
Let T be the decision tree of any randomized algorithm that solves FT-APX(k, ε) with probability at least 1 − Q. T is said to look at an element x if T performs at least one comparison involving x. Let D be the expected number of elements that T looks at. Since D is not larger than twice the expected number of comparisons, it is sufficient to lower bound D. We assume that there are no comparison faults, which does not increase the lower bound and is easier for analysis.
If D ≥ n 10 , then D = Ω(n). Below, we deal with the case that D < n 10 . Markov's inequality implies that T looks at more than 2 D elements with probability at most 1 2 . We construct a new decision treeT based on T :T first simulates T until reaching a leaf u of T that returns an element x, and then conducts three additional steps sequentially:
(a) If T does not look at x, thenT compares x with another element. (b) IfT has looked at fewer than 2 D + 8n k elements so far, thenT performs more comparisons such thatT has looked at exactly 2 D + 8n k elements after this step. (c)T compares all pairs of elements that it has looked at, and then returns x.
Intuitively,T represents the same algorithm as T , but these additional steps giveT the following nice properties for analysis (as shown in Lemma 22, these properties follow directly from the three additional steps above):
(1)T knows the sorted order of the elements thatT has looked at.
(2)T has success probability at least 1 − Q.
(3)T looks at exactly 2 D + 8n k elements with probability at least 1/2. Note that this includes the elements thatT looks at during its simulation of T .
Let us consider the execution ofT on a uniformly shuffled input. By property (1), the element returned by a fixed leaf ofT will always have the same rank among the elements T has looked at, independent of order of the input. (Note that we assumed there are no comparison faults.) By property (3), with probability at least 1/2, the execution ofT reaches a leaf after looking at exactly 2 D + 8n k elements. By applying a sampling lemma (Corollary 17) to each such leaf, we can lower bound the failure probability ofT .
▶ Lemma 14. If k ≥ 200 and 4nε < k, then the failure probability ofT on a uniformly shuffled input is at least
1 2 · η ·e −24ε 2 n k (2 D +⌈ 8n k ⌉) for a constant η.
SinceT succeeds with probability at least 1 − Q, we have Q ≥ 1 2 η ·e −24ε 2 n k (2 D +⌈ 8n k ⌉) , implying that D = Ω( k n ε −2 log 1 Q ). We can conclude the following main theorem.
▶ Theorem 15. If Q < 1 2 , then the expected number of comparisons performed by any randomized algorithm that solves the FT-APX(k, ε) problem with probability at least 1 − Q is Ω min n, k n ε −2 log 1 Q .
Sampling lemma
In the previous part, we reduced a general algorithm to returning an element of a certain rank among all elements the algorithm looked at. We will now derive a sampling lemma for this case. For ease of exposition, we also use β to denote k n . ▶ Lemma 16. Let A consist of m ≤ n 4 elements sampled from S without replacement. Suppose that mβ ≥ 8 and that 1 2 ≥ β ≥ 4ε. Then there is an absolute constant η with the following properties.
1.
Let u be the r-th smallest element of A. If r ≤ ⌈βm⌉, then u is small with probability at least η ·e −12 ε 2 β(1−β) m .
Let v be the r-th largest element of A. If r ≤ ⌈(1 − β)m⌉, then v is large with probability at least
η ·e −12 ε 2 β(1−β) m .
As every element is either among the ⌈βm⌉ smallest or among the ⌈(1 − β)m⌉ largest ones, the lemma directly implies the following.
Pr [X = ℓ] ≥ π 32 1 − x ma(1 − a) e −F for F = D (a∥b) + x 1 − x · (a − b) 2 2(b − ax)((1 − b) − (1 − a)x) · m
where D (a∥b) is the Kullback-Leibler divergence (Definition 27).
By summing this over the tail, we get the following tail bound, from which Lemma 16 follows.
▶ Corollary 19. Let X ∼ Hypergeom(M, K, m). Let 0 ≤ ℓ ≤ m be a real number with ℓ < K and m − ℓ < M − K. Put a = ℓ m , b = K M and x = m M . If a ≤ 8 5 b, (1 − a) ≤ 2(1 − b) , x ≤ 1 4 and ma(1 − a) ≥ 4, then we have P r[X ≥ ℓ] ≥ π 320 · e −24 · e − 6(a−b) 2 b(1−b) m .
For a detailed derivation of these bounds, see Appendix E.
A Supplementary material for Section 2
▶ Lemma 20 (Chernoff Bound). Let X be the sum of independent Bernoulli random variables. If A ≤ E[X] ≤ B, then for any δ ∈ (0, 1),
Pr[X ≥ (1 + δ) · B] ≤ e − δ 2 3 B
and
Pr[X ≤ (1 − δ) · A] ≤ e − δ 2 2 A .
Proof. The two statements can be extended from the proofs of [
E[e tX ] ≤ e (e t −1)E[X] .
For the first claim, using any t > 0,
Pr[X ≥ (1 + δ) · B] = Pr[e tX ≥ e t(1+δ)·B ] ≤ E[e tX ] e t(1+δ)B ≤ e (e t −1)E[X] e t(1+δ)B E[X]≤B ≤ e (e t −1)B e t(1+δ)B .
The remaining steps are identical to the proof of [27,Theorem 4.4(2)].
For the second claim, using any t < 0,
Pr[X ≤ (1 − δ) · A] = Pr[e tX ≥ e t(1−δ)·A ] ≤ E[e tX ] e t(1−δ)A ≤ e (e t −1)E[X] e t(1−δ)A A≤E[X] ≤ e (e t −1)A e t(1+δ)A .
The remaining steps are identical to the proof of [27,Theorem 4.5(2)].
2c p · t + 1 i (1 − p) i p 2cp·t+1−i . Proof. Let {X i | 1 ≤ i ≤ 2c p · t + 1}= 0] = p. Let X = 2cp·t+1 i=1 X i . Then, E[X] = (2c p · t + 1)(1 − p).
Since p < 1 2 , we know 2(1 − p) > 1 and we can apply Lemma 20 to prove the first statement as follows:
Pr[X ≤ 2c p · t + 1 2 ] = Pr[X ≤ 1 2(1 − p) E[X]] = Pr[X ≤ 1 − 1 − 2p 2 − 2p E[X]] ≤ exp − 1 2 · ( 1 − 2p 2 − 2p ) 2 · E[X] Lemma 20 = exp − 1 2 · ( 1 − 2p 2 − 2p ) 2 · (2c p · t + 1)(1 − p) = exp (2c p · t + 1) (1 − 2p) 2 8(1 − p) < exp −c p t (1 − 2p) 2 4(1 − p) .
XX:14 Approximate Selection with Unreliable Comparisons in Optimal Expected Time
which satisfies the statement if we choose c p = ⌈ 4(1−p) (1−2p) 2 ⌉. Since X is a binomial random variable and c p is an integer, the second statement comes as follows:
Pr[X ≤ 2c p · t + 1 2 ] = Pr[X ≤ c p · t] = cp·t i=0 2c p · t + 1 i (1 − p) i p 2cp·t+1−i . ◀ B
Supplementary material for Section 3 ▶ Lemma 21. Let m = 2 10 · 3 2 · ln 2 Q , let X 1 , X 2 . . . , X m be m identically and independently distributed Bernoulli random variables with probability p ≥ 8 9 , and let X =
m i=1 X i . Pr[X ≥ 7 8 m] ≥ 1 − Q 2 .
Proof. It is sufficient to prove that Pr[X ≤ 7 To derive the expected total number of comparisons, we need to calculate the probability of conducting the i-th round. Since the probability of picking an element in ( 2 8 m, 6 8 m] is 1 2 at any round and such an element is verified in ( 1 8 m, 7 8 m] with probability at least 1 − Q 2 ≥ 1 2 at any round, any round returns an element with probability at least 1 2 · 1 2 = 1 4 . Similar to geometric distribution, the probability that the i-th round is conducted is at most ( 1. For each pair of elements, apply the majority vote strategy with 2c p · 4 + 1 comparisons (Lemma 4), and assign a point to the element that attains the majority result.
8 m] ≤ Q 2 . Since p ≥ 8 9 , E[X] ≥ 8 9 m. By Lemma 20, Pr[X ≤ 7 8 m] = Pr[X ≤ (1 − 1 64 ) · 8 9 m] Lemma 20 ≤ exp − 1 2 · ( 1 64 ) 2 · 8 9 m = exp − 1 2 10 · 3 2 m ≤ exp − 2 10 · 3 2 · ln 2 Q 2 10 · 3 2 = e − ln 2 Q = Q 2 .1 − 1 4 ) i−1 = ( 3 4 ) i−1 , so the second stage takes expected ∞ i=1 ( 3 4 ) i−1 O(log 1 Q ) = O log 1 Q · ∞ i=1 ( 3 4 ) i−1 = O(log 1 Q )
2.
Return the element with exactly one point. If all three elements get exactly one point, return one of them uniformly at random.
The above algorithm returns the median with probability at least 1 − 1 13 , and returns the minimum and the maximum with the same probability, i.e., at most 1 26 .
Proof. Let q be the failure probability of one majority vote. Since one majority vote consists of 2c p · 4 + 1 comparisons, by Lemma 4, q ≤ e −4 . If all three majority votes succeed, then the algorithm will return the median, implying that the algorithm will return the median with probability at least (1 − q) 3 ≥ 1 − 3q ≥ 1 − 3 · e −4 ≥ 1 − 1 13 . Now, we will prove that the algorithm returns the minimum and the maximum with the same probability. Since there are three majority votes, there are 8 possibilities, and these 8 possibilities lead to four different situations: exactly the minimum or exactly the median or exactly the maximum gets one point, or all the three elements get one point. A tree diagram for these 8 possibilities can easily calculate the probabilities of the four situations. In detail, exactly the minimum (resp. exactly the maximum) gets one point with probability q(1 − q), exactly the median gets one point with probability (1 − q) 3 + q 3 , and all three elements get one point with probability q(1 − q). Since the algorithm returns an element uniformly at random when all the three elements get one point, the algorithm returns the minimum and the maximum with the same probability 4 3 q(1 − q). Since the algorithm returns the median with probability at least 1 − 1 13 and returns the minimum and the maximum with the same probability, the probability that the algorithm returns the minimum (resp. the maximum) is at most 1 26 . ◀
Lemma 8. If M i−1 is good, then each element in M i is small (resp. large) with probability at most 1 2 − 4 3 ε i−1 .
Proof. We only prove the small case, and it is symmetric to the large case. Let p s denote the probability that an element randomly picked from M i−1 is small. Since M i−1 is good, all elements in its range ( 1
2 n i−1 − n i−1 ε i−1 , 1 2 n i−1 + n i−1 ε i−1 ]
are relevant, and p s ≤ 1 2 − ε i−1 . Let p 1 , p 2 and p 3 denote the probabilities that the median selection algorithm in Lemma 7 returns the minimum, the median and the maximum of three elements, respectively. By Lemma 7, p 2 ≥ 12 13 , and p 1 = p 3 ≤ 1 26 . Also recall that ε i−1 ≤ 1 6 . Then, the probability that an element in M i is small is
XX:16 Approximate Selection with Unreliable Comparisons in Optimal Expected Time
p 3 s three small + 3p 2 s (1 − p s ) two small & one non-small ·(1 − p 3 ) + 3p s (1 − p s ) 2 one small & two non-small ·p 1 = p 3 s + 3p 2 s (1 − p s )(1 − p 1 ) + 3p s (1 − p s ) 2 · p 1 = 3p 2 s − 2p 3 s + p 1 · 3p s − 9p 2 s + 6p 3 s ≥0 since 0≤ps≤ 1 2 p1≤ 1 26 ≤ 1 26 · 3p s + 69p 2 s − 46p 3 s f (x):=−46x 3 +69x 2 +3x & f ′ (x)>0 for 0≤x≤1 ps≤ 1 2 −εi−1 ≤ 1 26 · 3 1 2 − ε i−1 + 69 1 2 − ε i−1 2 − 46 1 2 − ε i−1 3 = 1 2 − 75 52 ε i−1 + 23 13 ε 3 i−1 εi−1< 1 6 ≤ 1 2 − 75 52 ε i−1 + 23 468 ε i−1 = 1 2 − 652 468 ε i−1 ≤ 1 2 − 4 3 ε i−1 ◀ Lemma 9. For 1 ≤ i ≤ L Pr[M i is NOT good | M i−1 is good ] ≤ 2 · e −5i .
Proof. Assume that M i−1 is good. Let X i be the number of small elements in M i and let Y i be the number of large elements in M i . For the statement, it is sufficient to prove that
Pr[X i ≥ ni 2 − n i ε i ] ≤ e −5i and Pr[Y i ≥ ni 2 − n i ε i ] ≤ e −5i .
We will prove the first claim, and it is symmetric to the second claim. By Lemma 8,
E[X i ] ≤ ( 1 2 − 4 3 ε i−1 )n i = ( 1 2 − 4 3 · 4 5 ε i )n i = ( 1 2 − 16 15 ε i )n i = 15 − 32ε i 30 n i .
By Lemma 20 (Chernoff Bound), we can get
Pr X i ≥ 1 2 − ε i n i = Pr 1 + 1 15 ε i 1 2 − 16 15 ε i 1 2 − 16 15 ε i n i = Pr 1 + 2ε i 15 − 32ε i :=δ 15 − 32ε i 30 n i ≥E[Xi] ≤ exp − 1 3 2ε i 15 − 32ε i 2 · 15 − 32ε i 30 n i Lemma 20 = exp − 4 90 · ε 2 i 15 − 32ε i · n i ≤ exp − 4 90 · ε 2 i 15 · n i = exp − 2 675 · 5 4 2i ε 2 · 2000 · i · 4 5 2i · ε −2 ≤ e −5i .
◀ D Supplementary material for Section 5
Lemma 11. For 0 ≤ i ≤ L, β i > 2ε i and β i ≤ 2 i · β. Thus, k i n i ≤ 2 i · k n for 0 ≤ i ≤ L.
Proof. We prove by induction. For i = 0, by assumption in the first paragraph of Section 5, β > 2ε, i.e., β 0 = β > 2ε = 2ε 0 . Also, β 0 = β ≤ 2 0 · β. Assume that for i = k ≥ 0, β k > 2ε k and β k ≤ 2 k · β. Note that k < L; otherwise, the (k + 1)-th round does not exist. By Section 5,
β k+1 = 2β k − β 2 k − ε 2 k − 2q β k − β 2 k − ε 2 k .
We first prove that β k+1 > 2ε k+1 as follows:
β k+1 = 2β k − β 2 k − ε 2 k − 2q β k − β 2 k − ε 2 k q< 1 20 > 19 10 β k − 9 10 (β 2 k + ε 2 k ) = 9 10 − β k − 19 18 2 + 19 18 2 − ε 2 k β k < 1 8 & 2ε k <β k > 9 10 − 2ε k − 19 18 2 + 19 18 2 − ε 2 k = 19 5 ε k − 9 2 ε 2 k ε k < 1 2 β k < 1 16 > 563 160 ε k > 3ε k = 2ε k+1 .
Then, we prove that β k+1 ≤ 2 k+1 · β as follows:
β k+1 = 2β k − β 2 k − ε 2 k − 2q β k − β 2 k − ε 2 k q≥0 ≤ 2β k − β 2 k − ε 2 k ≤ 2β k ≤ 2 · 2 k · β = 2 k+1 · β.
XX:18 Approximate Selection with Unreliable Comparisons in Optimal Expected Time
◀ Lemma 12. For 1 ≤ i ≤ L Pr[S i is NOT good | S i−1 is good ] ≤ 2 · e −4i .
Proof. Assume that S i−1 is good. Let X i be the number of small elements in S i and let Y i be the number of small and relevant elements in S i . For the statement, it is sufficient to
prove Pr[X i ≥ k i − n i ε i ] ≤ e −4i and Pr[Y i ≤ k i + n i ε i ] ≤ e −4i . Since S i−1 is good, all elements in the range (k i−1 − n i−1 ε i−1 , k i−1 + n i−1 ε i−1 ] of S i−1 relevant. Recall that β i = ki
ni . Therefore, according to the way of selecting elements for S i in Section 5, the probability that an element in S i is small is at most
(β i−1 − ε i−1 ) 2 + (1 − q) · 2 · (β i−1 − ε i−1 )(1 − (β i−1 − ε i−1 )) = 2(β i−1 − ε i−1 ) − (β i−1 − ε i−1 ) 2 − q 2(β i−1 − ε i−1 ) − 2(β i−1 − ε i−1 ) 2
Let p s denote the above upper bound. Similarly, the probability that a selected element is small or relevant is at least
(β i−1 + ε i−1 ) 2 + (1 − q) · 2 · (β i−1 + ε i−1 )(1 − (β i−1 + ε i−1 )) = 2(β i−1 + ε i−1 ) − (β i−1 + ε i−1 ) 2 − q 2(β i−1 + ε i−1 ) − 2(β i−1 + ε i−1 ) 2
Let p sr denote the above lower bound. Then,
E[X i ] ≤ p s · n i and E[Y i ] ≥ p sr · n i .
By the formulation of β i in Section 5, we can re-formulate β i with p s and p sr as follows:
β i = p s + p sr 2 .
Therefore, we can reformulate Pr[X i ≥ k i − n i ε i ] and Pr[Y i ≤ k i + n i ε i ] as follows:
Pr [X i ≥ k i − ε i n i ] = Pr [X i ≥ (β i − ε i )n i ] = Pr X i ≥ p s + p sr 2 − 3 2 ε i−1 n i = Pr X i ≥ 1 + p sr − p s − 3ε i−1 2p s · p s n i
and similarly,
Pr [Y i ≤ k i + ε i n i ] = Pr Y i ≥ 1 − p sr − p s − 3ε i−1 2p sr · p sr n i .
In order to apply Lemma 20 (Chernoff bound), we need to show that (1)
p sr −p s −3ε i−1 > 0, (2) p sr − p s − 3ε i−1 < 2p s and (3) p sr − p s − 3ε i−1 < 2p sr .
Since p sr > p s , it is sufficient to prove the first two inequalities.
For the first inequality,
p sr − p s − 3ε i−1 = (1 − 4q) · ε i−1 − (1 − 2q) · 4β i−1 ε i−1 βi−1< 1 8 , 1−2q>0 > 1 − 6q 2 ε i−1 q< 1 20 ≥ 7 20 ε i−1 .
For the second inequality, we upper bound p sr − p s − 3ε i−1 and lower bound 2p s :
p sr − p s − 3ε i−1 = (1 − 4q) · ε i−1 − (1 − 2q) · 4β i−1 ε i−1 βi−1≥0, 1−2q>0 ≤ (1 − 4q)ε i−1 q≥0 ≤ ε i−1 , and 2p s = 2 2(β i−1 − ε i−1 ) − (β i−1 − ε i−1 ) 2 − 2q 2(β i−1 − ε i−1 ) − 2(β i−1 − ε i−1 ) 2 q< 1 20 ≥ 19 5 (β i−1 − ε i−1 ) − 9 5 (β i−1 − ε i−1 ) 2 = 1 5 (β i−1 − ε i−1 ) (19 − 9(β i−1 − ε i−1 )) βi−1−εi−1≤ βi−1 ≤ 1 8 ≥ 143 40 (β i−1 − ε i−1 ) βi−1>2εi−1 (Lemma 11) ≥ 143 40 ε i−1 , implying that p sr − p s − 3ε i−1 < p s < p sr .
For applying the Chernoff bound, it is convenient to have a simple lower bound for p sr − p s − 3ε i−1 and simple upper bounds for p s and p sr . Since we already derive that p sr − p s − 3ε i−1 ≥ 7 20 ε i−1 , we deal with the other two as follow.
p s = 2(β i−1 − ε i−1 ) − (β i−1 − ε i−1 ) 2 − q 2(β i−1 − ε i−1 ) − 2(β i−1 − ε i−1 ) 2 q≥0 ≤ 2(β i−1 − ε i−1 ) − (β i−1 − ε i−1 ) 2 ≤ 2β i−1 ,
and Proof. Property (1) comes from step (c) in whichT compares all pairs of elements thatT has looked at. Remember that we assume no comparison faults for the lower bound analysis.
p sr = 2(β i−1 + ε i−1 ) − (β i−1 + ε i−1 ) 2 − q 2(β i−1 + ε i−1 ) − 2(β i−1 + ε i−1 ) 2 q≥0 ≤ 2(β i−1 + ε i−1 ) − (β i−1 + ε i−1 ) 2 ≤ 2(β i−1 + ε i−1 ) 2εi−1<βi−1 (Lemma 11) ≤ 3β i−1 . Pr[X i ≥ k i − ε i n i ] = Pr X i ≥ 1 + p sr − p s − 3ε i−1 2p s :=δ · p s n i ≥E[Xi] ≤ exp − 1 3 · p sr − p s − 3ε i−1 2p s 2 · p s n i (Lemma 20) = exp − 1 3 · (p sr − p s − 3ε i−1 ) 2 4p s psr−ps−3εi−1≥ 7 20 εi−1 & ps≤2βi−1 ·n i ≤ exp − 1 3 · 7 20 ε i−1 2 8β i−1 · n i = exp − 49 9600 · ε 2 i−1 · β −1 i−1 · 960 · i · ( 8 9 ) i · k n ε −2 εi−1=( 3 2 ) i−1 ε & β −1 i−1 ≥2 −(i−1) n k (Lemma 11) ≤ exp − 49 10 · i · 9 4 i−1 2 −(i−1) 8 9 i ≤ e −4i Pr[Y i ≤ k i + ε i n i ] = Pr Y i ≥ 1 + p sr − p s − 3ε i−1 2p sr :=δ · p sr n i ≤E[Yi] ≤ exp − 1 2 · p sr − p s − 3ε i−1 2p sr 2 · p sr n i (Lemma 20) = exp − 1 2 · (p sr − p s − 3ε i−1 ) 2 4p sr psr−ps−3εi−1≥ 7 20 εi−1 & psr≤3βi−1 ·n i ≤ exp − 1 2 · 7 20 ε i−1 2 12β i−1 · n i = exp − 49 9600 · ε 2 i−1 · β −1 i−1 · 960 · i · ( 8 9 ) i · k n ε −2 εi−1=( 3 2 ) i−1 ε & β −1 i−1 ≥2 −(i−1) n k (Lemma 11) ≤ exp − 49 10 · i · 9 4 i−1 2 −(i−1) 8 9 i ≤ e −
For property (2), note thatT first simulates T , then does some additional comparison and then returns the element that T would have returned (independent of the outcome of the additional comparisons). HenceT has the same success probability as T , which is at least 1 − Q by assumption.
For property (3), according to the three steps, if T looks at no more than 2 D elements, thenT will look exactly 2 D + 8n k elements. Since the probability that T looks at more than 2 D elements is at most 1 2 (by the definition of D and by Markov's inequality), property (3) follows. ◀
Lemma 14.
If k ≥ 200 and 4nε < k, then the failure probability ofT on a uniformly shuffled input is at least
1 2 · η ·e −24ε 2 n k (2 D +⌈ 8n k ⌉) for a constant η.
Proof. Recall that we buildT only when D < n 10 . Fix a leaf w ofT . Suppose that the execution ofT reaches w. Let x be the element thatT returns and let A be the set of elements thatT has looked at when the execution reaches w.
AsT is run on a uniformly shuffled input, the distribution of the set A as a random variable is the same as the distribution of a set of |A| elements sampled from S without replacement. Note that sinceT has only compared elements in A, these comparisons do not affect the distribution of A as a random variable. By Lemma 22.(1), x always has the same rank in A. If |A| = 2 D + 8n k , then |A| ≤ n 4 , and by assumption, we have 4nε < k. Note that k n (2 D + 8n k ) ≥ 8. Hence, Corollary 25 implies thatT fails with probability at least
η ·e −24· n k ·ε 2 ·|A| = η ·e −24· n k ·ε 2 ·(2 D +⌈ 8n k ⌉) .
To summarize, ifT reaches a leaf after looking at exactly 2 D + 8n k elements, thenT fails with at least this probabilty. By Lemma 22.(3), this happens with probability at least 1 2 , leading to the statement. ◀ Theorem 15. If Q < 1 2 , then the expected number of comparisons performed by any randomized algorithm that solves the FT-APX(k, ε) problem with probability at least 1 − Q is Ω min n, k n ε −2 log 1 Q .
Proof. As discussed in the beginning of Section 6, if k < 4nε, the lower bound Ω(ε −1 log 1 Q ) for approximate minimum selection [23] applies. Similarly, if k ≤ 200, we may increase ε by 200 n , which changes ε by at most a constant factor, and apply the lower bound for the approximate minimum selection [23]. Therefore, it is sufficient to consider the case that 4ε ≤ k n ≤ 1 2 and k ≥ 200. Recall that T is the decision tree of any randomized algorithm that solves FT-APX(k, ε) with probability at least 1 − Q and D is the expected number of elements that T looks at. If D ≥ n 10 , a lower bound Ω(n) follows. Otherwise, we build the auxiliary decision treeT .
By Lemma 22.(2), the success probability ofT is at least 1 − Q, and by Lemma 14, the failure probability ofT is at least 1 2 · η ·e −24ε 2 n k (2 D +⌈ 8n k ⌉) for a constant η, implying that
Q ≥ 1 2 · η ·e −24ε 2 n k (2 D +⌈ 8n k ⌉) , or equivalently D ≥ 1 48 ε −2 k n ln η 2Q − 1 2 8n k .
If Q ≤ η 1000 , then the first term 1 48 ε −2 k n ln η 2Q dominates the second term 1 2 8n k as 4ε ≤ k n , and thus D = Ω( k n ε −2 ln 1 Q ). (η = π 320 · e −24 as stated in Theorem 30.) It remains to analyze the case that Q > η 1000 , for which we construct an auxiliary algorithm that solves the FT-APX(k, ε) problem with probability at least 1− η 1000 . We will use A andà to denote the original algorithm and the auxiliary algorithm, respectively. Recall that A solves the FT-APX(k, ε) problem with probability at least 1 − Q. Select k ′ such that A outputs a small element with probability at most k ′ n − 1−Q 2 and a large element with probability at most 1 − k ′ n − 1−Q 2 . Thus, by using A to get sampled elements instead of sampling from the input, the FT-APX(k, ε) problem is reduced to the FT-APX k ′ , 1−Q 2 problem (with the restriction that we may only use sampled elements). Motivated by this, letà be a modified (fault-free) version of our algorithms (Section 3-5) for the FT-APX(k ′ , 1−Q 2 ) problem with success probability at least 1 − η 1000 in which each sampling from S is implemented by calling A on S. The correctness ofà relies on the fact that our algorithms only sample elements from S uniformly at random and the corresponding analysis only cares about the probability of getting a small / relevant / large element.
As applying our algorithm to solve the FT-APX(k ′ , 1−Q 2 ) problem with probability 1 − η 1000 would sample O( k ′ n (1 − Q) −2 log 1000 η ) times from S,Ã invokes A at most O( k ′ n (1 − Q) −2 log 1000 η ) times and thus performs expected O(D k ′ n (1 − Q) −2 log 1000 η ) comparisons. Since all terms except D are bounded from above by a constant, the above bound is can be reformulated as O(D). On the other hand, we have already proven that the expected number of comparison to solve the FT-APX(k, ε) problem with probability at least 1 − 1000 η is Ω( k n ε −2 log 1000 η ) = Ω( k n ε −2 ). Since the first bound O(D) is an upper bound for the second bound Ω( k n ε −2 ), D = Ω( k n ε −2 ) = Ω( k n ε −2 log 1 Q ). Recall that log 1 Q is a constant since Q ≥ η 1000 and η is an absolute constant. To sum up, when 4ε ≤ k n ≤ 1 2 and k ≥ 200,the expected number of comparisons required by any algorithm that solve FT-APX(k, ε) with probability 1 − Q is Ω min{n, k n ε −2 log 1 Q } .
◀ If k n ε −2 log 1 Q = Ω(n), the lower bound in Theorem 15 becomes just Ω(n). By reducing it to the exact selection problem, we can show a stronger lower bound in this case. 1 − b). We may hence apply Corollary 29 and get
Pr[X = ℓ + t] ≥ π 64 ma ′ (1 − a ′ ) · e −3 (a+ t m −b) 2 b(1−b) m ≥ π 80ma(1 − a) · e −3(a+ t m −b) 2 b(1−b) m
where we used that
a ′ (1 − a ′ ) ≤ 5 4 a(1 − a). Since (a + t m − b) 2 ≤ 2(a − b) 2 + 2( t m ) 2 , we have 3(a + t m − b) 2 b(1 − b) m ≤ 6(a − b) 2 b(1 − b) m + 6t 2 mb(1 − b) where 6t 2 mb(1 − b) ≤ 6ma(1 − a) mb(1 − b) = 6 a(1 − a) b(1 − b) ≤ 24.
Hence we have
Pr[X = ℓ + t] ≥ π 80 ma(1 − a) − t · e − 6(a−b) 2 b(1−b) m · e −24 .
There are at least ma (
E.4 A useful tool
This subsection aims to build up a tool (Theorem 32) for proving Corollary 29 in Appendix E.3. We first introduce an entropy bound (Lemma 31), and then use this entropy bound prove Theorem 32, in which we also prove Lemma 28. .
▶
Lemma 4. (Majority Vote) For any error probability p ∈ [0, 1 2 ), there exists a postive integer c p such that a strategy that compares two elements 2c p · t + 1 times and returns the majority result succeeds with probability at least 1 − e −t , where c p = ⌈ 4(1−p) (1−2p) 2 ⌉. The exact failure probability of this strategy is cp·t i=0
generate a multiset M i of n i elements by repeatedly picking three elements from M i−1 randomly and selecting the median of the three using a symmetric median selection algorithm (Lemma 7 below). 2. Apply the ST-Median(ε L ) algorithm on M L .
▶ Lemma 7 .
7For three elements, consider the following median selection algorithm: 1. For each pair of elements, apply the majority vote strategy with 2c p · 4 + 1 comparisons (Lemma 4), and assign a point to the element that attains the majority result. 2. Return the element with exactly one point. If all three elements get exactly one point, return one of them uniformly at random.
▶
Corollary 17. Let A consist of m ≤ n 4 elements sampled from S without replacement. Suppose that mβ ≥ 8 and that 1 2 ≥ β ≥ 4ε. Then, an arbitrary element u in A is NOT relevant with probability at leastη ·e −24· n k ·ε 2 ·m .for some absolute constant η. Now let us briefly sketch the proof of Lemma 16. The main observation is that the number of small (or large) elements in A has hypergeometric distribution. The probability density function of the hypergeometric distribution can be expressed explicitly with binomial coefficients. By the entropy bound for binomial coefficients and a second order tangent bound based on the second derivative in x = m M , the following theorem follows.▶ Theorem 18. Let X ∼ Hypergeom(M, K, m). Let 0 ≤ ℓ ≤ m be an integer with ℓ < K and m − ℓ < M − K. Put a = k l , b = K M , and x = m M , then we have we have
◀ Lemma 4 .
4(Majority Vote) For any error probability p ∈ [0, 1 2 ), there exists a postive integer c p such that a strategy that compares two elements 2c p · t + 1 times and returns the majority result succeeds with probability at least 1 − e −t , where c p = ⌈ 4(1−p) (1−2p) 2 ⌉. The exact failure probability of this strategy is cp·t i=0
◀ Theorem 5 .
5It takes expected O( k n ε −2 log 1 Q ) comparisons to solve the FT-APX(k, ε) problem with probability at least 1 − Q. Proof. Let m = 2 10 · 3 2 · ln 2 Q as in Lemma 21. The algorithm consists of two stages. The first stage aims to select m elements in which all elements in the range ( 1 8 m, 7 8 m] are relevant, and the second stage aims to select an element from ( 1 8 m, 7 8 m]. For the number of comparisons, by Theorem 13, it takes O( k n ε −2 ) comparisons to select a relevant element with probability at least 1 − 1 9 , so the first stage takes O( k n ε −2 · m) = O( k n ε −2 · log 1 Q ) comparisons. For the second stage, by Section 3, one verification step performs O(log 1 Q ) comparisons.
comparisons. To sum up, the algorithm takes expected O( k n ε −2 · log 1 Q ) comparisons. For the success probability, by Theorem 13 and Lemma 21, the first stage fails with probability at most Q 2 . The second stage fails only when returning an element in [1, 1 8 m] or ( 7 8 m, m]. Since a single round picks an element in [1, 1 8 m] ∪ ( 7 8 m, m] with probability 1 4 and the verification fails with probability at most Q 2 . a single round returns an element in [1, 1 8 m] ∪ ( 7 8 m, m] with probability at most 1 4 · Q 2 = Q 8 , Therefore, the failure probability of the second stage is at most i≥1 ( 3 4 ) i−1 · Q 8 = Q 2 , concluding the following theorem: ◀ C Supplementary material for Section 4 Lemma 7. For three elements, consider the following median selection algorithm:
▶
Theorem 30. Let X ∼ Hypergeom(M, K, m). Let 0 ≤ ℓ ≤ m be a real number with ℓ < K and m − ℓ < M − K. Put a = ℓ m , b = K M and x = m M . If a ≤ 8 5 b, (1 − a) ≤ 2(1 − b) , x ≤ 1 4 and ma(1 − a) ≥ 4, then we have P r[X ≥ ℓ] ≥ π 320 · e −24 · e − 6(a−b) 2 b(1−b) m . Proof. Let 0 ≤ t ≤ ma(1 − a)be a real number such that ℓ + t is an integer and put a ′ = ℓ+t m . As ma(1 − a) ≥ 4, we have t ≤ ma(1 − a) ≤ ma(≤ a ′ ≤ 1 − a ≤ 2(
1 − a) − 1 possible values of t. As ma(1 − a) ≥ 4, we havema(1 − a) − 1 ≥ ma(1 − a) 2 .Thus summing over all possible possible values of t yields the statement. ◀
▶
Lemma 31 (Entropy bound [2]). Let H(x) = −x ln(x) − (1 − x) ln(1 − x)be the entropy function. Let 0 ≤ k ≤ n be an integer and put α = k n . Thene nH(α) 8nα(1 − α) ≤ n k ≤ e n H(α) 2πnα(1 − α)
be 2c p · t + 1 independent Bernoulli random variables such that X i = 1 if the i-th comparison succeeds, i.e., Pr[X i = 1] = 1 − p and Pr[X i
This subsection shows detailed proofs for the lower bound analysis in Section 6. We first prove a number of nice properties for the auxiliary decision treeT (Lemma 22). Then, we use Lemma 22 and a sampling lemma (Corollary 25 in Appendix E.2) to bound the failure probability ofT (Lemma 14). Finally, we combine Lemma 22 and Lemma 14 to prove the lower bound (Theorem 15).▶ Lemma 22.T has the following properties:1.T knows the sorted order of the elements thatT has looked at. 2.T has success probability at least 1 − Q. 3.T looks at exactly 2 D + 8n k elements with probability at least 1/2. Note that this includes the elements thatT looks at during its simulation of T .4i
◀
E
Supplementary material for Section 6
E.1 Derivations Towards Theorem 15
Appendix ▶ Theorem 23. If Q <12 and k n ε −2 log 1 Q = Ω(n), then the expected number of comparisons performed by any randomized algorithm that solves FT-APX(k, ε) with probability at least 1 − Q isProof. The first term n directly comes from the first term n of Theorem 15. Recall that we assume k ≤ n 2 . The second term ε −1 log (k+nε)/(2nε) Q can be reduced from the lower bound Ω(n log k Q ) for the exact k-th smallest element selection problem[10]as follows. Note that as remarked in[10,Section 1], their bound holds both in expectation and in the worst case.Assume we attempt to select the ℓ-th smallest element among m elements. We can duplicate each element 2 · nε times and solve the FT-APX k, ε problem where n = m · nε and k = (2nε) · ℓ − nε. This setting implies that m = ε −1 and ℓ = (k + nε)/(2nε). Since selecting the ℓ-th smallest element among m elements with probability at least 1 − Q requires Ω(m log ℓ Q ) comparisons, a lower bound of Ω(ε −1 log (k+nε)/(2nε) Q ) follows. ◀E.2 Sampling LemmaThis subsection aims to build up a sampling bound (Corollary 25) that is the key ingredient to prove Lemma 14. Corollary 25 roughly states that for a set A of randomly sampled elements (without replacement), the probability that an element of a certain rank in A is NOT relevant decreases as e −Ω( ε 2 β |A|) . To prove Corollary 25, we first derive Lemma 24 that deals with different positions in A. For ease of exposition, we also use β to denote k n in the proofs. As assumed in the whole paper, β ≤ 1 2 , and as stated in Section 6, it is also sufficient to consider β ≥ 4ε since if β < 4ε, we then can apply the lower bound for the approximate minimum selection[23].2.Let v be the r-th largest element of A. If r ≤ ⌈(1 − β)m⌉, then v is large with probability at leastProof. We first prove(1). Let X denote the number of small elements in A. Then X ∼ Hypergeom(n, (β −ε)k, m) has a hypergeometric distribution (Definition 26 in Appendix E.3).Since r ≤ ⌈βm⌉, u is small if and only if A contains at least r small elements, i.e., if and only if X ≥ r. Put a = β and b =β − ε. Then we have a ≤ 8Hence by Theorem 30XX:24 Approximate Selection with Unreliable Comparisons in Optimal Expected Timefor some absolute constant η. Since β ≥ 2ε, we have b ≥ β 2 , and since we also haveimplying thatThe proof of(2)Proof. Let r be the rank of u in A. If r ≤ ⌈βm⌉, then by part(1)of Lemma 24, u is small with probability at leastE.3 A lower tail for hypergeometric distributionThis subsection aims to build a lower tail bound for the hypergeometric distribution (Theorem 30), which is used in the proof of Lemma 24. We first define the hypergeometric distribution and the Kullback-Leibler divergence. Then, we prove Lemma 28 for the Kullback-Leibler divergence and derive Corollary 29. Finally, we adopt Corollary 29 to prove Theorem 30. ▶ Lemma 28. Let a ∈ (0, 1), thenProof. We haveNote that y a ≥ −0.5 and − y 1−a ≥ −0.5 by assumption. ThereforeProof. Note that we have ℓ = axM ≤where the last inequality comes from the fact that x ≤ 1 2 Next, we bound F . Asfor which we apply Lemma 28 to bound the divergence term.where D (a∥b) is the Kullback-Leibler divergence (Definition 27).Proof. By the definition of the hypergeometric distribution, we haveBy the entropy bound (Lemma 31), we havethen we can writeF = P(a, b, x) · M Hence by Lemma 33, we havẽwhich shows the result asF = −x · F . ◀ ▶ Lemma 33. For a, b, x defined as Theorem 32, we haveXX:28 Approximate Selection with Unreliable Comparisons in Optimal Expected TimeProof. 
For fixed a, b ∈ (0, 1), let C a,b = min b a , 1−b 1−a . LetNote that P is defined for x ∈ (0, C a,b ) and Q is defined for x ∈ (−∞, C a,b ). For x ∈ (0, C a,b ), we have P(a, b, q) = Q a,b (x). In other words, Q is an extension of P to non-positive values of x.A straight-forward computation shows that Q a,b is smooth on (−∞, C a,b ) withFrom these formulas, it is easy to see thatand that Q ′′ a,b (x) is non-decreasing in x. We use this to bound Q by a second order tangent bound as follows: Since Q ′′ a,b (x) is non-decreasing, we haveHence, by the fundamental theorem of calculus, for every z ∈ [0, x], we haveHence, again by the fundamental theorem of calculus, for every y ∈ [0, x], we haveSetting y = x and plugging in the formulas for the derivatives of Q gives the result. ◀
Finding the maximum and minimum. Martin Aigner, Discrete Applied Mathematics. 741Martin Aigner. Finding the maximum and minimum. Discrete Applied Mathematics, 74(1):1-12, 1997.
Information Theory. Robert B Ash, Dover PublicationsRobert B. Ash. Information Theory. Dover Books on Mathematics. Dover Publications, 2012. URL: https://books.google.ch/books?id=1jxfbPz0HRoC.
On sorting in the presence of erroneous information. Amitava Bagchi, Information Processing Letters. 434Amitava Bagchi. On sorting in the presence of erroneous information. Information Processing Letters, 43(4):213-215, 1992.
Comparison-based search in the presence of errors. Ryan S Borgstrom, S. Rao Kosaraju, Proceedings of the Twenty-fifth Symposium on Theory of Computing (STOC93). the Twenty-fifth Symposium on Theory of Computing (STOC93)Ryan S. Borgstrom and S. Rao Kosaraju. Comparison-based search in the presence of errors. In Proceedings of the Twenty-fifth Symposium on Theory of Computing (STOC93), pages 130-136, 1993.
| []
|
[
"Bootstrapping Graph Convolutional Neural Networks for Autism Spectrum Disorder Classification",
"Bootstrapping Graph Convolutional Neural Networks for Autism Spectrum Disorder Classification"
]
| [
"Rushil Anirudh \[email protected] Center for Applied Scientific Computing\[email protected] Center for Applied Scientific Computing Lawrence Livermore National Laboratory Liveremore\nLawrence Livermore National Laboratory Liveremore\nCA, CA\n",
"Jayaraman J Thiagarajan \[email protected] Center for Applied Scientific Computing\[email protected] Center for Applied Scientific Computing Lawrence Livermore National Laboratory Liveremore\nLawrence Livermore National Laboratory Liveremore\nCA, CA\n"
]
| [
"[email protected] Center for Applied Scientific Computing\[email protected] Center for Applied Scientific Computing Lawrence Livermore National Laboratory Liveremore\nLawrence Livermore National Laboratory Liveremore\nCA, CA",
"[email protected] Center for Applied Scientific Computing\[email protected] Center for Applied Scientific Computing Lawrence Livermore National Laboratory Liveremore\nLawrence Livermore National Laboratory Liveremore\nCA, CA"
]
| []
| Using predictive models to identify patterns that can act as biomarkers for different neuropathological conditions is becoming highly prevalent. In this paper, we consider the problem of Autism Spectrum Disorder (ASD) classification. While non-invasive imaging measurements, such as the resting state fMRI, are typically used in this problem, it can be beneficial to incorporate a wide variety of non-imaging features, including personal and socio-cultural traits, into predictive modeling. We propose to employ a graph-based approach for combining both types of features, where a contextual graph encodes the traits of a larger population while the brain activity patterns are defined as a multivariate function at the nodes of the graph. Since the underlying graph dictates the performance of the resulting predictive models, we explore the use of different graph construction strategies. Furthermore, we develop a bootstrapped version of graph convolutional neural networks (G-CNNs) that utilizes an ensemble of weakly trained G-CNNs to avoid overfitting and also reduce the sensitivity of the models to the choice of graph construction. We demonstrate its effectiveness on the Autism Brain Imaging Data Exchange (ABIDE) dataset and show that the proposed approach outperforms state-of-the-art approaches for this problem. | 10.1109/icassp.2019.8683547 | [
"https://arxiv.org/pdf/1704.07487v1.pdf"
]
| 16,650,375 | 1704.07487 | bf157da49e7bc60c17f0b1c5c457c1f7fe5fb289 |
Bootstrapping Graph Convolutional Neural Networks for Autism Spectrum Disorder Classification
Rushil Anirudh
[email protected] Center for Applied Scientific Computing
[email protected] Center for Applied Scientific Computing Lawrence Livermore National Laboratory Liveremore
Lawrence Livermore National Laboratory Liveremore
CA, CA
Jayaraman J Thiagarajan
[email protected] Center for Applied Scientific Computing
[email protected] Center for Applied Scientific Computing Lawrence Livermore National Laboratory Liveremore
Lawrence Livermore National Laboratory Liveremore
CA, CA
Bootstrapping Graph Convolutional Neural Networks for Autism Spectrum Disorder Classification
Using predictive models to identify patterns that can act as biomarkers for different neuropathological conditions is becoming highly prevalent. In this paper, we consider the problem of Autism Spectrum Disorder (ASD) classification. While non-invasive imaging measurements, such as the resting state fMRI, are typically used in this problem, it can be beneficial to incorporate a wide variety of non-imaging features, including personal and socio-cultural traits, into predictive modeling. We propose to employ a graph-based approach for combining both types of features, where a contextual graph encodes the traits of a larger population while the brain activity patterns are defined as a multivariate function at the nodes of the graph. Since the underlying graph dictates the performance of the resulting predictive models, we explore the use of different graph construction strategies. Furthermore, we develop a bootstrapped version of graph convolutional neural networks (G-CNNs) that utilizes an ensemble of weakly trained G-CNNs to avoid overfitting and also reduce the sensitivity of the models to the choice of graph construction. We demonstrate its effectiveness on the Autism Brain Imaging Data Exchange (ABIDE) dataset and show that the proposed approach outperforms state-of-the-art approaches for this problem.
Introduction
Modeling the relationships between functional or structural regions in the brain is a significant step towards understanding, diagnosing and eventually treating a gamut of neurological conditions including epilepsy, stroke, and autism. A variety of sensing mechanisms, such as functional MRI, Electroencephalography (EEG) and Electrocorticography (ECoG), are commonly adopted to uncover patterns in both brain structure and function. In particular, the resting state fMRI Kelly et al. (2008) has been proven effective in identifying diagnostic biomarkers for mental health conditions such as the Alzheimer disease Chen et al. (2011) and autism Plitt et al. (2015). At the core of these neuropathology studies are predictive models that map the variations in brain functionality, obtained as time-series measurements in regions of interest, to suitable clinical measures. For example, the Autism Brain Imaging Data Exchange (ABIDE) is a collaborative effort Di Martino et al. (2014), which seeks to build a data-driven approach for autism diagnosis. Further, several published studies have reported that predictive models can reveal patterns in brain activity that act as effective biomarkers for classifying patients with mental illness Plitt et al. (2015).

Figure 1: A generic architecture for machine learning driven neuropathology studies. In this paper, we investigate approaches for incorporating patient similarity into predictive modeling.

Figure 1 illustrates a generic pipeline used in these studies. Given the resting state fMRI measurements, the functional connectivities between the different regions of the brain can be estimated. Though the network can be constructed using the individual voxels, it is common practice to extract regions of interest (ROI) based on pre-defined atlases or the correlation structure in the data Calhoun et al. In addition to making the analysis more interpretable, this process enables dimensionality reduction by allowing the use of a single representative time series for each region Behzadi et al. (2007). Building predictive models requires the use of appropriate features for each subject, whose brain activity is represented as a multivariate time series. While exploiting the statistics of features, e.g. the covariance structure of the multivariate time series data, is critical to building effective models, it can be highly beneficial to utilize other non-imaging characteristics shared across subjects from a larger population.
Graphs are a natural representation to encode the relationships in a population. In addition to revealing the correlations in the imaging features (e.g. fMRI), the graphs could include a wide range of non-imaging features based on more general characteristics of the subjects, for example geographical, socio-cultural or gender. The advances in graph signal processing and the generalization of complex machine learning techniques, such as deep neural networks, to arbitrarily structured data make graphs an attractive solution. Despite the availability of such tools, the choice of graph construction, G(V, E) with subjects as nodes V and their similarities as edges E, is crucial to the success of this pipeline. Existing approaches construct a neighborhood graph based on the imaging features and then remove edges based on other criteria such as gender Parisot et al. (2017). However, as we show in our empirical studies with the ABIDE dataset, these hybrid graphs do not provide significant improvements to the prediction performance over baseline methods, such as kernel machines, based only on the imaging features. Contributions: In this paper, we propose a new approach to predictive modeling, which relies on generating an ensemble of population graphs, utilizing graph convolutional neural networks (G-CNNs) (Defferrard et al. (2016); Kipf and Welling (2017)) as our predictive model, and employing a consensus strategy to obtain inference. First, using a bootstrapping approach to design graph ensembles allows our predictive model to better explore connections between subjects in a large population graph that are not captured by simple heuristics. Second, graph CNNs provide a powerful computing framework to make inferences on graphs, by treating the subject-specific image features as a function, f : V → R^N, defined at the nodes of the population graph. In addition to existing population graph construction strategies, we study the use of graph kernel similarities obtained by treating the measurements for each subject as a graph representation. Note, the latter is a subject-specific graph and can effectively model the spatio-temporal statistics of the brain activity. Our results show that the proposed bootstrapped G-CNN approach achieves the state-of-the-art performance in ASD classification and, more interestingly, the graph ensemble strategy improves the prediction accuracies of all population graph construction approaches. In addition, the proposed bootstrapping reduces the sensitivity of the prediction performance to the graph construction step. Consequently, even non-experts can design simpler graphs, which, with bootstrapping, can perform on par with more sophisticated graph construction strategies.

Figure 2: An overview of the proposed approach for predictive modeling with non-imaging features encoded as a population graph and imaging features defined as functions on the nodes. We construct a randomized ensemble of population graphs, employ graph CNNs to build models and utilize a consensus strategy to perform the actual classification.

Figure 2 illustrates an overview of the proposed approach for predictive modeling in ASD classification. As it can be observed, the pipeline requires an initial population graph and the features at each node (i.e. subject) in the graph as inputs. Subsequently, we create an ensemble of randomized graph realizations and invoke the training of a graph CNN model for every realization. The output layer of these neural networks implements the softmax function, which computes the probabilities for class association of each node. Finally, the consensus module fuses the decisions from the ensemble to obtain the final class label.
Proposed Approach
In this section, we begin describing the proposed approach for predictive modeling to classify subjects with autism. More specifically, we describe the feature extraction procedure and the different strategies adopted for constructing population graphs. In the next section, we will present the predictive modeling algorithm, based on graph CNNs, that incorporates both the extracted features and information from the population graph.
Feature Extraction: The Connectivity Matrix
Feature design has become an integral part of advanced machine learning systems. A good feature representation is characterized by its ability to describe the variabilities in the dataset, while being concise and preferably low-dimensional. It has been recently shown in Abraham et al. (2017) that the connectivity matrix can be reliably estimated from the resting state fMRI data as the covariance matrix obtained using the Ledoit-Wolf shrinkage estimator. Denoting the total number of regions of interest for each subject as d, the resulting d×d covariance matrix captures the relationships between the time-series measurements from the different ROIs. As shown in Abraham et al. (2017), these features are informative and can be directly used for classification by using the vectorized upper triangular part of the covariance matrix as the feature for each subject. Though more sophisticated feature learning strategies could be employed, we observed in our experiments that the connectivity matrix produces the best performance. Consequently, we adopt that feature representation in our approach. Since the resulting feature vector, commonly referred to as the cursed representation, is high-dimensional, we perform dimensionality reduction on the features before using them to train a classifier.
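To make this step concrete, the sketch below computes the connectivity feature for a single subject. The synthetic input, the array shapes, and the use of scikit-learn's LedoitWolf estimator are our assumptions for illustration, not code from the paper.

```python
# Minimal sketch of the feature-extraction step, assuming `ts` holds the
# mean ROI time series for one subject (shape: T x 111 for the HO atlas).
import numpy as np
from sklearn.covariance import LedoitWolf

def connectivity_feature(ts):
    """Vectorized upper triangle of the Ledoit-Wolf covariance estimate."""
    cov = LedoitWolf().fit(ts).covariance_   # (111, 111) covariance matrix
    iu = np.triu_indices_from(cov, k=1)      # strictly upper-triangular part
    return cov[iu]                           # 111*110/2 = 6105-dim feature

# Example with synthetic data: 200 time points, 111 ROIs.
feat = connectivity_feature(np.random.randn(200, 111))
print(feat.shape)  # (6105,)
```

In practice this vector would then be reduced to 60 dimensions, as described in the experimental section.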
Population Graph Construction
Though the classifier can be directly trained using the extracted features, such an approach can fail to incorporate non-imaging/sensing information that can be critical to discriminate between different classes. For example, it is likely that there is discrepancy in some aspects of data collection at different sites, or the gender of the subject is important in generalizing autism spectrum disorder classifiers. It is non-trivial to directly incorporate such information into the subject features, but a graph can be a very intuitive way to introduce these relationships into the learning process. Graphs are natural data structures to model data in high dimensional spaces, where nodes represent the subjects and the edges describe the relations between them. This can be particularly effective while studying larger populations for traits of interest, because the graph can encode information that is different from the imaging features estimated for each subject independently. Furthermore, this can provide additional context to the machine learning algorithm used for prediction, thereby avoiding overfitting in cases of limited training data. It is important to distinguish the population graph construction process from graphs that are used to analyze the brain activity of each subject, i.e. connectivity matrix of statistical dependencies between different ROIs. Common data processing tasks such as filtering and localized transforms of signals do not directly generalize to irregular domains such as signals defined on graphs. Consequently, several graph signal processing tools have been developed to generalize ideas from Euclidean domain analysis. In particular, spectral filtering and wavelet analysis have become popular graph signal processing techniques in several applications Shuman et al. (2013). These ideas have been further extended to build convolutional neural networks directly in the graph domain, which take advantage of the fact that convolutions are multiplications in the spectral domain (Defferrard et al. (2016); Kipf and Welling (2017)). In this paper, we propose to use graph convolutional neural networks (G-CNNs) to address the task of ASD classification using a population graph, with each node being characterized by the extracted features. An inherent challenge with such an analysis is that the results obtained are directly dictated by the weighted graph defined for the analysis. Consequently, designing suitable weighted graphs that capture the geometric structure of data is essential for meaningful analysis. In our context, the population graph construction determines how two subjects are connected, so that context information could be shared between them. In addition to approaches that employ simple heuristics (e.g. distances between features) or domain-specific characteristics (e.g. gender), we also investigate the use of a more sophisticated graph construction strategy, aimed at exploiting the spatio-temporal statistics of each subject. Here is the list of graph construction mechanisms adopted in this paper:
• Site and Gender: Previous studies have shown that the geographical site information and the gender of the subject are important to build generalizable models, and hence we use them to define the similarity graph between subjects. If two subjects have the same gender, they are given a score of s_sex = λ_1 > 1, and 1 if they are not. Similarly, the subjects are given a score of s_site = λ_2 > 1 if they were processed at the same site, and 1 if not.
• Linear Kernel Feature Graph: The edge weights of the graph are computed as the Euclidean dot product, or a linear kernel, between the connectivity features of two different subjects. As expected, this graph does not provide any additional information because it is directly based on the features that were defined at the nodes. However, the authors in Parisot et al. (2017) showed that this can be combined with the gender and site graphs to build a new graph with additional information for improved prediction.
• Graph Kernel on the Connectivity Matrix: This graph is constructed by interpreting the connectivity matrix as an adjacency matrix of a graph. We refer to this as the subject graph. This has favorable properties in that it can model the spatiotemporal statistics between ROIs explicitly, as opposed to vectorizing them. This is followed by defining a graph kernel such as Weisfeiler-Lehman (Shervashidze et al. (2011)) on pairs of subject graphs, resulting in a kernel matrix that can be used for classification or prediction Jie et al. (2014). In contrast, we interpret the resulting similarity kernel matrix as the population graph, after multiplying with the edge weights from the gender and site graphs. (A minimal sketch of how these similarity scores combine into an edge weight follows this list.)
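The sketch below shows one way the scores above could combine into a single edge weight between two subjects; the λ values and the multiplicative combination with a linear feature kernel are illustrative assumptions, not the authors' exact construction.

```python
# Hedged sketch of a population-graph edge weight: s_sex and s_site follow
# the description above (lambda > 1 for a match, 1 otherwise); the lambda
# values and the linear-kernel term are example choices.
import numpy as np

def edge_weight(feat_i, feat_j, sex_i, sex_j, site_i, site_j,
                lam1=2.0, lam2=2.0):
    s_sex = lam1 if sex_i == sex_j else 1.0
    s_site = lam2 if site_i == site_j else 1.0
    k_lin = float(np.dot(feat_i, feat_j))    # linear kernel on the features
    return s_sex * s_site * k_lin

w = edge_weight(np.ones(60), np.ones(60), 'M', 'M', 'NYU', 'UCLA')
print(w)  # same gender, different sites: 2.0 * 1.0 * 60.0
```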
Predictive Modeling: Randomized Ensemble of G-CNNs
In this section we briefly introduce convolutional neural networks on graphs, and present our bootstrapped training strategy using an ensemble of randomized population graphs.
Graph Convolutional Neural Networks
Convolutional neural networks enable extraction of statistical features from structured data, in the form of local stationary patterns, and their aggregation for different semantic analysis tasks, e.g. image recognition or activity analysis. When the signal of interest does not lie on a regular domain, for example graphs, generalizing CNNs is particularly challenging due to the presence of convolution and pooling operators, typically defined on regular grids. In graph signal processing theory, this challenge is alleviated by switching to the spectral domain, where the convolution operations can be viewed as simple multiplications. In general, there exists no mathematical definition for the translation operation on graphs. However, a spectral domain approach defines the localization operator on graphs via convolution with a Kronecker delta signal. Localizing a filter in the spectral domain, though, requires the computation of the graph Fourier transform, and hence translations on graphs are computationally expensive. Before we describe the graph CNN architecture used in our approach, we define the Fourier transform for signals on graphs.
Preliminaries: Formally, an undirected weighted graph is represented by G = (V, E, W), where V denotes the set of vertices or nodes, E denotes the set of edges, and W is the adjacency matrix that specifies the edge weights W_ij, ∀ e_i, e_j ∈ E. The fundamental component of graph analysis is the normalized graph Laplacian L, which is defined as L = I − D^{−1/2} W D^{−1/2}, where D_ii = Σ_j W_ij is the degree matrix and I denotes the identity matrix. The set of eigenvectors of the Laplacian is referred to as the graph Fourier basis, L = UΛU^T, and hence the Fourier transform of a signal x ∈ R^N is defined as U^T x. Finally, the spectral filtering operation can be defined as y = g_θ(L)x = U g_θ(Λ) U^T x, where g_θ is the parametric filter.
Formulation: Since spectral filtering is computationally prohibitive as the number of nodes in the graph increases, Hammond et al. (2011) proposed to approximate g_θ(Λ) by a truncated expansion in terms of Chebyshev polynomials, which can be evaluated recursively.
g_θ̃(Λ) ≈ Σ_{k=0}^{K} θ̃_k T_k(Λ̃),  (1)
where θ̃ is the set of Chebyshev coefficients, K is the order of the approximation, Λ̃ are the rescaled eigenvalues, and the Chebyshev polynomials T_k(x) are defined recursively as T_k(x) = 2x T_{k−1}(x) − T_{k−2}(x). Using this approximation in the spectral filtering operation implies that the filtering is K-localized, i.e., the filtering depends only on the K-hop neighborhood of each node. We can use this K-localized filtering to define convolutional neural networks on graphs.
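A hedged sketch of the K-localized filtering in Eq. (1) follows; the spectrum rescaling and variable names are our choices, and the coefficients θ̃ would normally be learned rather than supplied.

```python
# Minimal sketch of Chebyshev-polynomial graph filtering; assumes
# len(theta) >= 2 and a symmetric positive-semidefinite Laplacian L.
import numpy as np

def chebyshev_filter(L, x, theta):
    """y = sum_k theta_k T_k(L_rescaled) x, with T_k built recursively."""
    lmax = np.linalg.eigvalsh(L).max()
    L_hat = 2.0 * L / lmax - np.eye(L.shape[0])  # rescale spectrum to [-1, 1]
    Tkm2, Tkm1 = x, L_hat @ x                    # T_0 x and T_1 x
    y = theta[0] * Tkm2 + theta[1] * Tkm1
    for k in range(2, len(theta)):
        Tk = 2.0 * L_hat @ Tkm1 - Tkm2           # T_k = 2x T_{k-1} - T_{k-2}
        y += theta[k] * Tk
        Tkm2, Tkm1 = Tkm1, Tk
    return y
```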
For example, the approach in Kipf and Welling (2017) uses K = 1 such that the layerwise computation is linear w.r.t. L. Though this is a crude approximation, by stacking multiple layers one can still recover a large class of complex filter functions that are not limited to the linear functions supported by the first-order Chebyshev approximation. The resulting filtering can then be expressed as
g_θ̃ ⋆ x = θ̃_0 x − θ̃_1 D^{−1/2} W D^{−1/2} x.  (2)
Here, ⋆ indicates the convolution operation in the spatial domain. Note that the filter parameters are shared over the whole graph. In summary, the complete processing in a layer is defined as follows: given an input function X ∈ R^{T×N}, where T is the number of nodes and N is the dimension of the multivariate function at each node, the activations can be computed as
σ(D̃^{−1/2} W̃ D̃^{−1/2} X Θ).  (3)
Here Θ ∈ R^{N×F} is the set of filter parameters, and the graph Laplacian is reparameterized as D̃^{−1/2} W̃ D̃^{−1/2} = I + D^{−1/2} W D^{−1/2}. The activation function σ is chosen to be ReLU in our implementation. This linear formulation of graph convolutional networks is much more computationally efficient compared to the accurate filtering implementation.
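The layer rule of Eq. (3) is compact enough to sketch directly; the toy adjacency matrix, random weights, and layer width below are placeholders for the trained quantities.

```python
# Sketch of the single-layer GCN propagation rule in Eq. (3).
import numpy as np

def gcn_layer(W, X, Theta):
    A = W + np.eye(W.shape[0])                 # W_tilde = I + W
    d = A.sum(axis=1)                          # degrees of the augmented graph
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A @ D_inv_sqrt @ X @ Theta, 0.0)  # ReLU

W = np.random.rand(872, 872)
W = 0.5 * (W + W.T)                            # symmetric toy adjacency
H = gcn_layer(W, np.random.randn(872, 60), np.random.randn(60, 64))
print(H.shape)  # (872, 64)
```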
Ensemble Learning
As described in the previous section, population graphs provide a convenient way to incorporate non-imaging features into the predictive modeling framework, where the connectivity features from fMRI data are used as a function on the graph. While G-CNN can automatically infer spectral filters for achieving discrimination across different classes, the sensitivity of its performance to the choice of the population graph is not straightforward to understand. Consequently, debugging and fine-tuning these networks can be quite challenging.
The conventional approach to building robust predictive models is to infer an ensemble of weak learners from data and then fuse the decisions using a consensus strategy. The need for ensemble models in supervised learning has been well-studied Dietterich (2000). A variety of bootstrapping techniques have been developed, wherein multiple weak models are inferred using different random subsets of data. The intuition behind the success of this approach is statistical, whereby different models may have a similar training error when learned using a subset of training samples, but their performance on test data can be different since they optimize for different regions of the input data space.
We employ a similar intuition to building models with population graphs, wherein the quality of different similarity metrics for graph construction can lead to vastly different predictive models. More specifically, starting with a population graph, we create an ensemble of graphs, {G_p}_{p=1}^{P}, by randomly dropping a pre-defined fraction of edges. In this paper, we use a uniform random distribution for the dropout, though more sophisticated weighted distributions could be used. For each of the graphs in the ensemble, we build a G-CNN model (details in Section 4) with the connectivity features as the N-dimensional multivariate function at each node. The output of each of the networks is a softmax function for each node, indicating the probability for the subject to be affected by ASD. Note that, unlike conventional ensemble learners, we do not subsample data, but only subsample edges of the population graph. Given the decisions from all the weak learners, we employ simple consensus strategies such as averaging or obtaining the maximum of the probabilities estimated by each of the G-CNNs for a test subject. As we will show in our experimental results with the ABIDE dataset, the proposed ensemble approach boosts the performance of all the population graph construction strategies discussed in Section 2.
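A minimal sketch of this edge-dropout step is shown below, assuming a dense symmetric adjacency matrix; the values of P and the dropout fraction follow the experimental section, while the function name and seeding are ours.

```python
# Build an ensemble of P graphs by dropping a fixed fraction of edges
# uniformly at random, keeping the adjacency matrix symmetric.
import numpy as np

def random_graph_ensemble(W, P=5, drop_frac=0.35, seed=0):
    rng = np.random.default_rng(seed)
    graphs = []
    for _ in range(P):
        Wp = W.copy()
        iu = np.triu_indices_from(Wp, k=1)
        mask = rng.random(len(iu[0])) < drop_frac
        Wp[iu[0][mask], iu[1][mask]] = 0.0     # drop edge (i, j) ...
        Wp[iu[1][mask], iu[0][mask]] = 0.0     # ... and its mirror (j, i)
        graphs.append(Wp)
    return graphs
```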
Experiments
In this section, we describe our experiments to evaluate the performance of the proposed ensemble approach in ASD classification, for different population graphs. For comparison, we report the state-of-the-art results obtained using the methods in Abraham et al. (2017) and Parisot et al. (2017).
The ABIDE dataset
We present our results on the Autism Brain Imaging Data Exchange (ABIDE) dataset Di Martino et al. (2014) that contains resting state fMRI data (rs-fMRI) for 1112 patients, as part of the preprocessed connectomes project 2. The pre-processing includes slice-timing correction, motion correction, and intensity normalization, depending on the pipeline used. We follow the same preprocessing pipeline (C-PAC) and atlases (Harvard-Oxford) as described in Abraham et al. (2017), in order to facilitate easy comparison - this resulted in a dataset with 872 of the initial 1112 patients available, from 20 different sites. The task is now to diagnose a patient as belonging to one of two classes - Autism Spectrum Disorder (ASD) or Typical Control (TC). These labels are available separately 3.
The resulting data per subject consists of the mean time series obtained from the rs-fMRI for each Region of Interest (ROI). In total, there are 111 ROIs for the HO atlas considered here, resulting in data for the i-th subject in the form of an R^{111×T} matrix, where T is the total number of slices in the fMRI measurement. We use the same train/test splits as Abraham et al. (2017) and the 10-fold split of the ABIDE dataset, which resulted in 696 patients used for training and the remaining 175 patients for testing. Instead of reporting the average performance, we report the accuracies for the best performing and the worst performing splits chosen w.r.t. the baseline described in the next subsection (i.e., split 1 and split 8), since this provides a much better understanding of the performance variabilities.
Methods
Baseline: First, for each patient we estimate the connectivity matrix between the ROIs, as the covariance of the multivariate time-series data estimated using the Ledoit-Wolf shrinkage estimator. In order to establish a baseline, we construct the connectivity feature by vectorizing the upper triangular part of the covariance matrix and performing dimensionality reduction (60D) on the vectorized feature. The connectivity features of the subjects are then fed into a linear SVM Abraham et al. (2017) and a kernel SVM with the radial basis function kernel. Graph Convolutional Neural Network (G-CNN): We use the G-CNN implementation from Kipf and Welling (2017), particularly the Chebyshev polynomial approximation for the Fourier basis as described in Defferrard et al. (2016). We constructed a fully convolutional network, with 6 layers of 64 neurons each, a learning rate of 0.005, and dropout of 0.1/0.2 depending on the graph. We used Chebyshev polynomials up to degree 3 to approximate the Graph Fourier Transform in the G-CNN in all our experiments. The G-CNN requires a suitable graph G, features defined for each node, and the classification label y ∈ {0, 1} for the binary classification task. For a given graph, the best results were typically obtained in around 100-150 epochs; similar numbers have also been reported in Parisot et al. (2017). Ensemble G-CNN: For a given graph G, we generate P new graphs, {G_1, G_2, . . . , G_P}, obtained by randomly dropping 30-40% of the edges of G. The G-CNN for each graph was trained for 100 epochs, with a dropout of 0.1. The predictions from each model obtained using the ensemble are fused using two simple strategies: (a) avg, where we take the mean of the predictions and assign the class as the one with the largest value, and (b) max, where we take the most confident value for each class and assign the label which has a higher value. In general, we found that a G-CNN for the ABIDE dataset, which is particularly sensitive, tends to overfit very easily, making it extremely challenging to tune the hyperparameters. However, introducing randomness in the graphs significantly improved robustness and led to improved training, behaving a lot like dropout inside traditional neural networks. This behavior can possibly be attributed to the fact that the dropout in the G-CNN tends to ignore random edges in the hidden layers of the neural network, while we are proposing to use multiple randomized graphs in the input layer. In general, the random ensembles behave as weak learners, where the performance for each individual network is suboptimal, but the consensus decision is better than the state-of-the-art.

Table 1: Classification results on the ABIDE dataset (Di Martino et al. (2014)) using the proposed approach. For comparison, we report the state-of-the-art results obtained using linear SVM Abraham et al. (2017) and linear kernel graphs Parisot et al. (2017). In all the cases, using the proposed Random Graph Ensemble approach leads to a 1-6 percentage point improvement in accuracy. The results for the ensemble case were obtained using an ensemble of 5 random graphs.
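The two consensus rules, avg and max, described in the Ensemble G-CNN paragraph above, can be sketched as follows; the input is assumed to be an array of softmax outputs of shape (P, subjects, classes), which is our convention rather than the authors' code.

```python
# Fuse per-graph softmax predictions via averaging or per-class max.
import numpy as np

def fuse(probs, mode="avg"):
    if mode == "avg":
        return probs.mean(axis=0).argmax(axis=1)   # average, then argmax
    return probs.max(axis=0).argmax(axis=1)        # most confident per class

# Toy example: 5 graphs, 175 test subjects, 2 classes.
probs = np.random.dirichlet([1, 1], size=(5, 175))
preds = fuse(probs, mode="max")
print(preds.shape)  # (175,)
```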
Classification Results
The classification accuracy for different graphs and their corresponding random ensembles are shown in Table 1. A few observations are important to note. First, there is a clear and obvious advantage in using the graph-based approach to classifying populations. This is especially true in datasets where the features alone are not sufficient, but require contextual information, which a population graph can provide. Secondly, the difference in performance for different graphs illustrates the importance of graph construction, further emphasizing the need for robust training strategies such as those proposed in this paper. It is also seen that using a random graph ensemble significantly improves performance, up to 6% higher than the state-of-the-art that uses only features as in Abraham et al. (2017). We also show improvements over using the linear kernel graph proposed in a recent manuscript (Parisot et al. (2017)). Finally, it can be observed that the best performing split has a consistently high performance across different kinds of graphs and is only marginally better than the baseline, perhaps indicating that the connectivity features dictate the performance in that case. In contrast, the worst performing split shows nearly 6% lower performance without using the graph information. We observe that the largest gains in performance are in this split, indicating that the proposed ensemble strategy is able to recover important information for classification, which can be missed by heuristic graph construction. Another interesting observation is that a simple domain-specific graph such as G_{sex,site} can perform as well as any state-of-the-art method, when coupled with the proposed ensemble strategy.
Discussion and Future Work
In this paper we presented a training strategy for graph convolutional neural networks (G-CNNs), which have been recently proposed as a promising solution to the node classification problem in graph structured data. We focused particularly on autism spectrum disorder classification using time series extracted from resting state fMRI. The problem is challenging because each subject contains time series data from multiple regions of interest in the brain. A population graph, in addition to the imaging features, can provide more contextual information, leading to better classification performance. However, the consequence of building a population graph is that it becomes an important step in the process, and graph construction is difficult in general, because it is hard to know beforehand which relationships are actually important to the prediction task. To circumvent this challenge, we propose to use bootstrapping as a way to reduce the sensitivity of the initial graph construction step, by generating multiple random graphs from the initial population graph. We train a G-CNN for each randomized graph, and fuse their predictions at the end, based on simple consensus strategies. These individual predictive models behave as weak learners that can be aggregated to produce superior classification performance. The proposed work opens several new avenues of future work including (a) pursuing a theoretical justification behind the improved performance from randomized ensembles, (b) extending these ideas by using an ensemble of random binary graphs, with appropriate probability distributions, which are very cheap to construct, and (c) performing fusion in the hidden layers instead of the final layer, which involves training the ensemble of networks together.
2. http://preprocessed-connectomes-project.org/abide/
3. Column named DX GROUP in https://s3.amazonaws.com/fcp-indi/data/Projects/ABIDE_Initiative/Phenotypic_V1_0b_preprocessed1.csv
Alexandre Abraham, Michael P. Milham, Adriana Di Martino, R. Cameron Craddock, Dimitris Samaras, Bertrand Thirion, and Gael Varoquaux. Deriving reproducible biomarkers from multi-site resting-state data: An autism-based example. NeuroImage, 147:736-745, 2017.
Yashar Behzadi, Khaled Restom, Joy Liau, and Thomas T. Liu. A component based noise correction method (CompCor) for BOLD and perfusion based fMRI. NeuroImage, 37(1):90-101, 2007.
V. D. Calhoun, T. Adali, G. D. Pearlson, and J. J. Pekar. A method for making group inferences from functional MRI data using independent component analysis. Human Brain Mapping, 14(3), 2001.
Gang Chen, B. Douglas Ward, Chunming Xie, Wenjun Li, Zhilin Wu, Jennifer L. Jones, Malgorzata Franczak, Piero Antuono, and Shi-Jiang Li. Classification of Alzheimer disease, mild cognitive impairment, and normal cognitive status with large-scale network analysis based on resting-state functional MR imaging. Radiology, 259(1):213-221, 2011.
Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems (NIPS), pages 3837-3845, 2016.
Adriana Di Martino, Chao-Gan Yan, Qingyang Li, Erin Denio, Francisco X. Castellanos, Kaat Alaerts, Jeffrey S. Anderson, Michal Assaf, Susan Y. Bookheimer, Mirella Dapretto, et al. The autism brain imaging data exchange: towards a large-scale evaluation of the intrinsic brain architecture in autism. Molecular Psychiatry, 19(6):659-667, 2014.
Thomas G. Dietterich. Ensemble methods in machine learning. In Proceedings of the First International Workshop on Multiple Classifier Systems (MCS '00), 2000.
David K. Hammond, Pierre Vandergheynst, and Rémi Gribonval. Wavelets on graphs via spectral graph theory. Applied and Computational Harmonic Analysis, 30(2):129-150, 2011.
Biao Jie, Daoqiang Zhang, Chong-Yaw Wee, and Dinggang Shen. Topological graph kernel on multiple thresholded functional connectivity networks for mild cognitive impairment classification. Human Brain Mapping, 35(7):2876-2897, 2014.
A. M. Clare Kelly, Lucina Q. Uddin, Bharat B. Biswal, F. Xavier Castellanos, and Michael P. Milham. Competition between functional brain networks mediates behavioral variability. NeuroImage, 39(1):527-537, January 2008.
Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR), 2017.
Sarah Parisot, Sofia Ira Ktena, Enzo Ferrante, Matthew Lee, Ricardo Guerrerro Moreno, Ben Glocker, and Daniel Rueckert. Spectral graph convolutions on population graphs for disease prediction. arXiv preprint arXiv:1703.03020, 2017.
Mark Plitt, Kelly Anne Barnes, and Alex Martin. Functional connectivity classification of autism identifies highly predictive brain features but falls short of biomarker standards. NeuroImage: Clinical, 7:359-366, 2015.
Nino Shervashidze, Pascal Schweitzer, Erik Jan van Leeuwen, Kurt Mehlhorn, and Karsten M. Borgwardt. Weisfeiler-Lehman graph kernels. Journal of Machine Learning Research, 12:2539-2561, 2011.
David I. Shuman, Sunil K. Narang, Pascal Frossard, Antonio Ortega, and Pierre Vandergheynst. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Processing Magazine, 30(3):83-98, 2013.
| []
|
[
"032060(R) (2020) Rapid Communications",
"032060(R) (2020) Rapid Communications"
]
| [
"Giacomo Torlai \nCenter for Computational Quantum Physics\nFlatiron Institute\n10010New YorkNew YorkUSA\n",
"Juan Carrasquilla \nVector Institute for Artificial Intelligence\nMaRS Centre\nM5G 1M1TorontoOntarioCanada\n\nDepartment of Physics and Astronomy\nUniversity of Waterloo\nN2L 3G1OntarioCanada\n",
"Matthew T Fishman \nCenter for Computational Quantum Physics\nFlatiron Institute\n10010New YorkNew YorkUSA\n",
"Roger G Melko \nDepartment of Physics and Astronomy\nUniversity of Waterloo\nN2L 3G1OntarioCanada\n\nPerimeter Institute for Theoretical Physics\nN2L 2Y5WaterlooOntarioCanada\n",
"Matthew P A Fisher \nDepartment of Physics\nUniversity of California\n93106Santa BarbaraCaliforniaUSA\n"
]
| [
"Center for Computational Quantum Physics\nFlatiron Institute\n10010New YorkNew YorkUSA",
"Vector Institute for Artificial Intelligence\nMaRS Centre\nM5G 1M1TorontoOntarioCanada",
"Department of Physics and Astronomy\nUniversity of Waterloo\nN2L 3G1OntarioCanada",
"Center for Computational Quantum Physics\nFlatiron Institute\n10010New YorkNew YorkUSA",
"Department of Physics and Astronomy\nUniversity of Waterloo\nN2L 3G1OntarioCanada",
"Perimeter Institute for Theoretical Physics\nN2L 2Y5WaterlooOntarioCanada",
"Department of Physics\nUniversity of California\n93106Santa BarbaraCaliforniaUSA"
]
| [
"PHYSICAL REVIEW RESEARCH"
]
| We introduce a procedure to systematically search for a local unitary transformation that maps a wave function with a nontrivial sign structure into a positive-real form. The transformation is parametrized as a quantum circuit compiled into a set of one-and two-qubit gates. We design a cost function that maximizes the average sign of the output state and removes its complex phases. The optimization of the gates is performed through automatic differentiation algorithms, widely used in the machine learning community. We provide numerical evidence for significant improvements in the average sign for a two-leg triangular Heisenberg ladder with next-to-nearestneighbor and ring-exchange interactions. This model exhibits phases where the sign structure can be removed by simple local one-qubit unitaries, but also an exotic Bose-metal phase whose sign structure induces "Bose surfaces" with a fermionic character and a higher entanglement that requires deeper circuits. | 10.1103/physrevresearch.2.032060 | null | 184,486,985 | 1906.04654 | 6476ef559627e3d1a1c056f87e1943536934fba5 |
032060(R) (2020) Rapid Communications
Giacomo Torlai
Center for Computational Quantum Physics
Flatiron Institute
10010New YorkNew YorkUSA
Juan Carrasquilla
Vector Institute for Artificial Intelligence
MaRS Centre
M5G 1M1TorontoOntarioCanada
Department of Physics and Astronomy
University of Waterloo
N2L 3G1OntarioCanada
Matthew T Fishman
Center for Computational Quantum Physics
Flatiron Institute
10010New YorkNew YorkUSA
Roger G Melko
Department of Physics and Astronomy
University of Waterloo
N2L 3G1OntarioCanada
Perimeter Institute for Theoretical Physics
N2L 2Y5WaterlooOntarioCanada
Matthew P A Fisher
Department of Physics
University of California
93106Santa BarbaraCaliforniaUSA
032060(R) (2020) Rapid Communications
PHYSICAL REVIEW RESEARCH
2, 10.1103/PhysRevResearch.2.032060 (Received 17 June 2019; revised 10 February 2020; accepted 11 August 2020; published 2 September 2020)
We introduce a procedure to systematically search for a local unitary transformation that maps a wave function with a nontrivial sign structure into a positive-real form. The transformation is parametrized as a quantum circuit compiled into a set of one-and two-qubit gates. We design a cost function that maximizes the average sign of the output state and removes its complex phases. The optimization of the gates is performed through automatic differentiation algorithms, widely used in the machine learning community. We provide numerical evidence for significant improvements in the average sign for a two-leg triangular Heisenberg ladder with next-to-nearestneighbor and ring-exchange interactions. This model exhibits phases where the sign structure can be removed by simple local one-qubit unitaries, but also an exotic Bose-metal phase whose sign structure induces "Bose surfaces" with a fermionic character and a higher entanglement that requires deeper circuits.
I. INTRODUCTION
The most striking contrast between the classical and quantum world is the fact that quantum wave functions contain "probability" amplitudes that are not strictly real and positive. This so-called sign (or phase) structure is an essential feature of a variety of quantum phenomena with no classical counterpart, such as the Pauli exclusion principle, entanglement, and quantum interference. It lies at the heart of any algorithm for quantum computing [1].
A sign structure often hinders the simulation of quantum many-body states by means of classical resources, and it essentially defines the threshold for what can be considered truly quantum mechanical. Indeed, there is a one-to-one mapping between a real, non-negative wave function and a classical probability distribution, formulated explicitly by the Born rule. However, the sign structure is not a universal feature of a quantum state, since it strongly depends on the choice of basis. As such, for a given state it is only natural to wonder: Is there a local change of basis that removes the sign structure, leading to a non-negative wave function?
Given a preferred "computational basis," finding and applying a change of basis involves implementing a unitary transformation. The resources required for this task can however be nontrivial. For example, any ground state becomes non-negative in the energy eigenbasis, but finding the corresponding (nonlocal) unitary transformation is equivalent to diagonalization, with a complexity that scales exponentially in the number of qubits. The question becomes, can a change of basis be discovered with a transformation represented by a local unitary circuit of small depth?
Such transformations are typically identified based on simple physical principles related to the structure of the Hamiltonian and its symmetries. The most notable example is the Marshall sign rule [2], eliminating the sign structure from the ground states of quantum antiferromagnets on bipartite lattices. The resulting theoretical insight means that new bases that simplify the sign structure for a specific frustrated magnet or fermion model are routinely discovered [3][4][5][6]. In turn, in a few instances it has also been rigorously proven that efficient transformations do not exist [7,8], rendering the sign structure "intrinsic." However, if no obvious transformation is known, it is generally unclear whether the offending sign structure is intrinsic or whether it only persists due to a lack of physical insight. An automated procedure to search for relevant transformations is therefore highly desirable.
In this Rapid Communication, we propose an algorithm to tackle this question which combines tensor networks and differentiable programming. We formulate the search for the local basis as an optimization task over quantum circuits compiled into a set of local quantum gates. By optimizing a suitable cost function, a quantum circuit is used to positivize a quantum state with a sign structure. We show how this procedure can be realized in practice by adopting a tensor network representation of the quantum circuit, and applying automatic differentiation to obtain a "learning signal" for each quantum gate. We present a proof-of-principle demonstration for a two-leg triangular Heisenberg ladder with a four-spin ring-exchange interaction, which harbors a sign structure of tunable complexity, including that of an exotic highly entangled spin Bose-metal phase.

FIG. 1. Projection of the wave function on the basis state ⟨σ| after the application of the quantum circuit Û_ϑ implementing the change of basis. Here, the circuit is compiled into a set of local two-qubit gates.
II. LEARNING THE SIGN STRUCTURE
We study a system composed of N qubits described by a wave function |Ψ⟩. For a given choice of basis of the many-body Hilbert space |σ⟩ = |σ_1, . . . , σ_N⟩, we assume that the wave function has a sign structure, i.e., the coefficients Ψ(σ) = ⟨σ|Ψ⟩ appear with both positive and negative signs. We note that, while we restrict to real wave functions, the following approach identically applies to the case where the wave function is complex-valued.
Given the sign structure sign[Ψ(σ)], how can we run an automated search for a local unitary transformation Û generating a non-negative wave function? For this purpose, it is natural to express the unitary as a quantum circuit, where locality can be imposed at the level of the quantum gates (Fig. 1). Because of their universality [1], we can restrict to single- and two-qubit gates acting on pairs of adjacent sites. Then, the unitary transformation is written in terms of parameters ϑ = {ϑ^[1], ϑ^[2], . . .}, where ϑ^[k] is a set of real and continuous parameters characterizing each single gate.
Starting from a wave function Ψ(σ) displaying a sign structure, provided as input to the quantum circuit, the goal is to discover a set of gates such that the output state is non-negative. We choose to phrase this problem as an optimization task, where the non-negativity of the output state is enforced upon minimizing a suitable cost function C(ϑ). More precisely, the optimal set of parameters ϑ* = argmin_ϑ C(ϑ) should satisfy Ψ_ϑ*(σ) ≥ 0 ∀ |σ⟩, where Ψ_ϑ*(σ) = ⟨σ|Û_ϑ*|Ψ⟩. Given some initial configuration of the circuit, the optimization is solved by iteratively updating the gates according to the gradient of the cost function, ϑ → ϑ − η G(ϑ), where G(ϑ) = ∇_ϑ C(ϑ) and η is the step size of the update (often called the learning rate). More sophisticated algorithms developed within the machine learning community can also be implemented, such as adaptive learning rates [9,10] or higher-order gradients [11].
The cost function is the most crucial ingredient. On one hand, it needs to correctly capture the objective of the optimization. On the other hand, the sign structure is a global property of the quantum state, and thus the calculation of the cost function (and its gradients) should also remain scalable with the number of qubits. For the latter, it is prudent to express C(ϑ) as an expectation value over the probability distribution underlying the quantum state at the output of the circuit,
C(ϑ) = Σ_σ |Ψ_ϑ(σ)|² C_ϑ(σ).   (1)
In fact, provided one can sample the distribution p_ϑ(σ) = |Ψ_ϑ(σ)|², the expectation value of Eq. (1) can be approximated with a sum over a finite number of configurations {σ_j} drawn from p_ϑ(σ). Now, the only task that remains is designing an appropriate function C_ϑ(σ).
Besides the sign of the wave function, an additional constraint that should be taken into account is that the complex phases, necessarily accumulated by a universal gate set, are eliminated by the end of the unitary evolution. To capture both conditions on the imaginary part and the sign, it is convenient to split the cost function into a convex sum of two contributions:

C_ϑ(σ) = γ |Im[Ψ_ϑ(σ)]| − (1 − γ) sign[Re[Ψ_ϑ(σ)]],   (2)

where γ ∈ [0, 1]. By tuning the parameters according to the gradient G(ϑ), the quantum circuit will try to increase the sign of the real part of the wave function, while forcing the imaginary part to be zero. Note that the initial average sign can always be set to a positive value by an appropriate global transformation.
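As a concrete (and deliberately minimal) illustration of the sampled estimator, the NumPy sketch below evaluates the Monte Carlo average of Eqs. (1)–(2) given amplitudes Ψ_ϑ(σ_j) at configurations already drawn from p_ϑ(σ); the function name and toy usage are our own, not part of the original implementation.

```python
import numpy as np

def sample_cost(psi_sampled, gamma=0.5):
    """Monte Carlo estimate of Eqs. (1)-(2).

    psi_sampled: complex amplitudes Psi_theta(sigma_j) for configurations
    sigma_j already drawn from p_theta(sigma) = |Psi_theta(sigma)|^2,
    so the expectation value reduces to a plain sample average.
    """
    im_penalty = np.abs(psi_sampled.imag)       # push Im[Psi] -> 0
    sign_reward = np.sign(psi_sampled.real)     # push sign[Re[Psi]] -> +1
    per_sample = gamma * im_penalty - (1.0 - gamma) * sign_reward
    return per_sample.mean()

# Toy usage: random real amplitudes with mixed signs give a cost near 0,
# while an all-positive state would reach the minimum -(1 - gamma).
rng = np.random.default_rng(0)
psi = rng.normal(size=1000) + 0j
print(sample_cost(psi, gamma=0.5))
```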
III. DIFFERENTIABLE PROGRAMMING
Next, in order to evaluate the gradients of the cost function we need to adopt a representation of the input quantum state and the quantum circuit amenable to scalable simulations. To this end, we assume that the initial state admits an efficient matrix product state (MPS) representation, and obtain the final state by contracting the MPS with the various gates in the circuit. At each intermediate step, provided the circuit depth is not too large, the quantum state can be restored into an MPS form by means of singular value decompositions.
The calculation of the gradients is the most involved step in the procedure, and analytical approaches would clearly be intractable. We leverage automatic differentiation (AD) techniques [12], routinely used in machine learning applications to train neural-network architectures [13] and recently applied to optimize tensor network states [14]. The core object in AD is the computational graph implementing the set of elementary computations (edges) acting on the variables (nodes). We specifically implement reverse-accumulation AD, where a forward pass first calculates the output of the graph, and derivatives are then calculated starting from the output and backpropagated through the graph using a sequence of Jacobian-vector products.
FIG. 2. Schematic of the computational graph for the calculation of the cost function. The output quantum state |Ψ_ϑ⟩, obtained by contracting the circuit tensor network, is sampled to generate the configurations {σ_j}, which are used to compute the cost function. In addition, the entanglement entropy of the output state is added as a regularization to mitigate the growth of entanglement in deep circuits.
The computational graph implementing the positivization is divided into three stages (Fig. 2). First, the circuit Û_ϑ is applied to the input state through a series of tensor contractions. The resulting output quantum state |Ψ_ϑ⟩ = Û_ϑ|Ψ⟩ is then sampled to generate a set of n configurations {σ_j} approximating the sum in Eq. (1). The projections of |Ψ_ϑ⟩ onto these configurations are used to estimate the sample-wise cost function
C(ϑ) = (1/n) Σ_{j=1}^{n} C_ϑ(σ_j) + α S_vN(ρ̂_A).   (3)
Note that we have also added a term proportional to the entanglement entropy S_vN(ρ̂_A) = −Tr(ρ̂_A log ρ̂_A), where ρ̂_A is the reduced density matrix for an equal bipartition of the qubits and α is a small weight. We introduce this type of regularization to the cost function to limit the growth of entanglement generated by the application of the gates, which is particularly relevant in the optimization of deep quantum circuits. Once the computational graph is compiled, the reverse-accumulation step evaluates the derivatives with respect to each gate parameter in the circuit (see Supplemental Material [15] for more details).
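To illustrate how reverse-accumulation AD drives the gate updates, the following sketch wires the regularized cost of Eq. (3) into a gradient step using TensorFlow's GradientTape. The circuit contraction, MPS sampling, and entropy evaluation are replaced by toy stubs (apply_circuit, entanglement_entropy); this is our simplified mock-up of the training loop, not the authors' code.

```python
import tensorflow as tf

n_samples = 100

def apply_circuit(theta):
    # Toy stand-in for the contracted circuit: fake amplitudes whose
    # imaginary parts and signs are controlled by theta, just to
    # exercise the optimization loop end to end.
    re = tf.cos(theta[:n_samples])
    im = 0.1 * tf.sin(theta[:n_samples])
    return tf.complex(re, im)

def entanglement_entropy(theta):
    # Placeholder for S_vN of an equal bipartition of the output MPS.
    return tf.reduce_sum(tf.square(theta)) * 0.0

def cost(theta, gamma=0.5, alpha=0.01):
    psi = apply_circuit(theta)  # complex tensor of shape [n_samples]
    # Eq. (2); sign() is piecewise constant, so in the full algorithm its
    # gradient enters through the sampling distribution |Psi_theta|^2 --
    # this stub differentiates only the amplitudes at fixed samples.
    c_sigma = (gamma * tf.abs(tf.math.imag(psi))
               - (1.0 - gamma) * tf.sign(tf.math.real(psi)))
    return tf.reduce_mean(c_sigma) + alpha * entanglement_entropy(theta)

theta = tf.Variable(tf.random.uniform([n_samples], minval=-1.0, maxval=1.0))
opt = tf.keras.optimizers.Adam(learning_rate=1e-2)

for step in range(500):
    with tf.GradientTape() as tape:          # forward pass over the graph
        loss = cost(theta)
    grads = tape.gradient(loss, [theta])     # reverse-accumulation AD
    opt.apply_gradients(zip(grads, [theta]))
```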
IV. RESULTS
We focus on the ground-state wave functions of a two-leg triangular ladder with the Hamiltonian
Ĥ = J_1 Σ_j Ŝ_j · Ŝ_{j+1} + J_2 Σ_j Ŝ_j · Ŝ_{j+2} + (J_r/2) Σ_j (P̂_{j,j+1,j+3,j+2} + P̂†_{j,j+1,j+3,j+2}),   (4)

where Ŝ_j are spin-1/2 operators. Here, the ring-exchange term corresponds to the cyclic exchange of spin states, P̂_{i,j,k,l} |S^z_i, S^z_j, S^z_k, S^z_l⟩ = |S^z_l, S^z_i, S^z_j, S^z_k⟩, and the couplings are J_1 = 1, J_2, J_r > 0. The model in Eq. (4) exhibits a range of ground states with a sign structure of tunable complexity, so it serves as a representative test bed for our experiments. Whereas for J_1 = J_r = 0 or J_2 = J_r = 0 the sign of the ground-state wave function can be eliminated via a unitary transformation acting on single spins [16], for sufficiently large J_r/J_1 and J_2/J_1 the model displays an exotic spin Bose-metal (SBM) phase endowed with a complex sign structure associated with the presence of singular wave vectors or "Bose surfaces" [17]. After obtaining the ground-state MPS using standard density matrix renormalization group techniques [18,19], we implement the AD graph using the machine learning library TENSORFLOW [20].
We first consider the case of J_r = 0, corresponding to the one-dimensional J_1–J_2 model. In the limit of J_2 = 0 we recover the Heisenberg model, where the sign structure of the ground state Ψ(S^z) in the Ising basis |S^z⟩ = |S^z_1, . . . , S^z_N⟩ follows the Marshall sign rule [2,16]. The transformation removing this sign structure can be composed as a set of N/2 rotations of angle π about the z axis, corresponding to a depth-one quantum circuit. To check if this can be recovered by our procedure, we construct the variational quantum circuit Û_ϑ using one layer of single-qubit rotations around the z axis. We run the positivization procedure for a chain containing N = 40 spins. After randomly initializing the circuit parameters (i.e., N = 40 angles) we train the circuit to minimize the cost function using n_S = 10³ configurations sampled from the final MPS distribution |Ψ_ϑ(S^z)|² [21,22], and update the parameters using the Adam optimizer [10]. We show the behavior of the positivization algorithm in Fig. 3, where we plot the values of each single rotation angle as a function of the training iteration. For J_2 = 0 [Fig. 3(a)] we observe that all angles corresponding to rotations on sublattice A (B) converge to the value ϑ^[k] = π/2 (−π/2), equivalent to the Marshall sign rule. We then repeat the optimization for an initial ground state obtained by setting J_2 = 2.0 [Fig. 3(b)]. Here, we observe that the angles converge to two values separated by π, but now the rotations on sites from different sublattices are mixed together. It is easy to see that this circuit implements the Marshall sign rule in the limit of J_1 = 0, corresponding to two decoupled Heisenberg chains. In both cases we measure an average sign of about 0.99.
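The Marshall sign rule itself is easy to verify directly on a small chain. The following self-contained NumPy sketch (ours; exact diagonalization, so it only scales to a few spins) builds the N = 8 Heisenberg chain, computes its ground state, and checks that multiplying each amplitude by (−1)^{N_A↓} — the phase implemented by the staggered z rotations — renders the wave function single-signed.

```python
import numpy as np
from functools import reduce

N = 8  # small open Heisenberg chain, exact diagonalization only
sz = np.diag([0.5, -0.5])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])  # raising operator (|down> -> |up>)
sm = sp.T
I2 = np.eye(2)

def op(site_ops):
    # Tensor product with identities on all sites not listed in site_ops.
    return reduce(np.kron, [site_ops.get(i, I2) for i in range(N)])

H = sum(op({j: sz, j + 1: sz})
        + 0.5 * (op({j: sp, j + 1: sm}) + op({j: sm, j + 1: sp}))
        for j in range(N - 1))
_, vecs = np.linalg.eigh(H)
psi = vecs[:, 0]  # ground state

# Marshall phase: (-1)^(number of down spins on sublattice A = even sites).
signs = np.ones(2 ** N)
for idx in range(2 ** N):
    bits = [(idx >> (N - 1 - i)) & 1 for i in range(N)]  # bit 1 = spin down
    n_down_A = sum(bits[i] for i in range(0, N, 2))
    signs[idx] = (-1) ** n_down_A
psi_rot = signs * psi
print("single-signed:", np.all(psi_rot >= -1e-10) or np.all(psi_rot <= 1e-10))
```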
Although the relationship is not fully understood, the sign structure of a quantum state is related to its entanglement. For example, a typical random positive wave function exhibits a constant law for Renyi entanglement entropies with Renyi index n > 1, while states with Renyi entropy scaling as a volume law will have a complex sign structure [23], suggesting a nonlocal positivization transformation. It therefore stands to reason that circuits of large depths may be required to remove the sign structure when the entanglement needs significant modification.
In order to increase the entanglement of the starting state, we turn to the exotic spin Bose-metal (SBM) phase, which contains significant entanglement due to the presence of a Bose surface [17]. We set J_2 = 0 and examine different initial ground-state MPSs obtained for J_r ∈ [0, 1], which spans the phase transition into the SBM phase. We optimize circuits with different depths, where a single layer consists of a set of simultaneous commuting two-qubit gates (Fig. 1). In all simulations, the truncation error in the singular value decompositions performed to restore the MPS representation of the quantum state was kept below 10⁻⁶.
We first examine a spin ladder with N = 20 sites, and optimize circuits of increasing depth for initial ground states obtained at different values of J_r. We plot the average sign (circles) and imaginary part (triangles) in Fig. 4(a). As expected, a larger depth systematically increases the effectiveness of the positivization, which becomes significantly harder as the system is driven into the SBM phase (J_r ≈ 0.6). The transition in complexity is highlighted in Fig. 4(b), where we show the scalings with the system size for different values of J_r near the critical point for a circuit of fixed depth. In the Bethe phase (small J_r), the sign remains sufficiently high as the system size is increased, while the positivization becomes ineffective for larger N in the SBM phase. Finally, we show the scaling against the circuit depth for several sizes N in the two phases of the spin ladder [Figs. 4(c) and 4(d)]. The results confirm that in the SBM phase, in contrast to the Bethe phase, the depth required to achieve a given average sign increases with the number of spins N. In all instances, the optimization succeeds in producing quantum states with real coefficients to a good approximation.
V. CONCLUSIONS
We have introduced a procedure to systematically search for a local unitary transformation that maps a wave function with a sign structure into a non-negative form. The transformation is parametrized as a universal quantum circuit, and the gates are optimized through automatic differentiation algorithms, widely adopted in the machine learning community and implemented with TENSORFLOW [20]. We demonstrated this technique for ground states of a triangular spin ladder with Heisenberg interactions. For the limiting cases of the J_1–J_2 model, we have shown that the optimization is capable of removing the sign structure, recovering the well-known Marshall sign rule. In the presence of a ring-exchange interaction, we observed that the SBM phase demands circuits whose depth scales with the size of the ladder.
The ability to discover a local basis where the average sign of a quantum state becomes substantially higher is particularly relevant for the alleviation of the sign problem in quantum Monte Carlo simulations [24][25][26][27]. In this context, our positivization algorithm could be repurposed to increase the "stoquasticity" of a target Hamiltonian [28]. This would require the optimization of a suitably modified cost function, where the input is a matrix product operator representation of the Hamiltonian. This opens interesting prospects for path integrals and projective quantum Monte Carlo simulations, which should be explored in future studies.
The non-negativity of a wave function in a local basis also has direct implications for the data-driven reconstruction of quantum states, which is becoming increasingly important for validating noisy-intermediate-scale quantum hardware [29]. In fact, for wave functions with positive amplitudes, experimental data from a single measurement basis is sufficient for the quantum reconstruction of the state, with a particularly favorable scaling with both the system size and the number of measurements [30,31].
Finally, our procedure provides a universal, automated method of assessing the complexity associated with the sign problem of a given wave function. Ultimately, by performing a systematic finite-size scaling analysis of the resources required to achieve a given average sign, this procedure could be used to determine the complexity class associated with removing the sign structure in various cases, including gapped, critical, or fermionic wave functions. In the future, automated numerical methods based on machine learning technology may be the most promising route to determining the relative "difficulty" of various sign structures, and will play a crucial role in formulating a complete theory relating a wave function's sign to its entanglement structure and simulation complexity.
Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.
FIG. 3. Dynamics of the parameters ϑ during training for a ladder of N = 40 spins (J_r = 0) with (a) J_2 = 0.0 and (b) J_2 = 2.0. The quantum circuit contains only one-qubit rotations around the z axis.
FIG. 4. Optimization performance for the two-leg triangular Heisenberg chain with (a) different depths as a function of J_r, (b) different values of J_r as a function of the number of spins, and different system sizes as a function of the depth for (c) J_r = 0.25 and (d) J_r = 0.75. The transition into the SBM phase occurs at J_r ≈ 0.6.
ACKNOWLEDGMENTS
We thank F. Becca, J. Eisert, M. Ganahl, J. Liu, B. Sanders, M. Stoudenmire, and L. Wang for enlightening discussions. The density-matrix renormalization group (DMRG) calculations and the optimization of the circuits were performed using the ITENSOR
G. Benenti, G. Casati, and G. Strini, Principles of Quantum Computation and Information (World Scientific, Singapore, 2004).
W. Marshall and R. E. Peierls, Proc. R. Soc. London, Ser. A 232, 48 (1955).
Z.-X. Li, Y.-F. Jiang, and H. Yao, Phys. Rev. B 91, 241117(R) (2015).
R. K. Kaul, R. G. Melko, and A. W. Sandvik, Annu. Rev. Condens. Matter Phys. 4, 179 (2013).
S. Wessel, B. Normand, F. Mila, and A. Honecker, SciPost Phys. 3, 005 (2017).
A. Honecker, S. Wessel, R. Kerkdyk, T. Pruschke, F. Mila, and B. Normand, Phys. Rev. B 93, 054408 (2016).
M. B. Hastings, J. Math. Phys. 57, 015210 (2016).
Z. Ringel and D. L. Kovrizhin, Sci. Adv. 3, e1701758 (2017).
M. D. Zeiler, arXiv:1212.5701.
D. P. Kingma and J. Ba, arXiv:1412.6980.
S. Amari, in Advances in Neural Information Processing Systems, edited by M. I. Jordan, Y. LeCun, and S. A. Solla (MIT Press, Cambridge, MA, 1997), pp. 127–133.
M. Bartholomew-Biggs, S. Brown, B. Christianson, and L. Dixon, J. Comput. Appl. Math. 124, 171 (2000).
D. E. Rumelhart, G. E. Hinton, and R. J. Williams, Nature (London) 323, 533 (1986).
H.-J. Liao, J.-G. Liu, L. Wang, and T. Xiang, Phys. Rev. X 9, 031041 (2019).
See Supplemental Material at http://link.aps.org/supplemental/10.1103/PhysRevResearch.2.032060 for details on the automatic differentiation scheme used to optimize the quantum circuits.
L. Capriotti, Int. J. Mod. Phys. B 15, 1799 (2001).
D. N. Sheng, O. I. Motrunich, and M. P. A. Fisher, Phys. Rev. B 79, 205112 (2009).
S. R. White, Phys. Rev. Lett. 69, 2863 (1992).
M. Abadi et al., TensorFlow: Large-scale machine learning on heterogeneous systems (2015), software available from http://tensorflow.org.
S. R. White, Phys. Rev. Lett. 102, 190601 (2009).
A. J. Ferris and G. Vidal, Phys. Rev. B 85, 165146 (2012).
T. Grover and M. P. A. Fisher, Phys. Rev. A 92, 042308 (2015).
M. Marvian, D. A. Lidar, and I. Hen, Nat. Commun. 10, 1571 (2019).
J. Klassen and B. M. Terhal, Quantum 3, 139 (2019).
D. Hangleiter, I. Roth, D. Nagaj, and J. Eisert, Sci. Adv. 6, eabb8341 (2020).
L. Gupta and I. Hen, arXiv:1910.13867.
S. Bravyi, D. P. DiVincenzo, R. I. Oliveira, and B. M. Terhal, Quant. Inf. Comp. 8, 0361 (2008).
J. Preskill, Quantum 2, 79 (2018).
G. Torlai, G. Mazzola, J. Carrasquilla, M. Troyer, R. Melko, and G. Carleo, Nat. Phys. 14, 447 (2018).
G. Torlai and R. G. Melko, Annu. Rev. Condens. Matter Phys. 11, 325 (2020).
| []
|
[
"DECLOAK: Enable Secure and Cheap Multi-Party Transactions on Legacy Blockchains by a Minimally Trusted TEE Network putation, Trusted Execution Environment",
"DECLOAK: Enable Secure and Cheap Multi-Party Transactions on Legacy Blockchains by a Minimally Trusted TEE Network putation, Trusted Execution Environment"
]
| [
"Qian Ren \nwith School of Cyberspace Security (School of Cryptology)\nHainan University\n100871., 570228Beijing, HainanChina, China\n",
"Yue Li \nwith School of Cyberspace Security (School of Cryptology)\nHainan University\n100871., 570228Beijing, HainanChina, China\n",
"Yingjun Wu [email protected]. \nwith School of Cyberspace Security (School of Cryptology)\nHainan University\n100871., 570228Beijing, HainanChina, China\n",
"Yuchen Wu [email protected]. \nwith School of Cyberspace Security (School of Cryptology)\nHainan University\n100871., 570228Beijing, HainanChina, China\n",
"Hong Lei [email protected] \nwith School of Cyberspace Security (School of Cryptology)\nHainan University\n100871., 570228Beijing, HainanChina, China\n",
"Lei Wang [email protected]. \nwith School of Cyberspace Security (School of Cryptology)\nHainan University\n100871., 570228Beijing, HainanChina, China\n",
"Bangdao Chen [email protected]. \nwith School of Cyberspace Security (School of Cryptology)\nHainan University\n100871., 570228Beijing, HainanChina, China\n",
"Yi Wu \nOxford-Hainan Blockchain Research Institute and SSC Holding Company Ltd\nWok Park, Laocheng, Chengmai, Hainan571924China. (\n",
"Yu Wu \nOxford-Hainan Blockchain Research Institute and SSC Holding Company Ltd\nWok Park, Laocheng, Chengmai, Hainan571924China. (\n",
"H Lei \nOxford-Hainan Blockchain Research Institute and SSC Holding Company Ltd\nWok Park, Laocheng, Chengmai, Hainan571924China. (\n",
"B Chen \nOxford-Hainan Blockchain Research Institute and SSC Holding Company Ltd\nWok Park, Laocheng, Chengmai, Hainan571924China. (\n"
]
| [
"with School of Cyberspace Security (School of Cryptology)\nHainan University\n100871., 570228Beijing, HainanChina, China",
"with School of Cyberspace Security (School of Cryptology)\nHainan University\n100871., 570228Beijing, HainanChina, China",
"with School of Cyberspace Security (School of Cryptology)\nHainan University\n100871., 570228Beijing, HainanChina, China",
"with School of Cyberspace Security (School of Cryptology)\nHainan University\n100871., 570228Beijing, HainanChina, China",
"with School of Cyberspace Security (School of Cryptology)\nHainan University\n100871., 570228Beijing, HainanChina, China",
"with School of Cyberspace Security (School of Cryptology)\nHainan University\n100871., 570228Beijing, HainanChina, China",
"with School of Cyberspace Security (School of Cryptology)\nHainan University\n100871., 570228Beijing, HainanChina, China",
"Oxford-Hainan Blockchain Research Institute and SSC Holding Company Ltd\nWok Park, Laocheng, Chengmai, Hainan571924China. (",
"Oxford-Hainan Blockchain Research Institute and SSC Holding Company Ltd\nWok Park, Laocheng, Chengmai, Hainan571924China. (",
"Oxford-Hainan Blockchain Research Institute and SSC Holding Company Ltd\nWok Park, Laocheng, Chengmai, Hainan571924China. (",
"Oxford-Hainan Blockchain Research Institute and SSC Holding Company Ltd\nWok Park, Laocheng, Chengmai, Hainan571924China. ("
]
| []
| As the confidentiality and scalability of smart contracts have become a crucial demand of blockchains, off-chain contract execution frameworks have been promising. Some have recently expanded off-chain contracts to Multi-Party Computation (MPC), seeking to transition on-chain states by off-chain MPC. The most general problem among these solutions is MPT, since its off-chain MPC takes on- and off-chain inputs, delivers on- and off-chain outputs, and can be publicly verified by the blockchain, thus covering more scenarios. However, existing Multi-Party Transaction (MPT) solutions lack at least one of data availability, financial fairness, delivery fairness, and delivery atomicity. Data availability means entities can independently access the data required to rebuild new states and verify outputs; financial fairness implies at least one adversary will be punished monetarily; delivery fairness means parties receive their outputs at almost the same time; delivery atomicity means that parties receiving their outputs and new states being committed must both happen or neither. These properties are crucially valued by communities, e.g., the Ethereum community, and by users. Even worse, these solutions require high-cost interactions between the blockchain and off-chain systems. This paper proposes a novel MPT-enabled off-chain contract execution framework, DECLOAK. DECLOAK is the first to achieve data availability of MPT, and our method can apply to other fields that seek to persist user data on-chain. Moreover, DECLOAK solves all the mentioned shortcomings with lower gas cost and a weaker assumption. Specifically, DECLOAK tolerates all-but-one Byzantine parties and TEE executors. Evaluated on 10 MPTs, DECLOAK reduces the gas cost of the SOTA, Cloak, by 65.6%. Consequently, we are the first not only to achieve this level of MPT security under such a practical assumption, but also to demonstrate that evaluating an MPT at an average gas cost comparable to normal Ethereum transactions is possible. Moreover, the cost advantage of DECLOAK increases as the number of MPT parties grows. | null | [
"https://export.arxiv.org/pdf/2202.10206v2.pdf"
]
| 258,832,746 | 2202.10206 | 29cc8eae6bf6fe8e7a38c98af5b79dbee3d7ee61 |
DECLOAK: Enable Secure and Cheap Multi-Party Transactions on Legacy Blockchains by a Minimally Trusted TEE Network
Qian Ren
with School of Cyberspace Security (School of Cryptology)
Hainan University
100871., 570228Beijing, HainanChina, China
Yue Li
with School of Cyberspace Security (School of Cryptology)
Hainan University
100871., 570228Beijing, HainanChina, China
Yingjun Wu [email protected].
with School of Cyberspace Security (School of Cryptology)
Hainan University
100871., 570228Beijing, HainanChina, China
Yuchen Wu [email protected].
with School of Cyberspace Security (School of Cryptology)
Hainan University
100871., 570228Beijing, HainanChina, China
Hong Lei [email protected]
with School of Cyberspace Security (School of Cryptology)
Hainan University
100871., 570228Beijing, HainanChina, China
Lei Wang [email protected].
with School of Cyberspace Security (School of Cryptology)
Hainan University
100871., 570228Beijing, HainanChina, China
Bangdao Chen [email protected].
with School of Cyberspace Security (School of Cryptology)
Hainan University
100871., 570228Beijing, HainanChina, China
Yi Wu
Oxford-Hainan Blockchain Research Institute and SSC Holding Company Ltd
Wok Park, Laocheng, Chengmai, Hainan571924China. (
Yu Wu
Oxford-Hainan Blockchain Research Institute and SSC Holding Company Ltd
Wok Park, Laocheng, Chengmai, Hainan571924China. (
H Lei
Oxford-Hainan Blockchain Research Institute and SSC Holding Company Ltd
Wok Park, Laocheng, Chengmai, Hainan571924China. (
B Chen
Oxford-Hainan Blockchain Research Institute and SSC Holding Company Ltd
Wok Park, Laocheng, Chengmai, Hainan571924China. (
DECLOAK: Enable Secure and Cheap Multi-Party Transactions on Legacy Blockchains by a Minimally Trusted TEE Network
As the confidentiality and scalability of smart contracts have become a crucial demand of blockchains, off-chain contract execution frameworks have been promising. Some have recently expanded off-chain contracts to Multi-Party Computation (MPC), seeking to transition on-chain states by off-chain MPC. The most general problem among these solutions is MPT, since its off-chain MPC takes on- and off-chain inputs, delivers on- and off-chain outputs, and can be publicly verified by the blockchain, thus covering more scenarios. However, existing Multi-Party Transaction (MPT) solutions lack at least one of data availability, financial fairness, delivery fairness, and delivery atomicity. Data availability means entities can independently access the data required to rebuild new states and verify outputs; financial fairness implies at least one adversary will be punished monetarily; delivery fairness means parties receive their outputs at almost the same time; delivery atomicity means that parties receiving their outputs and new states being committed must both happen or neither. These properties are crucially valued by communities, e.g., the Ethereum community, and by users. Even worse, these solutions require high-cost interactions between the blockchain and off-chain systems. This paper proposes a novel MPT-enabled off-chain contract execution framework, DECLOAK. DECLOAK is the first to achieve data availability of MPT, and our method can apply to other fields that seek to persist user data on-chain. Moreover, DECLOAK solves all the mentioned shortcomings with lower gas cost and a weaker assumption. Specifically, DECLOAK tolerates all-but-one Byzantine parties and TEE executors. Evaluated on 10 MPTs, DECLOAK reduces the gas cost of the SOTA, Cloak, by 65.6%. Consequently, we are the first not only to achieve this level of MPT security under such a practical assumption, but also to demonstrate that evaluating an MPT at an average gas cost comparable to normal Ethereum transactions is possible. Moreover, the cost advantage of DECLOAK increases as the number of MPT parties grows.
I. INTRODUCTION
WHILE blockchains are rapidly developed and adopted in various domains, e.g., the DeFi, NFT, and IoT industries, contract privacy and scalability have become two of the top concerns. Unfortunately, in most existing blockchains [1], [2], blockchain data must be publicly accessible and verifiable so that miners can access the transaction data and re-execute transactions to verify all state transitions. Off-chain contract execution with MPC. The demand for both privacy and scalability motivates off-chain smart contract execution frameworks. Their common idea is to offload smart contract execution from the blockchain to off-chain systems; the blockchain then functions only as a trust anchor to verify the execution and store states. Subsequently, some promising solutions extend off-chain contract execution to multi-party scenarios, including auctions [3], personal finance [4], and deal matching [5], [6]. This problem is generalized and defined in [7] as the Multi-Party Transaction (MPT) [7], [8]: transitioning blockchain states by a publicly verifiable off-chain MPC, where the MPC takes on- and off-chain inputs from, and delivers on- and off-chain outputs to, multiple parties, without leaking their inputs/outputs to the public or to each other. For example, in a second-price auction [3], multiple mutually distrustful parties jointly perform an auction on their confidential on-chain balances and off-chain bids. When the auction finishes, the party with the highest bid wins and pays the second-highest price on-chain. To enable MPT, two kinds of solutions exist. The first is cryptography-based solutions, which adopt MPC [9]-[11] or Homomorphic Encryption (HE) [12] to let parties jointly and confidentially evaluate a program off-chain, then commit the evaluation status/outputs on-chain. The second, TEE-based solutions [4], [7], [8], [13], uses TEEs to collect private data from parties, evaluates a program over the data inside enclaves, and finally commits the evaluation status/outputs on-chain. Limitations. However, existing MPT solutions suffer from at least one of the following flaws: (i) They do not achieve data availability, making them vulnerable to data loss when off-chain systems fail. For example, even with ZKP or TEE to prove correct state transitions, users cannot know their balances if an off-chain operator withholds the states. This property is keenly required by the Ethereum community [14], which has designed a series of measures to uphold it, e.g., calldata [15]-[17] and blobs [18], which are keys of the coming Cancun upgrade [19]. (ii) They do not achieve financial fairness, so they can only assume a fraction of honest nodes exists, but cannot monetarily urge profit-driven nodes to behave honestly or punish misbehaved nodes. (iii) They do not achieve delivery fairness, which requires delivering outputs to the corresponding parties at almost the same time. Formally, we say an MPC protocol achieves ∆-fairness if the times at which different parties receive their outputs fall within a ∆-bounded period. A large ∆ enables several attacks: e.g., a party who learns before others that the MPT buys an ERC20 token and changes the trade rate can front-run an arbitrage transaction, a so-called front-running attack, e.g., MEV [20]. (iv) They do not achieve delivery atomicity, i.e., the guarantee that committing new states and delivering outputs either both happen or neither does.
The lack of atomicity either enables an adversary who learns the outputs before they are committed on-chain to abort or rewind the MPT, or leads parties to permanently lose their outputs even though the outputs have been committed [21]. (v) They require high-cost interactions with the blockchain. Our work. In this paper, we propose DECLOAK, a novel MPT-enabled off-chain contract execution framework. DECLOAK solves all of the above problems with lower gas cost and a weaker assumption. Specifically, to enable MPTs on a legacy blockchain, e.g., Ethereum [2], we require multiple TEE executors to register their TEEs on a deployed DECLOAK contract. The contract is thus aware of all TEEs and designates a specific TEE to serve all MPTs. Multiple parties can then interact with the designated TEE off-chain to send MPTs. To achieve data confidentiality and availability (cf. i), we propose a novel commitment structure. The structure allows each party and each TEE to independently access the newest states from the blockchain, even when all other entities are unavailable. To achieve financial fairness (cf. ii) and low cost (cf. v), we propose a novel challenge-response subprotocol. With the subprotocol, honest parties and TEE executors never lose money, and at least one misbehaved entity is punished. In particular, it enables the DECLOAK contract to identify misbehaviour of the designated TEE and replace it with another TEE. To achieve atomicity (cf. iv) and delivery fairness (cf. iii), we require all TEEs to release the keys of the output ciphertexts only after verifying that the output commitments have been accepted and confirmed by the blockchain. This way, we achieve complete fairness of output delivery, where multiple parties obtain their corresponding outputs at almost the same time. Consequently, DECLOAK achieves data availability, financial fairness, delivery fairness, and delivery atomicity of MPTs simultaneously with only 34.4% of the gas cost of the SOTA, Cloak [8], while assuming only that at least one party and at least one TEE executor are honest. Last, we demonstrate how to optimize or prune DECLOAK for simpler or less demanding scenarios, including how to relax some security properties for even lower gas cost. Contributions. Our main contributions are as follows. • We design a novel off-chain contract execution framework, DECLOAK, which enables MPTs on legacy blockchains. • We propose the DECLOAK protocol, which maximizes the security of MPTs using a minimally trusted TEE network. Specifically, the protocol achieves confidentiality, data availability, financial fairness, delivery fairness, and delivery atomicity simultaneously, while requiring only one honest party and one honest TEE executor. • We implement and evaluate DECLOAK on 10 MPTs with parties varying from 2 to 11. • We demonstrate how to further optimize or fine-tune the DECLOAK protocol to trade off security against cost in simpler scenarios. Organization. We organize the paper as follows. Section II introduces MPT and how DECLOAK advances related work. Section III sketches DECLOAK. Section IV details the DECLOAK protocol. Section V illustrates the implementation of the DECLOAK prototype. In Section VI, we conduct a security analysis of DECLOAK. In Section VIII, we discuss how to optimize the DECLOAK protocol and trade off security against gas cost when degenerating MPT to simpler scenarios. Finally, we evaluate DECLOAK in Section VII and conclude in Section IX.
II. BACKGROUND AND RELATED WORK

A. Multi-Party Transaction
Informally, a Multi-Party Transaction (MPT) is a transaction that transitions on-chain states by a publicly verifiable off-chain MPC. The off-chain MPC in an MPT takes both on-/off-chain inputs and delivers both on-/off-chain outputs. Therefore, MPT is so far the most general formulation of off-chain contract execution in multi-party scenarios and can easily be applied to various domains. For example, recall the second-price auction in Section I. During the process, each bid should be kept private to its corresponding party, i.e., confidentiality holds; the public (e.g., the blockchain miners) ought to be able to verify that the output is the correct result of the claimed joint auction, i.e., correctness and public verifiability hold. We demonstrate more MPT scenarios in Section VII.
Formally, MPT is modeled as below [7], [8].
c_{s_1}, . . . , c_{s_n}  =(c_f, c_{x_1}, . . . , c_{x_n})⇒  c_{s'_1}, . . . , c_{s'_n}, c_{r_1}, . . . , c_{r_n}, proof
|  s_1, . . . , s_n  =(f(x_1, . . . , x_n))⇒  s'_1, . . . , s'_n, r_1, . . . , r_n
For a blockchain and an array of parties P where |P| = n (n ∈ Z* ∧ n > 1), we denote the party P[i] as P_i. An MPT takes a secret transaction parameter x_i and old state s_i from each P_i, confidentially evaluates f off-chain, then delivers the secret return value r_i and new state s'_i to P_i, while publishing their commitments c_{x_i}, c_{s_i}, c_f, c_{s'_i}, c_{r_i} and a proof on the blockchain. An MPT should satisfy the following properties.
• Correctness: When each P_i providing x_i, s_i obtains s'_i, r_i, it must hold that s_1, . . . , s_n =(f(x_1, . . . , x_n))⇒ s'_1, . . . , s'_n, r_1, . . . , r_n.
• Confidentiality: Each P_i cannot know {x_j, s_j, s'_j, r_j | j ≠ i} except what can be derived from public information and the secrets it provides.
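To make the notation concrete, here is a minimal Python sketch (ours, with toy hash-based commitments) of the MPT relation: f is evaluated off-chain over the parties' states and parameters, while the public only sees commitments plus a proof binding old states to new ones.

```python
import hashlib
from dataclasses import dataclass

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

@dataclass
class MPTResult:
    new_states: list   # s'_1..s'_n, delivered privately to the parties
    returns: list      # r_1..r_n, delivered privately to the parties
    c_new: list        # commitments c_{s'_i}, c_{r_i}, published on-chain
    proof: bytes       # binds old-state commitments to new-state commitments

def run_mpt(f, states, params):
    # Off-chain relation: s_1..s_n =(f(x_1..x_n))=> s'_1..s'_n, r_1..r_n
    new_states, returns = f(states, params)
    commit = lambda v: H(repr(v).encode())        # toy commitment scheme
    c_old = [commit(s) for s in states]
    c_new = [commit(v) for v in new_states + returns]
    proof = H(b"".join(c_old) + b"".join(c_new))  # public-verifiability hook
    return MPTResult(new_states, returns, c_new, proof)

# Toy second-price flavour: two parties, balances as states, bids as params.
f = lambda s, x: ([s[0] - min(x), s[1] + min(x)], [max(x), min(x)])
print(run_mpt(f, [100, 50], [30, 20]).proof.hex())
```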
B. Related Work
Here we highlight the differences and novelty of DECLOAK, as shown in Table I. TEE-based confidential smart contracts. Ekiden [21], [26], CCF [27], Confide [22], and POSE [23] are designed for confidential smart contracts, where transaction inputs/outputs and contract states are confidential and all transactions are regarded as independent. These frameworks do not consider properties specific to multi-party scenarios, e.g., fairness.
Ekiden is a confidential smart contract framework which features assigning the consensus, execution, and key management functionalities to different nodes. Specifically, besides consensus nodes, Ekiden sets up multiple TEE-enabled executors to serve users independently, where consensus nodes can obtain outputs as long as at least one executor is honest. Yet it requires executors' TEEs to fetch keys from a TEE-based key management committee to evaluate each transaction. This requirement additionally assumes that the number of available TEE executors in the committee is above a specific threshold, where the threshold depends on the distributed key generation algorithm adopted by the committee. On atomicity, Ekiden proposes a two-phase protocol which delivers the keys encrypting the outputs to users off-chain once the outputs have been committed on-chain, thus achieving atomicity. On data availability, users cannot access their states on-chain since the states are encrypted by TEEs. Moreover, an executor cannot decrypt on-chain states without requesting keys from the committee. Therefore, Ekiden is flawed in data availability.
Confide and CCF are permissioned networks whose TEE-enabled executors maintain a consensus protocol, e.g., RAFT, and can thereby tolerate fewer than 1/2 unavailable executors. They store contract data (e.g., code, states) by encrypting the data with keys shared among TEEs, thereby achieving data availability for TEEs. However, like in Ekiden, if the TEE executors are unavailable, users temporarily lose access to, and may even permanently lose, their private data and on-chain assets. Only the TEEs' data availability holds, not the parties'.
POSE proposes an off-chain contract execution framework which features high availability and no interaction with the blockchain in optimistic cases. It introduces a challenge-response mechanism ensuring the system's availability even if all-but-one executors are Byzantine. The protocol additionally requires the transaction sender to be honest to initiate the challenge. Users of POSE cannot access their states independently; each TEE needs to synchronize with other TEEs to obtain state updates. On atomicity, while POSE involves reading inputs from and writing to the blockchain, it does not consider the atomicity of the on-chain writing and the off-chain output delivery. TEE-based smart contracts enabling MPTs. Choudhuri et al. [24] is the first to achieve complete fairness for general-purpose functions with the help of blockchain and TEEs. It requires each party to hold a TEE itself. Moreover, Choudhuri et al. does not consider executing contracts relying on on-chain states, committing states on blockchains, or punishing misbehaved nodes, and is thus unrelated to delivery atomicity, data availability, or financial fairness.
FastKitten [13] seeks to enable arbitrary contracts, especially multi-round MPC, on Bitcoin. It lets parties execute a transaction with private inputs in a TEE, persists the outputs locally, and only submits new state commitments with TEE signatures on-chain. Therefore, a party must persist all its latest private contract states and the corresponding keys to ensure its ability to transition the states next time, so data availability is lacking. In long-running systems, the data persisted by parties keeps growing, making it a heavy burden for parties to maintain. Moreover, FastKitten involves a challenge-response mechanism to achieve financial fairness, but requires each party to send a deposit transaction before each MPT, leading to O(n) transactions.
LucidiTEE [4] loosely requires part of the parties to hold TEEs to achieve delivery fairness. However, the times at which parties receive outputs are spread over a period whose length equals the generation time of a Proof of Publication (PoP)¹ [13], [21], [28] proving to a TEE that a key-releasing transaction has been finalized on-chain, which costs more than 50 block intervals on Ethereum². Moreover, LucidiTEE requires each party to send a transaction to join an MPT or deposit, leading to O(n) transactions. On financial fairness, LucidiTEE lacks mechanisms to punish misbehaviour. With a state confidentiality mechanism similar to Ekiden's and commitments similar to FastKitten's, LucidiTEE also lacks data availability.
Cloak [8] first proposes a one-deposit-multi-transaction mechanism, where each honest party deposits coins once globally and can then join multiple MPTs. The mechanism reduces the required on-chain transactions to O(1). Cloak only commits the hash of transaction data on-chain, e.g., inputs, outputs, keys, and policies. Thereby, its parties also cannot access their states without TEE executors, i.e., data availability is lacking.
DECLOAK proposes a novel commitment structure to confidentially persist states on the blockchain at low cost. Each party can access its MPT-specific data from the blockchain with only its own account private key. Each TEE can read MPT-specific data from the blockchain without the help of either parties or other TEEs. Consequently, even if the whole off-chain system is unavailable, the data availability of the newest states is still guaranteed. As DECLOAK adopts the same one-deposit-multi-transaction mechanism and a novel challenge-response protocol, it only requires O(1) transactions in optimistic cases. Finally, while achieving complete delivery fairness, DECLOAK frees parties from maintaining TEEs. Cryptography-based smart contracts enabling MPTs. Cryptography-based schemes usually combine MPC/HE with ZKP to enable MPTs. Before the combination, MPC/HE-based works like [29]-[31] achieve great confidentiality but do not target public verifiability, while ZKP-based solutions achieve public verifiability but lack confidentiality. For example, Hawk [3] requires a tight-lipped manager to collect parties' secrets, execute a contract, and generate the ZKP proof; thus the confidentiality of Hawk is limited. ZEXE [25] proves the satisfaction of predicates by a ZKP proof without revealing party secrets to the public. However, generating the proof requires a party to know all the predicate's secrets, thereby violating inter-party confidentiality. Combining MPC with ZKP, publicly auditable MPC (PA-MPC) [9] achieves publicly verifiable MPC, allowing multiple parties to jointly evaluate a program and prove it. Nevertheless, existing PA-MPC primitives are not designed for committing data or proving state transitions, e.g., MPCs expressed in Solidity that operate on both on- and off-chain inputs/outputs. Moreover, they suffer from inefficiency and weaker adversary models, and still fail to practically support nondeterministic negotiation or achieve financial fairness. Specifically, [9] requires a trusted setup or uncorrupted parties; [32] is function-limited; [33] very recently achieves general-purpose PA-MPC but only supports circuit-compatible operations. None of the above solutions are for confidential smart contracts or can punish adversaries. Instead, using the same proof structure as Cloak, DECLOAK conforms to both confidentiality and public verifiability. For security, while the underlying MPC of [24], [29]-[31] requires an honest majority of parties, DECLOAK secures the system under a Byzantine adversary corrupting all parties and all-but-one TEE executors.

¹ Recall that a PoP is a proof constructed for proving that a transaction has been confirmed on a blockchain.
² For achieving ≤ 0.001 false negative and false positive rates under an adversary with ≤ 1/3 of the computing power of Ethereum.
III. DECLOAK DESIGN
In this section, we first overview the system model, adversary model, and system goals of DECLOAK. Then, we overview the DECLOAK protocol and highlight the challenges we handled and the corresponding countermeasures.

A. System model

Figure 1 shows the framework of DECLOAK, i.e., a TEE-Blockchain system consisting of three components.
Blockchain (BC). A blockchain, e.g., Ethereum [2], that can deploy and evaluate Turing-complete smart contracts.
Parties (P). An array of parties who participate in a specific MPT.
DECLOAK network (DN). A DECLOAK Network consists of multiple TEE executors and their TEEs, where each executor E is a server hosting a TEE. We denote the set of all executors as E and the set of all TEEs as E.
B. Adversary model
We assume that a Byzantine adversary presents in a DE-CLOAK system. The assumptions and threats are as follows. Blockchain. We assume that BC satisfies the common prefix, chain quality and chain growth, so it can continuously handle and reach consistency on new transactions. Moreover, there is a Proof of Publication (PoP) scheme to prove to TEEs that a transaction has been finalized on BC, which is for against eclipse attack and also adopted by [8], [13], [21], [28]. The PoP of a transaction is a block sequence that contains the transaction and is provided to TEEs in the expected time.
Parities. An honest party can access the latest view of the blockchain and trust the data it reads from the blockchain. It trusts its platform and running code but not others. An honest party also trusts the integrity and confidentiality of all TEEs it attested. An honest party never reveal its secrets to others except attested TEEs. DECLOAK network. An honest TEE executor can access the latest blockchain view and trust the data it reads from the blockchain. An honest executor also trusts its platform and running code but not others. An honest executor also trusts the integrity and confidentiality of attested TEEs. Threat model. A Byzantine adversary can corrupt all parties and all-but-one TEE executors. A corrupted party or executor can behave arbitrarily, e.g., mutating, delaying and dropping messages, but never break the integrity/confidentiality of TEE. Moreover, the adversary cannot interfere with the communications among honest entities, e.g., the communications among honest parties or between honest parties and honest executors.
C. System goals
Informally, we seek to achieve following properties. Correctness. If an MPT succeeds, the outputs must be the correct results of the MPT applied to the inputs committed. Confidentiality. The inputs and outputs of MPT are always confidential to their corresponding parties. Public verifiability. The public, including the blockchain, can verify the correctness of the state transition caused by a MPT. Particularly, to accept a state transition, the blockchain will verify that the old states from which the new state is transitioning match its current states. Data availability. If an MPT successfully completes, it holds that (i) each honest party can access the plaintext of its newest states independently, and (ii) each honest executor's TEE can access the plaintext of the newest states independently to restore the newest states. This means honest parties will never lose their newest states, no matter how TEE executors behave. Financial fairness. If at least one party is honest, then either (i) the protocol correctly completes the MPT or (ii) all honest parties know that negotiation of the MPT failed and stay financially neutral or (iii) all honest parties know the protocol aborted, stay financially neutral, and at least one of malicious entities must have been financially punished. Delivery fairness. If at least one TEE executor is honest, then either (i) all parties know the plaintext return values and new states in a ∆-bounded period, or (ii) the new states and return values are not committed on-chain, and none of the parties or executors can know the plaintext of new states and return values. Delivery atomicity. If at least one TEE executor is honest, then either (i) some parties know the plaintext new states or return values, and the new states must have been committed on-chain, or (ii) new states are not committed on-chain, and none of the parties obtains its plaintext new states or return values. Figure 1 shows the workflow of π DECLOAK . We assume all TEEs have been registered on-chain as a TEE list E before the protocol started. Then, π DECLOAK starts to serve an MPT in four phases, i.e., global setup, negotiation, execution, and delivery phases. The global setup phase happens only once for any party. Other three phases of π DECLOAK happen in evaluating each MPT.
D. Protocol workflow
• (0) Global setup phase: All parties and TEEs deposit some coins to the network account ad E on BC. • (1) Negotiation phase: A party sends an MPT proposal p to the first executor E * in the registered TEE executor list to initiate an MPT. Upon receiving the proposal, the TEE E * starts a nondeterministic negotiation subprotocol Proc nneg . Specifically, the E * signs and broadcasts the proposal to all parties. If any party want to join or is required by the proposal, it responds with an acknowledgement to E * . The E * keeps collecting parties' acknowledgements. When the collected acknowledgements match the settlement condition of the negotiation phase (e.g., The number of parties exceeds the number specified in the policy), E * settles the proposal, deducts parties' collaterals from their coins cached in E * , and broadcasts the settled MPT proposal p to all parties. • (2) Execution phase: Upon receiving p , each party involved in the proposal submits its signed plaintext inputs (i.e., parameters) to E * . E * first read old states on the blockchain with their PoP 3 . Then, E * evaluates the MPT to obtain the outputs (i.e., return values and new states) inside. • (3.1-3.2) Delivery phase: When the E * gets the MPT outputs, it starts a ∆-fair delivery subprotocol Proc fdel . First, it generates one-time symmetric keys to compute the commitments of the outputs and sends a Commit T X cmt to publish output commitments on BC with the ciphertext of the symmetric keys (encrypted by the network key k E ). Upon T X cmt being confirmed on the blockchain, each E ∈ E independently verifies the PoP of T X cmt , obtains the symmetric keys from T X cmt , then sends a T X com to reveal the committed outputs to each party respectively. Consequently, both parties and executors do not need to persist any MPTspecific commitments or keys.
If any misbehaviour appears during the negotiation, execution, or delivery phase, we adopt a novel challenge-response mechanism to identify the misbehaved entities in parties and TEE executors.
Design Idea and Workflow
The workflow of Cloak, a development framework of general-purpose confiden
E. Design challenges and highlights
1) Achieve data availability of both TEEs and parties
The challenge here is (i) how to achieve the data availability of both parties and TEEs and (ii) ensuring confidentiality and living in harmony with the protocol for delivery atomicity and fairness. To achieve these, we introduce a novel data commitment subprotocol. Specifically, say each party P i has its account (sk i , pk i , ad i ), where sk i , pk i , ad i refer to the private key, public key and address of the account. As each party P i is identified by its address, we refer P i to ad i indiscriminately. We assume a common TEE network account (sk E , pk E , ad E ) has been synchronized among all TEEs. Then, we require all entities commit private data d i on blockchain in the following structure c d i . k d i denotes a one-time symmetric key for encrypting d i . k ie denotes the symmetric key generated by ECDH, i.e., k ie ← ECDH(sk i , pk E ) and k ie ← ECDH(sk E , pk i ). Consequently, on the one hand, either P i or TEEs can independently obtain k ie without interacting with the other. And a party needs only to hold the account private key sk i to access and operate its all commitments on-chain. On the other hand, when DECLOAK release c * d i (c d i without Enc k ie (k d i )) to commit and verify the state transition on-chain first for atomicity and fairness, any adversary cannot obtain k d i to decrypt Enc k d i (d i ).
c d i := [Enc k d i (d i ), Enc k ie (k d i ), P i ]
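A minimal sketch of this commitment layout, using Python's `cryptography` package; the helper names are ours, and the concrete primitives (X25519 for ECDH, HKDF, AES-GCM) are illustrative assumptions rather than DECLOAK's mandated ciphers.

```python
import os
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def ecdh_key(my_priv, their_pub):
    # k_ie <- ECDH(sk_i, pk_E) == ECDH(sk_E, pk_i): each side derives it alone
    shared = my_priv.exchange(their_pub)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"decloak-kie").derive(shared)

def commit(d_i: bytes, k_ie: bytes, party_addr: bytes):
    # c_{d_i} := [Enc_{k_di}(d_i), Enc_{k_ie}(k_di), P_i]
    k_di = AESGCM.generate_key(bit_length=128)   # one-time symmetric key
    n1, n2 = os.urandom(12), os.urandom(12)
    return (n1 + AESGCM(k_di).encrypt(n1, d_i, None),
            n2 + AESGCM(k_ie).encrypt(n2, k_di, None),
            party_addr)

# Party side: sk_i plus the on-chain pk_E suffice to re-derive k_ie and open
# c_{d_i}; symmetrically, a TEE holding sk_E derives the same k_ie from pk_i.
sk_i, sk_E = X25519PrivateKey.generate(), X25519PrivateKey.generate()
k_ie = ecdh_key(sk_i, sk_E.public_key())
assert k_ie == ecdh_key(sk_E, sk_i.public_key())
c = commit(b"balance=42", k_ie, b"\x01" * 20)
```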
2) Achieve complete delivery fairness
In DECLOAK, when a TEE executor has evaluated the MPT inside its TEE, the TEE does not release the outputs immediately. Instead, the TEE first generates one-time symmetric keys to encrypt the outputs, then sends a TX_cmt to publish the output ciphertexts and the ciphertext of the keys on-chain. The keys' ciphertext can be decrypted by all TEEs independently, but each TEE only releases the decrypted keys after TX_cmt has been finalized on-chain. Since we assume the blockchain is ideally available, all honest TEE executors can feed the PoP of TX_cmt to their TEEs. Therefore, if at least one honest executor exists, parties communicating with all executors obtain the keys to decrypt the output ciphertexts at almost the same time.
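The gating each TEE applies before releasing keys can be sketched as the following Python routine (ours; verify_pop stands for the TEE's light-client check of the proof of publication, and all other helpers are hypothetical):

```python
def release_keys(tee, tx_cmt, pop):
    """Run independently inside every TEE in the network.

    The keys travel on-chain only as e_k = Enc_{k_E}(k_{s'}, k_r), so no one
    learns them before TX_cmt is finalized; afterwards, a single honest
    executor suffices to complete the delivery for all parties at once.
    """
    if not verify_pop(tx_cmt, pop):   # is TX_cmt confirmed on BC?
        return None                   # keep the keys sealed inside the TEE
    k_sp, k_r = decrypt(tee.k_E, tx_cmt.e_k)
    # Re-wrap each key under the party-specific k_ie and publish TX_com.
    return make_tx_com(tx_cmt.id_p,
                       [enc(k_ie, k) for k_ie, k in zip(tee.k_ies, k_sp)],
                       [enc(k_ie, k) for k_ie, k in zip(tee.k_ies, k_r)])
```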
3) Resist a Byzantine adversary with minimal transactions
In this paper, we propose a novel challenge-response subprotocol Proc_rcha. At a high level, Proc_rcha is designed with the following idea: when an honest party does not receive protocol messages off-chain from the specified TEE, it can publicly challenge the TEE with the proposal on-chain. The TEE can only avoid being punished if it responds with the expected outputs or proves that the problem was caused by misbehaved parties rather than itself. Specifically, an MPT proposal has only three possible results: (i) NEGOFAILED, which means the negotiation of the proposal failed; (ii) COMPLETED, which means the completion of the MPT, i.e., a TX_com is sent and accepted by the blockchain; (iii) ABORTED, i.e., some entities misbehaved, causing the MPT to abort. Therefore, the challenged TEE needs to respond with one of the following three results to prove its honesty: (i) sending a transaction TX_fneg to prove that the negotiation of the MPT failed; (ii) sending a transaction TX_com to complete the MPT and release its outputs; (iii) sending a transaction TX_pnsP to prove that it cannot complete the MPT as expected because some parties misbehaved after the negotiation succeeded. If none of the above transactions can be sent, the TEE is punished. While (ii) is inherent to the success of the MPT, achieving (i) and (iii) is challenging. To achieve (i), we require each MPT proposal to specify a block height h_neg notifying when the negotiation phase is expected to finish. A TEE can then send a TX_fneg to fail the proposal on-chain if it verifies that the acknowledgements collected both off-chain (ack) and on-chain (TX_ack) before the h_neg-th block still cannot satisfy the settlement condition of the proposal. To achieve (iii), when a TEE cannot complete the MPT, the TEE challenges the misbehaved parties on-chain to prove that the failure is due to parties not submitting their inputs rather than itself.
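The three admissible responses can be condensed into a small decision routine (our sketch; every helper on `tee` is hypothetical):

```python
def respond_to_challenge(tee, id_p, height):
    """How the specified TEE answers an on-chain challenge of proposal id_p."""
    acks = tee.offchain_acks(id_p) | tee.onchain_acks(id_p)  # ack + TX_ack
    if height >= tee.h_neg(id_p) and not tee.policy_satisfied(id_p, acks):
        return tee.tx_fneg(id_p)      # (i) prove the negotiation failed
    if tee.has_all_inputs(id_p):
        return tee.tx_com(id_p)       # (ii) complete the MPT and deliver
    silent = tee.parties_silent_after_tx_chaP(id_p)
    return tee.tx_pnsP(id_p, silent)  # (iii) punish parties that stayed silent
    # If none of these transactions can be produced, V punishes the TEE itself.
```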
IV. DECLOAK PROTOCOL
In this section, we present the DECLOAK protocol π_DECLOAK in detail. Given a blockchain BC and a DECLOAK Network DN having an array of executors E and TEEs E, we assume a common network account (sk_E, pk_E, ad_E) has been synchronized among all TEEs E. For an MPT F with its party set P, we assume |E| = |E| = m and |P| = n. Since π_DECLOAK involves data from different parties, we use d_i to denote the private data of P_i (e.g., x_i, s_i, k_{s_i}), and d to denote an array [d_i |_{1..n}] including all d_i from the n parties (e.g., x, s, k_s). We let H_{d_i} denote hash(d_i) and H_d denote [hash(d_i)|_{1..n}] (e.g., H_{c_x} denotes the array of hashes of the transaction parameters' commitments, [hash(c_{x_i})|_{1..n}]).
The main symbols we use are summarized in Table II. Next, we depict the whole protocol in Figure 2.
A. Global setup phase
Before evaluating any MPT, each party P_i is supposed to register its account public key pk_i and deposit coins of amount Q_i to the DECLOAK contract V (Algorithm 1). We stress that each party only needs to do this once.
B. Negotiation phase
An MPT starts from its negotiation phase, where DECLOAK uses the nondeterministic negotiation subprotocol (Proc_nneg) to guide parties to reach an agreement on an MPT proposal. In detail, Proc_nneg proceeds in two steps.
1.1: A party who wants to call an MPT F sends an MPT proposal p = (F, P, q, h_neg) to the first executor E* in the registered TEE executor list, i.e., E* = E[0], to initiate an MPT. Proposals sent to other TEEs will be rejected by those TEEs. P denotes a privacy policy of F. Briefly, P captures which data are needed by the MPT F and how these data are kept confidential; we detail and formalize P in Appendix IX-A. q denotes the collateral required for joining or executing the proposed MPT. h_neg denotes that the proposal is expected to be negotiated before block height h_neg. Then, the specified executor's TEE E* computes hash(p) as the proposal id id_p and broadcasts a signed (id_p, p) to the parties.
1.2: Upon receiving (id p , p), each P i interested in the MPT autonomously responds with a signed acknowledgement ack i to E * . On receiving ack i , E * learns P i 's intent to join the proposal id p . E * keeps collecting ack i until the acknowledgements match the settlement condition 4 in P. Then, E * constructs a settled proposal p' that expands p with the settled parties' addresses P. Meanwhile, E * caches its own and the parties' coin balances and deducts the collateral q from each balance, ensuring that any involved entity has enough collateral to be punished if it misbehaves. Then, E * broadcasts p' to notify the involved parties of the settled proposal.
Otherwise, if E * does not collect satisfying acknowledgements, the challenge-response subprotocol Proc rcha of Section IV-E will be triggered to identify the misbehaviour; we defer the details to Section IV-E.
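As an illustration of Proc nneg , the sketch below shows the deterministic proposal id of step 1.1 and the acknowledgement loop of step 1.2. recv_ack and settlement_met are assumed helpers standing in for the off-chain channel and the policy's settlement condition; in a real run the loop is additionally bounded by the block height h neg .

import hashlib
from typing import Callable, Set

def proposal_id(serialized_proposal: bytes) -> str:
    # id_p <- hash(p): deterministic, so every party can recompute it
    return hashlib.sha256(serialized_proposal).hexdigest()

def negotiate(p: bytes,
              recv_ack: Callable[[str], Set[str]],
              settlement_met: Callable[[Set[str]], bool]):
    id_p = proposal_id(p)
    acks: Set[str] = set()
    while not settlement_met(acks):      # bounded by h_neg in practice
        acks |= recv_ack(id_p)           # collect signed ack_i off-chain
    # settle p' with the addresses that acknowledged
    return id_p, {"proposal": p, "settled_parties": sorted(acks)}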
(Figure 2 depicts the full message flow of π DECLOAK across the global setup, negotiation, execution, delivery, and challenge-response phases, including all transactions listed in Table II.)
C. Execution phase
In this phase, E * collects plaintext inputs from the parties and executes F inside the TEE to obtain the outputs.
2: Upon receiving (id p , p'), each party P i , knowing it is involved in the settled proposal p', feeds its inputs (i.e., parameters x i and old states s i ) to E * . E * keeps collecting parties' inputs and, in particular, reads the F -needed old states s from BC according to the policy P. If all involved parties' inputs are collected and match, E * executes F (s, x) to obtain the MPT outputs, i.e., the return values r and new states s', inside the TEE. Then, E * goes to step 3.1.
Otherwise, if some parties do not submit their inputs as expected, Proc rcha will identify and punish them; we defer the details to Section IV-E.
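The execution step can be pictured as follows; commitment_of is a stand-in commitment scheme, and the returned suspicious list feeds the party challenge of Section IV-E. All names are illustrative assumptions, not DECLOAK's real interfaces.

import hashlib

def commitment_of(x_i: bytes, k_x_i: bytes) -> str:
    # stand-in commitment: hash of the key-wrapped input
    return hashlib.sha256(k_x_i + x_i).hexdigest()

def execute_mpt(F, old_states, inputs: dict, onchain_commitments: dict):
    suspicious = []
    for party, (x_i, k_x_i) in inputs.items():
        if commitment_of(x_i, k_x_i) != onchain_commitments.get(party):
            suspicious.append(party)      # candidate for TX_chaP
    if suspicious:
        return None, suspicious           # trigger Proc_rcha
    xs = [x for x, _ in inputs.values()]
    new_states, returns = F(old_states, xs)
    return (new_states, returns), []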
D. Delivery phase
This phase adopts a ∆-fair delivery subprotocol (Proc fdel ) to reveal the plaintext outputs (i.e., s' i , r i ) to the corresponding parties within a ∆-bounded period. Proc fdel proceeds in two steps.
3.1: E * generates two arrays of symmetric keys k s' , k r to compute the commitments of the new states and return values s' i , r i , i.e., c s' i , c r i , and generates a proof ← [H c s , H c s' ]. The transaction carrying this proof, signed by E * , can prove the MPT-caused state transition. Then, E * sends a Commit transaction T X cmt ← V .commit(id p , proof, c * s' , c * r , e k ) to commit the outputs on-chain. We note that the published c * s' , c * r do not include the ciphertext of k s' , k r , so parties cannot yet open the commitments of s', r. Instead, we require E * to encrypt the keys with the network key k E , where k E ← ECDH(sk E , pk E ), and to attach the resulting ciphertext e k ← Enc k E (k s' , k r ) to T X cmt . Hence, when T X cmt is confirmed, every E ∈ E can read k s' , k r on-chain without interacting with each other. Moreover, the proof in T X cmt proves the validity of the state transition caused by the MPT F . V validates the proof and locks the on-chain states corresponding to the old and new states, which signals the acceptance of the state transition and prevents the corresponding on-chain states from being updated by other concurrent MPTs before this MPT completes.
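A minimal sketch of step 3.1, assuming AES-GCM (via pyca/cryptography) for the symmetric primitives and a nonce-prefixed ciphertext layout; both choices are illustrative, not DECLOAK's exact encoding. The point is that the published commitments omit the keys, which are wrapped only under the network key k E .

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def seal(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def commit_outputs(new_states: list, returns: list, k_E: bytes):
    k_s = [AESGCM.generate_key(128) for _ in new_states]   # k_s'
    k_r = [AESGCM.generate_key(128) for _ in returns]
    c_s = [seal(k, s) for k, s in zip(k_s, new_states)]    # c*_s'
    c_r = [seal(k, r) for k, r in zip(k_r, returns)]       # c*_r
    # e_k: every output key, readable only with the network key k_E
    e_k = seal(k_E, b"".join(k_s + k_r))
    return c_s, c_r, e_k    # published in TX_cmt; outputs stay sealed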
3.2: When T X cmt becomes confirmed on-chain, each executor E ∈ E feeds the PoP cmt of T X cmt (i.e., a sufficiently long and timely block sequence containing T X cmt that proves T X cmt has been finalized) to its enclave E . Each E reads the key arrays k s' , k r from T X cmt and then sends a transaction T X com = V .complete(id p , [Enc k ie (k s' i )], [Enc k ie (k r i )]) to add the ciphertext of k s' , k r to c * s' , c * r . The T X com signals the COMPLETED status of this MPT.
Here, delivery fairness is achieved as follows. In 3.1, each party P i has received the incomplete output commitments c * s' , c * r but cannot decrypt them without the corresponding k s' i , k r i . In 3.2, each E first verifies PoP cmt to ensure that the MPT outputs have been committed on BC. Then, each E can send a T X com to complete the protocol with COMPLETED. Since parties can directly communicate with all executors to obtain T X com , they obtain k s' , k r within the network latency ∆, as long as at least one E honestly responds to parties with T X com . Otherwise, if T X cmt is rejected by V , no E can feed a valid PoP cmt to its TEE E . Therefore, no TEE can release T X com to reveal the plaintext outputs or complete the MPT before the h neg + τ com -th block. Consequently, DECLOAK guarantees ∆-fairness of delivery, where ∆ is the network latency of the blockchain.
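From a party's perspective, ∆-fair delivery reduces to polling all executors for T X com within the latency bound; one honest executor suffices. A sketch, with fetch_tx_com as an assumed helper that queries a single executor:

import time

def await_outputs(executors: list, fetch_tx_com, delta_s: float):
    # Poll every executor; one honest E suffices to learn the keys.
    deadline = time.monotonic() + delta_s        # ~ network latency ∆
    while time.monotonic() < deadline:
        for ex in executors:
            tx_com = fetch_tx_com(ex)            # Enc_kie(k_s'), Enc_kie(k_r)
            if tx_com is not None:
                return tx_com
        time.sleep(0.1)
    raise TimeoutError("no TX_com within ∆; the MPT did not complete")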
E. Challenge-response subprotocol
When, in any phase, an honest party does not receive the TEE's protocol messages as expected, the party can initiate the challenge-response subprotocol Proc rcha . Specifically, it can send a challengeTEE transaction T X chaT to publicly challenge the TEE on-chain. The challenged TEE can only avoid being punished by successfully responding with one of the following transactions:
• (i) T X f neg : If the h neg -th block has not been produced, the TEE E * should keep collecting the ack sent by parties over off-chain channels and the TX ack sent by parties to the blockchain and accepted before the h neg -th block. Only if all collected acknowledgements cannot satisfy the settlement condition of the MPT policy P (if a party P i sends different ack i via the off-chain channel and the on-chain transaction T X ack i , the off-chain ack i is chosen) is E * allowed to send a T X f neg to fail the proposal on-chain. In all other cases, e.g., when the h neg -th block has not been confirmed or when E * has successfully settled the proposal, it is impossible for a TEE to release a T X f neg . T X f neg finishes the MPT as NEGOFAILED.

• (ii) T X com : If the negotiation phase succeeds and the MPT completes, a T X com will be sent to the blockchain inherently. T X com finishes the MPT as COMPLETED.

• (iii) T X pnsP : If the negotiation phase succeeds but E * cannot complete the MPT as expected, either the parties or the specified TEE's executor E * may have misbehaved. Therefore, to avoid being punished by default, E * should call its E * to challenge the parties publicly (a sketch of this path follows the list). Specifically, if E * does not receive some parties' inputs, or cannot match some parties' inputs with their on-chain commitments, E * marks these parties as suspicious parties P * M and returns P * M to its host E * . The E * calls E * .challengeParties to send a T X chaP that challenges all parties in P * M on-chain. When T X chaP is confirmed on-chain, honest parties in P * M are supposed to send a T X resP publishing the ciphertext of their inputs x i , s i . All published T X resP are required to be confirmed before block height h neg + τ resP ; otherwise, a late T X resP is regarded as invalid by E * . Upon confirmation of the h neg + τ resP -th block, E * reads the PoP resP of all TX resP . If E * successfully reads matched inputs of a party P i ∈ P * M from its T X resP i , it removes P i from P * M . Otherwise, if PoP resP shows that no T X resP i was published on-chain or that the inputs in T X resP i are still mismatched, E * retains P i in P * M . After that, if P * M becomes empty, meaning all inputs are collected, E * goes to step 2. Otherwise, if P * M is not empty, meaning the misbehaviour of the remaining parties is confirmed, E * marks these parties as P M and sends a T X pnsP . T X pnsP calls punishParties to financially punish the misbehaved parties and signal the MPT as ABORTED.

If the E * being challenged by a party neither fails (by T X f neg ), stops (by T X pnsP ), nor completes (by T X com ) the MPT, anyone can send a T X pnsT after the h neg + τ com -th block to punish E * and signal the MPT as ABORTED.
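The party-challenge path of case (iii) reduces, inside the enclave, to a small decision over the on-chain responses collected before the h neg + τ resP -th block. Below is a minimal sketch; matches_commitment and the encoding of responses are assumptions for illustration.

def punish_or_continue(suspicious: set, responses: dict, matches_commitment):
    # suspicious = P*_M; responses = inputs read from confirmed TX_resP
    still_missing = {p for p in suspicious
                     if p not in responses
                     or not matches_commitment(p, responses[p])}
    if not still_missing:
        return "RESUME_EXECUTION", set()   # all inputs recovered on-chain
    return "TX_pnsP", still_missing        # P_M confirmed; MPT ABORTED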
V. IMPLEMENTATION
DECLOAK is designed to run on contract-based infrastructure. A service provider of DECLOAK can deploy a contract V on a legacy BC. Then, anyone can interact with the BC and the TEEs in DN to transition the states of BC via MPTs.
A. DECLOAK contract
We implement the DECLOAK contract in Solidity 0.8.10 [34]. As shown in Algorithm 1, V is constructed from the config of DN, e.g., ad E , so that parties can authenticate and build secure channels with every E ∈ E. Moreover, V provides functions to manage the life cycle of each MPT. Specifically, a party calls V .challengeTEE by T X chaT to challenge the specified TEE. When an MPT has been evaluated, an E calls V .commit by T X cmt to validate the state transition and commit the outputs. Finally, an E calls V .complete by T X com to release the keys' ciphertext and signal the MPT as COMPLETED.
B. DECLOAK network
To construct the DN, we instantiate each TEE E (Algorithm 2) based on SGX [35]. Anyone with a TEE device can instantiate an E (Algorithm 2) to become an executor E. The first E generates the network account (sk E , pk E , ad E ) to initialize a network DN. Then, any other E must be attested by one of the TEEs in the DN to join the DN and obtain the network key and account.
To evaluate MPTs, we express F in Solidity 0.8.10 [34] and port the EVM [36] into SGX. P is expressed in JSON and is introduced to specify the parameters, the states to read and write, and the return values of F , so that the TEE knows the I/O of the MPT. The hashes of both F and P are registered and updated on BC, while their code is provided by the MPT's developers/initiators and cached by E. Admittedly, P is currently pre-specified, which restricts the I/O of F to be statically identifiable. However, this problem could be solved by hooking the EVM's sstore and sload instructions [26], and we leave it for future work.
VI. SECURITY ANALYSIS
A. Assumption reliability
Our assumption that TEE confidentiality and attestable integrity hold remains practical. While attacks against SGX, e.g., memory-corruption attacks and side-channel attacks, keep coming out, the community has developed efficient software-based [37]-[39] and hardware-based countermeasures [40], [41]. So far, most existing attacks against SGX are either limited in capability [42], [43], solved, or patched [44], [45]. Some very recent and considerable attacks, such as xAPIC and MMIO, were mitigated in Dec. 2022 and will be solved in Jan. 2023 [46].
B. Protocol security
Informally, we claim that the following theorem holds. We leave the formal security property definitions and the corresponding game-based proofs to Appendix IX-C. Limited by space, we briefly outline here how we prove financial fairness and delivery fairness.
Theorem 1 (Informal statement). The protocol π DECLOAK satisfies correctness, confidentiality, public verifiability, data availability, financial fairness, delivery fairness, and delivery atomicity.

To prove that DECLOAK holds financial fairness, we prove that there are only three possible statuses of an MPT, i.e., ∅ (negotiation not started or failed), ABORTED (negotiation succeeded, but the MPT did not complete as expected), and COMPLETED (the MPT completed as expected). Then, we exhaustively prove that parties' balances stay fair in any of the three statuses: i) if the status of an MPT stays at ∅, no entity's balance changes; ii) if an MPT's status is ABORTED, then either some parties misbehaved and were punished, or the specified TEE executor misbehaved and was punished; iii) if an MPT's status becomes COMPLETED, the MPT succeeds and no entity's balance changes.
To prove that delivery fairness holds, we utilize the ideal availability of the blockchain and the assumption that all-but-one TEE executors are Byzantine. Specifically, to release outputs, the T X cmt , which contains the data ciphertext and the ciphertext of the corresponding keys, must have been published on the blockchain. Therefore, if each party communicates with all TEE executors directly and at least one TEE node is honest, all parties can obtain their corresponding outputs within a ∆-bounded period. Here ∆ equals the message-delivery upper bound of the (semi-)synchronous network among parties and TEE nodes.
VII. EVALUATION
Methodology and setup. To evaluate the effectiveness of DECLOAK, we propose 3 research questions.
• Q1: Can DECLOAK capably serve real-world MPTs?
• Q2: What is the cost of enabling MPTs on a blockchain?
• Q3: What is the cost of evaluating MPTs using DECLOAK?
The experiment is based on a server with Ubuntu 18.04, 32 GB of memory, and a 2.2 GHz Intel(R) Xeon(R) Silver 4114 CPU. The memory used by the TEE is set to 200 MB. Answering Q1. We evaluate DECLOAK on 5 contracts, which involve 10 MPTs in different scenarios. All of them are written in Solidity, and the number of parties they involve varies from 2 to 11.
SupplyChain is a contract allowing suppliers to negotiate and bid off-chain in a privacy-preserving manner, and to commit the evaluation with their new balances on-chain. It has 39 LOC and contains one MPT.
Scores is a contract allowing students to join and get mean scores off-chain and commit the evaluation on-chain. It has 95 LOC and contains one MPT.
ERC20Token is a contract allowing accounts to pair and transfer without revealing balances off-chain, and commit the evaluation with new balances on-chain. It has 55 LOC and contains three MPTs.
YunDou is a fine-tuned ERC20 token contract with co-managed accounts, where self-selected account managers vote to transfer tokens without revealing their votes. It has 105 LOC and contains three MPTs.
Oracle is an oracle contract that allows parties to negotiate to join and then jointly and verifiably generate random numbers. It has 60 LOC and contains three MPTs.
Answering Q2. Table III shows the gas cost of all methods of V in different phases. To answer Q2, we focus here on the initialization and global setup phases.
Table III. On-chain cost of the challenge-response submission phase. For each MPT, we assume all involved parties are challenged.

Phase | TX | Gas cost
Global setup | register (T X reg i ) | 127068
Global setup | deposit (T X dep i ) | 42325
MPT | commit (T X cmt ) | 104568
MPT | complete (T X com ) | 110570
Proc rcha | challengeTEE (T X chaT ) | 131762
Proc rcha | acknowledge (T X ack i ) | 26999
Proc rcha | failNegotiation (T X f neg ) | 30563
Proc rcha | challengeParties (T X chaP ) | 33786
Proc rcha | partyResponse (T X resP i ) | 34313
Proc rcha | punishParties (T X pnsP ) | 45518
Proc rcha | punishTEE (T X pnsT ) | 53254
Gas cost of initialization. It costs 4.9M gas to deploy V to enable DECLOAK on a blockchain. This cost is paid only once by the DECLOAK service provider and is thereby negligible.
Gas cost of global setup. A party pays 127.1k gas to register its public key and 42.3k gas to deposit coins (Table III). This setup happens once for each party and is thus acceptable. Answering Q3. We analyze the gas and off-chain costs of evaluating each MPT, respectively. In particular, we compare the gas cost of DECLOAK with the most related MPT-oriented works, Fastkitten [13] and Cloak [8].
On-chain cost of MPTs. Figure 3 shows the gas cost of each MPT. Overall, DECLOAK reduces gas by 72.5% compared with Fastkitten. Specifically, for the six 2-party MPTs, DECLOAK costs 0.27-0.46X gas. For the two 3-party and the two 10/11-party MPTs, the gas significantly reduces to 0.22-0.25X and 0.09-0.11X, respectively. Compared with Cloak, the cost of DECLOAK decreases by 65.6% on average. Specifically, DECLOAK costs 0.27-0.56X gas relative to Cloak in 2/3-party MPTs, and just 0.17-0.22X gas in 10/11-party MPTs. Therefore, DECLOAK enables more secure MPTs at lower on-chain cost. The on-chain cost not only surpasses Cloak's but is also comparable to that of typical single-party transactions, e.g., NFT sales and ERC20 swaps, on Ethereum. Moreover, as the number of parties grows, the cost superiority of DECLOAK improves.
Off-chain cost of MPTs. All 10 MPTs complete in a constant 2 transactions. Specifically, the negotiation, execution, and delivery phases cost 0.21-0.58 s, 0.39-1.15 s, and 0.30-0.77 s, respectively, which is negligible.
VIII. OPTIMIZATION AND FINE-TUNING

A. Improve the scalability of DECLOAK

1) Reduce gas cost in optimistic cases

Recall that serving an MPT in optimistic scenarios only involves 2 transactions, T X cmt and T X com . Therefore, to serve an n-party MPT without an adversary, DECLOAK needs to send O(1) transactions. We note that the following measures can further reduce the optimistic cost of DECLOAK.

Batch processing. According to the height of the blockchain, we can split the execution of MPTs into slots. In each slot, DECLOAK handles λ MPTs (λ ≥ 1) and sends only two transactions, i.e., T X cmt and T X com , to finish all MPTs in the slot in a batch (a sketch follows at the end of this subsection). This reduces the complexity to O(1/λ) without sacrificing security or changing the adversary model.

Making trade-offs. We note that by intentionally sacrificing some of our system goals, DECLOAK can reduce its on-chain cost even further. First, we can drop data availability to delete the last transaction T X com . Specifically, in the delivery phase, TEEs first send T X cmt to commit the outputs on-chain. If the proof in T X cmt passes, V accepts the state transition immediately. Then, upon T X cmt being accepted and confirmed, TEEs release the keys of the output ciphertext in T X cmt to parties through off-chain channels, rather than sending a T X com . Consequently, the required transactions of DECLOAK reduce to only 1, i.e., T X cmt . However, in this variant, parties need to keep all received keys to access their plaintext states. Second, we can additionally drop delivery atomicity and delivery fairness to delete T X cmt , meaning that no transactions are required in the optimistic case. Specifically, an MPT involves reading on-chain inputs. If we delete T X cmt , then when the specified TEE obtains outputs, the blockchain has no way to ensure that the old states the MPT read have not been mutated. This way, MPT outputs that the TEE regards as valid may not be accepted by the blockchain, breaking atomicity. Moreover, as we cannot utilize T X cmt to ensure that the output ciphertext is ideally delivered to all TEEs, delivery fairness is broken.
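The batch-processing measure can be sketched as follows; build_proof and send_tx are assumed helpers, and the point is only that one T X cmt /T X com pair finalizes a whole slot of λ MPTs.

def commit_batch(mpts: list, build_proof, send_tx):
    proofs, c_s, c_r, keys = [], [], [], []
    for m in mpts:                        # the λ MPTs of this slot
        proof, cs_i, cr_i, k_i = build_proof(m)
        proofs.append(proof)
        c_s += cs_i
        c_r += cr_i
        keys += k_i
    send_tx("TX_cmt", proofs, c_s, c_r)   # one commit for the slot
    send_tx("TX_com", keys)               # one completion for the slot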
2) Reduce gas cost in pessimistic cases

In pessimistic scenarios, the challenge-response subprotocol (Proc rcha ) is triggered. In the subprotocol, each party being challenged on-chain has to respond with its acknowledgements or inputs independently. We can introduce an off-chain third-party service to collect parties' responses and publish one aggregated T X resP to the blockchain (see the sketch below). In this way, even when a Proc rcha is triggered, the on-chain transaction complexity is still O(1). Combined with the batch-processing technique for MPTs, the complexity of Proc rcha can be further reduced to O(1/m), where m is the number of MPTs in a batch.
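A sketch of such an aggregation service, with collect and send_tx as assumed helpers; it turns up to |P * M | individual responses into a single aggregated T X resP .

def aggregate_responses(challenged: list, collect, send_tx):
    bundle = {}
    for p in challenged:
        resp = collect(p)       # party's Enc_kie(in_p), or None if silent
        if resp is not None:
            bundle[p] = resp
    send_tx("TX_resP", bundle)  # one transaction for all responders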
3) Reduce storage cost

To minimize the trust in the off-chain TEE network, DECLOAK stores parties' privacy-preserved data on the blockchain and ensures that the plaintext of the stored data remains accessible to parties even without DECLOAK. This sounds like a heavy storage cost. However, as we demonstrated in Section VII, the storage cost is acceptable. In fact, storing off-chain states on-chain as calldata has been well adopted in Ethereum Rollup projects [16], [17]. Moreover, reducing storage cost is also a main issue of Ethereum 2.0. Specifically, Ethereum proposes to reduce the gas cost of calldata from 16 to 3, an 81% decrease [15]. Furthermore, Ethereum 2.0 will introduce blobs [18], a new storage mechanism that allows different Ethereum Layer-2 projects to cheaply store all their transactions and states on the Beacon chain. Therefore, the design of DECLOAK strongly matches the needs and direction of Ethereum.
B. Improve the availability of DECLOAK

An industrial TEE service usually has a robust error-handling mechanism and is DDoS-resistant. Therefore, we practically assume that the service provided by the specified honest TEE executor is highly available. However, this does not mean we cannot further improve the availability of DECLOAK. For example, DECLOAK can adopt an availability-enhancement mechanism similar to POSE [23]. Specifically, every time the specified TEE executor changes its local state, it should synchronize the state updates to all other registered TEEs and collect their signatures over off-chain channels before carrying out the next state transition. If the specified TEE is not available off-chain, parties can publicly change it on-chain. If the unavailability of the specified TEE is caused by other TEE executors not responding with signatures as expected, the specified TEE can publicly challenge the unavailable TEEs on the blockchain. Finally, if the on-chain challenge-response mechanism ultimately punishes the specified TEE, it will be kicked out, and the next TEE in the registered list will be specified to serve MPTs. As a result, in an optimistic scenario, i.e., when all other TEEs honestly respond with their signatures, DECLOAK will not lose its off-chain states as long as at least one TEE is available. In short, we stress that improving the availability of a TEE network is orthogonal to DECLOAK, and DECLOAK can be combined with related work [23] to further improve its availability.
IX. CONCLUSION
In this paper, we develop a novel framework, DECLOAK, which supports MPT-enabled off-chain contract execution on legacy blockchains using a TEE network. DECLOAK features maximizing the security of MPTs while minimizing the gas cost and the trust placed in the network. Compared with the SOTA, Cloak [8], DECLOAK not only realizes all security properties the SOTA claimed but additionally achieves data availability, delivery fairness, and delivery atomicity. To our knowledge, DECLOAK achieves the most general and secure notion of MPT. Meanwhile, it assumes only that at least one party and one executor are honest, which is among the weakest assumptions in related work. Moreover, according to our evaluation, DECLOAK reduces the gas cost of the SOTA by 65.6%, and its superiority increases as the number of parties grows.
Supplementary Material for "DECLOAK: Enable Secure and Cheap Multi-Party Transactions on Legacy Blockchains by a Minimally Trusted TEE Network"
A. MPT-enabled contracts
We have introduced the program F , the verifier V , and the enclave for achieving an MPT. Here we model the privacy policy P used to better manage parties' private data on-chain and to specify the privacy demands of the MPT.
Since each E needs to encrypt/decrypt states before evaluating F , E must be aware of the state sets that F reads and writes. Therefore, we bind a privacy policy P to each F .
adr V := {0, 1} *
v := [a-zA-Z0-9]+
P := {0, 1} *
P v := ∅ | (v : P) | (v : P ? )
P F := { P x := {P v } * , P s := {P v } * , P s' := {P v } * , P r := {P v } * }
A privacy policy P is modeled as above. adr V denotes the address of the corresponding deployed verifier contract V on BC. v refers to the identifiers of variables. P refers to parties' addresses. P x refers to the transaction parameters of F . P s refers to the state variables read to evaluate F . P s' refers to the state variables updated by F . P r refers to the return variables of F . Each variable is denoted by the tuple (v : P), containing its identifier v and the address of its owner P (i.e., the party that the variable is private to). v being owned by P means that v is kept confidential for P; consequently, E expects to receive v from P and commits v with P's public key. If the owner of a variable is unknown before the MPT, we write (v : P ? ); the unknown party will be settled after the negotiation phase of Section III.
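For illustration, a hypothetical policy P for a two-party balance transfer might look as follows, shown as a Python dict mirroring the JSON encoding mentioned in Section V; every identifier and address here is made up.

policy = {
    "adr_V": "0xVerifierContract",              # deployed verifier contract
    "P_x": [{"amount": "0xAlice"}],             # transaction parameters of F
    "P_s": [{"balances[alice]": "0xAlice"},     # state variables read by F
            {"balances[bob]": "0xBob"}],
    "P_s'": [{"balances[alice]": "0xAlice"},    # state variables written by F
             {"balances[bob]": "0xBob"}],
    "P_r": [{"ok": "?"}],                       # owner settled later, (v : P?)
}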
B. Notations and Definitions
In this section, we fine-tune the notation system of [8], [13] to denote the variables involved in DECLOAK.
1) Common notations
Generally, we denote a domain as S and its n-ary Cartesian power S × S × · · · × S as S n . Each s ∈ S n is an array [s 1 , · · · , s n ], and we use s[i] or s i to refer to the i-th element of s. Moreover, S n×m denotes the set of all n-by-m matrices consisting of elements from S. Similarly, we denote by S[i][ j] the element in the i-th row and j-th column of S, by S[i][·] the i-th row, and by S[·][ j] the j-th column.
2) Coins
We define a set D coin as the coin domain, which includes all possible balances of parties' global coins and is a subset of the non-negative rational numbers Q ≥ 0. We then define a coin array Q ∈ D n coin , where Q i denotes the balance of party P i 's global coins. Next, we define the set D dep ← D coin \{0} as the deposit domain and define a deposit array q ∈ D n dep , where q i denotes the deposit of party P i for joining an MPT.
3) Multi-party transactions
We define a set D pa as a plaintext domain, which is application-specific. Therefore, for each MPT, we have its plaintext parameter array x, old state array s, new state array s', and return value array r, where x, s, s', r ∈ D n pa . Correspondingly, we define a set D cm as a cryptographic commitment domain, which is specific to the commitment algorithm we adopt in Section III. Then, for each MPT, we denote its parameter commitment array, old state commitment array, new state commitment array, and return value commitment array as c x , c s , c s' , c r , respectively, where c x , c s , c s' , c r ∈ D n cm . We define a party domain D addr as the set of all possible addresses of parties, which thus depends on the address-generation algorithm adopted by BC. The parties of an MPT are then modeled as a party array P, where P i denotes the i-th party of P and P i ∈ D addr . We define the target function of an MPT, which multiple parties jointly evaluate, as F , and the privacy policy of an MPT as P, which specifies the metadata of F , e.g., the expected x, s, s', r. Then, we denote by F P a P-conformed F .
Algorithm 3: Evaluation function
Input: An n-party MPT F and its policy P, a parameter array x, a parameter key array k x , an old state array s, an old state key array k s , an old state commitment array c s , and a party array P.
Output: A new state array s', a new state key array k s' , a return value array r, a return value key array k r , a new state commitment array c s' , a return value commitment array c r , a parameter commitment array c x , and a proof.
1 Function eval(F , P, x, k x , c s , P)
2   foreach c s i in c s
3     assert c s i = [Enc k s i (s i ), Enc k ie (k s i ), P i ]
4   s', r ← F P (s, x)
5   k s' , k r ← Gen(1 κ )
6   c s' i ← [Enc k s' i (s' i ), Enc k ie (k s' i ), P i ]
7   c r i ← [Enc k r i (r i ), Enc k ie (k r i ), P i ]
8   c x i ← [Enc k x i (x i ), Enc k ie (k x i ), P i ]
9   proof ← [H P , H F , H c s ]
10  return (s', k s' , r, k r , c s' , c r , c x , proof)
4) Protocol execution
While P, E, and E denote the party array, executor array, and TEE array of an MPT, respectively, we define P H and E H as the honest parties in P and the honest executors in E, respectively. P M and E M denote the malicious parties and the malicious executors, i.e., P M ← P\P H , E M ← E\E H . For convenience, we also define P + ← P ∪ E and P + M ← P M ∪ E M . According to our adversary model in Section III, the DECLOAK protocol π DECLOAK , or simply π, proceeds in the presence of a Byzantine adversary A who can corrupt all-but-one P i ∈ P + . We also define a coin balance array Q ∈ D n+m coin : Q i | i<n denotes the coin balance of P i ∈ P pre-deposited to the network account, and Q n+i | i<m denotes the pre-deposited coin balance of E i ∈ E.
Classically, we define a protocol execution of π under the adversary A as REAL π,A . The inputs of an execution include an n-party MPT F and its policy P, a parameter array x, a parameter key array k x , an old state array s, an old state key array k s , an old state commitment array c s , a party array P, a deposit array q, and an account coin balance array Q. Therefore, we formalize a protocol execution as follows.

(Q', s', k s' , r, k r , c s' , c r , c x , proof, sta) ← REAL π,A (Q, F , P, x, k x , c s , P, q)
The outputs of π include the new coin balance array Q' after the execution, the new state array s', the new state key array k s' , the return value array r, the return value key array k r , the commitment arrays of the new states, return values, and parameters, i.e., c s' , c r , c x , respectively, and a proof of the MPT-caused state transition.
5) Security goals
We first define the basic correctness property. Intuitively, correctness states that if all entities in P + behave honestly, every P i ∈ P obtains its correct MPT outputs and its collateral back.
Definition 1 (Correctness). For any n-party MPT F P , q ∈ D n dep , s ∈ D n pa , x ∈ D n pa and Q ∈ D n coin , there is a negligible function ε such that, for the output of the protocol REAL π (Q, F , P, x, k x , c s , P, q) and every P i ∈ P:

Pr[ (s', k s' , r, k r , c s' , c r , c x , proof) = eval(F , P, x, k x , c s , P) ∧ Q' i ≥ Q i ∧ sta = COMPLETED ] ≥ 1 − ε

Theorem 1 (Formal statement). Assume an EUF-CMA secure signature scheme, an IND-CCA2 secure encryption scheme, a hash function that is collision-resistant, preimage-resistant, and second-preimage-resistant, a TEE emulating the TEE ideal functionality, and a BC emulating the BC ideal functionality. Then π DECLOAK holds correctness, confidentiality, public verifiability, data availability, financial fairness, delivery fairness, and delivery atomicity.
1) Proof of correctness
Consider an execution of π DECLOAK in the absence of adversaries. The evaluation of an MPT starts with the specified E * receiving an MPT proposal p ← (H F , H P , q, h neg ) and starting the negotiation phase. E * first deterministically generates an id id p of the proposal and broadcasts id p with the proposal to the parties P i ∈ P. When E * collects satisfying acknowledgements from P, it broadcasts the settled p'. In the execution phase, E * collects the plaintext inputs in from P and reads s i from BC.c s . Then, E * obtains the MPT's outputs by (s', k s' , r, k r , c s' , c r , c x , proof) ← eval(F , P, x, k x , c s , P).
Then it moves to the delivery phase. E * releases a T X cmt to commit the outputs without publishing the symmetric-key ciphertext. Upon the single T X cmt being confirmed on BC, each E reads T X cmt to obtain the shared symmetric keys k s' i , k r i . Then, each E encrypts the keys k s' i , k r i with k ie and broadcasts a T X com to both P and BC immediately. As no P i ∈ P + is punished, we have Q' i = Q i , i.e., Q' i ≥ Q i .
Since all protocol messages are sent over secure channels between P and the Es, and we ignore the leakage caused by F and by parties voluntarily revealing their data, confidentiality is axiomatic. Therefore, we prove data availability, financial fairness, and delivery (∆-)fairness in the following.
2) Proof of data availability

According to Algorithm 1, when sta = COMPLETED, c s' must have been published on BC. Recalling the data structure c s' i ← [Enc k s' i (s' i ), Enc k ie (k s' i ), P i ], we construct a polynomial-time function in Algorithm 4. With this function, any E ∈ E or P i ∈ P can construct the newest states of all completed MPTs independently. Therefore, data availability holds.
Algorithm 4: States construction function
Function constructStates(sk, pk, c s' i )
  k ie ← ECDH(sk, pk)
  k s' i ← Dec k ie (c s' i [1])
  s' i ← Dec k s' i (c s' i [0])
  return s' i
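For concreteness, a possible instantiation of Algorithm 4 in Python, assuming X25519 for ECDH, HKDF for deriving k ie from the shared secret, AES-GCM for Dec, and a nonce-prefixed ciphertext layout; all of these encoding choices are assumptions, not DECLOAK's exact formats.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey)
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def construct_state(sk_i: X25519PrivateKey, pk_E: X25519PublicKey,
                    c_s_i: tuple) -> bytes:
    enc_state, enc_key = c_s_i             # [Enc_ks'(s'_i), Enc_kie(k_s'_i)]
    shared = sk_i.exchange(pk_E)           # k_ie <- ECDH(sk_i, pk_E)
    k_ie = HKDF(hashes.SHA256(), 32, None, b"decloak-kie").derive(shared)
    k_s = AESGCM(k_ie).decrypt(enc_key[:12], enc_key[12:], None)
    return AESGCM(k_s).decrypt(enc_state[:12], enc_state[12:], None)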
3) Proof of financial fairness
Here we prove that the financial fairness of π DECLOAK holds for all possible sta. First, we consider the negotiation phase. Briefly, we prove that if the phase does not complete successfully, then the proposal will have sta = NEGOFAILED and every P i ∈ P H stays financially neutral.

Lemma 2. If there ∃P i ∈ P H that stays at sta = NEGOFAILED, then statement (i) of the financial fairness property holds.
Proof: There is only one case in which a P i ∈ P H has sta = NEGOFAILED:
• (i) T X f neg is confirmed on BC after Proc nneg . Specifically, this scenario happens when the acknowledgements collected from both on-chain and off-chain channels cannot satisfy the settlement condition of the MPT proposal, or ∃P i ∈ P with Q i ≤ q. No matter what causes the failure, we require every P i ∈ P H to identify the sta of an MPT by reading it from BC. As we assume that BC emulates the ideal blockchain functionality, which achieves ideal consistency and availability, every P i ∈ P can access a consistent view of BC. Therefore, if a T X f neg is successfully confirmed on-chain, this will be the unique result of the proposal, ensured by the DECLOAK contract V , and every P i ∈ P H will immediately identify that sta = NEGOFAILED. Then Q' i = Q i , i.e., Q' i ≥ Q i holds.

Lemma 3. If ∃P i ∈ P H such that sta = COMPLETED, then statement (i) of the financial fairness property holds.
Proof: According to Algorithm 1, the protocol outputs sta = COMPLETED iff a transaction T X com is contained on BC before the h neg + τ com -th block. Therefore, for every P i ∈ P + , Q' i = Q i ≥ Q i holds.
Next, we show that financial fairness also holds even if an MPT fails with ABORTED after a successful negotiation phase.
Lemma 4. If ∃P i ∈ P H such that sta = ABORTED, then statement (ii) of the financial fairness property holds.
Proof: There are two cases in which ∃P i ∈ P H outputs ABORTED:
• (i) before the h neg + τ com -th block, T X pnsP (id p , P M ) is published on BC;
• (ii) after the h neg + τ com -th block, T X pnsT (id p ) is published on BC.
We first consider case (i), where some P j ∈ P M does not provide its inputs in j after the negotiation succeeded. According to Algorithm 2, E * releases a transaction T X pnsP (id p , P M ) iff E * calls E * .punishParties with a PoP resP proving that the parties P j ∈ P M (with P M ≠ ∅) did not provide their inputs even though they were challenged by a T X chaP . The T X pnsP deducts the MPT-specific collateral q from the coins of every P i ∈ P M . In other words, for every P i ∈ P M , it holds that Q' i = Q i − q. Since Q i > q i , which is ensured by Proc nneg , and P M ≠ ∅, it holds that Σ j∈P M Q' j < Σ j∈P M Q j . Notably, no malicious party earns coins in this case.
Second, we consider case (ii), which indicates that T X com fails to be contained before the h neg + τ com -th block. Since case (i) does not happen, either E * has collected correct inputs from all parties (i.e., P M = ∅), or E * detains T X pnsT or T X cmt , or T X cmt fails validation, e.g., the old state commitments c s that E * read and executed the MPT on have been changed, which fails the verify(proof, H F , H P , H c s ) in T X cmt . In any case, when the timeout transaction T X pnsT is posted by an honest party on BC, the proposal p will be marked as ABORTED, and every P i ∈ P gets Q' i = Q i . Thus Q' i ≥ Q i holds.
Lemma 5. When π DECLOAK terminates, it must hold that sta ∈ {NEGOFAILED, ABORTED, COMPLETED}.
Proof: As we stressed, every P i ∈ P H and every honest executor identify the current sta from V on BC. If an MPT succeeds, a T X com must be sent, which leads to sta ← COMPLETED. Otherwise, we claim that there must be sta ← NEGOFAILED or sta ← ABORTED. According to Algorithm 1, there is additionally one temporary status: when T X cmt is accepted, it indicates that the MPT outputs have been successfully validated. Recall that BC continuously serves new transactions, T X com has no output-validation logic, and at least one executor is honest; there must thus be an executor who can send T X com to set sta ← COMPLETED.
4) Delivery (∆-)fairness
Recall that Lemma 5 holds. In the following, we prove that delivery (∆-)fairness holds for all three values of sta at which π DECLOAK may terminate. We first consider the negotiation phase. Intuitively, if no sufficient acknowledgement is collected, the negotiation fails with NEGOFAILED and no party obtains any output.
Figure 1. The framework and workflow of DECLOAK.

Figure 2. The DECLOAK protocol π DECLOAK . The DN F ,P denotes a DECLOAK network in which all executors hold TEEs with deployed F , P. BC V ,V denotes a blockchain with the deployed DECLOAK contract V . Proc nneg and Proc fdel denote the nondeterministic negotiation and ∆-fair delivery subprotocols, respectively. Double dashed arrows denote reading BC and double arrows denote writing BC. Orange arrows denote the messages of challenge-response. Other arrows denote off-chain communications over secure channels. Specifically, messages sent by parties are signed by the parties and encrypted with k ie of DN, where k ie ← ECDH(sk i , pk E ). All messages broadcast by DN are plaintext by default and signed with sk E . For simplicity, we omit marking the ciphertext of messages that parties send to DN, but mark the ciphertext explicitly in each transaction sent to BC.
For comparison, typical single-party transactions on Ethereum cost: DeFi: ERC20: Transfer, 65000; DeFi: Uniswap V3: Swap, 184523; DeFi: Balancer: Swap, 196625; NFT: OpenSea: Sale, 71645; NFT: LooksRare: Sale, 326897 (gas).
Figure 3. The gas cost of DECLOAK. "Fastkitten" refers to the gas cost sum of n + 1 transactions for each MPT; here we adapt the protocol of Fastkitten to Ethereum. "Cloak" refers to the gas cost sum of its 2 transactions for each MPT. "TXcmt" and "TXcom" refer to the gas cost of T X cmt and T X com in π DECLOAK , respectively.
Table I. Comparing DECLOAK with related work. The symbols , H, Q, and G refer to "non-related", "not-matched", "partially-matched", and "fully-matched", respectively. "Adversary model" means how many Byzantine entities can be tolerated. "Data availability" means whether parties or TEEs hold MPT-specific data. "Financial fairness" means honest parties never lose money while at least one misbehaving node must be punished. "Delivery fairness" means either the MPT fails or parties obtain their outputs at almost the same time. "Delivery atomicity" means whether both the committing of outputs and the delivery of outputs, or neither of them, are guaranteed.

Columns: Approach | Adversary model (Parties, TEE executors) | min(#TX) | Confidentiality | Data availability (Parties, TEEs) | Financial fairness | Delivery fairness | Delivery atomicity
Table II. A summary of main symbols.

Topic | Symbol | Name | Description
Framework | BC | Blockchain | A blockchain that enables Turing-complete smart contracts
Framework | P | Parties | An array of an MPT's participants
Framework | DN (E, E) | DECLOAK network | A network DN consisting of an array of executors E and TEEs E
Framework | E * | TEE executor | The server hosting the specified TEE E *
Framework | E * | TEE | The specified TEE running the enclave program E
Protocol | ad E , sk E | Enclave account | The address and private key of the common network account controlled by E
Protocol | Proc nneg | - | Nondeterministic negotiation subprotocol
Protocol | Proc rcha | - | Challenge-response subprotocol
Protocol | Proc fdel | - | ∆-fair delivery subprotocol
MPT | T X chaT | challengeTEE | A transaction from one of the parties to publicly challenge the specified TEE
MPT | T X ack i | acknowledge | A transaction from the party P i to publicly join the MPT proposal
MPT | T X f neg | failNegotiation | A public response from the specified TEE E[0] to T X chaT to signal the negotiation failure
MPT | T X chaP | challengeParties | A transaction from the specified TEE E[0] to publicly challenge the malicious parties
MPT | T X resP i | partyResponse | A public response from the party P i to T X chaP
MPT | T X cmt | commit | A transaction from the specified TEE E[0] to commit and lock the MPT outputs
MPT | T X com | complete | A public response from the specified TEE E[0] to T X chaT to complete the MPT
MPT | T X pnsP | punishParties | A public response from the specified TEE E[0] to T X chaT to punish malicious parties
MPT | T X pnsT | punishTEE | A transaction from anyone to punish the misbehaved TEE
Algorithm 1: DECLOAK contract V
// This contract is constructed by the network config ad E and a TEE list E. ad E is the network account for managing coins deposited by parties. For simplicity, we ignore the register and deposit functions here.
1 Function challengeTEE(p) // called by T X chaT from one of the parties
2   id p ← hash(p)
3   require(prsls[id p ] = ∅)
4   prsls[id p ].{q, h neg , τ com , E } ← p.{q, h neg }, τ com , E[0]
5   prsls[id p ].sta ← PROPOSED
6 Function acknowledge(id p , Enc k E (ack i )) // called by T X ack from parties
7   require(BC.getHeight() < h neg )
8 Function failNegotiation(id p ) // called by T X f neg from the specified TEE
9   require(msg.sender = prsls[id p ].E )
10  prsls[id p ].sta ← NEGOFAILED
11 Function challengeParties(id p , P * M ) // called by T X chaP from the specified TEE
12 Function partyResponse(id p , Enc k E (in)) // called by T X resP from parties
13  require(BC.getHeight() < h neg + τ resP )
14 Function punishParties(id p , P M ) // called by T X pnsP from the specified TEE
15  require(msg.sender = prsls[id p ].E )
    // update coins for punishment
16  for P i ∈ P M do
17    coins[P i ] ← coins[P i ] − q
18  prsls[id p ].sta ← ABORTED
19 Function commit(id p , proof, c * s' , c * r , e k ) // called by T X cmt from the specified TEE
20  require(msg.sender = prsls[id p ].E )
21  require(verify(proof, H cs )) // match old states
22 Function complete(id p , [Enc k ie (k s' i )| 1..n ], [Enc k ie (k r i )| 1..n ]) // called by T X com from any registered TEE
23  require(msg.sender ∈ E)
24  H cs ← proof.H c s' // set new states
25  prsls[id p ].sta ← COMPLETED
26 Function punishTEE(id p ) // called by T X pnsT from anyone
27  require(prsls[id p ] ≠ ∅ and BC.getHeight() > h neg + τ com )
28  require(prsls[id p ].sta ∉ {NEGOFAILED, ABORTED, COMPLETED})
29  coins[prsls[id p ].E ] ← coins[prsls[id p ].E ] − q
30  prsls[id p ].sta ← ABORTED
Algorithm 2: DECLOAK enclave program (E )
// For simplicity, we assume each E has obtained the network config and cached the balances of parties' coins by synchronization. The config includes a security parameter κ, a checkpoint b cp of BC, and the network account (sk E , pk E , ad E ).
1 Procedure generateIDp(p)
    // check this is the specified TEE
2   if self ≠ BC.E[0] then abort
3   id p ← hash(p)
4   return (id p , p)
5 Procedure negotiate(id p , ack)
6   if status = NEGOTIATED then return (id p , p')
7   if status = ∅ or conform(ack, P) ≠ 1
8     or cacheCoins[self] − q < 0
9     or ∃P i ∈ P, cacheCoins[P i ] − q < 0 then abort
10  p', status ← (p.{H F , H P , q, h neg }, P), NEGOTIATED
11  return (id p , p')
12 Procedure failNegotiation(id p , T X chaT , PoP chaT )
13  if status = ∅ or veriPoP(b cp , PoP chaT , T X chaT ) ≠ 1 then abort
14  if PoP chaT .getComfHeight() > p.h neg then
15    TX ack ← all PoP chaT .TX ack i before p.h neg
16    ack ← ack ∪ TX ack .ack
17  if conform(ack, P) = 1 then abort
18  return T X f neg (id p )
19 Procedure execute(id p , in, PoP s )
20  if status ≠ NEGOTIATED then abort
21  P * M ← P
22  for x i , k x i in in.{x, k x }
23    P * M ← P * M \{P i }
24  if |P * M | > 0 then return (id p , P * M )
    // evaluates F (x) on states s
25  s', r ← F (PoP s .s, x)
26  b cp ← PoP s .getLastComfBlock()
27  status ← EXECUTED
28 Procedure commit(id p )
29  if status ≠ EXECUTED then abort
30  k s' , k r ← Gen(1 κ )
31  c s' i ← [Enc k s' i (s' i ), Enc k ie (k s' i ), P i ]
32  proof ← [PoP s .H cs , H c s' ]
33  c * s' i , c * r i ← [Enc k s' i (s' i ), 0, P i ], [Enc k r i (r i ), 0, P i ]
34  return T X cmt (id p , proof, c * s' , c * r , e k )
35 Procedure challengeParties(P * M )
36  if status ≠ NEGOTIATED then abort
37  if |P * M | > 0 then
38    return TX chaP (id p , P * M )
39 Procedure punishParties(TX chaP , TX resP , PoP resP )
40  if status ≠ NEGOTIATED or veriPoP(b cp , TX chaP , PoP resP ) ≠ 1 then abort
41  P M ← P * M
42  for P i ∈ P * M do
43    if x i , k x i ← T X resP i .{x i , k x i } then
44      P M ← P M \{P i }
45  if |P M | > 0 then
46    return T X pnsP (id p , P M )
47 Procedure complete(T X cmt , PoP cmt )
48  if status = NEGOTIATED or veriPoP(b cp , T X cmt , PoP cmt ) ≠ 1 then abort
49  status ← COMPLETED
50  return T X com (id p , [Enc k ie (k s' i )| i∈[n] ], [Enc k ie (k r i )| i∈[n] ])
We use the same PoP as [8], [13], [28].
Settlement condition of negotiation is flexible, e.g., the number of parties exceeds a specified threshold.
Definition 2 (Confidentiality). For any n-party MPT F P , any adversary A corrupting parties from P + M in which P M ⊊ P, any q ∈ D n dep , s ∈ D n pa , x ∈ D n pa and Q ∈ D n coin , the protocol REAL π,A (Q, F , P, x, k x , c s , P, q) is such that there is a negligible function ε ensuring that ∀x * 1 , s * 1 , s' * 1 , r * 1 , x * 2 , s * 2 , s' * 2 , r * 2 ∈ D pa and ∀P i ∈ P H :

Definition 3 (Data availability). For any n-party MPT F P , any adversary A corrupting parties from P + , any q ∈ D n dep , s ∈ D n pa , x ∈ D n pa and Q ∈ D n coin , the protocol REAL π,A (Q, F , P, x, k x , c s , P, q) is such that there is a negligible function ε satisfying that, if sta = COMPLETED, one of the following statements must be true.

Definition 4 (Financial fairness). For any n-party MPT F P , any adversary A corrupting parties from P + M ⊆ P + , any q ∈ D n dep , s ∈ D n pa , r ∈ D n pa and Q ∈ D n coin , the output of the protocol REAL π,A (Q, F , P, x, k x , c s , P, q) is such that one of the following statements must be true:

Definition 5 (Delivery fairness). For any n-party MPT F P , any adversary A corrupting parties from P + M in which E M ⊊ E, any q ∈ D n dep , s ∈ D n pa , r ∈ D n pa and Q ∈ D n coin , there is a negligible function ε such that, for the output of the protocol REAL π,A (Q, F , P, x, k x , c s , P, q), one of the following statements must be true: (i) s', r = ∅, ∅; (ii) s', r ≠ ∅, ∅, and the following two hold simultaneously:

Definition 6 (Delivery atomicity). For any n-party MPT F P , any adversary A corrupting parties from P + M in which E M ⊊ E, any q ∈ D n dep , s ∈ D n pa , r ∈ D n pa and Q ∈ D n coin , there is a negligible function ε such that, for the output of the protocol REAL π,A (Q, F , P, x, k x , c s , P, q), one of the following statements must be true: (i) sta ∈ {∅, NEGOFAILED, ABORTED}, and s', r = ∅, ∅; (ii) sta = COMPLETED, and s', r ≠ ∅, ∅.

C. Security Proof

In this section, we prove that Theorem 1 (formally stated above) holds in the DECLOAK protocol π DECLOAK .

Lemma 6. If there exists an honest party P i such that sta = NEGOFAILED, then statement (i) of the delivery (∆-)fairness holds.

Proof: As proved in Lemma 2, an honest party P i stays at sta = NEGOFAILED only when a T X f neg has been successfully confirmed on BC. Consequently, E * never proceeds to the execution phase. Therefore, parties in P obtain no outputs, i.e., s', r = ∅, ∅.

Lemma 7. If there exists an honest party P i such that sta = ABORTED, then statement (i) of the delivery (∆-)fairness holds.

Proof: One of the Es releases T X com only when it validates that the predecessor T X cmt has been confirmed on BC. When sta = ABORTED, according to Algorithm 1, the protocol terminates and there is no possibility for sta = COMMITTED, and hence no possibility of releasing T X com . Therefore, it holds that s', r = ∅, ∅.

Lemma 8. If there exists an honest party P i such that sta = COMPLETED, then statement (ii) of the delivery (∆-)fairness holds.

Proof: According to Algorithm 1, sta = COMPLETED only when T X com is accepted and confirmed by BC, which means that T X com has been released by at least one of the Es. In fact, once T X cmt has been confirmed on BC, any E ∈ E can validate the PoP cmt of T X cmt and read k s' , k r from T X cmt to construct and release a T X com . As we assume that BC is ideally accessible to any honest entity, say T X cmt is confirmed on BC at wall-clock time t com ; then the time at which every honest entity in P + learns that T X cmt has been confirmed is also t com , i.e., t i ← t com for all t i ∈ t + com .
Moreover, since every P i ∈ P H can undisturbedly obtain T X com from honest Es within the network latency ∆, we conclude that t s' = t r , i.e., conditions (a) and (b) of (ii) are satisfied, as long as at least one honest E exists.
REFERENCES

[1] S. Nakamoto, "Bitcoin: A peer-to-peer electronic cash system," Decentralized Business Review, p. 21260, 2008.
[2] G. Wood et al., "Ethereum: A secure decentralised generalised transaction ledger," Ethereum project yellow paper, 2014.
[3] A. Kosba, A. Miller, E. Shi, Z. Wen, and C. Papamanthou, "Hawk: The blockchain model of cryptography and privacy-preserving smart contracts," in 2016 IEEE Symposium on Security and Privacy (SP), 2016, pp. 839-858.
[4] R. Sinha, "Luciditee: A tee-blockchain system for policy-compliant multiparty computation with fairness," 2020.
[5] K. Govindarajan, D. Vinayagamurthy, P. Jayachandran, and C. Rebeiro, "Privacy-preserving decentralized exchange marketplaces," in 2022 IEEE International Conference on Blockchain and Cryptocurrency (ICBC), 2022, pp. 1-9.
[6] F. Massacci, C. N. Ngo, J. Nie, D. Venturi, and J. Williams, "Futuresmex: Secure, distributed futures market exchange," in 2018 IEEE Symposium on Security and Privacy (SP), 2018, pp. 335-353.
[7] Q. Ren, H. Liu, Y. Li, and H. Lei, "Demo: Cloak: A framework for development of confidential blockchain smart contracts," in 2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS), 2021, pp. 1102-1105.
[8] Q. Ren, Y. Wu, H. Liu, Y. Li, A. Victor, H. Lei, L. Wang, and B. Chen, "Cloak: Transitioning states on legacy blockchains using secure and publicly verifiable off-chain multi-party computation," in Proceedings of the 38th Annual Computer Security Applications Conference, 2022, pp. 117-131.
[9] C. Baum, I. Damgård, and C. Orlandi, "Publicly auditable secure multi-party computation," in Security and Cryptography for Networks, M. Abdalla and R. De Prisco, Eds. Cham: Springer International Publishing, 2014, pp. 175-196.
[10] D. Boneh, E. Boyle, H. Corrigan-Gibbs, N. Gilboa, and Y. Ishai, "Zero-knowledge proofs on secret-shared data via fully linear pcps," Cryptology ePrint Archive, Paper 2019/188, 2019. [Online]. Available: https://eprint.iacr.org/2019/188
[11] H. Cui, K. Zhang, Y. Chen, Z. Liu, and Y. Yu, "Mpc-in-multi-heads: A multi-prover zero-knowledge proof system," in European Symposium on Research in Computer Security. Springer, 2021, pp. 332-351.
[12] S. Steffen, B. Bichsel, R. Baumgartner, and M. Vechev, "Zeestar: Private smart contracts by homomorphic encryption and zero-knowledge proofs," in 2022 IEEE Symposium on Security and Privacy (SP), 2022, pp. 179-197.
[13] P. Das, L. Eckey, T. Frassetto, D. Gens, K. Hostáková, P. Jauernig, S. Faust, and A.-R. Sadeghi, "Fastkitten: Practical smart contracts on bitcoin," in 28th USENIX Security Symposium (USENIX Security 19), 2019.
[14] EthHub, "Data availability," https://ethereum.org/en/developers/docs/data-availability, accessed on 05/21/2023.
[15] V. Buterin and A. Dietrichs, "EIP-4488: Transaction calldata gas cost reduction with total calldata limit," Nov 2021. [Online]. Available: https://eips.ethereum.org/EIPS/eip-4488
[16] EthHub, "Zk-rollups," https://docs.ethhub.io/ethereum-roadmap/layer-2-scaling/zk-rollups/, accessed on 07/13/2022.
[17] EthHub, "Optimistic rollups," https://docs.ethhub.io/ethereum-roadmap/layer-2-scaling/optimistic_rollups/, accessed on 07/13/2022.
[18] V. Buterin, D. Feist, D. Loerakker, G. Kadianakis, M. Garnett, and A. Dietrichs, "EIP-4844: Shard blob transactions," Feb 2022. [Online]. Available: https://eips.ethereum.org/EIPS/eip-4844
[19] Ethereum, "Cancun network upgrade specification," https://github.com/ethereum/execution-specs/blob/master/network-upgrades/mainnet-upgrades/cancun.md#included-eips, accessed on 05/21/2023.
[20] K. Qin, L. Zhou, and A. Gervais, "Quantifying blockchain extractable value: How dark is the forest?" in 2022 IEEE Symposium on Security and Privacy (SP). IEEE, 2022, pp. 198-214.
[21] R. Cheng, F. Zhang, J. Kos, W. He, N. Hynes, N. Johnson, A. Juels, A. Miller, and D. Song, "Ekiden: A platform for confidentiality-preserving, trustworthy, and performant smart contracts," in 2019 IEEE European Symposium on Security and Privacy (EuroS&P), 2019, pp. 185-200.
[22] Y. Yan, C. Wei, X. Guo, X. Lu, X. Zheng, Q. Liu, C. Zhou, X. Song, B. Zhao, H. Zhang, and G. Jiang, "Confidentiality support over financial grade consortium blockchain," 2020, pp. 2227-2240.
[23] T. Frassetto, P. Jauernig, D. Koisser, D. Kretzler, B. Schlosser, S. Faust, and A.-R. Sadeghi, "Pose: Practical off-chain smart contract execution," in Proceedings of the Network and Distributed System Security Symposium, arXiv:2210.07110, 2022.
[24] A. R. Choudhuri, M. Green, A. Jain, G. Kaptchuk, and I. Miers, "Fairness in an unfair world: Fair multiparty computation from public bulletin boards," in Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, 2017, pp. 719-728.
[25] S. Bowe, A. Chiesa, M. Green, I. Miers, P. Mishra, and H. Wu, "ZEXE: Enabling decentralized private computation," in 2020 IEEE Symposium on Security and Privacy (SP), 2020.
[26] S. State and O. Labs, "Confidential Ethereum smart contracts," Tech. Rep., Dec. 2020.
[27] M. Russinovich, E. Ashton, C. Avanessians, M. Castro, A. Chamayou, S. Clebsch, et al., "CCF: A framework for building confidential verifiable replicated services," Microsoft Research and Microsoft Azure, Tech. Rep., Apr. 2019.
[28] I. Bentov, Y. Ji, F. Zhang, L. Breidenbach, P. Daian, and A. Juels, "Tesseract: Real-time cryptocurrency exchange using trusted hardware," in Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, 2019, pp. 1521-1538.
[29] R. Kumaresan and I. Bentov, "Amortizing secure computation with penalties," in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 2016, pp. 418-429.
[30] R. Kumaresan, V. Vaikuntanathan, and P. N. Vasudevan, "Improvements to secure computation with penalties," in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 2016, pp. 406-417.
[31] R. Kumaresan, T. Moran, and I. Bentov, "How to use bitcoin to play decentralized poker," in Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, 2015, pp. 195-206.
[32] F. Baldimtsi, A. Kiayias, T. Zacharias, and B. Zhang, "Crowd verifiable zero-knowledge and end-to-end verifiable multiparty computation," in Advances in Cryptology -- ASIACRYPT 2020, Part III. Berlin, Heidelberg: Springer-Verlag, 2020, pp. 717-748. [Online]. Available: https://doi.org/10.1007/978-3-030-64840-4_24
Experimenting with collaborative zk-SNARKs: Zero-Knowledge proofs for distributed secrets. A Ozdemir, D Boneh, 31st {USENIX} Security Symposium ({USENIX} Security 22). Boston, MAUSENIX AssociationA. Ozdemir and D. Boneh, "Experimenting with collaborative zk- SNARKs: Zero-Knowledge proofs for distributed secrets," in 31st {USENIX} Security Symposium ({USENIX} Security 22). Boston, MA: USENIX Association, Aug. 2022, pp. 4291-4308. [Online].
. Ethereum, 10Solc 0.8.Ethereum, "Solc 0.8.10," https://github.com/ethereum/solidity/releases/ tag/v0.8.10, July 2021. [Online]. Available: https://github.com/ethereu m/solidity/releases/tag/v0.8.10
Intel sgx explained. V Costan, S Devadas, IACR Cryptol. ePrint Arch. 201686V. Costan and S. Devadas, "Intel sgx explained." IACR Cryptol. ePrint Arch., vol. 2016, no. 86, pp. 1-118, 2016.
Ethereum virtual machine. E Foundation, E. Foundation, "Ethereum virtual machine," Dec 2020. [Online].
Strong and efficient cache side-channel protection using hardware transactional memory. D Gruss, J Lettner, F Schuster, O Ohrimenko, I Haller, M Costa, 26th {USENIX} Security Symposium ({USENIX} Security 17. D. Gruss, J. Lettner, F. Schuster, O. Ohrimenko, I. Haller, and M. Costa, "Strong and efficient cache side-channel protection using hardware trans- actional memory," in 26th {USENIX} Security Symposium ({USENIX} Security 17), 2017, pp. 217-233.
T-sgx: Eradicating controlled-channel attacks against enclave programs. M.-W Shih, S Lee, T Kim, M Peinado, NDSS. M.-W. Shih, S. Lee, T. Kim, and M. Peinado, "T-sgx: Eradicating controlled-channel attacks against enclave programs." in NDSS, 2017.
Mole: Mitigation of side-channel attacks against sgx via dynamic data location escape. F Lang, W Wang, L Meng, J Lin, Q Wang, L Lu, Proceedings of the 38th Annual Computer Security Applications Conference. the 38th Annual Computer Security Applications ConferenceF. Lang, W. Wang, L. Meng, J. Lin, Q. Wang, and L. Lu, "Mole: Mitigation of side-channel attacks against sgx via dynamic data location escape," Proceedings of the 38th Annual Computer Security Applications Conference, 2022.
Sancus: Low-cost trustworthy extensible networked devices with a zero-software trusted computing base. J Noorman, P Agten, W Daniels, R Strackx, A Van Herrewege, C Huygens, B Preneel, I Verbauwhede, F Piessens, 22nd {USENIX} Security Symposium ({USENIX} Security 13. J. Noorman, P. Agten, W. Daniels, R. Strackx, A. Van Herrewege, C. Huygens, B. Preneel, I. Verbauwhede, and F. Piessens, "Sancus: Low-cost trustworthy extensible networked devices with a zero-software trusted computing base," in 22nd {USENIX} Security Symposium ({USENIX} Security 13), 2013, pp. 479-498.
Sanctum: Minimal hardware extensions for strong software isolation. V Costan, I Lebedev, S Devadas, 25th {USENIX} Security Symposium ({USENIX} Security 16. V. Costan, I. Lebedev, and S. Devadas, "Sanctum: Minimal hardware extensions for strong software isolation," in 25th {USENIX} Security Symposium ({USENIX} Security 16), 2016, pp. 857-874.
The guard's dilemma: Efficient Code-Reuse attacks against intel SGX. A Biondo, M Conti, L Davi, T Frassetto, A.-R Sadeghi, 27th USENIX Security Symposium (USENIX Security 18). Baltimore, MDUSENIX AssociationA. Biondo, M. Conti, L. Davi, T. Frassetto, and A.-R. Sadeghi, "The guard's dilemma: Efficient Code-Reuse attacks against intel SGX," in 27th USENIX Security Symposium (USENIX Security 18). Baltimore, MD: USENIX Association, Aug. 2018, pp. 1213-1227. [Online].
Software grand exposure: SGX cache attacks are practical. F Brasser, U Müller, A Dmitrienko, K Kostiainen, S Capkun, A.-R Sadeghi, 11th USENIX Workshop on Offensive Technologies (WOOT 17). Vancouver, BC: USENIX Association. F. Brasser, U. Müller, A. Dmitrienko, K. Kostiainen, S. Capkun, and A.-R. Sadeghi, "Software grand exposure: SGX cache attacks are practical," in 11th USENIX Workshop on Offensive Technologies (WOOT 17). Vancouver, BC: USENIX Association, Aug. 2017. [Online]. Available: https://www.usenix.org/conference/woot17/works hop-program/presentation/brasser
Foreshadow: Extracting the keys to the intel SGX kingdom with transient Out-of-Order execution. J V Bulck, M Minkin, O Weisse, D Genkin, B Kasikci, F Piessens, M Silberstein, T F Wenisch, Y Yarom, R Strackx, 27th USENIX Security Symposium (USENIX Security 18). Baltimore, MDUSENIX AssociationJ. V. Bulck, M. Minkin, O. Weisse, D. Genkin, B. Kasikci, F. Piessens, M. Silberstein, T. F. Wenisch, Y. Yarom, and R. Strackx, "Foreshadow: Extracting the keys to the intel SGX kingdom with transient Out-of-Order execution," in 27th USENIX Security Symposium (USENIX Security 18). Baltimore, MD: USENIX Association, Aug. 2018, p. 991-1008. [Online]. Available: https: //www.usenix.org/conference/usenixsecurity18/presentation/bulck
Resources and response to side channel l1 terminal fault. Intel, Intel, "Resources and response to side channel l1 terminal fault," Dec 2021. [Online]. Available: https://www.intel.com/content/www/us/en/ar chitecture-and-technology/l1tf.html?wapkw=l1tf
collected, E* cannot move to the Execution phase, therefore no outputs are obtained or delivered.
If there exist an honest party P i staying at sta = NEGOFAILED, then the statement (i) of the delivery (∆−)fairness holds. Lemma 6Lemma 6. If there exist an honest party P i staying at sta = NEGOFAILED, then the statement (i) of the delivery (∆−)fairness holds.
| [
"https://github.com/ethereum/solidity/releases/",
"https://github.com/ethereu"
]
|
[
"CHARACTER DEGREE GRAPH OF SOLVABLE GROUPS WITH ODD DEGREE",
"CHARACTER DEGREE GRAPH OF SOLVABLE GROUPS WITH ODD DEGREE"
]
| [
"G Sivanesan ",
"C Selvaraj "
]
| []
| []
| Let G be a finite group, let Irr(G) be the set of all complex irreducible characters of G and let cd(G) be the set of all degrees of characters in Irr(G). Let ρ(G) be the set of primes that divide degrees in cd(G). The character degree graph ∆(G) of G is the simple undirected graph with vertex set ρ(G) and in which two distinct vertices p and q are adjacent if there exists a character degree r ∈ cd(G) such that r is divisible by the product pq. In this paper, we obtain a necessary condition for the character degree graph ∆(G) with all of its vertices are odd degree of a finite solvable group G .2020 Mathematics Subject Classification. 05C45, 20F16, 20C15. | null | [
"https://export.arxiv.org/pdf/2305.12324v1.pdf"
]
| 258,833,093 | 2305.12324 | 7c5f4e7f156124a9577b921d79fa7aa007d91f35 |
CHARACTER DEGREE GRAPH OF SOLVABLE GROUPS WITH ODD DEGREE
21 May 2023
G Sivanesan
C Selvaraj
CHARACTER DEGREE GRAPH OF SOLVABLE GROUPS WITH ODD DEGREE
21 May 2023
Let G be a finite group, let Irr(G) be the set of all complex irreducible characters of G and let cd(G) be the set of all degrees of characters in Irr(G). Let ρ(G) be the set of primes that divide degrees in cd(G). The character degree graph ∆(G) of G is the simple undirected graph with vertex set ρ(G) in which two distinct vertices p and q are adjacent if there exists a character degree r ∈ cd(G) such that r is divisible by the product pq. In this paper, we obtain a necessary condition for all vertices of the character degree graph ∆(G) of a finite solvable group G to have odd degree. 2020 Mathematics Subject Classification. 05C45, 20F16, 20C15.
Introduction

Throughout this paper, G is a finite solvable group with identity 1. We denote the set of complex irreducible characters of G by Irr(G), and cd(G) = {χ(1) | χ ∈ Irr(G)} is the set of all distinct degrees of irreducible characters in Irr(G). Let ρ(G) be the set of all primes that divide degrees in cd(G). There is a vast literature devoted to the study of ways in which one can associate a graph with a group, and this literature can be used for investigating the algebraic structure of groups via graph-theoretic properties of the associated graphs. One of these graphs is the character degree graph ∆(G) of G. In fact, ∆(G) is the undirected simple graph with vertex set ρ(G) in which p, q ∈ ρ(G) are joined by an edge if there exists a character degree χ(1) ∈ cd(G) which is divisible by pq. This graph was first defined in [12] and studied by many authors (see [9], [16] and [17]). When G is a solvable group, some interesting results on the character graph of G have been obtained. Manz [11] proved that ∆(G) has at most two connected components, and Manz et al. [13] proved that the diameter of ∆(G) is at most 3. Ebrahimi et al. [3] proved that the character graph ∆(G) of a solvable group G is Hamiltonian if and only if ∆(G) is a block with at least 3 vertices. In [18] we obtained a necessary condition for the character degree graph ∆(G) of a finite solvable group G to be Eulerian. Motivated by these studies on character degree graphs of solvable groups, we find a necessary condition for all vertices of the character degree graph ∆(G) of a solvable group G to have odd degree.
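As a quick illustration of the definition (our own example, not drawn from the papers cited above), the following sketch builds ∆(G) from a given list of character degrees; the degree set used below is the standard one for G = SL(2,3), assuming the sympy library is available.

```python
from itertools import combinations
from sympy import primefactors

def character_degree_graph(degrees):
    """Build Delta(G) from the degree set cd(G): vertices are the primes
    dividing some degree; p and q are adjacent iff some degree is
    divisible by the product p*q."""
    vertices = sorted({p for d in degrees for p in primefactors(d)})
    edges = sorted(tuple(sorted((p, q)))
                   for p, q in combinations(vertices, 2)
                   if any(d % (p * q) == 0 for d in degrees))
    return vertices, edges

# cd(SL(2,3)) = {1, 2, 3}: no degree is divisible by 6, so Delta(G)
# has vertex set {2, 3} and no edges, i.e. two connected components,
# the maximum allowed by Manz's theorem.
print(character_degree_graph([1, 2, 3]))  # ([2, 3], [])
```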
Preliminaries
In this section, we present some preliminary results which are used in the paper. All graphs are assumed to be simple, undirected and finite. Let Γ be a graph with vertex set V(Γ) and edge set E(Γ). If Γ is connected, then the distance d(u, v) between two distinct vertices u, v ∈ V(Γ) is the length of a shortest path between them, and the supremum of the distances over all pairs of distinct vertices is the diameter of the graph. A complete graph with n vertices, in which any two distinct vertices are adjacent, is denoted by K_n. A cycle with n vertices is denoted by C_n. The vertex connectivity k(Γ) of Γ is the minimum number of vertices whose removal from Γ results in a disconnected subgraph of Γ. A cut vertex v of a graph Γ is a vertex such that Γ − v has more connected components than Γ; a cut edge is defined analogously. A maximal connected subgraph without a cut vertex is called a block. By their maximality, different blocks of Γ overlap in at most one vertex, which is then a cut vertex; thus every edge of Γ lies in a unique block, and Γ is the union of its blocks. The degree of a vertex v in Γ is the number of edges incident with v, denoted d(v) or deg v. A graph Γ is called k-regular if the degree of each vertex is k. A graph Γ is said to be Eulerian if it contains a closed trail through every edge of Γ; equivalently, Γ is connected and every vertex has even degree. We will use the following well-known facts concerning character degree graphs; they are needed in the next section.

Remark 2.1. For results regarding ∆(G), we start with Pálfy's three-prime theorem on the character degree graph of solvable groups. Pálfy's theorem [15, Theorem, p. 62] states that, given a solvable group G and any three distinct vertices of ∆(G), there exists an edge joining two of them. By Pálfy's theorem, ∆(G) has at most two connected components.

Remark 2.2. Let G be a solvable group. Then diam(∆(G)) ≤ 3 [13, Theorem 3.2]. Assume that diam(∆(G)) = 3 and let r, s ∈ ρ(G) be two distinct vertices of ∆(G) with distance d(r, s) = 3. When ∆(G) has diameter exactly 3 and contains at least 5 vertices, Lewis [6, p. 5487] proved that the vertex set ρ(G) of ∆(G) can be partitioned into ρ_1, ρ_2, ρ_3 and ρ_4. Here ρ_4 is the set of all vertices of ∆(G) at distance 3 from the vertex r (so s ∈ ρ_4), ρ_3 is the set of all vertices at distance 2 from r, ρ_2 is the set of vertices adjacent to r and adjacent to some prime in ρ_3, and ρ_1 consists of r together with the vertices adjacent to r but not adjacent to any vertex in ρ_3. Consequently, no vertex in ρ_1 is adjacent to any vertex in ρ_3 ∪ ρ_4, no vertex in ρ_4 is adjacent to any vertex in ρ_1 ∪ ρ_2, every vertex in ρ_2 is adjacent to some vertex in ρ_3 and vice versa, and both ρ_1 ∪ ρ_2 and ρ_3 ∪ ρ_4 induce complete subgraphs of ∆(G).
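A hedged sketch of how Pálfy's condition can be checked mechanically (our own illustration, not part of the paper): among any three vertices at least two must be adjacent, so the complement of ∆(G) must be triangle-free.

```python
from itertools import combinations

def satisfies_palfy(vertices, edges):
    """Palfy's condition: among any three distinct vertices of Delta(G),
    at least two are adjacent (equivalently, the complement is
    triangle-free)."""
    edge_set = {frozenset(e) for e in edges}
    return all(any(frozenset(pair) in edge_set
                   for pair in combinations(triple, 2))
               for triple in combinations(vertices, 3))

# The star K_{1,3} fails: its three leaves are pairwise non-adjacent,
# so it cannot occur as Delta(G) for a solvable group G.
print(satisfies_palfy([2, 3, 5, 7], [(2, 3), (2, 5), (2, 7)]))  # False
# The 4-cycle passes this necessary condition.
print(satisfies_palfy([2, 3, 5, 7],
                      [(2, 3), (3, 5), (5, 7), (7, 2)]))        # True
```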
Remark 2.3. Huppert [5, p. 25] listed all possible graphs ∆(G) for solvable groups G with at most 4 vertices. In fact, every graph with 3 or fewer vertices that satisfies Pálfy's condition occurs as ∆(G) for some solvable group G.
Theorem 2.4. [3, Lemma 2.7] Let G be a group with |ρ(G)| ≥ 3. If ∆(G) is not a block and the diameter of ∆(G) is at most 2, then each block of ∆(G) is a complete graph.
Theorem 2.5. [19, Theorem 5] The graph with four vertices in Figure 1 is not the character degree graph of a solvable group.

Figure 1. Graph with four vertices.

Theorem 2.6. [10, Theorem 1.1] Let G be a solvable group. Then ∆(G) has at most one cut vertex.

Theorem 2.7. [14, Theorem A] If ∆(G) is a non-complete regular character degree graph of a finite solvable group G with n vertices, then ∆(G) is (n − 2)-regular.

Theorem 2.8. [10, Lemma 2.1] Let G be a solvable group and assume that ∆(G) has diameter 3. Then ∆(G) is 1-connected if and only if |ρ_2| = 1 in the diameter-3 partition of ρ(G). In this case, if p is the unique prime in ρ_2, then p is also the unique cut vertex of ∆(G). In particular, ∆(G) has at most one cut vertex.

Theorem 2.9. [8, Corollary B] If G is a solvable group with n = |ρ(G)| and ∆(G) contains two non-adjacent vertices of degree less than n − 2, then the Fitting height of G is at least 3.

Theorem 2.10. [4, Lemma 4.1] Let G be a solvable group where the diameter of ∆(G) is 3, with Lewis partition ρ(G) = ρ_1 ∪ ρ_2 ∪ ρ_3 ∪ ρ_4. Then ∆(G) is a block if and only if |ρ_2|, |ρ_3| ≥ 2.

Definition 2.11. [7] Using the direct product, one can build bigger groups from smaller ones, and the same construction yields higher-order character degree graphs. Given groups A and B, we have ρ(A × B) = ρ(A) ∪ ρ(B). Define an edge between vertices p and q in ρ(A × B) if any of the following is satisfied:
(i) p, q ∈ ρ(A) and there is an edge between p and q in ∆(A); (ii) p, q ∈ ρ(B) and there is an edge between p and q in ∆(B); (iii) p ∈ ρ(A) and q ∈ ρ(B); (iv) p ∈ ρ(B) and q ∈ ρ(A). In this way we obtain a higher-order character degree graph, called the direct product (a small computational sketch is given below).
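A minimal sketch of Definition 2.11 (our own illustration): the direct product is the join of ∆(A) and ∆(B), so every vertex of ∆(A) becomes adjacent to every vertex of ∆(B).

```python
def direct_product(graph_a, graph_b):
    """Join construction of Definition 2.11: keep the edges of Delta(A)
    and Delta(B), and connect every vertex of Delta(A) with every
    vertex of Delta(B). Assumes rho(A) and rho(B) are disjoint."""
    (va, ea), (vb, eb) = graph_a, graph_b
    vertices = sorted(set(va) | set(vb))
    edges = sorted({tuple(sorted(e)) for e in ea}
                   | {tuple(sorted(e)) for e in eb}
                   | {tuple(sorted((p, q))) for p in va for q in vb})
    return vertices, edges

# Joining Delta(A) = K_2 on {2, 3} with Delta(B) = K_2 on {5, 7}
# gives the complete graph K_4 on {2, 3, 5, 7}.
print(direct_product(([2, 3], [(2, 3)]), ([5, 7], [(5, 7)])))
```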
Character Degree Graphs with odd degree
Theorem 3.1. Let G be a solvable group such that ∆(G) is a complete graph on n vertices, where n ≥ 2 is even. Then every vertex of ∆(G) has odd degree.

Proof. Every complete graph is a character degree graph [2, Lemma 2.1]. By assumption n is even, so every vertex has degree n − 1, which is odd.
Theorem 3.2. If ∆(G) is a non-complete regular character degree graph of a finite solvable group G with n vertices, then not all vertices of ∆(G) have odd degree.

Proof. By [14, Theorem A], ∆(G) is (n − 2)-regular.
Case 1: n is even. Then n − 2 is even, so every vertex has even degree and ∆(G) is not a graph with all vertices of odd degree.
Case 2: n is odd. In any graph the number of vertices of odd degree is even; here all n vertices would have the odd degree n − 2, which is impossible for odd n.
Theorem 3.3. Let G be a solvable group such that every vertex of ∆(G) has odd degree. Then ∆(G) is a block.

Proof. Suppose, to the contrary, that ∆(G) is not a block.
Case 1: ∆(G) has diameter 3. By Theorem 2.6, ∆(G) has exactly one cut vertex; say ∆(G) has two blocks B_1 and B_2 meeting in this cut vertex. The cut vertex belongs to ρ_2, and by Theorem 2.8, |ρ_2| = 1. Since ρ_1 ∪ ρ_2 and ρ_3 ∪ ρ_4 induce complete subgraphs and every vertex of ∆(G) has odd degree, each of these complete subgraphs has an even number of vertices. Since |ρ_2| = 1 and, by Remark 2.2, every vertex in ρ_3 is adjacent to some vertex in ρ_2, each vertex in ρ_3 is adjacent to the unique vertex of ρ_2 as well as to the |ρ_3 ∪ ρ_4| − 1 remaining vertices of ρ_3 ∪ ρ_4, so each vertex in ρ_3 has even degree. Hence not every vertex of ∆(G) has odd degree, a contradiction.
Case 2: ∆(G) has diameter at most 2. Again ∆(G) is not a block, so it has a cut vertex, and by Theorem 2.4 each block of ∆(G) is a complete graph; by Remark 2.1 the cut vertex lies in exactly two blocks B_1 and B_2, since one vertex from each of three blocks would give three pairwise non-adjacent vertices. Every non-cut vertex of a block B has degree |B| − 1, which is odd only if |B| is even; but then the cut vertex has degree (|B_1| − 1) + (|B_2| − 1), which is even. Hence ∆(G) is not a graph with all vertices of odd degree, a contradiction.
We now characterize when all vertices of ∆(G) have odd degree in the case where the diameter of ∆(G) is 3.

Theorem 3.4. Let G be a solvable group with diameter of ∆(G) equal to 3, and let the Lewis partition of G be ρ(G) = ρ_1 ∪ ρ_2 ∪ ρ_3 ∪ ρ_4. Then ∆(G) is a graph with all vertices of odd degree if and only if the following conditions hold:
(i) ∆(G) is a block;
(ii) both |ρ_1 ∪ ρ_2| and |ρ_3 ∪ ρ_4| are even;
(iii) the subgraph of ∆(G) induced by ρ_2 ∪ ρ_3 is Eulerian.
Proof. Assume that every vertex of ∆(G) has odd degree.

(i) If ∆(G) is not a block, then by Theorem 3.3 not every vertex of ∆(G) has odd degree, a contradiction. Hence ∆(G) is a block.

(ii) We claim that both |ρ_1 ∪ ρ_2| and |ρ_3 ∪ ρ_4| are even. Suppose first that |ρ_1 ∪ ρ_2| is odd. As noted in Remark 2.2, ρ_1 ∪ ρ_2 induces a complete subgraph and no prime in ρ_1 is adjacent to any prime in ρ_3 ∪ ρ_4. Hence every vertex in ρ_1 has degree |ρ_1 ∪ ρ_2| − 1, which is even, a contradiction. Therefore |ρ_1 ∪ ρ_2| is even. Similarly, if |ρ_3 ∪ ρ_4| were odd, then, since ρ_3 ∪ ρ_4 induces a complete subgraph and no prime in ρ_4 is adjacent to any prime in ρ_1 ∪ ρ_2, the vertices in ρ_4 would have even degree, a contradiction. Therefore |ρ_3 ∪ ρ_4| is even.

(iii) Suppose the subgraph induced by ρ_2 ∪ ρ_3 is not Eulerian. Recall from Remark 2.2 that no prime in ρ_1 is adjacent to any prime in ρ_3 ∪ ρ_4, no prime in ρ_4 is adjacent to any prime in ρ_1 ∪ ρ_2, and ρ_1 ∪ ρ_2 and ρ_3 ∪ ρ_4 induce complete subgraphs, each with an even number of vertices by (ii). The degree of a vertex of ρ_2 in ∆(G) is the sum of its degree in the subgraph induced by ρ_2 ∪ ρ_3 and its degree in the complete subgraph induced by ρ_1 ∪ ρ_2, and similarly the degree of a vertex of ρ_3 splits over the subgraphs induced by ρ_2 ∪ ρ_3 and ρ_3 ∪ ρ_4. Since the subgraph induced by ρ_2 ∪ ρ_3 is not Eulerian, some vertex of ρ_2 or ρ_3 has odd degree in that subgraph; adding the odd degree |ρ_1 ∪ ρ_2| − 1, respectively |ρ_3 ∪ ρ_4| − 1, coming from the complete part, that vertex has even degree in ∆(G), contradicting our assumption. Therefore the subgraph induced by ρ_2 ∪ ρ_3 is Eulerian.

Conversely, assume that conditions (i)-(iii) hold. By the Lewis partition, the subgraphs induced by ρ_1 ∪ ρ_2 and ρ_3 ∪ ρ_4 are complete, no prime in ρ_1 is adjacent to any prime in ρ_3 ∪ ρ_4, and no prime in ρ_4 is adjacent to any prime in ρ_1 ∪ ρ_2. Since |ρ_1 ∪ ρ_2| and |ρ_3 ∪ ρ_4| are even, the vertices in ρ_1 and ρ_4 have odd degree. Since ∆(G) is a block, Theorem 2.10 gives |ρ_2|, |ρ_3| ≥ 2. Since the subgraph induced by ρ_2 ∪ ρ_3 is Eulerian, every vertex of ρ_2 and ρ_3 has even degree there, and adding the odd degree from the complete subgraph on its side, every vertex of ρ_2 and ρ_3 has odd degree in ∆(G). As the Lewis partition splits ρ(G) into the disjoint non-empty subsets ρ_1, ρ_2, ρ_3, ρ_4, all vertices of ρ(G) have odd degree; hence ∆(G) is a graph with all vertices of odd degree (a worked parity computation is sketched below).
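The parity bookkeeping used in both directions can be summarized in one display (our own summary of the argument above, not an extra hypothesis): for v ∈ ρ_2,

```latex
% Degree of a vertex v in rho_2, split over the two induced subgraphs.
\deg_{\Delta(G)}(v)
  = \underbrace{\bigl(|\rho_1\cup\rho_2|-1\bigr)}_{\text{complete part: odd iff } |\rho_1\cup\rho_2|\text{ is even}}
  + \underbrace{\deg_{\langle\rho_2\cup\rho_3\rangle}(v)}_{\text{even when the induced subgraph is Eulerian}}
```

and symmetrically for v ∈ ρ_3 with ρ_3 ∪ ρ_4 in place of ρ_1 ∪ ρ_2, while vertices of ρ_1 and ρ_4 only see their complete part. Hence all degrees are odd exactly when (ii) and (iii) hold.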
Number of Character Degree Graphs with odd degree
In this section we count character degree graphs with all vertices of odd degree in terms of the number of vertices. More precisely, we count below the non-regular character degree graphs with n vertices (n ≥ 6 even) in which every vertex has odd degree, assuming that ∆(G) is a block of diameter two.

Theorem 4.1. For every even n ≥ 6 there is at least one non-regular character degree graph ∆(G) of a solvable group G with n vertices in which every vertex has odd degree; here ∆(G) is a block of diameter 2.
Proof. In this proof all character degree graphs are constructed via direct products, using Definition 2.11.
Case i: |ρ(G)| = 6. Bissler et al. [1, p. 503] classified all character degree graphs with six vertices except for the nine graphs listed in [1, p. 509], and none of those nine is a graph with all vertices of odd degree. Among the remaining twelve graphs there exists exactly one non-regular graph with all vertices of odd degree, and it is given in Figure 2. Thus the number of non-regular character degree graphs with six vertices and all vertices of odd degree is 1.
Case ii: |ρ(G)| = 8. Let ∆(G) be the six-vertex graph of Figure 2 and let ∆(H) = K_2 with ρ(H) disjoint from ρ(G). By the direct product, ∆(G × H) has 8 vertices, of which 4 have degree 5 and the remaining 4 have degree 7.
Figure 2. Graph with six vertices.

Case iii: |ρ(G)| = 10. Let ∆(G) be the eight-vertex graph of Case ii and let ∆(H) = K_2 with ρ(H) disjoint from ρ(G). By the direct product, ∆(G × H) has 10 vertices, of which 4 have degree 7 and the remaining 6 have degree 9.

For the induction step, let ∆(G) be the character degree graph with n − 2 vertices in which 4 vertices have degree n − 5 and the remaining n − 6 vertices have degree n − 3, and let ∆(H) = K_2 with ρ(H) disjoint from ρ(G). By the direct product, ∆(G × H) has n vertices in which 4 vertices have degree n − 3 and the remaining n − 4 vertices have degree n − 1. This completes the induction (a sketch verifying this bookkeeping follows).
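As a sanity check of the induction step (our own sketch, reusing the direct_product helper from the sketch accompanying Definition 2.11; the graph g below is a stand-in with the same degree profile as Figure 2, since the actual Figure 2 graph is not reproduced here): joining with K_2 raises every old degree by 2 and gives the two new vertices degree n − 1.

```python
from collections import Counter

def degree_profile(graph):
    """Sorted list of vertex degrees of a graph given as (vertices, edges)."""
    vertices, edges = graph
    deg = Counter()
    for p, q in edges:
        deg[p] += 1
        deg[q] += 1
    return sorted(deg[v] for v in vertices)

# A 6-vertex graph with profile [3, 3, 3, 3, 5, 5]: vertices 5 and 6
# are adjacent to everything, {1,2} and {3,4} form a perfect matching.
g = ([1, 2, 3, 4, 5, 6],
     [(1, 2), (3, 4),
      (5, 1), (5, 2), (5, 3), (5, 4),
      (6, 1), (6, 2), (6, 3), (6, 4), (5, 6)])
print(degree_profile(g))                       # [3, 3, 3, 3, 5, 5]
g8 = direct_product(g, ([7, 11], [(7, 11)]))   # join with K_2
print(degree_profile(g8))                      # [5, 5, 5, 5, 7, 7, 7, 7]
```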
Remark 4.2. By Theorem 2.9, the solvable group in Theorem 4.1 has Fitting height at least 3.

Acknowledgment. The second author is partially supported by DST-FIST (Letter No: SR/FST/MSI-115/2016, dated 10-10-2017).
[1] M. Bissler, J. Laubacher and M.L. Lewis, Classifying character degree graphs with six vertices, Beitr. Algebra Geom. 60 (2019), 499-511.
[2] M. Bissler and J. Laubacher, Classifying families of character degree graphs of solvable groups, Int. J. Group Theory 8 (4) (2019), 37-46.
[3] M. Ebrahimi, A. Iranmanesh and M.A. Hosseinzadeh, Hamiltonian character graphs, J. Algebra 428 (2015), 54-66.
[4] M. Ebrahimi and A. Iranmanesh, Coloring of character graphs, Comm. Algebra 45 (1) (2017), 227-233.
[5] B. Huppert, Research in representation theory at Mainz (1984-1990), Progr. Math. 95 (1991), 17-36.
[6] M.L. Lewis, Solvable groups with character degree graphs having 5 vertices and diameter 3, Comm. Algebra 30 (2002), 5485-5503.
[7] M.L. Lewis, Classifying character degree graphs with 5 vertices, in: Finite Groups 2003, Walter de Gruyter GmbH & Co. KG, Berlin, 2004, 247-265.
[8] M.L. Lewis, Character degree graphs of solvable groups of Fitting height 2, Canad. Math. Bull. 49 (1) (2006), 127-133.
[9] M.L. Lewis, An overview of graphs associated with character degrees and conjugacy class sizes in finite groups, Rocky Mountain J. Math. 38 (1) (2008), 175-211.
[10] M.L. Lewis and Q. Meng, Solvable groups whose prime divisor character degree graphs are 1-connected, Monatsh. Math. 190 (2019), 541-548.
[11] O. Manz, Degree problems II: π-separable character degrees, Comm. Algebra 13 (1985), 2421-2431.
[12] O. Manz, R. Staszewski and W. Willems, On the number of components of a graph related to character degrees, Proc. Amer. Math. Soc. 103 (1) (1988), 31-37.
[13] O. Manz, W. Willems and T.R. Wolf, The diameter of the character degree graph, J. Reine Angew. Math. 402 (1989), 181-198.
[14] C.P. Morresi Zuccari, Regular character degree graphs, J. Algebra 411 (2014), 215-224.
[15] P.P. Pálfy, On the character degree graph of solvable groups I: three primes, Period. Math. Hungar. 36 (1) (1998), 61-65.
[16] P.P. Pálfy, On the character degree graph of solvable groups II: Disconnected graphs, Studia Sci. Math. Hungar. 38 (2001), 339-355.
[17] C.B. Sass, Character degree graphs of solvable groups with diameter three, J. Group Theory 19 (6) (2016), 1097-1127.
[18] G. Sivanesan, C. Selvaraj and T. Tamizh Chelvam, Eulerian character degree graphs of solvable groups (2023), arXiv:2304.13472.
[19] J. Zhang, On a problem by Huppert, Beijing Daxue Xuebao Ziran Kexue Ban 34 (2-3) (1998), 143-150.
| []
|
[
"On the weak Harnack inequality for unbounded non-negative super-solutions of degenerate double-phase parabolic equations",
"On the weak Harnack inequality for unbounded non-negative super-solutions of degenerate double-phase parabolic equations"
]
| [
"Mariia O Savchenko ",
"Igor I Skrypnik ",
"Yevgeniia A Yevgenieva "
]
| []
| []
| In the case q > p(n+2)/n, we give a proof of the weak Harnack inequality for non-negative super-solutions of degenerate double-phase parabolic equations under the additional assumption that u ∈ L^s_loc(Ω_T) with some s > p(n+2)/n. Keywords: weak Harnack inequality, unbounded super-solutions, double-phase parabolic equations. MSC (2010): 35B40, 35B45, 35B65. | null | [
"https://export.arxiv.org/pdf/2305.13053v1.pdf"
]
| 258,833,395 | 2305.13053 | ccf02d8c3efcb1dde2336774d9d972b1c083ba3e |
On the weak Harnack inequality for unbounded non-negative super-solutions of degenerate double-phase parabolic equations
22 May 2023 May 23, 2023
Mariia O Savchenko
Igor I Skrypnik
Yevgeniia A Yevgenieva
On the weak Harnack inequality for unbounded non-negative super-solutions of degenerate double-phase parabolic equations
22 May 2023 May 23, 2023
In the case q > p(n+2)/n, we give a proof of the weak Harnack inequality for non-negative super-solutions of degenerate double-phase parabolic equations under the additional assumption that u ∈ L^s_loc(Ω_T) with some s > p(n+2)/n. Keywords: weak Harnack inequality, unbounded super-solutions, double-phase parabolic equations. MSC (2010): 35B40, 35B45, 35B65.
Introduction and main results
In this paper we are concerned with double-phase parabolic equations. Let Ω be a domain in R^n and T > 0, and set Ω_T := Ω × (0, T); we study unbounded super-solutions to the equation

u_t − div A(x, t, ∇u) = 0, (x, t) ∈ Ω_T. (1.1)
Throughout the paper we suppose that the functions A : Ω T × R n → R n are such that A(·, ·, ξ)
are Lebesgue measurable for all ξ ∈ R n , and A(x, t, ·) are continuous for almost all (x, t) ∈ Ω T .
We also assume that the following structure conditions are satisfied
A(x, t, ξ)·ξ ≥ K_1 (|ξ|^p + a(x, t)|ξ|^q) =: K_1 φ(x, t, |ξ|), 2 < p < q,
|A(x, t, ξ)| ≤ K_2 (|ξ|^{p−1} + a(x, t)|ξ|^{q−1}) = K_2 φ(x, t, |ξ|)/|ξ|, (1.2)

where K_1, K_2 are positive constants and the function a(x, t) ≥ 0, a : Ω_T → R_+, satisfies the following condition:

(A) for any cylinder Q_{r,r²}(x_0, t_0) := B_r(x_0) × (t_0, t_0 + r²) ⊂ Q_{8r,(16r)²}(x_0, t_0) ⊂ Ω_T there holds

osc_{Q_{r,r²}(x_0,t_0)} a(x, t) ≤ A r^α,

with some A > 0 and some α ∈ (0, 1].
It is known that for integrands with (p, q)-growth it is crucial that the gap between p and q is not too large: in the case q > np/(n − p), p < n, there exist unbounded minimizers (we refer the reader to [1-6, 8-12, 18-21, 25, 27-31, 33, 34] for results, references, historical notes and an extensive survey of regularity issues). It was Ok [24] who proved the boundedness of minimizers of elliptic functionals of double-phase type in the case q > np/(n − p) under an additional assumption. More precisely, under the condition osc_{B_r(x_0)} a(x) ≤ A r^α, the minimizer is bounded by a constant depending on ||u||_{L^s} with s > np/(n − p), provided that α > q − p and s ≥ (q − p)n/(α + p − q). This condition, for example, makes it possible to extend the regularity results [4-6, 9, 10] to unbounded minimizers, with constants depending on ||u||_{L^s}. The weak
Harnack inequality for unbounded super-solutions of the corresponding elliptic equations with generalized Orlicz growth under a similar condition was proved in [7]. This result was generalized in [26] to unbounded functions from the corresponding De Giorgi classes DG^−_φ(Ω). The parabolic theory for quasi-linear parabolic equations differs substantially from the elliptic case; this becomes clear by looking at the Barenblatt solution of the parabolic p-Laplace equation. DiBenedetto developed an innovative intrinsic scaling method (see [13] and the references to the original papers therein) and proved the Hölder continuity of weak solutions to (1.1) for p = q ≠ 2. The intrinsic Harnack inequality for the parabolic p-Laplace evolution equation was proved in the breakthrough papers [14, 15]. The weak Harnack inequality for the parabolic p-Laplacian was obtained by Kuusi [23] by using the Krylov-Safonov covering argument; a similar result was proved in [16] by using the local clustering lemma. As for parabolic equations with nonstandard growth, this question remains open.
The local boundedness of solutions of parabolic equations is known under the condition q ≤ p(n + 2)/n (see, for example, [32]); the upper bound on q stems from the parabolic embedding. The intrinsic Harnack inequality for bounded solutions to the corresponding singular parabolic equations with (p, q)-growth was proved in [29], and the weak Harnack inequality for bounded super-solutions of the p(x)-Laplace evolution equation was obtained in [33]. In this paper, using De Giorgi's approach, we prove the weak Harnack inequality for unbounded non-negative super-solutions to equation (1.1) in the case q > p(n + 2)/n under a condition similar to that of [24]. We focus only on the case p > 2, leaving the case p < 2 < q for further research.
To formulate our results, let us remind the reader of the definition of a weak super-solution to equation (1.1). We say that a function u is a weak super-solution to Eq. (1.1) if u ∈ V_{2,q}(Ω_T) := C_loc(0, T; L²_loc(Ω)) ∩ L^q_loc(0, T; W^{1,q}_loc(Ω)) and for any compact set E ⊂ Ω and every subinterval [t_1, t_2] ⊂ (0, T] there holds

∫_E u ζ dx |_{t_1}^{t_2} + ∫_{t_1}^{t_2} ∫_E { −u ζ_τ + A(x, τ, ∇u)·∇ζ } dx dτ ≥ 0 (1.3)

for any testing function ζ ∈ W^{1,2}(0, T; L²(E)) ∩ L^q(0, T; W^{1,q}_0(E)), ζ ≥ 0.

Technically, it is convenient to have a formulation of a weak super-solution that involves u_t. Let ρ(x) ∈ C_0^∞(R^n), ρ(x) ≥ 0 in R^n, ρ(x) ≡ 0 for |x| > 1 and ∫_{R^n} ρ(x) dx = 1, and set

ρ_h(x) := h^{−n} ρ(x/h), u_h(x, t) := h^{−1} ∫_t^{t+h} ∫_{R^n} u(y, τ) ρ_h(x − y) dy dτ.

Fix t ∈ (0, T) and let h > 0 be so small that 0 < t < t + h < T. In (1.3) take t_1 = t, t_2 = t + h and replace ζ by ∫_{R^n} ζ(y, t) ρ_h(x − y) dy. Dividing by h, since the testing function does not depend on τ, we obtain

∫_{E×{t}} { (∂u_h/∂t) ζ + [A(x, t, ∇u)]_h · ∇ζ } dx ≥ 0, (1.4)
for all t ∈ (0, T − h) and all ζ ∈ W^{1,q}_0(E), ζ ≥ 0. Our main result reads as follows.

Theorem 1.1. Let u be a weak non-negative super-solution to equation (1.1), and let conditions (1.2) and (A) be fulfilled. Assume additionally that u ∈ L^s(Ω_T) and

s ≥ p − 2 + (q − p)(n + p)/(α + p − q). (1.5)

Then there exist positive constants C_1, C_2, C_3 > 0 depending only on n, p, q, K_1, K_2, A and d := (∬_{Ω_T} u^s dx dt)^{1/s} such that for a.a. (x_0, t_0) ∈ Ω_T, either

I := ⨍_{B_ρ(x_0)} u(x, t_0) dx ≤ C_1 ( ρ + ρ ψ^{−1}_{Q_{12ρ,(12ρ)²}(x_0,t_0)}( ρ²/(T − t_0) ) ), (1.6)

or

I ≤ C_1 inf_{B_{4ρ}(x_0)} u(·, t), (1.7)

for any time levels

t_0 + C_2 θ ≤ t ≤ t_0 + C_3 θ, θ := ρ²/ψ_{Q_{12ρ,(12ρ)²}(x_0,t_0)}(I/ρ), (1.8)

provided that Q_{16ρ,(16ρ)²}(x_0, t_0) ⊂ Ω_T. Here ⨍_{B_ρ(x_0)} u(x, t_0) dx := |B_ρ(x_0)|^{−1} ∫_{B_ρ(x_0)} u(x, t_0) dx,

ψ_Q(v) := φ^+_Q(v)/v² = v^{p−2} + a^+_Q v^{q−2}, v > 0, a^+_Q := max_Q a(x, t),

and ψ^{−1}_Q(·) is the inverse function of ψ_Q(·).

Remark 1.1. If inequality (1.6) is violated, i.e.

I ≥ C_1 ( ρ + ρ ψ^{−1}_{Q_{12ρ,(12ρ)²}(x_0,t_0)}( ρ²/(T − t_0) ) ), (1.9)

then the inclusion Q_{16ρ,C_3θ}(x_0, t_0) ⊂ Q_{16ρ,(16ρ)²}(x_0, t_0) holds, provided that C_1 is large enough.
We need this inclusion only in order to use condition (A) in the cylinder Q_{12ρ,(12ρ)²}(x_0, t_0). In the case when a(x, t) does not depend on t, the first inequality in (1.6) is not required; in this case it is enough to have the inclusion Q_{16ρ,C_3θ}(x_0, t_0) ⊂ Ω_T, which holds by the second inequality in (1.9).
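To make the intrinsic time scale θ = ρ²/ψ_Q(I/ρ) in (1.8) concrete, here is a small numerical sketch (our own illustration, with made-up parameter values): ψ_Q(v) = v^{p−2} + a⁺_Q v^{q−2} is strictly increasing for v > 0, so the inverse appearing in (1.6) can be computed by bisection.

```python
def psi(v, p, q, a_plus):
    """psi_Q(v) = v**(p-2) + a_plus * v**(q-2), increasing for v > 0."""
    return v ** (p - 2) + a_plus * v ** (q - 2)

def psi_inverse(y, p, q, a_plus, lo=1e-12, hi=1e12, iters=200):
    """Invert psi_Q on [lo, hi] by bisection (monotonicity makes this safe)."""
    for _ in range(iters):
        mid = (lo * hi) ** 0.5  # geometric bisection suits power laws
        if psi(mid, p, q, a_plus) < y:
            lo = mid
        else:
            hi = mid
    return (lo * hi) ** 0.5

# Made-up data: p = 3, q = 4, a_plus = 0.5, rho = 0.1, I = 2.0.
p, q, a_plus, rho, I = 3.0, 4.0, 0.5, 0.1, 2.0
theta = rho ** 2 / psi(I / rho, p, q, a_plus)  # waiting time in (1.8)
print(theta, psi_inverse(psi(I / rho, p, q, a_plus), p, q, a_plus))
```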
The main difficulty arising in the proof of our main result is related to the so-called theorem on the expansion of positivity. Roughly speaking, having information on the measure of the "positivity set" of u over the ball B_r(x̄),

|{B_r(x̄) : u(·, t) ≥ λ}| ≥ β |B_r(x̄)|,

with some r, λ > 0, β ∈ (0, 1) and some time level t, we need to translate it into an expansion of the set of positivity to the ball B_{2r}(x̄) at some time level τ > t. We divide the proof of this fact into two cases: the so-called p-phase case and the so-called (p, q)-phase case (see (3.7) and (3.8) below). The most difficult case seems to be when the p-phase condition holds in the large cylinder while, simultaneously, the (p, q)-phase condition holds in a small cylinder generated by the local clustering lemma (see Lemma 2.1 below). In the proof we do not use the classical covering argument of Krylov and Safonov [22] and DiBenedetto and Trudinger [17], as was done in [23]; instead we use the local clustering lemma due to DiBenedetto, Gianazza and Vespri [16]. Moreover, instead of sup_{Q_{2r,2η}(x̄,t̄)} u we are forced to use averages of u over the cylinder Q_{2r,2η}(x̄,t̄), and, in addition, we need to estimate I by ∬_{Ω_T} u^s dx dt.
The rest of the paper contains the proof of Theorem 1.1. In Section 2 we collect some auxiliary propositions and required integral estimates of super-solutions. Expansion of positivity is proved in Section 3. Finally, in Section 4 we prove the weak Harnack inequality, Theorem 1.1.
2 Auxiliary material and integral estimates
Local Clustering Lemma
The following lemma will be used in the sequel; it is the local clustering lemma, see [16].

Lemma 2.1. Let K_r(y) be a cube in R^n of edge r centered at y and let u ∈ W^{1,1}(K_r(y)) satisfy

||(u − k)_−||_{W^{1,1}(K_r(y))} ≤ K k r^{n−1} and |{K_r(y) : u ≥ k}| ≥ β |K_r(y)|, (2.1)

with some β ∈ (0, 1), k ∈ R^1 and K > 0. Then for any ξ ∈ (0, 1) and any ν ∈ (0, 1) there exist x̄ ∈ K_r(y) and δ = δ(n) ∈ (0, 1) such that

|{K_r̄(x̄) : u ≥ ξ k}| ≥ (1 − ν) |K_r̄(x̄)|, r̄ := δ β² (1 − ξ) ν K^{−1} r. (2.2)
De Giorgi-Poincare Lemma
The following lemma is the well-known De Giorgi-Poincare lemma (see for example [16]).
Lemma 2.2. Let u ∈ W 1,1 (B r (y)) for some r > 0. Let k and l be real numbers such that k < l.
Then there exists a constant γ > 0 depending only on n such that
(l − k) |A_{k,r}| |B_r(y) \ A_{l,r}| ≤ γ r^{n+1} ∫_{A_{l,r} \ A_{k,r}} |∇u| dx,
where A k,r = {x ∈ B r (y) : u(x) < k}.
Local Energy Estimates
We refer to the parameters n, p, q, K_1, K_2, A and d as our structural data, and we write γ for a constant that can be determined a priori in terms of these quantities only; the generic constant γ may change from line to line.
Lemma 2.3. Let u be a weak non-negative super-solution to equation (1.1). Then for any cylinder Q_{r,η}(x̄, t̄) ⊂ Q_{r,r²}(x̄, t̄) ⊂ Q_{8r,(8r)²}(x̄, t̄) ⊂ Ω_T, any k > 0, any σ ∈ (0, 1), any ζ_1(x) ∈ C_0^∞(B_r(x̄)) with 0 ≤ ζ_1 ≤ 1, ζ_1 = 1 in B_{r(1−σ)}(x̄), |∇ζ_1| ≤ (σr)^{−1}, and any ζ_2(t) ∈ C¹(R_+) with 0 ≤ ζ_2 ≤ 1, ζ_2(t) = 0 for t ≤ t̄, ζ_2(t) = 1 for t ≥ t̄ + η(1 − σ), |dζ_2/dt| ≤ (ση)^{−1}, the following inequalities hold:

sup_{t̄<t<t̄+η} ∫_{B_r(x̄)} (u − k)²_− (ζ_1 ζ_2)^q dx + γ^{−1} ( 1 + a^−_{Q_{r,η}(x̄,t̄)} (k/r)^{q−p} ) ∬_{Q_{r,η}(x̄,t̄)} |∇(u − k)_−|^p (ζ_1 ζ_2)^q dx dt
≤ γ σ^{−q} φ^+_{Q_{r,η}(x̄,t̄)}(k/r) ( 1 + k²/( η φ^+_{Q_{r,η}(x̄,t̄)}(k/r) ) ) |A^−_{k,r,η}|, (2.3)

sup_{t̄<t<t̄+η} ∫_{B_r(x̄)} (u − k)²_− ζ_1^q dx ≤ ∫_{B_r(x̄)×{t̄}} (u − k)²_− ζ_1^q dx + γ σ^{−q} φ^+_{Q_{r,η}(x̄,t̄)}(k/r) |A^−_{k,r,η}|, (2.4)

where A^−_{k,r,η} = Q_{r,η}(x̄, t̄) ∩ {u ≤ k}, φ^+_{Q_{r,η}(x̄,t̄)}(k/r) = (k/r)^p + a^+_{Q_{r,η}(x̄,t̄)} (k/r)^q, a^−_{Q_{r,η}(x̄,t̄)} = min_{Q_{r,η}(x̄,t̄)} a(x, t), a^+_{Q_{r,η}(x̄,t̄)} = max_{Q_{r,η}(x̄,t̄)} a(x, t).

Proof. Test (1.4) by (u_h − k)_− (ζ_1 ζ_2)^q, integrate over (t̄, t̄ + η), let h → 0, and use conditions (1.2), (A) and Young's inequality to arrive at

sup_{t̄<t<t̄+η} ∫_{B_r(x̄)} (u − k)²_− (ζ_1 ζ_2)^q dx + γ^{−1} ∬_{Q_{r,η}(x̄,t̄)} |∇(u − k)_−|^p (ζ_1 ζ_2)^q dx dt + γ^{−1} ∬_{Q_{r,η}(x̄,t̄)} a(x, t) |∇(u − k)_−|^q (ζ_1 ζ_2)^q dx dt
≤ γ σ^{−1} (k²/η) |A^−_{k,r,η}| + γ σ^{−q} ∬_{A^−_{k,r,η}} φ(x, t, k/r) dx dt ≤ γ σ^{−q} ( k²/η + φ^+_{Q_{r,η}(x̄,t̄)}(k/r) ) |A^−_{k,r,η}|.

By Young's inequality,

( 1 + a^−_{Q_{r,η}(x̄,t̄)} (k/r)^{q−p} ) ∬_{Q_{r,η}(x̄,t̄)} |∇(u − k)_−|^p (ζ_1 ζ_2)^q dx dt ≤ ∬_{Q_{r,η}(x̄,t̄)} |∇(u − k)_−|^p (ζ_1 ζ_2)^q dx dt + ∬_{Q_{r,η}(x̄,t̄)} a(x, t) |∇(u − k)_−|^q (ζ_1 ζ_2)^q dx dt + a^+_{Q_{r,η}(x̄,t̄)} (k/r)^q |A^−_{k,r,η}|,

from which the required inequality (2.3) follows. Now test (1.4) by (u_h − k)_− ζ_1^q; arguing in exactly the same way we arrive at (2.4), which proves the lemma.

Lemma 2.4. Let u be a weak non-negative super-solution to equation (1.1). Then for any cylinder Q_{r,η}(x̄, t̄) ⊂ Q_{r,r²}(x̄, t̄) ⊂ Q_{8r,(8r)²}(x̄, t̄) ⊂ Ω_T, any δ > 0 and any ε, σ ∈ (0, 1) the following inequality holds:

(1/(1 − ε)) sup_{t̄<t<t̄+η} ∫_{B_r(x̄)} (u + δ)^{1−ε} (ζ_1 ζ_2)^q dx + (ε/γ) ∬_{Q_{r,η}(x̄,t̄)} (u + δ)^{−ε−1} |∇u|^p (ζ_1 ζ_2)^q dx dt + (ε/γ) ∬_{Q_{r,η}(x̄,t̄)} a(x, t) (u + δ)^{−ε−1} |∇u|^q (ζ_1 ζ_2)^q dx dt
≤ (1/((1 − ε)ση)) ∬_{Q_{r,η}(x̄,t̄)} (u + δ)^{1−ε} dx dt + γ ε^{1−p} (σr)^{−p} ∬_{Q_{r,η}(x̄,t̄)} (u + δ)^{p−ε−1} dx dt + γ ε^{1−q} (σr)^{−q} a^+_{Q_{r,η}(x̄,t̄)} ∬_{Q_{r,η}(x̄,t̄)} (u + δ)^{q−ε−1} dx dt. (2.5)

Proof. Test (1.4) by (u_h + δ)^{−ε} (ζ_1 ζ_2)^q, integrate over (t̄, t̄ + η), let h → 0, and use conditions (1.2) and Young's inequality to arrive at the required (2.5).
De Giorgi Type Lemma
Let u ∈ V_{2,m}(Ω_T), m > 2n/(n + 1), u ≥ 0, and let the inequality

sup_{t̄<t<t̄+η} ∫_{B_r(x̄)} (u − k)²_− (ζ_1 ζ_2)^m dx + γ^{−1} ∬_{Q_{r,η}(x̄,t̄)} |∇(u − k)_−|^m (ζ_1 ζ_2)^m dx dt ≤ K σ^{−m} ( k²/η + (k/r)^m ) |A^−_{k,r,η}| (2.6)

hold for any k > 0, any cylinder Q_{r,η}(x̄, t̄) ⊂ Q_{8r,8η}(x̄, t̄) ⊂ Ω_T and any σ ∈ (0, 1), with some K > 0. Here ζ_1, ζ_2 and A^−_{k,r,η} were defined in Lemma 2.3. The following lemma is the standard De Giorgi-type lemma (cf. [16], Chapter 3).

Lemma 2.5. Let (2.6) hold. Then there exists ν ∈ (0, 1), depending only on K, n, m and r, η, such that if

|{(x, t) ∈ Q_{r,η}(x̄, t̄) : u(x, t) ≤ k}| ≤ ν |Q_{r,η}(x̄, t̄)|, (2.7)

then

u(x, t) ≥ k/2, (x, t) ∈ Q_{r/2,η/2}(x̄, t̄). (2.8)

The number ν is chosen to satisfy

ν := γ^{−1} ( r^m/(η k^{m−2}) ) ( 1 + η k^{m−2}/r^m )^{−(n+m)/m}. (2.9)

3 Expansion of Positivity

Fix (x_0, t_0) ∈ Ω_T such that Q_{8ρ,(8ρ)²}(x_0, t_0) ⊂ Ω_T, and let Q_{8r,(8r)²}(x̄, t̄) ⊂ Q_{ρ,ρ²}(x_0, t_0).
In what follows, we assume that k > 0 satisfies the conditions

C_* ρ ≤ k, k^s ≤ ε_0 φ^+_{Q_{6ρ,(6ρ)²}(x_0,t_0)}(k/ρ) / (ρ^n k²) = ε_0 ( k^{p−2}/ρ^{n+p} + a^+_{Q_{6ρ,(6ρ)²}(x_0,t_0)} k^{q−2}/ρ^{n+q} ), (3.1)

where C_* > 1 and ε_0 ∈ (0, 1) depend only on the known data and will be chosen later.

First, we prove the following result.

Proposition 3.1. Let u be a weak non-negative super-solution to equation (1.1), let k satisfy (3.1), and let also

|{B_r(x̄) : u(·, t̄) > k}| ≥ β_0 |B_r(x̄)|, (3.2)

with some β_0 ∈ (0, 1). Then there exist numbers C_*, b_1, b_2 > 0 and ε_0, σ_0 ∈ (0, 1), depending only on the data and β_0, such that

u(x, t) ≥ σ_0 k, x ∈ B_{2r}(x̄), (3.3)

for all

t̄ + η_1 := t̄ + b_1 (σ_0 k)²/φ^+_{Q_{6r,(6r)²}(x̄,t̄)}(σ_0 k/r) ≤ t ≤ t̄ + b_2 (σ_0 k)²/φ^+_{Q_{6r,(6r)²}(x̄,t̄)}(σ_0 k/r) =: t̄ + η_2. (3.4)
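As a hedged side remark (our own observation, consistent with the definitions above and not an extra hypothesis of Proposition 3.1): when a ≡ 0 the waiting times in (3.4) reduce to the classical intrinsic p-parabolic scaling, since then

```latex
% With a = 0 the double-phase term drops out of phi^+.
\frac{(\sigma_0 k)^2}{\varphi^{+}_{Q_{6r,(6r)^2}(\bar x,\bar t)}\bigl(\sigma_0 k/r\bigr)}
  = \frac{(\sigma_0 k)^2}{(\sigma_0 k/r)^{p}}
  = r^{p}\,(\sigma_0 k)^{2-p},
```

which is the usual time scale for the degenerate p-Laplace evolution equation; in the (p, q)-phase the extra term a^+(σ_0 k/r)^q in φ^+ shortens the waiting time.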
Lemma 3.1. Let the conditions of Proposition 3.1 hold. Then there exist ε, δ ∈ (0, 1), depending only on the data and β_0, such that

|{B_r(x̄) : u(·, t) ≥ ε k}| ≥ (β_0/4) |B_r(x̄)|, (3.5)

for any t̄ < t ≤ t̄ + δ k²/φ^+_{Q_{6r,(6r)²}(x̄,t̄)}(k/r).

Proof. Inequality (2.4) yields, for any t̄ < t < t̄ + δ k²/φ^+_{Q_{6r,(6r)²}(x̄,t̄)}(k/r),

|{B_r(x̄) : u(·, t) ≤ ε k}| ≤ ( nσ + (1/(1 − ε)²)( 1 − β_0 + γ σ^{−q} δ ) ) |B_r(x̄)|.

Choosing

σ = β_0/(8n), β_0/8 ≤ ε = 1 − (1 − (3/4)β_0)^{1/2} (1 − (1/2)β_0)^{−1/2} ≤ β_0/2, δ = β_0^{q+1}/(32 (8n)^q γ), (3.6)

we arrive at the required (3.5), which proves the lemma.
In the proof of Proposition 3.1 we distinguish two different cases. The first is the so-called p-phase case

a^+_{Q_{6ρ,(6ρ)²}(x_0,t_0)} k^{q−2}/ρ^q ≤ k^{p−2}/ρ^p, (3.7)

and the second is the (p, q)-phase case

a^+_{Q_{6ρ,(6ρ)²}(x_0,t_0)} k^{q−2}/ρ^q ≥ k^{p−2}/ρ^p. (3.8)

In turn, we divide case (3.7) into subcases. Fix j_* > 1, to be chosen later depending only on the data and β_0, and set τ_* := (2^{j_*}/ε)^{q−2}. We assume that either

(i) a^+_{Q_{6r,(6r)²}(x̄,t̄)} ( ε k/(r 2^{j_*} e^{τ_*}) )^{q−p} ≤ 1, or (ii) a^+_{Q_{6r,(6r)²}(x̄,t̄)} ( ε k/(r 2^{j_*} e^{τ_*}) )^{q−p} ≥ 1.
Proof of Proposition 3.1 in the case (3.7) and (i)

For τ ≥ τ̄_* := τ_* + j_* log 2 + log(1/ε) we have

a^+_{Q_{6r,(6r)²}(x̄,t̄)} ( k/(r e^τ) )^{q−p} ≤ a^+_{Q_{6r,(6r)²}(x̄,t̄)} ( ε k/(r 2^{j_*} e^{τ_*}) )^{q−p} ≤ 1,

and therefore φ^+_{Q_{6r,(6r)²}(x̄,t̄)}( k/(r e^τ) ) ≤ 2 ( k/(r e^τ) )^p for τ ≥ τ̄_*. Thus inequality (3.5) with k replaced by e^{−τ} k, τ ≥ τ̄_*, yields

|{B_r(x̄) : u( ·, t̄ + (δ/2) r^p (e^τ/k)^{p−2} ) ≥ ε e^{−τ} k}| ≥ (β_0/4) |B_r(x̄)|, for all τ ≥ τ̄_*.

Following [16], we consider the function

w(y, τ) := (e^τ/k) u( x̄ + r y, t̄ + (δ/2) r^p (e^τ/k)^{p−2} ), τ ≥ τ̄_*.

The previous inequality translates into |{B_1(0) : w(·, τ) ≥ ε}| ≥ (β_0/4)|B_1(0)|, which implies

|{B_4(0) : w(·, τ) ≥ ε}| ≥ (β_0/4^{n+1}) |B_4(0)|, for all τ ≥ τ̄_*. (3.9)

Since w ≥ 0, formal differentiation, which can be justified in a standard way, gives

w_τ = w + (δ(p − 2)/2) r^p (e^τ/k)^{p−1} u_t ≥ div Ā(y, τ, ∇w), (3.10)

where Ā satisfies the inequalities

Ā(y, τ, ∇w)·∇w ≥ K_1 (δ(p − 2)/2) ( |∇w|^p + ā(y, τ) ( k/(e^τ r) )^{q−p} |∇w|^q ),
|Ā(y, τ, ∇w)| ≤ K_2 (δ(p − 2)/2) ( |∇w|^{p−1} + ā(y, τ) ( k/(e^τ r) )^{q−p} |∇w|^{q−1} ), (3.11)

and ā(y, τ) := a( x̄ + r y, t̄ + (δ/2) r^p (e^τ/k)^{p−2} ).
Lemma 3.2. For any ν ∈ (0, 1) there exists j_*, depending only on the data and ν, such that

|{Q^* : w ≤ ε/2^{j_*}}| ≤ ν |Q^*|, (3.12)

where Q^* := B_4(0) × ( τ̄_* + (1/2)(2^{j_*}/ε)^{p−2}, τ̄_* + (2^{j_*}/ε)^{p−2} ).
k j |A k j ,4 (τ )| γ(β 0 )Â k j−1 ,4 (τ )\A k j ,4 (τ ) |∇w|dx, τ τ * , where A k j ,4 (τ ) := B 4 (0) ∩ {w(·, τ ) < k j }.
Integrating this inequality with respect to τ , τ ∈ (τ * + 1 2 k 2−p j * ,τ * +k 2−p j * ) and using the Hölder inequality we have
k p p−1 j |A j | p p−1 γ(β 0 ) Ä j−1 |∇w| p dy dτ 1 p−1 |A j−1 \ A j |, (3.13) where A j :=τ * +k 2−p j * τ * + 1 2 k 2−p j * A k j ,4 (τ ) dτ .
To estimate the first factor, similarly to Lemma 2.3 with | d dτ ζ 2 | γk p−2 j * , by structure inequalities (3.11) we obtain
sup τ * + 1 2 k 2−p j * <τ <τ * +k 2−p j * B 4 (0) (w − k j−1 ) 2 − dx +Ä j−1 |∇w| p dy dτ γ k 2 j−1 k p−2 j * + k p j−1 |Q * 1 ∩ {w < k j−1 }| + γk q j−1Q * 1 ∩{w<k j−1 }ā (y, τ ) k e τ r q−p dydτ γk p j |Q * ∩ {w < k j−1 }| 1 + k j k eτ * r q−pā + Q * 6 , where Q * 6 := B 6 (0) × (τ * + 1 4 k 2−p j * ,τ * + 2k 2−p j * ),ā + Q * 6 = max Q * 6ā (y, τ ).
To estimate the last term on the right-hand side of this inequality, we note that by condition (i)
k j k eτ * r q−pā + Q * 6 1,
and hence
sup τ * + 1 2 k 2−p j * <τ <τ * +k 2−p j * B 4 (0) (w − k j−1 ) 2 − dx +Ä j−1 |∇w| p dy dτ γk p j |Q * ∩ {w < k j−1 }|. (3.14)
Combining estimates (3.13) and (3.14) we obtain
|A j | p p−1 γ(β 0 )|Q * | 1 p−1 |A j−1 \ A j |.
Summing up the last inequalities over j, 1 j j * , we conclude that
j p−1 p * |A j * | γ(β 0 )|Q * |.
Choosing j * by the condition
j − p−1 p * γ(β 0 ) ν,
we obtain inequality (3.12), which proves Lemma 3.2.
Applying Lemma 2.5 (similarly to (3.14), inequality (2.6) holds for u replaced by w, with m = p, k = k_{j_*} and η = γ k_{j_*}^{2−p}), we obtain

w(y, τ) ≥ k_{j_*+1}, y ∈ B_2(0), for all τ̄_* + (5/8) k_{j_*}^{2−p} ≤ τ ≤ τ̄_* + (3/4) k_{j_*}^{2−p}.

For u this translates into

u(x, t) ≥ ( ε/2^{j_*+1} ) k e^{−τ̄_* − (3/4)(2^{j_*}/ε)^{p−2}} = ( ε²/2^{2j_*+1} ) k e^{−τ_* − (3/4)(2^{j_*}/ε)^{p−2}}, x ∈ B_{2r}(x̄),

for all

t̄ + (δ/2) r^p ( 2^{j_*} e^{τ_* + (5/8)(2^{j_*}/ε)^{p−2}} /(ε k) )^{p−2} ≤ t ≤ t̄ + (δ/2) r^p ( 2^{j_*} e^{τ_* + (3/4)(2^{j_*}/ε)^{p−2}} /(ε k) )^{p−2}.

Choosing σ_0 := ( ε²/2^{2j_*+1} ) e^{−τ_* − (3/4)(2^{j_*}/ε)^{p−2}}, by condition (i) we have

(δ/2) r^p ( 2^{j_*} e^{τ_* + (5/8)(2^{j_*}/ε)^{p−2}} /(ε k) )^{p−2} ≥ b_1 (σ_0 k)²/φ^+_{Q_{6r,(6r)²}(x̄,t̄)}(σ_0 k/r) and (δ/2) r^p ( 2^{j_*} e^{τ_* + (3/4)(2^{j_*}/ε)^{p−2}} /(ε k) )^{p−2} ≤ b_2 (σ_0 k)²/φ^+_{Q_{6r,(6r)²}(x̄,t̄)}(σ_0 k/r),

with b_1, b_2 > 0 depending only on the data and β_0. This proves Proposition 3.1 in the case (3.7) and (i).
Proof of Proposition 3.1 in the case (3.7) and (ii)

Set l := (s − p + 2)/(s − q + 2) > 1. By conditions (ii), (A), (3.1), Young's inequality and the fact that (α + p − q) l/(l − 1) − p − n = (α + p − q)(s − p + 2)/(q − p) − p − n ≥ 0, we obtain

a^+_{Q_{6r,(6r)²}(x̄,t̄)} k^{q−2}/r^q − a^−_{Q_{6r,(6r)²}(x̄,t̄)} k^{q−2}/r^q ≤ 6^α A r^{α−q} k^{q−2} = 6^α A r^{α−q} k^{(p−2)/l} k^{(p−2)(l−1)/l + q−p}
≤ k^{p−2}/(4r^p) + γ(l, A) r^{(α+p−q)(s−p+2)/(q−p) − p} k^s
≤ a^+_{Q_{6r,(6r)²}(x̄,t̄)} k^{q−2}/(4r^q) + ε_0 γ(l, A) r^{(α+p−q)(s−p+2)/(q−p) − p − n} φ^+_{Q_{6ρ,(6ρ)²}(x_0,t_0)}(k/ρ)/k²
≤ a^+_{Q_{6r,(6r)²}(x̄,t̄)} k^{q−2}/(4r^q) + 2 ε_0 γ(l, A) k^{p−2}/ρ^p
≤ ( 1/4 + 2 ε_0 γ(l, A) ) a^+_{Q_{6r,(6r)²}(x̄,t̄)} k^{q−2}/r^q,

and hence

a^+_{Q_{6r,(6r)²}(x̄,t̄)} ≤ 2 a^−_{Q_{6r,(6r)²}(x̄,t̄)}, (3.15)

provided that ε_0 is chosen to satisfy

1/4 + 2 ε_0 γ(l, A) ≤ 1/2. (3.16)

Note also that condition (ii) reads

a^+_{Q_{6r,(6r)²}(x̄,t̄)} ( ε k/(r 2^{j_*} e^{τ_*}) )^{q−p} ≥ 1. (3.17)

So, inequality (3.5) with k replaced by e^{−τ} k, 0 < τ ≤ τ_*, yields

|{B_r(x̄) : u( ·, t̄ + ( δ/(2 a^+_{Q_{6r,(6r)²}(x̄,t̄)}) ) r^q (e^τ/k)^{q−2} ) ≥ ε e^{−τ} k}| ≥ (β_0/4) |B_r(x̄)|, for all 0 < τ ≤ τ_*.

Consider the function

w(y, τ) := (e^τ/k) u( x̄ + r y, t̄ + ( δ/(2 a^+_{Q_{6r,(6r)²}(x̄,t̄)}) ) r^q (e^τ/k)^{q−2} ), 0 < τ ≤ τ_*.

The previous inequality translates into |{B_1(0) : w(·, τ) ≥ ε}| ≥ (β_0/4)|B_1(0)|, which implies

|{B_4(0) : w(·, τ) ≥ ε}| ≥ (β_0/4^{n+1}) |B_4(0)|, for all 0 < τ ≤ τ_*. (3.18)

By differentiation,

w_τ = w + ( δ(q − 2)/(2 a^+_{Q_{6r,(6r)²}(x̄,t̄)}) ) r^q (e^τ/k)^{q−1} u_t ≥ div Ā(y, τ, ∇w), (3.19)

where Ā satisfies the inequalities

Ā(y, τ, ∇w)·∇w ≥ K_1 (δ(q − 2)/2) ( (1/a^+_{Q_{6r,(6r)²}(x̄,t̄)}) ( e^τ r/k )^{q−p} |∇w|^p + ( ā(y, τ)/a^+_{Q_{6r,(6r)²}(x̄,t̄)} ) |∇w|^q ),
|Ā(y, τ, ∇w)| ≤ K_2 (δ(q − 2)/2) ( (1/a^+_{Q_{6r,(6r)²}(x̄,t̄)}) ( e^τ r/k )^{q−p} |∇w|^{p−1} + ( ā(y, τ)/a^+_{Q_{6r,(6r)²}(x̄,t̄)} ) |∇w|^{q−1} ), (3.20)

and ā(y, τ) := a( x̄ + r y, t̄ + ( δ/(2 a^+_{Q_{6r,(6r)²}(x̄,t̄)}) ) r^q (e^τ/k)^{q−2} ).
Q * : w ε 2 j * ν|Q * |, (3.21) Q * = B 4 (0) × ( 1 2 2 j * ε q−2 , 3 4 2 j * ε q−2 ).
Proof. Using Lemma 2.2 with k = k j := ε 2 j and l = k j−1 , 1 j j * , due to (3.18) we obtain
k j |A k j ,4 (τ )| γ(β 0 )Â k j−1 ,4 (τ )\A k j ,4 (τ ) |∇w| dx, 0 < τ τ * , where A k j ,4 (τ ) := B 4 (0) ∩ {u(·, τ ) < k j }.
Integrating this inequality with respect to τ , τ ∈ ( 1 2 k 2−q j * , 3 4 k 2−q j * ) and using the Hölder inequality we have
k q q−1 j |A j | q q−1 γ(β 0 ) Ä j−1 |∇w| q dy dτ 1 q−1 |A j−1 \ A j |,(3.22)
where A j :=
3 4 k 2−q j * 1 2 k 2−q j * A k j ,4 (τ ) dτ .
Similarly to Lemma 3.2 with | d dτ ζ 2 | γk q−2 j * , by structure conditions (3.20), estimate (3.15) we estimate the first factor on the right-hand side of (3.22) as follows
sup 1 2 k 2−q j * <τ < 3 4 k 2−q j * B 4 (0) (w − k j−1 ) 2 − dx + 1 2Ä j−1 |∇w| q dy dτ sup 1 2 k 2−q j * <τ < 3 4 k 2−q j * B 4 (0) (w − k j−1 ) 2 − dx +Ä j−1ā (y, τ ) a + Q 6r,(6r) 2 (x,t) |∇w| q dy dτ γk 2 j−1 k q−2 j * |Q * 1 ∩ {w < k j−1 }| + γ k p j a + Q 6r,(6r) 2 (x,t)Q * 6 ∩{w<k j−1 } e τ r k q−p dydτ + + γk q jQ * 6 ∩{w<k j−1 }ā (y, τ ) a + Q 6r,(6r) 2 (x,t) dydτ γk q j |Q * 6 ∩ {w < k j−1 }|+ + γ k p j a + Q 6r,(6r) 2 (x,t) e 7 8 k 2−q j * r k q−p |Q * 6 ∩ {w < k j−1 }|, (3.23)
where Q * 6 := B 6 (0) × ( 1 4 k 2−q j * , 7 8 k 2−q j * ). By our choices and (3.17)
k p j a + Q 6r,(6r) 2 (x,t) e 7 8 k 2−q j * r k q−p k q j a + Q 6r,(6r) 2 (x,t) eτ * r k q−p k q j .
Therefore, inequality (3.23) yields
sup 1 2 k 2−q j * <τ < 3 4 k 2−q j * B 4 (0) (w − k j−1 ) 2 − dx +Ä j−1 |∇w| q dy dτ γk q j |Q * ∩ {w < k j−1 }|. (3.24)
Combining (3.22) and (3.24) we arrive at
|A j | q q−1 γ(β 0 )|Q * | 1 q−1 |A j−1 \ A j |.
Summing up this inequalities in j, 1 j j * and choosing j * by the condition j − q−1 q * γ(β 0 ) ν, we arrive at the required (3.21), which proves the lemma. Use Lemma 2.5, similarly to that of (3.24) inequality (2.6) holds for u replaced by w, m = q, k = k j * and η = γk 2−q j * we obtain w(y, τ ) k j * +1 , y ∈ B 2 (0), for all 9 16 k 2−q j * τ 5 8 k 2−q j * . This inequality for u translates into
u(x, t) ≥ ( ε/2^{j_*+1} ) k e^{−(5/8)(2^{j_*}/ε)^{q−2}}, x ∈ B_{2r}(x̄),

for all

t̄ + ( δ/(2 a^+_{Q_{6r,(6r)²}(x̄,t̄)}) ) r^q k^{2−q} e^{(9/16)(q−2)(2^{j_*}/ε)^{q−2}} ≤ t ≤ t̄ + ( δ/(2 a^+_{Q_{6r,(6r)²}(x̄,t̄)}) ) r^q k^{2−q} e^{(5/8)(q−2)(2^{j_*}/ε)^{q−2}}.

Choosing σ_0 := ( ε/2^{j_*+1} ) e^{−(5/8)(2^{j_*}/ε)^{q−2}}, we therefore have

( δ/(2 a^+_{Q_{6r,(6r)²}(x̄,t̄)}) ) r^q k^{2−q} e^{(9/16)(q−2)(2^{j_*}/ε)^{q−2}} ≥ b̃_1 (σ_0 k)²/φ^+_{Q_{6r,(6r)²}(x̄,t̄)}(σ_0 k/r) and ( δ/(2 a^+_{Q_{6r,(6r)²}(x̄,t̄)}) ) r^q k^{2−q} e^{(5/8)(q−2)(2^{j_*}/ε)^{q−2}} ≤ b̃_2 (σ_0 k)²/φ^+_{Q_{6r,(6r)²}(x̄,t̄)}(σ_0 k/r),

where b̃_1 = (1 + A) δ ε^{q−2} e^{−(1/16)(q−2)(2^{j_*}/ε)^{q−2}} / 2^{(j_*+1)(q−2)} and b̃_2 = δ ε^{q−2} / 2^{q + j_*(q−2)}. This proves Proposition 3.1 in the case (3.7) and (ii).
To complete the proof of Proposition 3.1, we note that in the case (3.8)

a^+_{Q_{6ρ,(6ρ)²}(x_0,t_0)} k^{q−2}/ρ^q − a^−_{Q_{6ρ,(6ρ)²}(x_0,t_0)} k^{q−2}/ρ^q ≤ 6^α A ρ^{α−q} k^{q−2} = 6^α A ρ^{α−q} k^{(p−2)/l} k^{(p−2)(l−1)/l + q−p}
≤ k^{p−2}/(4ρ^p) + γ(l, A) ρ^{(α+p−q)(s−p+2)/(q−p) − p} k^s
≤ a^+_{Q_{6ρ,(6ρ)²}(x_0,t_0)} k^{q−2}/(4ρ^q) + ε_0 γ(l, A) ρ^{(α+p−q)(s−p+2)/(q−p) − p − n} k^{−2} φ^+_{Q_{6ρ,(6ρ)²}(x_0,t_0)}(k/ρ)
≤ ( 1/4 + 2 ε_0 γ(l, A) ) a^+_{Q_{6ρ,(6ρ)²}(x_0,t_0)} k^{q−2}/ρ^q;

choosing ε_0 from the condition (3.16), we therefore obtain

a^+_{Q_{6ρ,(6ρ)²}(x_0,t_0)} ≤ 2 a^−_{Q_{6ρ,(6ρ)²}(x_0,t_0)}.

Our main result of this section reads as follows.

Theorem 3.1. Let u be a weak non-negative super-solution to equation (1.1), and let k satisfy

0 < k^s ≤ ε_0 ( k^{p−2}/ρ^{n+p} + a^+_{Q_{6ρ,(6ρ)²}(y,τ)} k^{q−2}/ρ^{n+q} ), (3.25)
and let also

|{B_ρ(y) : u(·, τ) > k}| ≥ β |B_ρ(y)|, (3.26)

with some β ∈ (0, 1). Then there exist numbers C > 0, B > 1, 0 < B_1 ≤ B_2/2 and σ_1 ∈ (0, 1), depending only on the data, such that either

β^B k ≤ C ρ, (3.27)

or

u(x, t) ≥ σ_1 β^B k, x ∈ B_{2ρ}(y), (3.28)

for all

τ + B_1 (σ_1 β^B k)²/φ^+_{Q_{12ρ,(12ρ)²}(y,τ)}(σ_1 β^B k/ρ) ≤ t ≤ τ + B_2 (σ_1 β^B k)²/φ^+_{Q_{12ρ,(12ρ)²}(y,τ)}(σ_1 β^B k/ρ), (3.29)

provided that Q_{16ρ,(16ρ)²}(y, τ) ⊂ Ω_T.
Proof. In what follows, we assume that

β^B k ≥ C ρ. (3.30)

Condition (3.25) and Lemma 3.1 (see (3.6)) yield

|{B_ρ(y) : u(·, t) > (β/8) k}| ≥ (β/4) |B_ρ(y)|, (3.31)

for all τ < t ≤ τ + δ k²/φ^+_{Q_{6ρ,(6ρ)²}(y,τ)}(k/ρ), δ = β^{q+1}/γ.

Write down the energy estimates (2.3), with k replaced by (β/8) k, for the pair of cylinders

Q := B_ρ(y) × (τ + η/2, τ + η), Q_1 := B_{2ρ}(y) × (τ, τ + η), η = δ k²/φ^+_{Q_{6ρ,(6ρ)²}(y,τ)}(k/ρ),

and take

|dζ_2/dt| ≤ γ φ^+_{Q_{6ρ,(6ρ)²}(y,τ)}(k/ρ)/(δ k²)

and |∇ζ_1| ≤ γ/ρ. By condition (A) and (3.25) we have

∬_Q |∇(u − (β/8) k)_−|^p dx dt ≤ (γ/β^{q+1}) (β k/ρ)^p ( 1 + a^+_{Q_{6ρ,(6ρ)²}(y,τ)} ( β k/(8ρ) )^{q−p} ) / ( 1 + a^−_{Q_{6ρ,(6ρ)²}(y,τ)} ( β k/(8ρ) )^{q−p} ) |Q| ≤ (γ/β^{q+1}) (β k/ρ)^p |Q|.

From this and (3.31) it follows that there exists t_1 ∈ (τ + η/2, τ + η) such that

∫_{B_ρ(y)×{t_1}} |∇(u − (β/8) k)_−| dx ≤ γ β^{−(q+1)/p} β k ρ^{n−1} and |{B_ρ(y) : u(·, t_1) > (β/8) k}| ≥ (β/4) |B_ρ(y)|. (3.32)
The local clustering Lemma 2.1, applied with K = γ β^{−(q+1)/p}, with measure fraction β/4, with ν = 1/2, ξ = 1/2 and with k replaced by (β/8) k, yields

|{B_r̄(x̄) : u(·, t_1) > (β/16) k}| ≥ (1/2) |B_r̄(x̄)|, r̄ = ǫ_0 β^{2 + (q+1)/p} ρ, (3.33)

with some x̄ ∈ B_ρ(y) and some ǫ_0 ∈ (0, 1) depending only on the data. Proposition 3.1 with β_0 = 1/2 and k replaced by (β/16) k implies

u(x, t) ≥ σ_0 β k, x ∈ B_{2r̄}(x̄),

for all

t_2 := t_1 + b_1 (σ_0 β k)²/φ^+_{Q_{6r̄,(6r̄)²}(x̄,t_1)}(σ_0 β k/r̄) ≤ t ≤ t_1 + b_2 (σ_0 β k)²/φ^+_{Q_{6r̄,(6r̄)²}(x̄,t_1)}(σ_0 β k/r̄),

with some σ_0 ∈ (0, 1) and b_1, b_2 > 0 depending only on the data. From this, by iteration we obtain

u(x, t) ≥ σ_0^j β k, x ∈ B_{2^j r̄}(x̄), (3.34)

for all

t_{j+1} := t_j + b_1 (σ_0^j β k)²/φ^+_{Q_{2^j 6r̄,(2^j 6r̄)²}(x̄,t_j)}( σ_0^j β k/(2^j r̄) ) ≤ t ≤ t_j + b_2 (σ_0^j β k)²/φ^+_{Q_{2^j 6r̄,(2^j 6r̄)²}(x̄,t_j)}( σ_0^j β k/(2^j r̄) ). (3.35)
Choosing j from the condition 2^j r̄ = 2ρ, from (3.34) we obtain

u(x, t) ≥ ( k/γ(σ_0, ǫ_0) ) β^{1 + (2 + (q+1)/p) log_2(1/σ_0)} = σ_1 β^B k, x ∈ B_{2ρ}(y),

for all t satisfying (3.35). We have

t_j + b_2 (σ_0^j β k)²/φ^+_{Q_{2^j 6r̄,(2^j 6r̄)²}(x̄,t_j)}( σ_0^j β k/(2^j r̄) ) ≥ τ + b_2 (σ_1 β^B k)²/φ^+_{Q_{12ρ,(12ρ)²}(y,τ)}(σ_1 β^B k/ρ) = τ + B_2 (σ_1 β^B k)²/φ^+_{Q_{12ρ,(12ρ)²}(y,τ)}(σ_1 β^B k/ρ).

In addition, by condition (A) and (3.25),

t_{j+1} − τ − β^{q+1} k²/( γ φ^+_{Q_{6ρ,(6ρ)²}(y,τ)}(k/ρ) ) ≤ b_1 Σ_{i=0}^{j} (σ_0^i β k)²/φ^−_{Q_{12ρ,(12ρ)²}(y,τ)}( σ_0^i β k/(2^i r̄) )
≤ b_1 (σ_0^j β k)²/φ^−_{Q_{12ρ,(12ρ)²}(y,τ)}( σ_0^j β k/(2^j r̄) ) Σ_{i=0}^{j} ( 2^p σ_0^{p−2} )^{i−j}
≤ γ b_1 (σ_0^j β k)²/φ^−_{Q_{12ρ,(12ρ)²}(y,τ)}( σ_0^j β k/(2^j r̄) ) ≤ γ(A) b_1 (σ_1 β^B k)²/φ^+_{Q_{12ρ,(12ρ)²}(y,τ)}(σ_1 β^B k/ρ).

So, by our choices of b_1, b_2 and by possibly reducing σ_0 if needed, we have

t_{j+1} ≤ τ + γ b_1 (σ_1 β^B k)²/φ^+_{Q_{12ρ,(12ρ)²}(y,τ)}(σ_1 β^B k/ρ) = τ + B_1 (σ_1 β^B k)²/φ^+_{Q_{12ρ,(12ρ)²}(y,τ)}(σ_1 β^B k/ρ) ≤ τ + (B_2/2) (σ_1 β^B k)²/φ^+_{Q_{12ρ,(12ρ)²}(y,τ)}(σ_1 β^B k/ρ).
This completes the proof of Theorem 3.1.
Weak Harnack Inequality, Proof of Theorem 1.1
Fix ξ 0 ∈ (0, 1) depending only on the data to be chosen later. In the proof of Theorem 1.1, we will distinguish two alternative cases: either there exist a time levelt ∈ (t 0 , t 0 +
I 2 ϕ + Q 2ρ,(2ρ) 2 (x 0 ,t 0 ) ( I ρ )
) and a number λ 0 > 1 such that
B 2ρ (x 0 ) : u(·,t) λ 0 I λ − ξ 0 B 0 |B 2ρ (x 0 )|, (4.1)
or such inequality is violated, i.e. for all t ∈ (t 0 , t 0 +
I 2 ϕ + Q 2ρ,(2ρ) 2 (x 0 ,t 0 ) ( I ρ )
) and for any λ > 1 there
holds B 2ρ (x 0 ) : u(·, t) λI λ − ξ 0 B |B 2ρ (x 0 )|,(4.2)
here B > 1 is the number defined in Theorem 3.1 and I = ffl
Bρ(x 0 )
u(x, t 0 )dx.
Proof of Theorem 1.1 under Condition (4.1)
Lemma 4.1. Let (4.1) hold then there exists positive number γ 0 depending only on the data such that
λ ξ 0 0 I s γ 0 d s (λ ξ 0 0 I) p−2 ρ n+p + a + Q 6ρ,(6ρ) 2 (x 0 ,t) (λ ξ 0 0 I) q−2 ρ n+q . (4.3)
Proof. Lemma 3.1 and conditions (3.6) yield
B 2ρ (x 0 ) : u(·, t) 1 8 λ 1− ξ 0 B 0 I 1 2 λ − ξ 0 B 0 |B 2ρ (x 0 )|, for all t ∈ (t,t + η), η = γ −1 λ − (q+1)ξ 0 B 0 (λ 0 I) 2 ϕ + Q 6ρ,(6ρ) 2 (x 0 ,t) ( λ 0 I ρ )
.
From this
1 16 λ 1− ξ 0 B − ξ 0 Bs 0 |B 2ρ (x 0 )| 1 s η 1 s I Q 2ρ,η (x 0 ,t) u s 1 s d. (4.4) If a + Q 6ρ,(6ρ) 2 (x 0 ,t) λ 0 I ρ q−p 1, then (4.4) implies λ 1− ξ 0 (s+q+2) B(s−q+2) 0 I γ a + Q 6ρ,(6ρ) 2 (x 0 ,t) 1 s−q+2 ρ − n+q s−q+2 d s s−q+2 , and if 1 − ξ 0 (s+q+2) B(s−q+2)
ξ 0 , i.e. ξ 0 (1 + s+q+2 B(s−q+2) ) 1, then
λ ξ 0 0 I γ a + Q 6ρ,(6ρ) 2 (x 0 ,t) 1 s−q+2 d s s−q+2 ρ − n+q s−q+2 .
And if a + Q 6ρ,(6ρ) 2 (x 0 ,t) λ 0 I ρ q−p 1, then (4.4) yields
λ ξ 0 0 I λ 1− ξ 0 (s+q+2) B(s−p+2) 0 I γρ − n+p s−p+2 d s s−p+2 , provided that 1 − ξ 0 (s+q+2) B(s−p+2)
ξ 0 , i.e. ξ 0 (1 + s+q+2 B(s−p+2) ) 1. This completes the proof of the lemma.
We use Theorem 3.
1 with k = ε 0 λ ξ 0 0 I γ 0 d s , β 0 = 1 2 λ − ξ 0 B 0 and τ =t + η, where ε 0 is defined in (3.1) and η is defined in Lemma 4.1, then either I C γ 0 ε 0 d s ρ, or u(x, t) ε 0 σ 1 γ 0 d s I =σ 1 I, x ∈ B 4ρ (x 0 ), (4.5) for allt + η + B 1 (σ 1 I) 2 ϕ + Q 12ρ,(12ρ) 2 (x 0 ,t+η) (σ 1 I ρ ) t t + η + B 2 (σ 1 I) 2 ϕ + Q 12ρ,(12ρ) 2 (x 0 ,t+η) (σ 1 I ρ )
, which by (4.3) and condition (A) yields that (4.5) holds for all time levels
t 0 +B 1 (σ 1 I) 2 ϕ + Q 12ρ,(12ρ) 2 (x 0 ,t 0 ) (σ 1 I ρ ) t t 0 +B 2 (σ 1 I) 2 ϕ + Q 12ρ,(12ρ) 2 (x 0 ,t 0 ) (σ 1 I ρ )
.
Proof of Theorem 1.1 Under Condition (4.2)
In what follows, we will assume that
I C 1 ρ + ρ ψ −1 Q 12ρ,(12ρ) 2 (x 0 ,t 0 ) ρ 2 T − t 0 , (4.6)
with some positive C 1 > 0 to be chosen later.
The following lemma is an upper bound of I, similar to that of Lemma 4.1.
Lemma 4.2. Next inequality holds
I s γd s I p−2 ρ n+p + a + Q 2ρ,(2ρ) 2 (x 0 ,t 0 ) I q−2 ρ n+q . (4.7) Proof. Test (1.4) by ζ q (x) ∈ C 1 0 (B 3 2 ρ (x 0 ), 0 ζ(x) 1, ζ(x) = 1 in B ρ (x 0 ), |∇ζ(x)| 2 ρ . Integrating over (t 0 , t), t ∈ (t 0 , t 0 + η 2 ), η = I 2 ϕ + Q 2ρ,(2ρ) 2 (x 0 ,t 0 ) I ρ and letting h → 0 we obtain Bρ(x 0 ) u(x, t 0 )dx B 3 2 ρ (x 0 ) u(x, t)dx+ γ ρQ 3 2 ρ, η 2 (x 0 ,t 0 ) |∇u| p−1 dxdt+ γ ρQ 3 2 ρ, η 2 (x 0 ,t 0 )
a(x, t)|∇u| q−1 dxdt.
Integrating over (t 0 , t 0 + η 2 ), from the previous we have
I Q 3 2 ρ, η 2 (x 0 ,t 0 ) u dxdt+ γ ρ 1+nQ 3 2 ρ, η 2 (x 0 ,t 0 ) |∇u| p−1 dxdt+ γ ρ 1+nQ 3 2 ρ, η 2 (x 0 ,t 0 )
a(x, t)|∇u| q−1 dxdt.
(4.8)
Let us estimate the terms on the right-hand side of (4.8). Set I 1 := I s d s |Q ρ,η (x 0 , t 0 )|, and assume that with some sufficiently largeγ 1
I 1 γ.
(4.9)
By the Hölder inequality
Q 3 2 ρ, η 2 (x 0 ,t 0 ) u dxdt Q 3 2 ρ, η 2 (x 0 ,t 0 ) u s dxdt 1 s γ d s ρ n η 1 s γ γ 1 s I. (4.10)
Using the Hölder inequality, we obtain with ε, δ ∈ (0, 1)
1 ρ 1+nQ 3 2 ρ, η 2 (x 0 ,t 0 ) |∇u| p−1 dxdt 1 ρ 1+n Q 3 2 ρ, η 2 (x 0 ,t 0 ) (u + δI) −1−ε |∇u| p dxdt p−1 p × × Q 3 2 ρ, η 2 (x 0 ,t 0 ) (u + δI) (1+ε)(p−1) dxdt 1 p .((x 0 ,t 0 ) (u + δI) −1−ε |∇u| p dxdt +Q 3 2 ρ, η 2 (x 0 ,t 0 ) a(x, t)(u + δI) −1−ε |∇u| q dxdt γ(ε) ηQ 2ρ,η (x 0 ,t 0 ) (u + δI) 1−ε dxdt + γ(ε) ρ pQ 2ρ,η (x 0 ,t 0 ) (u + δI) −1−ε+p dxdt+ + γ(ε) ρ q a + Q 2ρ,(2ρ) 2 (x 0 ,t 0 )Q 2ρ,η (x 0 ,t 0 ) (u + δI) −1−ε+q dxdt γ(ε) η (d s + δ s I s |Q ρ,η (x 0 , t 0 )|) 1−ε s |Q ρ,η (x 0 , t 0 )| 1− 1−ε s + + γ(ε) ρ p (d s + δ s I s |Q ρ,η (x 0 , t 0 )|) p−1−ε s |Q ρ,η (x 0 , t 0 )| 1− p−1−ε s + + γ(ε) ρ q a + Q 2ρ,(2ρ) 2 (x 0 ,t 0 ) (d s + δ s I s |Q ρ,η (x 0 , t 0 )|) q−1−ε s |Q ρ,η (x 0 , t 0 )| 1− q−1−ε s γ(ε)I 1−ε |Q ρ,η (x 0 , t 0 )| 1 η 1 I 1 + δ s 1−ε s + I p−2 ρ p 1 I 1 + δ s p−1−ε s + + I q−2 ρ q a + Q 2ρ,(2ρ) 2 (x 0 ,t 0 ) 1 I 1 + δ s q−1−ε s γ(ε)I 1−ε |Q ρ,η (x 0 , t 0 )| 1 η + I p−2 ρ p + I q−2 ρ q a + Q 2ρ,(2ρ) 2 (x 0 ,t 0 ) γ(ε)I 1−ε ρ n . (4.12)
By the Hölder inequalitÿ
Q 3 2 ρ, η 2 (x 0 ,t 0 ) (u + δI) (1+ε)(p−1) dxdt γ(d s + δ s I s |Q ρ,η (x 0 , t 0 )|) (1+ε)(p−1) s |Q ρ,η (x 0 , t 0 )| 1− (1+ε)(p−1) s γ 1 γ + δ s (1+ε)(p−1) s I (1+ε)(p−1) |Q ρ,η (x 0 , t 0 )| γ 1 γ + δ s (1+ε)(p−1) s I 1+ε(p−1) ρ n+p . (4.13)
Combining (4.11)-(4.13) we obtain 1 ρ 1+nQ
3 2 ρ, η 2 (x 0 ,t 0 ) |∇u| p−1 dxdt γ(ε) 1 γ + δ s (1+ε)(p−1) sp I.
(4.14)
By the Hölder inequalitÿ
Q 3 2 ρ, η 2 (x 0 ,t 0 ) a(x, t)|∇u| q−1 dxdt Q 3 2 ρ, η 2 (x 0 ,t 0 ) a(x, t)(u + δI) −1−ε |∇u| q dxdt q−1 q × × Q 3 2 ρ, η 2 (x 0 ,t 0 ) a(x, t)(u + δI) (1+ε)(q−1) dxdt 1 q .
The first integral on the right-hand side of this inequality was estimated in (4.12), while the second one can be estimated similarly to (4.13)
Q 3 2 ρ, η 2 (x 0 ,t 0 ) a(x, t)(u + δI) (1+ε)(q−1) dxdt γa + Q 2ρ,(2ρ) 2 (x 0 ,t 0 ) (d s + δ s I s |Q ρ,η (x 0 , t 0 )|) (1+ε)(q−1) s |Q ρ,η (x 0 , t 0 )| 1− (1+ε)(q−1) s γa + Q 2ρ,(2ρ) 2 (x 0 ,t 0 ) 1 γ + δ s (1+ε)(q−1) s I (1+ε)(q−1) |Q ρ,η (x 0 , t 0 )| γ 1 γ + δ s (1+ε)(q−1) s I 1+ε(q−1) ρ n+q (4.15)
Collecting estimates (4.8), (4.10), (4.14) and (4.15) we arrive at
I γ γ 1 s I + γ(ε) 1 γ + δ s (1+ε)(p−1) sp I + γ(ε) 1 γ + δ s (1+ε)(q−1) sq I.
Choosing ε = 1 2 andγ, δ by the condition γ γ 1 s
+ γ( 1 γ + δ s ) 3(p−1) 2sp + γ( 1 γ + δ s ) 3(q−1) 2sq 1 2
, we reach a contradiction to (4.9), which completes the proof of the lemma.
Now we note that condition (4.2) yields
B 2ρ (x 0 ) u(x, t) κ dx = κ |B 2ρ (x 0 )| ∞ 0 | u > λ |λ κ−1 dλ = = κI κ |B 2ρ (x 0 )| ∞ 0 | B 2ρ (x 0 ) : u > λI |λ κ−1 dλ I κ + κI κ ∞ 1 λ κ− ξ 0 B −1 dλ 3I κ , κ = ξ 0 2B ,(4.16)
for all t ∈ t 0 , t 0 +
I 2 ϕ + Q 2ρ,(2ρ) 2 (x 0 ,t 0 ) ( I ρ )
.
The following lemma is the uniform upper bound for the super-solutions.
1 < l := s − p + 2 s − q + 2 < n + p n . (4.17)
Then for all m in the range
q − 2 < m < q − 1 + p − n(l − 1) ln (4.18)
there holds
I p−2 ρ n+pQ 7 4 ρ, 3 4 η (x 0 ,t 0 ) u I + 1 m−q+p dxdt + a + Q 2ρ,(2ρ) 2 (x 0 ,t 0 ) I q−2 ρ n+qQ 7 4 ρ, 3 4 η (x 0 ,t 0 ) u I + 1 m dxdt γ,(4.19)
where η =
I 2 ϕ + Q 2ρ,(2ρ) 2 (x 0 ,t 0 ) I ρ .
which together with (4.21) yield a + Q 2ρ,(2ρ) 2 (x 0 ,t 0 )
I q−2 ρ n+qQ r,η (x 0 ,t 0 ) (u + I) q−2+κ n+p ln ζ q (x)dxdt γσ −q I p−2 ρ n+pQ r,η(x0,t0) u I + 1 p−2+ κ l dxdt+a + Q 2ρ,(2ρ) 2 (x 0 ,t 0 ) I q−2 ρ n+qQ r,η(x0,t0) u I + 1 q−2+ κ l dxdt + + γρ α−q−nQ r,η (x 0 ,t 0 ) u I + 1 κ n+p ln (u + I) q−2 ζ q (x)dxdt. (4.22)
To estimate the last term on the right-hand side of (4.22) we use the Young inequality
ρ α−q−nQ r,η (x 0 ,t 0 ) u I + 1 κ n+p ln (u + I) q−2 ζ q (x)dxdt = = ρ α−q−nQ r,η(x0,t0) u I + 1 κ n+p ln (u + I) p−2 l + (p−2)(l−1) l +q−p ζ q (x)dxdt γ ρ n+pQ r,η(x0,t0) u I + 1 κ n+p n (u + I) p−2 ζ q (x)dxdt+ + γρ (α+p−q) l l−1 −n−pQ r,η (x 0 ,t 0 ) (u + I) p−2+ (q−p)l l−1 dxdt. (4.23)
The first integral on the right-hand side of (4.23) we estimate similarly to (4.20)
1 ρ n+pQ r,η(x0,t0) u I + 1 κ n+p n (u + I) p−2 ζ q (x)dxdt = I p−2 ρ n+pQ r,η(x0,t0) u I + 1 p−2+κ n+p n ζ q (x)dxdt γσ −q I p−2 ρ n+pQ r,η(x0,t0) u I + 1 p−2+κ dxdt+a + Q 2ρ,(2ρ) 2 (x 0 ,t 0 ) I q−2 ρ n+qQ r,η (x 0 ,t 0 ) u I + 1 q−2+κ
dxdt .
(4.24)
To estimate the last term on the right-hand side of (4.23) we use Lemma 4.2. By our choice of
l ρ (α+p−q) l l−1 −n−pQ r,η (x 0 ,t 0 ) (u + I) p−2+ (q−p)l l−1 dxdt = = γρ (α+p−q) s−p+2 q−p −n−pQ r,η (x 0 ,t 0 ) (u + I) s dxdt γ d s + I s |Q r,η (x 0 , t 0 )| γd s . (4.25)
So, collecting estimates (4.20), (4.22)-(4.25) we arrive at
J σ := I p−2 ρ n+pQ (1−σ)r,η (x 0 ,t 0 ) u I + 1 p−2+κ n+p ln dxdt+ + a + Q 2ρ,(2ρ) 2 (x 0 ,t 0 ) I q−2 ρ n+qQ (1−σ)r,η (x 0 ,t 0 ) u I + 1 q−2+κ n+p ln dxdt γσ −γ I p−2 ρ n+pQ r,η (x 0 ,t 0 ) u I + 1 p−2+κ dxdt+a + Q 2ρ,(2ρ) 2 (x 0 ,t 0 ) I q−2 ρ n+qQ r,η (x 0 ,t 0 ) u I + 1 q−2+κ dxdt+ + I p−2 ρ n+pQ r,η (x 0 ,t 0 ) u I + 1 p−2+ κ l dxdt+a + Q 2ρ,(2ρ) 2 (x 0 ,t 0 ) I q−2 ρ n+qQ r,η (x 0 ,t 0 ) u I + 1 q−2+ κ l dxdt+γ ,
which by the Young inequality with any ǫ ∈ (0, 1) yields
J σ ǫJ 0 + γσ −γ ǫ −γ ,
from which by iteration we obtain
I p−2 ρ n+pQ 15 8 ρ,η (x 0 ,t 0 ) u I + 1 p−2+κ n+p ln dxdt+ + a + Q 2ρ,(2ρ) 2 (x 0 ,t 0 ) I q−2 ρ n+qQ 15 8 ρ,η (x 0 ,t 0 ) u I + 1 q−2+κ n+p ln dxdt γ. (4.26)
To complete the proof of the lemma we need to obtain the reverse Hölder inequality. Define the numberκ κ by the condition
(m − q + 2) ln n + p j+1 =κ, in this settinḡ κ n + p ln i = (m − q + 2) ln n + p j+1−i < (1 + p − n(l − 1) ln ) ln n + p j+1−i = = ln n + p j−i 1, 1 i j.
We use Lemma 2.4 with ε =κ l n + p ln i for the pair of cylinders
Q i := B i × (t 0 , t 0 + η i ) and Q i+1 , B i := B ρ i (x 0 ), η i = I 2 ϕ + Q 2ρ,(2ρ) 2 (x 0 ,t 0 ) I ρ i , ρ i := 15 8 ρ(1 − 1 15 1 − 2 −i+1 1 − 2 −(j+1) ), i = 1, ..., j.
Choose ζ 1 (x) ∈ C 1 0 (B i ), ζ 1 (x) = 1 in B i+1 , 0 ζ 1 (x) 1, |∇ζ 1 (x)| γ 2 i ρ and ζ 2 (t) ∈ C 1 (R + ), 0 ζ 2 (t) 1, ζ 2 (t) = 1 for t t 0 +η i+1 , ζ 2 = 0 for t t 0 +η i , | d dt
ζ 2 (t)| γ2 iγ ϕ + Q 2ρ,(2ρ) 2 (x 0 ,t 0 ) I ρ I 2 .
By the Sobolev embedding theorem, choosing y i := p − 2 +κ n + p ln i , z i := q − 2 + +κ n + p ln i we obtain To estimate the second term on the right-hand side of (4.29) we use the Hölder inequality, by our choice of l ρ α−n−q I q−2Q Collecting the previous inequalities we arrive at the required (4.30), which completes the proof of the lemma. where η = I 2 ϕ + Q 2ρ,(2ρ) 2 (x 0 ,t 0 ) ( I ρ )
Q i+1 (u + I) y i+1 dxdt γ sup t 0 <t<t 0 +η iB i (u + I)κ l ( n+p ln ) i (ζ 1 (x)ζ 2 (t)) q dx p n × ×Q i (u + I) −2+κ l ( n+p ln ) i | ∇u(ζ 1 (x)ζ 2 (t)) q p | p dxdt γ n + p ln γi 2 γi ϕ + Q 2ρ,(2ρ) 2 (x 0 ,t 0 ) I ρ I 2Q i (u + I)κ l ( n+p ln ) i dxdt+ + ρ −pQ i (u + I) p−2+κ l ( n+p ln ) i dxdt + a + Q 2ρ,(2ρ) 2 (x 0 ,t 0 ) ρ qQ i (u + I) q−2+κ l ( n+p ln ) i dxdti+1 u I + 1 z i+1 dxdt = ρ α−n−qQ i+1 u I +1 κ( n+p ln ) i+1
.
Proof. Let ζ(x) ∈ C 1 0 (B 3 2 ρ (x 0 )), ζ(x) = 1 in B ρ (x 0 ), 0 ζ(x) 1, |∇ζ(x)| 2 ρ , test (1.4) by ζ q (x), then by Lemma 4.4 we obtain for any δ ∈ (0,
Bρ(x 0 ) u(x, t 0 )dx u(x, t)dx + γδ ε p(1+2ε) Iρ n , t 0 < t < t 0 + δη, from which the required (4.31) follows, provided that δ is small enough.
To complete the proof of Theorem 1.1 we note that by the Hölder inequality and Lemma 4.3
Q 3
Theorem 1 . 1 .
11Let u be a weak super-solution to equation (1.1), let conditions (1.2) and (A)
0 < τ τ * . (3.17)
Lemma 4. 3 .
3Fix l by the condition
t)(u + I) −2+κ l ( n+p ln ) i | ∇uζ 1 (x)ζ 2 (t) i+1 dxdt. (4.29)
+ I s |Q ρ,η (x 0 , t 0 )| q dxdt γI 2 ρ n .
,δη (x 0 ,t 0 ) ,δη (x 0 ,t 0 ) a(x, t)|∇u| q−1 dxdt B 3 2 ρ (x 0 )
( 2 j * ε ) p−2 2 (p−2)(j * +1)+1 ,b 2 = δε p−2 2 p+(p−2)j * ,which proves Proposition 3.1 in the case (3.7) and (i) with b 1 =b 1 and b 2 =b 2 = 2b 1 e
η (x 0 ,t 0 )
ρ,δ 0 η u
Proof. Fix σ ∈ (0, 1), let15 8ρ < (1 − σ)r < r < 2ρ and let ζ(x) ∈ C 1 0 (B r (x 0 )), 0 ζ(x) 1, ζ(x) = 1 in B (1−σ)r (x 0 ) and |∇ζ(x)| 1 σr . We use Lemma 2.4 with ε = 1 − κ l , where κ is the number defined in(4.16). By the Sobolev embedding theorem and by(4.16)we obtain Qr,η(x 0 ,t 0 ) (u + I) p−2+κ n+p ln ζ q (x)dxdt γ supwhich yieldsBy condition (A) we have(4.21)Let us estimate the terms on the right-hand side of (4.21). Similarly to (4.20)The integral on the right-hand side of this inequality we estimate similarly to (4.27)Collecting estimates (4.27)-(4.29) we arrive atFrom this, after a finite number of iterations, using (4.26), we obtain (4.19), which completes the proof of the lemma.Proof. By the Hölder inequality we have 1 ρQ a(x, t)|∇u| q−1 dxdtBy Lemma 2.4 with the appropriate choice ofwhich by Lemma 4.3 yieldswith some m > 1. This inequality implies the existence oft ∈ (t 0 , t 0 + δ 0 η) such thatTherefore, by (4.31)which together with (4.32) yieldsγ(δ 0 )|B 3 2 ρ (x 0 )|. (4.33)Using Theorem 3.1 from (4.33) we arrive at u(x, t) ε 0 σ 1 γ(δ 0 ) I =σ * 1 I, x ∈ B 4ρ (x 0 ),(4.34)for all time levels.A further application of the expansion of positivity Theorem 3.1 implies from (4.5) and
The Harnack inequality and the Hölder property of solutions of nonlinear elliptic equations with a nonstandard growth condition (Russian). Yu A Alkhutov, Differ. Uravn. 3312translation in Differential EquationsYu. A. Alkhutov, The Harnack inequality and the Hölder property of solutions of nonlin- ear elliptic equations with a nonstandard growth condition (Russian), Differ. Uravn. 33 (1997), no. 12, 1651-1660; translation in Differential Equations 33 (1997), no. 12, 1653- 1663 (1998).
Hölder Continuity and Harnack's Inequality for p(x)-Harmonic Functions. Yu A Alkhutov, M D Surnachev, Proceedings of the Steklov Inst. of Math. 308Yu. A. Alkhutov, M. D. Surnachev, Hölder Continuity and Harnack's Inequality for p(x)- Harmonic Functions, Proceedings of the Steklov Inst. of Math., 308 (2020), 1-21.
A Harnack inequality in Orlicz-Sobolev spaces. W Arriagada, J Huentutripay, Stud. Math. 2432W. Arriagada, J. Huentutripay, A Harnack inequality in Orlicz-Sobolev spaces, Stud. Math. 243 (2) (2018), 117-137.
Harnack inequalities for double phase functionals. P Baroni, M Colombo, G Mingione, Nonlinear Anal. 121P. Baroni, M. Colombo, G. Mingione, Harnack inequalities for double phase functionals, Nonlinear Anal. 121 (2015), 206-222.
Non-autonomous functionals, borderline cases and related function classes. P Baroni, M Colombo, G Mingione, St. Petersburg Math. J. 27P. Baroni, M. Colombo, G. Mingione, Non-autonomous functionals, borderline cases and related function classes, St. Petersburg Math. J. 27 (2016), 347-379.
Regularity for general functionals with double phase. P Baroni, M Colombo, G Mingione, Calc. Var. Partial Differential Equations. 572P. Baroni, M. Colombo, G. Mingione, Regularity for general functionals with double phase, Calc. Var. Partial Differential Equations 57 (2018), no. 2, 1-48.
The weak Harnack inequality for unbounded supersolutions of equations with generalized Orlicz growth. A Benyaiche, P Harjulehto, P Hästö, A Karppinen, J. of Diff. Equations. 275A. Benyaiche, P. Harjulehto, P. Hästö, A. Karppinen, The weak Harnack inequality for unbounded supersolutions of equations with generalized Orlicz growth, J. of Diff. Equations 275 (2021), 790-814.
Local continuity and Harnack's inequality for doublephase parabolic equations. K O Buryachenko, I I Skrypnik, Potential Analysis. 56K. O. Buryachenko, I. I. Skrypnik, Local continuity and Harnack's inequality for double- phase parabolic equations, Potential Analysis 56 (2020), 137-164.
Bounded minimisers of double phase variational integrals. M Colombo, G Mingione, Arch. Rational Mech. Anal. 2181M. Colombo, G. Mingione, Bounded minimisers of double phase variational integrals, Arch. Rational Mech. Anal. 218 (2015), no. 1, 219-273.
Regularity for double phase variational problems. M Colombo, G Mingione, Arch. Rational Mech. Anal. 2152M. Colombo, G. Mingione, Regularity for double phase variational problems, Arch. Ratio- nal Mech. Anal. 215 (2015), no. 2, 443-496.
Calderon-Zygmund estimates and non-uniformly elliptic operators. M Colombo, G Mingione, J. Funct. Anal. 270M. Colombo, G. Mingione, Calderon-Zygmund estimates and non-uniformly elliptic oper- ators, J. Funct. Anal. 270 (2016), 1416-1478.
Local boundedness of minimizers with limit growth conditions. G Cupini, P Marcellini, E Mascolo, J. Optim. Theory Appl. 166G. Cupini, P. Marcellini, E. Mascolo, Local boundedness of minimizers with limit growth conditions, J. Optim. Theory Appl. 166 (2015), 1-22.
Degenerate Parabolic Equations. E Dibenedetto, SpringerNew YorkE. DiBenedetto, Degenerate Parabolic Equations, Springer, New York, 1993.
Harnack Estimates for Quasi-Linear Degenerate Parabolic Differential Equation. E Dibenedetto, U Gianazza, V Vespri, Acta Math. 200E. DiBenedetto, U. Gianazza, V. Vespri, Harnack Estimates for Quasi-Linear Degenerate Parabolic Differential Equation. Acta Math. 200 (2008), 181-209.
Backward and Elliptic Harnack Inequalities for Non-Negative Solutions to Certain Singular Parabolic Partial Differential Equations. E Dibenedetto, U Gianazza, V Vespri, Forward , Ann Scuola Norm Sup Pisa Cl Sci. 95E. DiBenedetto, U. Gianazza, V. Vespri, Forward, Backward and Elliptic Harnack In- equalities for Non-Negative Solutions to Certain Singular Parabolic Partial Differential Equations, Ann Scuola Norm Sup Pisa Cl Sci (5), 9 (2010), 385-422.
Harnack's inequality for Degenerate and Singular Parabolic Equations. E Dibenedetto, U Gianazza, V Vespri, SpringerNew YorkE. DiBenedetto, U. Gianazza, V. Vespri, Harnack's inequality for Degenerate and Singular Parabolic Equations. New York: Springer; 2012.
Harnack Inequalities for Quasi-Minima of Variational Integrals. E Dibenedetto, N S Trudinger, Ann. Inst. H. PoincarȂ '´e Anal. Non LinȂ '´e aire. 14E. DiBenedetto and N.S. Trudinger, Harnack Inequalities for Quasi-Minima of Variational Integrals, Ann. Inst. H. PoincarȂ '´e Anal. Non LinȂ '´e aire 1(4) (1984), 295-308.
Interior continuity, continuity up to the boundary and Harnack's inequality for double-phase elliptic equations with non-logarithmic growth. O V Hadzhy, I I Skrypnik, M V Voitovych, Math. Nachrichten, in pressO. V. Hadzhy, I. I. Skrypnik, M. V. Voitovych, Interior continuity, continuity up to the boundary and Harnack's inequality for double-phase elliptic equations with non-logarithmic growth, Math. Nachrichten, in press.
Unbounded super-solutions of nonlinear equations with nonstandard growth, Boundary Value Problems. P Harjulehto, J Kinnunen, T Lukkari, ID 4834820P. Harjulehto, J. Kinnunen, T. Lukkari, Unbounded super-solutions of nonlinear equations with nonstandard growth, Boundary Value Problems 2007 (2007): Article ID 48348, 20 p.
Hölder continuity of quasiminimizers and ω-minimizers of functionals with generalized Orlicz growth. P Harjulehto, P Hästö, M Lee, Ann. Sc. Norm. Super Pisa Cl. Sci 5 XXII. 2P. Harjulehto, P. Hästö, M. Lee, Hölder continuity of quasiminimizers and ω-minimizers of functionals with generalized Orlicz growth, Ann. Sc. Norm. Super Pisa Cl. Sci 5 XXII (2021), no.2, 549-582.
Harnack's inequality for quasiminimizers with nonstandard growth conditions. P Harjulehto, T Kuusi, T Lukkari, N Marolo, M Parviainen, J. Math. Anal. Appl. 3441P. Harjulehto, T. Kuusi, T. Lukkari, N. Marolo, M. Parviainen, Harnack's inequality for quasiminimizers with nonstandard growth conditions, J. Math. Anal. Appl. 344(1) (2008), 504-520.
A property of the solutions of parabolic equations with measurable coefficients. N V Krylov, M V Safonov, Izv. Akad. Nauk SSSR Ser. Mat. 441N. V. Krylov, M. V. Safonov, A property of the solutions of parabolic equations with measurable coefficients, Izv. Akad. Nauk SSSR Ser. Mat. 44(1) (1980), 161-175.
The weak Harnack estimate for weak super-solutions to nonlinear degenerate parabolic equations. T Kuusi, Ann. Scuola Norm. Sup. Pisa Cl. Sci. 75T. Kuusi, The weak Harnack estimate for weak super-solutions to nonlinear degenerate parabolic equations, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (5), 7(4) (2008), 673-716.
Regularity for double phase problems under additional integrability assumptions. J Ok, Nonlinear Anal. 194111408J. Ok, Regularity for double phase problems under additional integrability assumptions, Nonlinear Anal. 194 (2020) 111408.
Regularity for minimizers for functionals of double phase with variable exponents. M A Ragusa, A Tachikawa, Adv. Nonl. Anal. 91M. A. Ragusa, A. Tachikawa, Regularity for minimizers for functionals of double phase with variable exponents, Adv. Nonl. Anal. 9(1) (2020), 710-728.
A note on the weak Harnack inequality for unbounded minimizers of elliptic functionals with generalized Orlicz growth. M O Savchenko, I I Skrypnik, Ye A Yevgenieva, to appearM. O. Savchenko, I. I. Skrypnik, Ye. A. Yevgenieva, A note on the weak Harnack inequality for unbounded minimizers of elliptic functionals with generalized Orlicz growth, to appear.
Harnack's inequality for degenerate double phase parabolic equations under the non-logarithmic Zhikov's condition, Ukrainian Mat. Visnyk. M O Savchenko, I I Skrypnik, Ye A Yevgenieva, in pressM. O. Savchenko, I. I. Skrypnik, Ye. A. Yevgenieva, Harnack's inequality for degenerate double phase parabolic equations under the non-logarithmic Zhikov's condition, Ukrainian Mat. Visnyk, in press.
Harnack's inequality for quasilinear elliptic equations with generalized Orlicz growth. M A Shan, I I Skrypnik, M V Voitovych, Electr. J. of Diff. Equations. 27M. A. Shan, I. I. Skrypnik, M. V. Voitovych, Harnack's inequality for quasilinear elliptic equations with generalized Orlicz growth, Electr. J. of Diff. Equations, 2021 (2021), no. 27, 1-16.
Harnack's inequality for singular parabolic equations with generalized Orlicz growth under the non-logarithmic Zhikov's condition. I I Skrypnik, 10.1007/s00028-022-00794-7Journal of Evolution Equations. 222I. I. Skrypnik, Harnack's inequality for singular parabolic equations with generalized Orlicz growth under the non-logarithmic Zhikov's condition, Journal of Evolution Equations 22(2) (2022), no. 45, https://doi.org/10.1007/s00028-022-00794-7.
B 1 classes of De Giorgi-Ladyzhenskaya-Ural'tseva and their applications to elliptic and parabolic equations with generalized Orlicz growth conditions. I I Skrypnik, M V Voitovych, Nonlinear Anal. 202I. I. Skrypnik, M. V. Voitovych, B 1 classes of De Giorgi-Ladyzhenskaya-Ural'tseva and their applications to elliptic and parabolic equations with generalized Orlicz growth conditions, Nonlinear Anal. 202 (2021) 112-135.
On the continuity of solutions of quasilinear parabolic equations with generalized Orlicz growth under non-logarithmic conditions. I I Skrypnik, M V Voitovych, Matematica Pura ed Applicatathis201I. I. Skrypnik, M. V. Voitovych, On the continuity of solutions of quasilinear parabolic equa- tions with generalized Orlicz growth under non-logarithmic conditions, Annali di Matem- atica Pura ed Applicatathis, 201(3) (2022) 1381-1416.
Local boundedness of variational solutions to evolutionary problems with nonstandard growth. T Singer, Nonl. DEA. 232T. Singer. Local boundedness of variational solutions to evolutionary problems with non- standard growth, Nonl. DEA. 23 (2) (2016) Art. 19, 23,
M D Surnachev, 10.3233/ASY-211746On the weak Harnack inequality for the parabolic p(x)-Laplacian. M. D. Surnachev, On the weak Harnack inequality for the parabolic p(x)-Laplacian, Asymptotic Analysis, DOI:10.3233/ASY-211746 (2021).
Intrinsic Harnack inequalities for parabolic equations with variable exponents. Y Wang, Nonlinear Anal. 83CONTACT INFORMATION Mariia O. SavchenkoWang Y., Intrinsic Harnack inequalities for parabolic equations with variable exponents. Nonlinear Anal. 83 (2013), 12-30. CONTACT INFORMATION Mariia O. Savchenko
. Germany Institute of Applied Mathematics and Mechanics. 3810684116Technische Universität Braunschweig, Institute for Partial Differential Equations, Universitätsplatz 2, Braunschweig ; National Academy of Sciences of UkraineBatiouk Str.Technische Universität Braunschweig, Institute for Partial Differential Equations, Universitätsplatz 2, Braunschweig, 38106, Germany Institute of Applied Mathematics and Mechanics, National Academy of Sciences of Ukraine, Batiouk Str. 19, 84116 Sloviansk, Ukraine
. I Igor, Skrypnik Institute of Applied Mathematics and Mechanics. 1984116National Academy of Sciences of UkraineBatiouk Str.Igor I. Skrypnik Institute of Applied Mathematics and Mechanics, National Academy of Sciences of Ukraine, Batiouk Str. 19, 84116 Sloviansk, Ukraine
Stus Donetsk National University, 600-richcha Str. 21, 21021 Vinnytsia, Ukraine [email protected] Yevgeniia A. Yevgenieva. Vasyl, Vasyl' Stus Donetsk National University, 600-richcha Str. 21, 21021 Vinnytsia, Ukraine [email protected] Yevgeniia A. Yevgenieva
[email protected] 1, 39106 Magdeburg. Sloviansk, Ukraine19Max Planck Institute for Dynamics of Complex Technical Systems ; National Academy of Sciences of UkraineBatiouk Str.Max Planck Institute for Dynamics of Complex Technical Systems, Sandtorstrasse 1, 39106 Magdeburg, Germany Institute of Applied Mathematics and Mechanics, National Academy of Sciences of Ukraine, Batiouk Str. 19, 84116 Sloviansk, Ukraine [email protected]
| []
|
[
"Webly Supervised Learning of Convolutional Networks",
"Webly Supervised Learning of Convolutional Networks"
]
| [
"Xinlei Chen [email protected] \nCarnegie Mellon University\nCarnegie Mellon University\n\n",
"Abhinav Gupta [email protected] \nCarnegie Mellon University\nCarnegie Mellon University\n\n"
]
| [
"Carnegie Mellon University\nCarnegie Mellon University\n",
"Carnegie Mellon University\nCarnegie Mellon University\n"
]
| []
| In the last few years, we have made enormous progress in learning visual representations via convolutional neural networks (CNNs). We believe CNNs get their edge due to their ability to imbibe large amounts of data. Therefore, as we move forward a key question arises: how do we move from million image datasets to billion image counterparts? Do we continue to manually label images with the hope of scaling up the labeling to billion images? It is in this context that webly supervised learning assumes huge importance: if we can exploit the images on the web for training CNNs without manually labeling them, it will be a win-win for everyone. We present a simple yet powerful approach to exploit web data for learning CNNs. Specifically, inspired by curriculum learning algorithms, we present a two-step approach for learning CNNs. First, we use simple, easy images to train an initial visual representation. We then use this initial CNN and adapt it to harder Flickr style scene images by exploiting the structure of data and categories (using a relationship graph). We demonstrate that our two-stage CNN performs very competitively to the Im-ageNet pretrained network architecture for object detection without even using a single ImageNet training label. We also demonstrate the strength of webly supervised learning by localizing objects in web images and training a R-CNN style[19]detector. To the best of our knowledge, we show the best performance on VOC 2007 where no VOC training data is used. | 10.1109/iccv.2015.168 | [
"https://arxiv.org/pdf/1505.01554v1.pdf"
]
| 5,658,192 | 1505.01554 | 79851b2e76c5316b2a4074448c4d1663e43a32f0 |
Webly Supervised Learning of Convolutional Networks
Xinlei Chen [email protected]
Carnegie Mellon University
Carnegie Mellon University
Abhinav Gupta [email protected]
Carnegie Mellon University
Carnegie Mellon University
Webly Supervised Learning of Convolutional Networks
In the last few years, we have made enormous progress in learning visual representations via convolutional neural networks (CNNs). We believe CNNs get their edge due to their ability to imbibe large amounts of data. Therefore, as we move forward a key question arises: how do we move from million image datasets to billion image counterparts? Do we continue to manually label images with the hope of scaling up the labeling to billion images? It is in this context that webly supervised learning assumes huge importance: if we can exploit the images on the web for training CNNs without manually labeling them, it will be a win-win for everyone. We present a simple yet powerful approach to exploit web data for learning CNNs. Specifically, inspired by curriculum learning algorithms, we present a two-step approach for learning CNNs. First, we use simple, easy images to train an initial visual representation. We then use this initial CNN and adapt it to harder Flickr style scene images by exploiting the structure of data and categories (using a relationship graph). We demonstrate that our two-stage CNN performs very competitively to the Im-ageNet pretrained network architecture for object detection without even using a single ImageNet training label. We also demonstrate the strength of webly supervised learning by localizing objects in web images and training a R-CNN style[19]detector. To the best of our knowledge, we show the best performance on VOC 2007 where no VOC training data is used.
Introduction
With an enormous amount of visual data online, web and social media are among the most important sources of data for vision research. Vision datasets such as ImageNet [43], PASCAL-VOC [13] and MS-COCO [30] have been created from Google or Flickr by harnessing human intelligence to filter out the noisy images returned by search engines. The resulting clean data has helped significantly advance performance on relevant tasks [15,25,19,62]. For example, training a neural network on ImageNet followed by fine-mitsubishi saxophone jasmine chihuahua Easy Images Hard Images tuning on PASCAL-VOC has led to the state-of-the-art performance on the object detection challenge [25,19]. But human supervision comes with a cost and its own problems (e.g. inconsistency, incompleteness and bias [55]). Therefore, an alternative, and more appealing way is to learn visual representations from the web data directly, without using any manual labeling. But the big question is, can we actually use millions of images online without using any human supervision? In fact, researchers have pushed hard to realize this dream of learning visual representations from web data. These efforts have looked at different aspects of webly supervised learning such as:
• What are the good sources of data? Researchers have tried various search engines ranging from text/image search engines [4,59,57,16] to Flickr images [35].
• What types of data can be exploited? Researchers have tried to explore different types of data, like images-only [28,9], images-with-text [4,45] or even images-with-n-grams [12]).
• How do we exploit the data? Extensive algorithms (e.g. probabilistic models [16,28], exemplar based models [9], deformable part models [12], self organizing map [20] etc.) have been developed.
• What should we learn from Web data? There has been lot of effort ranging from just cleaning data [14,60,35] to training visual models [28,56,29], to even discovering common-sense relationships [9].
Nevertheless, to the best of our knowledge, while many of these systems have seen orders of magnitudes larger number of images, their performance has never shown to match up against contemporary methods that receive extensive supervision from humans. Why is that? Of course the biggest issue is the data itself: 1) it contains noise, and 2) is has bias -image search engines like Google usually operate in the high-precision low-recall regime and tend to be biased toward images where a single object is centered with a clean background and a canonical viewpoint [31,3,30]. But is it just the data? We argue that it is not just the data, but also the ability of algorithms to learn from large data sources and generalize. For example, traditional approaches which use hand-crafted features (e.g. HOG [9]) and classifiers like support vector machines [12] have very few parameters (less capacity to memorize) and are therefore unlikely to effectively use large-scale training data. On the other hand, memory based nearest neighbors classifier can better capture the distribution given a sufficient amount of data, but are less robust to the noise. Fortunately, Convolutional Neural Networks (CNNs) [25] have resurfaced as a powerful tool for learning from large-scale data: when trained with ImageNet [43] (∼1M images), it is not only able to achieve state-of-the-art performance for the same image classification task, but the learned representation can be readily applied to other relevant tasks [19,62].
Attracted by its amazing capability to harness large-scale data, in this paper, we investigate webly supervised learning for CNNs (See Figure 1). Specifically, 1) we present a two-stage webly supervised approach to learning CNNs. First we show that CNNs can be readily trained for easy categories with images retrieved by search engines with no bells or whistles. We then adapt this network to hard (Flickr style) web images using the relationships discovered in easy images; 2) we show webly supervised CNNs also generalize well to relevant vision tasks, giving state-of-the-art performance compared to ImageNet pretrained CNNs if there is enough data; 3) we show state-of-the-art performance on VOC data for the scenario where not a single VOC training image is used -just the images from the web. 4) We also show competitive results on scene classification. To the best of our knowledge, our paper is one of the first papers to achieve competitive or even better object detection performance than ImageNet trained CNNs for the same model architecture. We believe this paper opens up avenues for exploitation of Web data to achieve next cycle of performance gain in vision tasks (and at no human labeling costs!).
Why Webly Supervised?
Driven by CNNs, the field of object detection has seen a dramatic churning in the past two years, which has resulted in a significant improvement in the state-of-the-art performance. But as we move forward, how do we further improve performance of CNN-based approaches? We believe there are two directions. The first and already explored area is designing deeper networks [48,53]. We believe a more juicier direction is to feed more data into these networks (in fact, deeper networks would often need more data to train). But more data needs more human labeling efforts. Therefore, if we can exploit web data for training CNNs, it would help us move from million to billion image datasets in the future. In this paper, we take the first step in demonstrating that it is indeed possible to have competitive or even better performance to ImageNet pretrained CNNs by just exploiting web data at much larger scales.
Related Work
Mining high-quality visual data and learning good visual representation for recognition from the Web naturally form two aspects of a typical chicken-and-egg problem in vision. On one hand, clean and representative seed images can help build better and more powerful models; but on the other hand, models that recognize concepts well are crucial for indexing and retrieving image sets that contain the concept of interest. How to attack this problem has long been attractive to both industry and academia. From models to data: Image retrieval [50,49] is a classical problem in this setting. It is not only an active research topic, but also fascinating to commercial image search engines and photo-sharing websites since they would like to better capture data streams on the Internet and thus better serve user's information need. Over the years, various techniques (e.g. click-through data) have been integrated to improve search engine results. Note that, using pretrained models (e.g. CNN [60] Figure 2. Outline of our approach. We first train a CNN using easy images from Google Search (above). This CNN is then used to find relationships and initialize another network (below) which will train on harder scene images on the web. Finally, we use this network to localize objects in the images and train R-CNN detectors by using CNN features from our network.
into this category, since extensive human supervision has already been used.
From data to models: A more interesting and challenging direction is the opposite -can models automatically discover the hidden structures in the data and be trained directly from Web data? Many people have pushed hard in this line of research. For example, earlier work focused on jointly modeling images and text and used text based search engines for gathering the data [4,45,44]. This tends to offer less biased training pairs, but unfortunately such an association is often too weak and hard to capture, since visual knowledge is usually regarded as common sense knowledge and too obvious to be mentioned in the text [9]. As the image search engines became mature, recent work focused on using them to filter out the noise when learning visual models [17,59,57,56,29,12,20]. But using image search engines added more bias to the gathered data [6,31,30]. To combat both noise and data bias, recent approaches have taken a more semi-supervised approach. In particular, [28,9] proposed iterative approaches to jointly learn models and find clean examples, hoping that simple examples learned first can help the model learn harder, more complex examples [2,26]. However, to the best of our knowledge, human supervision is still a clear winner in performance, regardless of orders of magnitudes more data seen by many of these Web learners.
Our work is also closely related to another trend in computer vision: learning and exploiting visual representation via CNNs [25,19,54,21]. However, learning these CNNs from noisy labeled data [52,42] is still an open challenge. Following the recent success of convolutional networks and curriculum learning [2,26,27], we demonstrate that, while directly training CNNs with high-level or finegrained queries (e.g. random proper nouns, abstract con-cepts) and noisy labels (e.g. Flickr tags) can still be challenging, a more learning approach might provide us the right solution. Specifically, one can bootstrap CNN training with easy examples first, followed by a more extensive and comprehensive learning procedure with similarity constraints to learn visual representations. We demonstrate that visual representations learned by our algorithm performs very competitively as compared to ImageNet trained CNNs.
Finally, our paper is also related to learning from weak or noisy labels [10,36,11,51,58]. There are some recent works showcasing that CNNs trained in a weakly supervised setting can also develop detailed information about the object intrinsically [47,34,38,5,37]. However, different from the assumptions in most weakly-supervised approaches, here our model is deprived of clean human supervision altogether (instead of only removing the location or segmentation). Most recently, novel loss layers have also been introduced in CNNs to deal with noisy labels [52,42]. On the other hand, we assume a vanilla CNN is robust to noise when trained with simple examples, from which a relationship graph can be learned, and this relationship graph provides powerful constraints when the network is faced with more challenging and noisier data.
Approach
Our goal is to learn deep representations directly from the massive amount of data online. While it seems that CNNs are data-guzzlers -small datasets plus millions of parameters can easily lead to over-fitting, we found it is still hard to train a CNN naively with random image-text/tag pairs. For example, most Flickr tags correspond to meta information and specific locations, which usually results in extremely high intra-tag variation. One possibility is to use commercial text-based image search engine to increase di-versity in the training data. But if thousands of query strings are used some of them might not correspond to a visualizable concept and some of the query strings might be too fine grained (e.g. random names of a person or abstract concepts). These non-visualizable concepts and fine-grained categories incur unexpected noise during the training process 1 . One can use specifically designed techniques [9,12] and loss layers [52,42] to alleviate some of these problems. But these approaches are based on estimating the empirical noise distribution which is non-trivial. Learning the noise distribution is non-trivial since it is heavily dependent on the representation, and weak features (e.g. HOG or when the network is being trained from scratch) often lead to incorrect estimates. On the other hand, for many basic categories commonly used in the vision community, the top results returned by Google image search are pretty clean. In fact, they are so clean that they are biased towards iconic images where a single object is centered with a clean background in a canonical viewpoint [31,40,3,30]. This is good news for learning algorithm to quickly grasp the appearance of a certain concept, but a representation learned from such data is likely biased and less generalizable. So, what we want is an approach that can learn visual representation from Flickr-like images.
Inspired by the philosophy of curriculum learning [2,26,27], we take a two-step approach to train CNNs from the Web. In curriculum learning, the model is designed to learn the easy examples first, and gradually adapt itself to harder examples. In a similar manner, we first train our CNN model from scratch using easy images downloaded from Google Image Search. Once we have this representation learned we try to feed harder Flickr images for training. Note that training with Flickr images is still difficult because of noise in the labels. Therefore, we apply constraints during fine-tuning with Flickr images. These constraints are based on similarity relationships across different categories. Specifically, we propose to learn a relationship graph and initial visual representation from the easy examples first, and later during fine-tuning, the error can backpropagate through the graph and get properly regularized. To demonstrate the effectiveness of our representation, we do two experiments: (a) First, we use our final trained network using both Google and Flickr images to test on VOC 2007 and 2012 dataset. We use R-CNN pipeline for testing our representations; (b) We train object detectors from the cleaned out web data and perform localization. These detectors are tested on standard VOC 2007 dataset. The outline of our approach is shown in Figure 2.
Initial Network
As noted above, common categories used in vision nowadays are well-studied and search engines give relatively clean results. Therefore, instead of using random noun phrases, we obtained three lists of categories from ImageNet Challenge [43], SUN database [61] and NEIL knowledge base [9]. ImageNet syn-sets are transformed to its surface forms by just taking the first explanation, with most of them focusing on object categories. To better assist querying and reducing noise, we remove the suffix (usually correspond to attributes, e.g. indoor/outdoor) of the SUN categories. Since NEIL is designed to query search engines, its list is comprehensive and favorable, we collected the list for objects and attributes and removed the duplicate queries with ImageNet. The category names are directly used to query Google for images. Apart from removing unreadable images, no pre-processing is performed. This leave us with ∼600 images for each query. All the images are then fed directly into the CNN as training data.
For fair comparison, we use the same architecture (besides the output layer) as the BLVC reference network [24], which is a slight variant of of the original network proposed by [25]. The architecture has five convolutional layers followed by two fully connected layers. After seventh layer, another fully connected layer is used to predict class labels.
Representation Adaptation with Graph
After converging, the initial network has already learned favorable low-level filters to represent the "visual world" outlined by Google Image Search. However, as mentioned before, this "visual world" is biased toward clean and simple images. For example, it was found that more than 40% of the cars returned by Google are viewed from a 45 degree angle, and horses rarely occur lying on the ground [31]. Moreover, when a concept is a product, lots of the images are wallpapers and advertisements with artificial background and the concept of interest centered (and of course, viewed from the best selling view). On the other hand, photo-sharing websites like Flickr have more realistic images since the users upload their own pics. Though photographic bias still exist, most of the images are closer-looking to the visual world we experience everyday. Datasets constructed from them are shown to generalize better [55,30]. Therefore, as a next step, we aim to narrow the gap by fine-tuning our representation on Flickr images 2 .
For fine-tuning the network with hard Flickr images, we again feed these images as-is for training, with the query words acting as class labels. While we are getting more realistic images, we did notice that the data becomes noisier. Powerful and generalizable as CNNs are, they are still likely to be diluted by the noisy examples over the fine-tuning process.
In an noisy open-domain environment, mistakes are unavoidable. But humans are more intelligent: we not only learn to recognize concepts independently, but also build up interconnections and develop theories to help themselves better understand the world [7]. Inspired by this, we want to train CNNs with such relationships -with their simplest form being pair-wise look-alike relationships [46,9,12].
One way to obtain relationships is through extra knowledge sources like WordNet [33] or Word2Vec [32]. However, they are not developed for the visual domain we are interested in. Instead, we take a data-driven approach to discover such relationships in our data: we assume the network will intrinsically develop connections between different categories when clean examples are offered, and all we have to do is to distill the knowledge out.
We take a simple approach by just testing our network on the training set, and take the confusion matrix as the relationships. Mathematically, for any pair of concepts i and j, the relationship R ij is defined as:
R ij = P (i|j) = k∈Ci CN N (j|I k ) |C i | ,(1)
where C i is the set of indexes for images that belong to concept i, | · | is the cardinality function, and given pixel values I k , CN N (j|I k ) is the network's belief on how likely image k belongs to concept i. We want our graph to be sparse, therefore we just used the top K (K = 5 in our experiments) and re-normalized the probability mass. After constructing the relationship graph, we put this graph (represented as a matrix) on top of the seventh layer of the network, so that now the soft-max loss function becomes:
L = k i R il k log(CN N (i|I k )).(2)
In this way, the network is trained to predict the context of a category (in terms of relationships to other categories), and the error is back-propagated through the relationship graph to lower layers. Note that, this extra layer is similar to [52], in which R ij is used to characterize the label-flip noise. Different from them, we do not assume all the categories are mutually exclusive, but instead inter related. For example, "cat" is a hyper-class of "Siamese cat", and its reasonable if the model believes some examples of "Siamese cat" are more close to the average image of a "cat" than that of the "Siamese cat" and vice versa. Please see experimental section for our empirical validation of this assumption. For fear of semantic drift, in this paper we keep the initially learned graph structure fixed, but it would be interesting to see how updating the relationship graph performs (like [9]).
Localizing Objects
To show the effectiveness of our representation, after fine-tuning we go back to the problem of organizing the data on the web: that is, clean up the data by removing noise and localizing objects in the images. But shouldn't the CNN have learned intrinsically the salient regions in an image for the concepts of interest [47,5,37]? Isn't getting clean data as simple as ranking the initial set of images based on the soft-max output? We argue that, while the network has already learned to model the positive examples when solving the multi-way classification problem, it has not yet learned the distribution of negative data, e.g. background clutter. While scenes and attributes are more "stufflike" and thus finding clean full images might be enough, it is important for objects to be localized well, particularly when they are small in the original image. In fact, since the network is optimized for a classification loss, the representation is learned to be spatially invariant (e.g., the network should output "orange" regardless of where it exists in the image, and how many there are), precisely localizing the object is a very challenging task.
To overcome the difficulty, we developed a subcategory discovery based approach similar to [9] to localize the object given a collection of search engine results. It is based on Google's bias toward images with a single centered object, so we can use them as seeds to locate similar examples in other images of the collection. Apart from the exemplar based pipeline, there are some significant differences:
• Instead of sliding window based detection framework, we used object proposals from EdgeBox [63], so that for each image, only a few hundred of patches 3 are examined.
• Given the proposals, we compute the seventh layer output (f c7) to represent each patch, instead of HOG. The original alignment is lost, but the feature has better generalization power (See qualitative results from Figure 4 ).
• For Exemplar-LDA [22], we extracted random patches from all the downloaded Web data to build the negative correlation matrix.
• Affinity propagation [18] is used in [9] for subcategories, whereas we just merged the initial clusters (formed by top detections) from bottom up to get the final subcategories, which works well and takes less time.
Finally after getting the clean examples, we train detectors following the R-CNN [19] approach. In the first trial, It can be seen that the relationships are pretty good: for top ones, even though the network can differentiate the categories really well, when it gets confused, it gets confused to similar looking ones. Even for bottom ones when the network gets confused heavily, it is confusing between semantically related categories. Even for very noisy categories like "bossa nova", the network is able to figure out it is related to musical instruments.
we simply used the positive examples as-is, and negative patches are randomly sampled from YFCC dataset 4 . Typically, hundreds of positive instances per category are available for training. While this number is comparable to the PASCAL VOC 2007 trainval set (except car, chair and person), one big advantage of Internet is its nearly infinite limit on data. Therefore, we tried two augmentation strategies:
Data augmentation We followed [19] and did data augmentation on the positive training examples. We again used EdgeBox [63] to propose regions of interest on images where the positive example lies in. And whenever a proposal has a higher than 0.5 overlapping (measured by IoU, intersection over union) with any of the positive bounding box, we add it to the pool of our training data.
Category expansion
Experimental Results
We now describe our experimental results. Our goal is to demonstrate that the visual representation learned using 4 labs.yahoo.com/news/yfcc100m/ two-step webly supervised learning is meaningful. For this, we will do four experiments: 1) First, we will show that our learned CNN can be used for object detection. Here, we use the approach similar to R-CNN [19] where we will fine-tune our learned CNN using VOC data. This is followed by learning SVM-detectors using CNN features. 2) We will also show that our CNN can be used to clean up the Web data: that is, discover subcategories and localize the objects in Web images. 3) We will train detectors using the cleaned up web data and evaluate them on VOC data. Note in this case, we will not use any VOC training images. We will only use web images to train both the CNN and the subsequent SVMs. 4) Finally, we will show scene classification results to further showcase the usefulness of the trained representation.
All the networks are trained with the Caffe Toolbox [24]. In total we have 2,240 objects, 89 attributes, and 874 scenes. Two networks are trained: 1) The object-attribute network (GoogleO), where the output dimension is 2,329, and 2) All included network (GoogleA), where the output dimension is 3,203. For the first network, ∼1.5 million images are downloaded from Google Image Search. Combining scene images, ∼2.1 million images are used in the second network. The first network is then fine-tuned with ∼1.2 million Flickr images (Flickr). We set the batch size to be 256 and start with a learning rate of 0.01. The learning rate is reduced by a factor of 10 after every 150K iterations, and we stop training at 450K iterations. For fine-tuning, we choose a step size of 30K and train the network for a total of 100K iterations. Is Confusion Matrix Informative for Relationships? Before we delve into the results, we want to first show if the following assumption holds: whether the network has learned to discover the look-alike relationships between concepts in the confusion matrix. To verify the quality of the network, we take the GoogleO net and visualize the top-5 most confusing concepts (including self) to some of the categories. To ensure our selection has a good coverage, we first rank the diagonal of the confusing matrix (accuracy) in the descending order. Then we randomly sample 3 categories from the top-100, bottom-100, and middle-100 from the list. The visualization can be seen in Figure 3.
PASCAL VOC Object Detection
Next, we test our webly trained CNN model for the task of object detection. We run our experiments on VOC 2007 and VOC 2012 datasets. We follow the R-CNN pipeline: given our trained CNN, we first fine-tune the network using trainval images. We then learn a SVM using trainval on fine-tuned f c7 features. For VOC 2007, we used a step size of 20K and 100K iterations of fine-tuning. For VOC 2012, since the number of trainval images is doubled, we use 200K iterations of fine-tuning with a step size of 50K. For fair comparison, since we did not tune any parameters in R-CNN, the settings for SVM training are kept identical to those for ImageNet. Since we trained three different networks with different types of training data, we report three different numbers (GoogleO-FT, GoogleA-FT, Flickr-FT). Note that Flickr-FT network corresponds to learning both on Google and Flickr data using two step process and is ini-tialized with GoogleO network.
As baselines we compare against R-CNN trained using CNN-Scratch features [1] (VOC-Scratch), R-CNN trained on ImageNet features without fine-tuning (ImageNet-NFT), R-CNN trained on ImageNet features with fine-tuning on VOC trainval (ImageNet-FT) and our webly trained CNN without fine-tuning (GoogleO-NFT, GoogleA-NFT and Flickr-NFT). The results on VOC 2007 are indicated in Table 1. As the results show, all our networks outperform VOC-Scratch by a huge margin. When it comes to results without fine-tuning on VOC, our Flickr-NFT performs exactly similar to Imagenet-NFT (mAP = 44.7). This indicates that the webly supervised CNN learns visual representation comparable to ImageNet pretrained CNN. After fine-tuning, all of our webly supervised CNN perform comparably to Imagenet pretrained CNN.
The results on VOC 2012 are reported in Table 2. In this case, our two-stage CNN with fine-tuning (Flickr-FT) outperforms the ImageNet pretrained network. Both in case of VOC 2007 and 2012, our webly supervised CNN seems to work better for vehicles since we have lots of data for cars and other vehicles (∼500). On the other hand, ImageNet CNN seems to outperform our network on animals such as cat and dog. This is probably because ImageNet has a lot more data for animals. This indicates that the performance of our network might increase further if more query strings for animals are added. Note that the original R-CNN paper fine-tuned the ImageNet network using train data alone and therefore reports lower performance. For fair comparison, we fine-tuned both ImageNet network and our webly supervised network on combined trainval images. Figure 4. We use the learned CNN representation to discover bounding boxes for different categories in the training data as well as discover subcategories. Sample results are shown in the figure.
Object Localization
In this subsection, we are interested to see if we can detect objects without using a single PASCAL training image. We believe this is possible since we can localize objects automatically in web images with our proposed approach (see Section 3.3). Please refer to Figure 4 for the qualitative results on the training localization we can get with f c7 features. Compared to [9], the subcategories we get are less homogeneous (e.g. people are not well-aligned, objects in different view points are clustered together). But just because of this more powerful representation (and thus better distance metric), we are able to dig out more signal from the training set -since semantically related images can form clusters and won't be purged as noise when an image is evaluated by its nearest neighbors.
Using localized objects, we train R-CNN based detectors to detect objects on the PASCAL VOC 2007 test set. We compare our results against [12], who used Google N-grams to expand the categories (e.g. "horse" is expanded to "jumping horse", "racing horse", etc.) and whose models were also directly trained from the web. The results are shown in Table 3. This demonstrates that our framework could be a powerful way to learn detectors on the fly, without labeling any training images, while still yielding respectable results. We plan to release this as a service for everyone to train R-CNN detectors on the fly.
Failure Modes for Webly Trained Detectors
In this section, we would like to gain more insight into the potential issues of our webly supervised object detection pipeline. We took the results from our best model (Flickr-C) and fed them to the publicly available diagnosis tool [23]. Figures 5 and 6 highlight some of the interesting observations we found.
Firstly, localization error accounts for a majority of the false positives. Since Google Image Search does not provide precise location information, the background is inevitably included when the detector is trained (e.g. aeroplane, dining table). Multiple instances of an object can also occur in the image, but the algorithm has no clue that they should be treated as separate pieces (e.g. bottle). Moreover, since our CNN is directly trained on full images, the objective function also biases the representation to be invariant (to spatial locations, etc.). All these factors cause localization issues.
Second, we did observe some interesting semantic drift between PASCAL categories and Google categories. For example, bicycle can also mean motorcycle on Google. Sense disambiguation for this polysemous word [44,8] is needed here. Also note that our person detector is confused with cars; we suspect this is because "caprice" was added as a related category, but it can also mean a car ("chevy caprice"). How to handle such issues is a future research topic by itself.
Figure 5. Diagnosis analysis using [23] for better understanding of the failure modes of our webly supervised pipeline (error modes: Loc, Sim, Oth, BG). Please see top false positives in Figure 6.
Scene Classification
To further demonstrate the usage of CNN features directly learned from the web, we also conducted scene classification experiments on the MIT Indoor-67 dataset [39]. For each image, we simply computed the fc7 feature vector, which has 4096 dimensions. We did not use any data augmentation or spatial pooling technique, with the only pre-processing step normalizing the feature vector to unit l2 length [41]. The default SVM parameters (C=1) were fixed throughout the experiments. Table 4 summarizes the results on the default train/test split. We can see our web based CNNs achieved very competitive performance: all three networks achieved an accuracy at least on par with ImageNet pretrained models. Fine-tuning on hard images enhanced the features, but adding scene-related categories gave a huge boost to 66.5 (comparable to the CNN trained on the Places database [62], 68.2). This indicates CNN features learned directly from the web are indeed generic.
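The classification protocol above is simple enough to sketch directly (a minimal version, assuming the fc7 features for the default Indoor-67 split are precomputed; file names are placeholders):

```python
import numpy as np
from sklearn.svm import LinearSVC

# Precomputed 4096-d fc7 features for the default train/test split.
Xtr, ytr = np.load("indoor67_train_fc7.npy"), np.load("indoor67_train_y.npy")
Xte, yte = np.load("indoor67_test_fc7.npy"), np.load("indoor67_test_y.npy")

# Only pre-processing step: normalize each feature vector to unit l2 length.
Xtr /= np.linalg.norm(Xtr, axis=1, keepdims=True)
Xte /= np.linalg.norm(Xte, axis=1, keepdims=True)

# Default SVM parameters (C=1), fixed throughout the experiments.
clf = LinearSVC(C=1.0).fit(Xtr, ytr)
print("accuracy:", (clf.predict(Xte) == yte).mean())
```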
Moreover, since we can easily get images from the web for semantic labels (e.g. actions, N-grams, etc.) other than objects or scenes, webly supervised CNNs bear great potential to perform well on many relevant tasks, with a cost as low as providing a category list to query for that domain.
Conclusion
We have presented an approach to train CNNs using noisy web data. Specifically, we have presented a two-stage approach: in the first stage we train a CNN with easy images downloaded from Google Image Search. This network is then used to discover structure in the data in terms of similarity relationships. Finally, we fine-tune the original network on hard (but realistic) Flickr images using the relationship graph. We demonstrate that our two-stage CNN comes close to the ImageNet pretrained architecture on VOC 2007, and outperforms it on VOC 2012. We would like to emphasize that our CNN was trained with zero explicit human labels. We even show that our representation is powerful enough to organize the web data and learn category detectors directly from it. To the best of our knowledge, we show the best performance on VOC 2007 where no VOC training data is used. Additional results on scene understanding further demonstrate the effectiveness of our webly learned representation.
Figure 1. We investigate the problem of training a webly supervised network. Two types of visual data are freely available online: image search engine results (left) and photo-sharing websites (right). We train a two-stage network bootstrapping from clean examples retrieved by Google and enhanced by potentially noisy images from Flickr.
Figure 3. Relationships between different categories with the confusion matrix. The horizontal axis is for categories, which are ranked based on the CNN's accuracy. Here we show random examples from three parts of the distribution: 3 top ones, 3 middle ones, and 3 bottom ones in terms of accuracy.
For our approach, we try five different settings: a) GoogleO: features are based on the GoogleO CNN and the bounding boxes are extracted only on easy Google images; b) GoogleA: features are based on the GoogleA CNN and the bounding boxes are extracted on easy images alone; c) Flickr: features are based on the final two-stage CNN and the bounding boxes are extracted on easy images; d) Flickr-M: features are based on the final two-stage CNN and the bounding boxes are extracted on easy and hard images; e) Flickr-C: features are based on the final two-stage CNN and the positive data includes bounding boxes of the original and related categories. From the results, we can see that in all cases the CNN based detector boosts the performance a lot.
Figure 6. Top false positives for selected categories on PASCAL VOC 2007 detection with Flickr-C. From top down: aeroplane, bicycle, bottle, dining table, and person.
[Figure 2 residue: two-stage pipeline diagram. A CNN (layer widths 96-256-384-384-256-4096-4096) is first trained on easy Google images and then adapted to hard Flickr images; panel labels read Section 3.1: Initialize Network, Section 3.2: Representation Adaptation, Section 3.3: Localizing Objects, with example categories such as school bus, tabby, person, tiger, Bill Gates, bus and lemon.]
VOC 2007 test       aero bike bird boat bottle bus  car  cat  chair cow  table dog  horse mbike person plant sheep sofa train tv   mAP
ImageNet-NFT [19]   57.6 57.9 38.5 31.8 23.7   51.2 58.9 51.4 20.0  50.5 40.9  46.0 51.6  55.9  43.3   23.3  48.1  35.3 51.0  57.4 44.7
GoogleO-NFT         57.1 59.9 35.4 30.5 21.9   53.9 59.5 40.7 18.6  43.3 37.5  41.9 49.6  57.7  38.4   22.8  45.2  37.1 48.0  54.5 42.7
GoogleA-NFT         54.9 58.2 35.7 30.7 22.0   54.5 59.9 44.7 19.9  41.0 34.5  40.1 46.8  56.2  40.0   22.2  45.8  36.3 47.5  54.2 42.3
Flickr-NFT          55.3 61.9 39.1 29.5 24.8   55.1 62.7 43.5 22.7  49.3 36.6  42.7 48.9  59.7  41.2   25.4  47.7  41.9 48.8  56.8 44.7
VOC-Scratch [1]     49.9 60.6 24.7 23.7 20.3   52.5 64.8 32.9 20.4  43.5 34.2  29.9 49.0  60.4  47.5   28.0  42.3  28.6 51.2  50.0 40.7
ImageNet-FT [19]    64.2 69.7 50.0 41.9 32.0   62.6 71.0 60.7 32.7  58.5 46.5  56.1 60.6  66.8  54.2   31.5  52.8  48.9 57.9  64.7 54.2
GoogleO-FT          65.0 68.1 45.2 37.0 29.6   65.4 73.8 54.0 30.4  57.8 48.7  51.9 64.1  64.7  54.0   32.0  54.9  44.5 57.0  64.0 53.1
GoogleA-FT          64.2 68.3 42.7 38.7 26.5   65.1 72.4 50.7 28.5  60.9 48.8  51.2 60.2  65.5  54.5   31.1  50.5  48.5 56.3  60.3 52.3
Flickr-FT           63.7 68.5 46.2 36.4 30.2   68.4 73.9 56.9 31.4  59.1 46.7  52.4 61.5  69.2  53.6   31.6  53.8  44.5 58.1  59.6 53.3
Table 1. Results on VOC 2007 (PASCAL data used).
VOC 2012 test       aero bike bird boat bottle bus  car  cat  chair cow  table dog  horse mbike person plant sheep sofa train tv   mAP
ImageNet-FT [19]    68.1 63.8 46.1 29.4 27.9   56.6 57.0 65.9 26.5  48.7 39.5  66.2 57.3  65.4  53.2   26.2  54.5  38.1 50.6  51.6 49.
Table 2. Results on VOC 2012 (PASCAL data used).
VOC 2007 test   aero bike bird boat bottle bus  car  cat  chair cow  table dog  horse mbike person plant sheep sofa train tv   mAP
LEVAN [12]      14.0 36.2 12.5 10.3 9.2    35.0 35.9 8.4  10.0  17.5 6.5   12.9 30.6  27.5  6.0    1.5   18.8  10.3 23.5  16.4 17.1
GoogleO         30.2 34.3 16.7 13.3 6.1    43.6 27.4 22.6 6.9   16.4 10.0  21.3 25.0  35.9  7.6    9.3   21.8  17.3 31.0  18.1 20.7
GoogleA         29.5 38.3 15.1 14.0 9.1    44.3 29.3 24.9 6.9   15.8 9.7   22.6 23.5  34.3  9.7    12.7  21.4  15.8 33.4  19.4 21.5
Flickr          32.6 42.8 19.3 13.9 9.2    46.6 29.6 20.6 6.8   17.8 10.2  22.4 26.7  40.8  11.7   14.0  19.0  19.0 34.0  21.9 22.9
Flickr-M        32.7 44.3 17.9 14.0 9.3    47.1 26.6 19.2 8.2   18.3 10.0  22.7 25.0  42.5  12.0   12.7  22.2  20.9 35.6  18.2 23.0
Flickr-C        30.2 41.3 21.7 18.3 9.2    44.3 32.2 25.5 9.8   21.5 10.4  26.7 27.3  42.8  12.6   13.3  20.4  20.9 36.2  22.8 24.4
Table 3. Webly supervised VOC 2007 detection results (no VOC training data used).
[Figure 4 residue: sample discovered subcategory labels include alligator, lizard, hulk, polo ball.]
Table 4. Scene Classification Results on MIT Indoor-67 Dataset.
Method          Indoor-67 Accuracy
ImageNet [62]   56.8
OverFeat [41]   58.4
GoogleO         58.1
GoogleA         66.5
Flickr          59.2
We tried to train a network with search engine results of ∼7000 entities randomly sampled from Web noun phrases, but the network did not converge.
Flickr images are downloaded using tag search. We use the same query strings as used in Google Image Search.
EdgeBox usually outputs ∼2000 proposals per image. To further reduce the computation overhead, we only used windows that cover more than 1% of the entire image. We find it only has negligible effect on the final clustering quality, but purged more than 90% of the proposals.
Acknowledgements
This research is supported by ONR MURI N000141010934, the Yahoo-CMU InMind program and a gift from Google. AG and XC were partially supported by the Bosch Young Faculty Fellowship and the Yahoo Fellowship respectively. The authors would also like to thank Yahoo! for the donation of a computing cluster and NVIDIA for the Tesla K40 GPUs.
[1] P. Agrawal, R. Girshick, and J. Malik. Analyzing the performance of multilayer neural networks for object recognition. In ECCV, 2014.
[2] Y. Bengio, J. Louradour, R. Collobert, and J. Weston. Curriculum learning. In ICML, 2009.
[3] T. L. Berg and A. C. Berg. Finding iconic images. In CVPRW, 2009.
[4] T. L. Berg and D. A. Forsyth. Animals on the web. In CVPR, 2006.
[5] A. Bergamo, L. Bazzani, D. Anguelov, and L. Torresani. Self-taught object localization with deep networks. arXiv:1409.3964, 2014.
[6] A. Bergamo and L. Torresani. Exploiting weakly-labeled web images to improve object classification: a domain adaptation approach. In NIPS, 2010.
[7] P. Carruthers and P. K. Smith. Theories of theories of mind. Cambridge Univ Press, 1996.
[8] X. Chen, A. Ritter, A. Gupta, and T. Mitchell. Sense discovery via co-clustering on images and text. In CVPR, 2015.
[9] X. Chen, A. Shrivastava, and A. Gupta. NEIL: Extracting visual knowledge from web data. In ICCV, 2013.
[10] D. J. Crandall and D. P. Huttenlocher. Weakly supervised learning of part-based spatial models for visual object recognition. In ECCV, 2006.
[11] T. Deselaers, B. Alexe, and V. Ferrari. Weakly supervised localization and learning with generic knowledge. IJCV, 2012.
[12] S. K. Divvala, A. Farhadi, and C. Guestrin. Learning everything about anything: Webly-supervised visual concept learning. In CVPR, 2014.
[13] M. Everingham, L. Van Gool, C. Williams, J. Winn, and A. Zisserman. The PASCAL visual object classes (VOC) challenge. IJCV, 2010.
[14] J. Fan, Y. Shen, N. Zhou, and Y. Gao. Harvesting large-scale weakly-tagged image databases from the web. In CVPR, 2010.
[15] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part based models. TPAMI, 2010.
[16] R. Fergus, L. Fei-Fei, P. Perona, and A. Zisserman. Learning object categories from internet image searches. Proceedings of the IEEE, 2010.
[17] R. Fergus, P. Perona, and A. Zisserman. A visual category filter for Google images. In ECCV, 2004.
[18] B. J. Frey and D. Dueck. Clustering by passing messages between data points. Science, 2007.
[19] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
[20] E. Golge and P. Duygulu. ConceptMap: Mining noisy web data for concept learning. In ECCV, 2014.
[21] B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Simultaneous detection and segmentation. In ECCV, 2014.
[22] B. Hariharan, J. Malik, and D. Ramanan. Discriminative decorrelation for clustering and classification. In ECCV, 2012.
[23] D. Hoiem, Y. Chodpathumwan, and Q. Dai. Diagnosing error in object detectors. In ECCV, 2012.
[24] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. In ACM MM, 2014.
[25] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[26] M. P. Kumar, B. Packer, and D. Koller. Self-paced learning for latent variable models. In NIPS, 2010.
[27] Y. J. Lee and K. Grauman. Learning the easy things first: Self-paced visual category discovery. In CVPR, 2011.
[28] L.-J. Li and L. Fei-Fei. OPTIMOL: automatic online picture collection via incremental model learning. IJCV, 2010.
[29] Q. Li, J. Wu, and Z. Tu. Harvesting mid-level visual concepts from large-scale internet images. In CVPR, 2013.
[30] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014.
[31] E. Mezuman and Y. Weiss. Learning about canonical views from internet image collections. In NIPS, 2012.
[32] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In NIPS, 2013.
[33] G. A. Miller. WordNet: a lexical database for English. Communications of the ACM, 1995.
[34] M. Oquab, L. Bottou, I. Laptev, and J. Sivic. Weakly supervised object recognition with convolutional neural networks. Technical report, 2014.
[35] V. Ordonez, G. Kulkarni, and T. L. Berg. Im2Text: Describing images using 1 million captioned photographs. In NIPS, 2011.
[36] M. Pandey and S. Lazebnik. Scene recognition and weakly supervised object localization with deformable part-based models. In ICCV, 2011.
[37] G. Papandreou, L.-C. Chen, K. Murphy, and A. L. Yuille. Weakly- and semi-supervised learning of a DCNN for semantic image segmentation. arXiv:1502.02734, 2015.
[38] D. Pathak, E. Shelhamer, J. Long, and T. Darrell. Fully convolutional multi-class multiple instance learning. arXiv:1412.7144, 2014.
[39] A. Quattoni and A. Torralba. Recognizing indoor scenes. In CVPR, 2009.
[40] R. Raguram and S. Lazebnik. Computing iconic summaries of general visual concepts. In CVPRW, 2008.
[41] A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson. CNN features off-the-shelf: an astounding baseline for recognition. In CVPRW, 2014.
[42] S. Reed, H. Lee, D. Anguelov, C. Szegedy, D. Erhan, and A. Rabinovich. Training deep neural networks on noisy labels with bootstrapping. arXiv:1412.6596, 2014.
[43] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. ImageNet large scale visual recognition challenge. arXiv:1409.0575, 2014.
[44] K. Saenko and T. Darrell. Unsupervised learning of visual sense models for polysemous words. In NIPS, 2009.
[45] F. Schroff, A. Criminisi, and A. Zisserman. Harvesting image databases from the web. TPAMI, 2011.
[46] A. Shrivastava, S. Singh, and A. Gupta. Constrained semi-supervised learning using attributes and comparative attributes. In ECCV, 2012.
[47] K. Simonyan, A. Vedaldi, and A. Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv:1312.6034, 2013.
[48] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556, 2014.
[49] J. Sivic and A. Zisserman. Video Google: A text retrieval approach to object matching in videos. In ICCV, 2003.
[50] A. W. Smeulders, M. Worring, S. Santini, A. Gupta, and R. Jain. Content-based image retrieval at the end of the early years. TPAMI, 2000.
[51] H. O. Song, R. Girshick, S. Jegelka, J. Mairal, Z. Harchaoui, and T. Darrell. On learning to localize objects with minimal supervision. In ICML, 2014.
[52] S. Sukhbaatar and R. Fergus. Learning from noisy labels with deep neural networks. arXiv:1406.2080, 2014.
[53] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. arXiv:1409.4842, 2014.
[54] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf. DeepFace: Closing the gap to human-level performance in face verification. In CVPR, 2014.
[55] A. Torralba and A. A. Efros. Unbiased look at dataset bias. In CVPR, 2011.
[56] L. Torresani, M. Szummer, and A. Fitzgibbon. Efficient object category recognition using classemes. In ECCV, 2010.
[57] S. Vijayanarasimhan and K. Grauman. Keywords to visual categories: Multiple-instance learning for weakly supervised object categorization. In CVPR, 2008.
[58] C. Wang, W. Ren, K. Huang, and T. Tan. Weakly supervised object localization with latent category learning. In ECCV, 2014.
[59] X.-J. Wang, L. Zhang, X. Li, and W.-Y. Ma. Annotating images by mining image search results. TPAMI, 2008.
[60] Y. Xia, X. Cao, F. Wen, and J. Sun. Well begun is half done: Generating high-quality seeds for automatic image dataset construction from web. In ECCV, 2014.
[61] J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba. SUN database: Large-scale scene recognition from abbey to zoo. In CVPR, 2010.
[62] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning deep features for scene recognition using places database. In NIPS, 2014.
[63] C. L. Zitnick and P. Dollár. Edge boxes: Locating object proposals from edges. In ECCV, 2014.
| []
|
[
"arXiv:hep-ph/9805290v1 12 May 1998 Meson Correlators at Finite Temperature",
"arXiv:hep-ph/9805290v1 12 May 1998 Meson Correlators at Finite Temperature"
]
| [
"Varun Sheel \nTheory Division\nPhysical Research Laboratory\n380 009Navrangpura, AhmedabadIndia\n",
"Hiranmaya Mishra \nTheory Division\nPhysical Research Laboratory\n380 009Navrangpura, AhmedabadIndia\n",
"Jitendra C Parikh \nTheory Division\nPhysical Research Laboratory\n380 009Navrangpura, AhmedabadIndia\n"
]
| [
"Theory Division\nPhysical Research Laboratory\n380 009Navrangpura, AhmedabadIndia",
"Theory Division\nPhysical Research Laboratory\n380 009Navrangpura, AhmedabadIndia",
"Theory Division\nPhysical Research Laboratory\n380 009Navrangpura, AhmedabadIndia"
]
| []
| We evaluate equal time point to point spatial correlation functions of mesonic currents at finite temperature. For this purpose we consider the QCD vacuum structure in terms of quark antiquark condensates and their fluctuations in terms of an irreducible four point structure of the vacuum. The temperature dependence of quark condensates is modeled using chiral perturbation theory for low temperatures and lattice QCD simulations near the critical temperature.We first consider the propagation of quarks in a condensate medium at finite temperature. We then determine the correlation functions in a hot medium. Parameters such as mass, coupling constant and threshold energy are deduced from the finite temperature correlators. We find that all of them decrease close to the critical temperature.PACS number(s): 12.38.Gc | 10.1103/physrevd.59.034501 | [
"https://export.arxiv.org/pdf/hep-ph/9805290v1.pdf"
]
| 18,249,111 | hep-ph/9805290 | c36b3a0c310e1d058c285c11fb15bcfd0e757e10 |
Meson Correlators at Finite Temperature
Varun Sheel
Theory Division
Physical Research Laboratory
380 009Navrangpura, AhmedabadIndia
Hiranmaya Mishra
Theory Division
Physical Research Laboratory
380 009Navrangpura, AhmedabadIndia
Jitendra C Parikh
Theory Division
Physical Research Laboratory
380 009Navrangpura, AhmedabadIndia
We evaluate equal time point to point spatial correlation functions of mesonic currents at finite temperature. For this purpose we consider the QCD vacuum structure in terms of quark antiquark condensates and their fluctuations in terms of an irreducible four point structure of the vacuum. The temperature dependence of quark condensates is modeled using chiral perturbation theory for low temperatures and lattice QCD simulations near the critical temperature.We first consider the propagation of quarks in a condensate medium at finite temperature. We then determine the correlation functions in a hot medium. Parameters such as mass, coupling constant and threshold energy are deduced from the finite temperature correlators. We find that all of them decrease close to the critical temperature.PACS number(s): 12.38.Gc
I. INTRODUCTION
The structure of the vacuum in Quantum Chromodynamics (QCD) is one of the most interesting questions in strong interaction physics [1]. The evidence for quark and gluon condensates in the vacuum is a reflection of its complex nature [2]. Determination of correlation functions [3,4] of hadronic currents in such a vacuum state provides rich information regarding the interquark interaction as a function of spatial separation, as well as on hadron spectroscopy. These are some of the nonperturbative features of QCD and are of great value in understanding the ground state structure of the theory of strong interactions [3,4].
We have studied mesonic and baryonic current correlators at zero temperature with a non-trivial structure for the ground state with quark antiquark condensates [5,6]. It was shown that the square of the quark propagator does not reproduce the correlation function for the pion deduced from phenomenology. In order to match the data it was necessary to introduce an irreducible four point structure for the quarks in the vacuum. This may be looked upon as an effective way of incorporating gluon condensate contribution to the correlator.
As is well known [7], the QCD vacuum state changes with temperature. Lattice Monte Carlo simulations suggest that chiral symmetry is restored around 150 MeV. In view of this, the present note is aimed at looking at the behaviour of the meson correlation functions at finite temperature. This is of great interest in the context of the behaviour of hadrons around the chiral phase transition associated with the quark gluon plasma [8,9]. It may be noted that there is little phenomenological information in this regime, but there are several theoretical studies [10], essentially using sum rule methods. The main objective here is to employ a different nonperturbative approach developed by us [11]. This has been successful at zero temperature, and its extension to finite temperature is therefore of interest. In particular, we will obtain the temperature dependence of masses, coupling constants and threshold energies for the pion and rho mesons.
We organise the paper as follows. In section II we discuss the quark condensate at finite temperature to fix the parameter appearing in the ansatz of the ground state of QCD. We then discuss in section III the quark propagation in the thermal vacuum. In section IV we calculate meson correlation functions at finite temperature. Finally we discuss the results in section V.
II. QUARK CONDENSATE AT FINITE TEMPERATURE
To calculate the correlators at finite temperature we need the expression for the equal time propagator for the interacting quark field operators. We have developed earlier [5] a vacuum structure in terms of quark antiquark condensates with a condensate function h(k).
The equal time propagator could then be calculated in terms of the condensate function [6]. One can generalise this to finite temperature using the method of thermofield dynamics.
Here the thermal average is obtained as an expectation value of the operator over the thermal vacuum [12]. This leads to
$$\langle\psi^{i}_\alpha(\vec x)\,\psi^{j\dagger}_\beta(\vec 0)\rangle_T = \frac{\delta^{ij}}{(2\pi)^3}\int e^{i\vec k\cdot\vec x}\,\Lambda_{+\alpha\beta}(\vec k, T)\,d\vec k$$
$$\langle\psi^{i\dagger}_\alpha(\vec x)\,\psi^{j}_\beta(\vec 0)\rangle_T = \frac{\delta^{ij}}{(2\pi)^3}\int e^{-i\vec k\cdot\vec x}\,\Lambda_{-\beta\alpha}(\vec k, T)\,d\vec k\tag{1}$$
The thermal vacuum is obtained from the zero temperature vacuum by a thermal Bogoliubov transformation in an extended Hilbert space involving extra field operators (thermal doubling of operators) [12]. The functions $\Lambda_\pm$ (Eq. 1) for the case of two flavour massless quarks are given as (with $k = |\vec k|$)
$$\Lambda_\pm(\vec k, T) = \frac{1}{2}\left[1 \pm \cos 2\theta\,\big(\gamma^0\sin 2h(k) + \vec\alpha\cdot\hat k\,\cos 2h(k)\big)\right].\tag{2}$$
In the above, h(k) is the condensate function [5,6,11] corresponding to the Bogoliubov transformation that includes a condensate structure in the vacuum. The function θ is associated with the thermal Bogoliubov transformation and is related to the distribution function as [12]
$$\sin^2\theta(k) = \frac{1}{\exp[\beta\epsilon(k)] + 1},\tag{3}$$
β being the inverse temperature. Further, ε(k) is the single particle energy given as $\epsilon(k) = \sqrt{k^2 + m(k)^2}$. In the presence of the condensate the dynamical mass is given as $m(k) = m + k\tan 2h(k)$, m being a possible current quark mass [5].
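Numerically, these ingredients reduce to a few short functions. The sketch below (our construction, not from the paper; natural units, with k and T in the same units, massless current quarks assumed, and the Gaussian ansatz for sin 2h(k) used below) encodes the condensate function, the dynamical mass, the single particle energy and the thermal factor of Eq. (3):

```python
import numpy as np

def sin2h(k, R):
    """Gaussian condensate ansatz: sin 2h(k) = exp(-R^2 k^2 / 2)."""
    return np.exp(-0.5 * (R * k) ** 2)

def dyn_mass(k, R, m=0.0):
    """Dynamical mass m(k) = m + k tan 2h(k)."""
    s = sin2h(k, R)
    return m + k * s / np.sqrt(1.0 - s * s + 1e-300)  # tan from sin, guarded

def energy(k, R, m=0.0):
    """Single particle energy eps(k) = sqrt(k^2 + m(k)^2)."""
    return np.hypot(k, dyn_mass(k, R, m))

def sin2theta(k, R, T, m=0.0):
    """Thermal factor of Eq. (3): 1 / (exp(eps(k)/T) + 1)."""
    return 1.0 / (np.exp(energy(k, R, m) / T) + 1.0)
```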
We had earlier taken a Gaussian ansatz for the condensate function, $\sin 2h(k) = e^{-R^2k^2/2}$. In order to determine the parameter R, we had taken a value of R consistent with hadronic correlator phenomenology. We choose a similar structure for the condensate function at finite temperature, namely $\sin 2h(k) = e^{-R(T)^2k^2/2}$, with R(T) now being temperature dependent. In order to determine R(T), or equivalently the ratio S(T) = R(T=0)/R(T), we first evaluate our expression for the order parameter (the condensate value) at finite temperature.
In terms of the dimensionless variable η = Rk, this is given as
$$\frac{\langle\bar q q\rangle_T}{\langle\bar q q\rangle_{T=0}} = S(T)^3\left[1 - 2\sqrt{\frac{2}{\pi}}\int e^{-\eta^2/2}\,\sin^2\theta(z,\eta)\,\eta^2\,d\eta\right],\tag{4}$$
where $\sin^2\theta(z,\eta) = \frac{1}{e^{z\epsilon(\eta)}+1}$, with $z = \beta/R(T)$ and $\epsilon(\eta) = \eta/\cos 2h(\eta)$. We can obtain S(T) = R(T=0)/R(T) if we know the temperature dependence of the order parameter on the left hand side of Eq. (4). As there are no phenomenological inputs for this, we shall consider the results from chiral perturbation theory (CHPT), which is expected to be valid at least for small temperatures. For higher temperatures near the critical temperature, lattice simulations seem to yield the universal behaviour [7] with a large correlation length associated with a second order phase transition for two flavour massless QCD. We shall use such critical behaviour to model the temperature dependence of the order parameter near the critical temperature.
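Eq. (4) can be inverted for S(T) numerically once a target value of the condensate ratio is prescribed. A minimal sketch (ours; note that z = β/R(T) itself depends on S, so it is recomputed inside the root search, and the bracket may need widening):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def ratio(S, z):
    """Right-hand side of Eq. (4) in the dimensionless variable eta = R k."""
    def integrand(eta):
        s2h = np.exp(-0.5 * eta * eta)                 # sin 2h(eta)
        eps = eta / np.sqrt(1.0 - s2h * s2h + 1e-300)  # eps(eta) = eta / cos 2h
        return np.exp(-0.5 * eta * eta) * eta * eta / (np.exp(z * eps) + 1.0)
    integral, _ = quad(integrand, 0.0, 20.0)
    return S**3 * (1.0 - 2.0 * np.sqrt(2.0 / np.pi) * integral)

def solve_S(target, T, R0):
    """Find S(T) = R(0)/R(T) matching a CHPT/lattice condensate ratio.
    With hbar = c = 1, z = 1/(T R(T)) = S/(T R0)."""
    return brentq(lambda S: ratio(S, S / (T * R0)) - target, 1e-3, 2.0)
```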
We quote here the results of CHPT obtained by Gerber and Leutwyler [13]. The condensate ratio at temperatures small compared to the pion mass is given as
$$\frac{\langle\bar q q\rangle_T}{\langle\bar q q\rangle_{T=0}} = 1 + \frac{c}{F^2}\left[\frac{3}{2}T^4 h_0' + 4\pi T^4\big(a' h_1^2 + 2a\,h_1 h_1'\big) + \pi T^8\big(h'\,b_{\rm eff} + b_{\rm eff}'\,h\big)\right].\tag{5}$$
where the functions h are defined as
$$h_0 = \frac{H_4(\mu)}{3\pi^2}, \qquad h_0' = -\frac{H_2(\mu)}{2\pi^2 T^2},$$
$$h_1 = \frac{H_2(\mu)}{2\pi^2}, \qquad h_1' = -\frac{H_0(\mu)}{4\pi^2 T^2},$$
$$h = 3h_0\,[h_0 + \mu^2 h_1],$$
$$h' = 3h_0'\,[h_0 + \mu^2 h_1] + 3h_0\,[h_0' + h_1/T^2 + \mu^2 h_1'],\tag{6}$$
with $\mu = M_\pi/T$. Also $b_{\rm eff} = b - \frac{0.6\,T}{\pi^3 F^4 M_\pi}$, $b_{\rm eff}' = -\frac{1}{\pi^3 F^4 M_\pi^2}\left(\frac{5}{16} - \frac{0.3\,T}{M_\pi}\right)$ and $a' = \frac{2a}{m_\pi^2} + \frac{3}{32\pi F^2}\left(1 - \frac{35\,m_\pi^2}{32\pi^2 F^2}\right)$. The constant $c \simeq 0.9$ and $F_\pi/F = 1.057 \pm 0.012$ with $F_\pi = 93$ MeV [13]. The constants a and b are related to the S-wave and D-wave π-π scattering lengths respectively [13]. Finally, the functions $H_n(\mu)$ are given as [14]
$$H_n(\mu) = \int_0^\infty \frac{x^n\,dx}{\sqrt{x^2+\mu^2}}\;\frac{1}{e^{\sqrt{x^2+\mu^2}} - 1}.\tag{7}$$
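The thermal functions $H_n(\mu)$ are simple one dimensional integrals; a direct numerical sketch (ours):

```python
import numpy as np
from scipy.integrate import quad

def H(n, mu):
    """H_n(mu) of Eq. (7): a Bose-weighted moment of the pion spectrum."""
    def integrand(x):
        w = np.sqrt(x * x + mu * mu)
        return x**n / (w * (np.exp(w) - 1.0))
    return quad(integrand, 0.0, np.inf)[0]

# Example: mu = M_pi / T, e.g. M_pi = 140 MeV at T = 100 MeV.
print(H(2, 1.4))
```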
We have extracted the temperature dependence of the condensate as in Eq. (5) for low temperatures. For temperatures close to $T_c$, the critical behaviour is that of the O(4) spin model in three dimensions [15] and has also been seen in lattice QCD simulations [16]. The order parameter here is given as $\frac{\langle\bar q q\rangle_T}{\langle\bar q q\rangle_{T=0}} = \left(1 - \frac{T}{T_c}\right)^\beta$, where β = 0.39 [7]. We have taken $T_c$ = 150 MeV [7]. The two regions are joined smoothly and the result is shown in Fig. 1(a).
This result is fitted with Eq. (4) to determine S(T ) = R(0)/R(T ), which is plotted in Fig. 1(b). We shall use it to calculate the quark propagator and the hadronic correlation functions.
III. QUARK PROPAGATION IN THERMAL VACUUM
In the calculation of correlators, quark propagators enter in a direct manner and hence it is instructive to study aspects of the interacting propagator in some detail [6].
The equal time interacting quark Feynman propagator in the condensate vacuum is given as $S_{\alpha\beta}(\vec x) = \frac{1}{2}\langle[\psi^i_\alpha(\vec x), \bar\psi^i_\beta(0)]\rangle$, which at finite temperature reduces to
$$S(\vec x, T) = \frac{1}{2}\,\frac{\delta^{ij}}{(2\pi)^3}\int e^{i\vec k\cdot\vec x}\,\cos 2\theta\,\big[\sin 2h - \vec\gamma\cdot\hat k\,\cos 2h\big]\,d\vec k \tag{8}$$
$$= \frac{i}{4\pi^2}\,\frac{\vec\gamma\cdot\vec x}{x^2}\,\big[I_1(x) - I_2(x)\big] + \frac{1}{4\pi^2}\,\frac{I_3(x)}{x},\tag{9}$$
where,
$$I_1(x) = \int_0^\infty k\left(\cos kx - \frac{\sin kx}{kx}\right)\cos 2\theta\,dk,\tag{10}$$
$$I_3(x) = \int_0^\infty k\,\sin kx\;\cos 2\theta\; e^{-R^2(T)k^2}\,dk,\tag{11}$$
$$I_2(x) = \int_0^\infty k\left(\cos kx - \frac{\sin kx}{kx}\right)\cos 2\theta\;\frac{e^{-R^2(T)k^2}}{1 + \big(1 - e^{-R^2(T)k^2}\big)^{1/2}}\,dk,\tag{12}$$
with $x = |\vec x|$ and $k = |\vec k|$.
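Of the three integrals, $I_2$ and $I_3$ carry the Gaussian damping factor and can be evaluated directly; $I_1$ contains an undamped oscillatory piece (cos 2θ → 1 at large k) and is best handled by splitting off the free part analytically. A sketch of the damped pieces (ours, reusing the thermal factor of Section II; the integration cutoff is a numerical placeholder):

```python
import numpy as np
from scipy.integrate import quad

def cos2theta(k, R, T):
    """cos 2theta = 1 - 2 sin^2 theta, with Eq. (3) and eps = k / cos 2h."""
    s2h = np.exp(-0.5 * (R * k) ** 2)
    eps = k / np.sqrt(1.0 - s2h * s2h + 1e-300)
    return 1.0 - 2.0 / (np.exp(eps / T) + 1.0)

def I3(x, R, T):
    """Eq. (11); the Gaussian factor makes the integrand safe for quad."""
    f = lambda k: k * np.sin(k * x) * cos2theta(k, R, T) * np.exp(-(R * k) ** 2)
    return quad(f, 0.0, 12.0 / R, limit=200)[0]

def I2(x, R, T):
    """Eq. (12); same damping, with the extra condensate denominator."""
    def f(k):
        g = np.exp(-(R * k) ** 2)
        bracket = np.cos(k * x) - np.sinc(k * x / np.pi)  # sinc gives sin(y)/y
        return k * bracket * cos2theta(k, R, T) * g / (1.0 + np.sqrt(1.0 - g))
    return quad(f, 0.0, 12.0 / R, limit=200)[0]
```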
The free massive propagator, which can be derived from $S(\vec x, T)$ by the substitutions $\sin 2f(k) = m_q/\epsilon$ and $\cos 2f(k) = k/\epsilon$, is given as
$$S_0(m_q, x, T) = \frac{1}{(2\pi)^2}\,\frac{1}{x}\left\{m_q\big(m_q K_1(m_q x) - 2I_5(x)\big) - i\,\frac{\vec\gamma\cdot\vec x}{x}\big(m_q^2 K_2(m_q x) + 2I_6(x)\big)\right\}\tag{13}$$
where
$$I_5(x) = \int_0^\infty \frac{k}{\epsilon}\,\sin(kx)\,\sin^2\theta\,dk, \qquad I_6(x) = \int_0^\infty \frac{k^2}{\epsilon}\left(\cos kx - \frac{\sin kx}{kx}\right)\sin^2\theta\,dk,$$
and $K_1(m_q x)$ and $K_2(m_q x)$ are the first and second order modified Bessel functions of the second kind respectively. In Fig. 2 we plot the two components Tr $S(\vec x, T)$ and Tr $[(\vec\gamma\cdot\hat x)S(\vec x, T)]$ of the propagator for massless interacting quarks given by Eq. (9) at T = 0 MeV, T = 100 MeV and T = 135 MeV, corresponding to the chirality flip and non-flip components considered by Shuryak and Verbaarschot [17]. The normalisation is discussed in our earlier work [6]. To compare with constituent quark models with an effective constituent mass, we have also plotted the behaviour of the free massive quark propagator with masses of 100 MeV, 200 MeV and 300 MeV. In the chirality flip part, the propagator in the condensate medium starts from zero, consistent with zero quark mass at small distances, attains a maximum value of about 250 MeV at a distance of about 0.9 fm and then falls off gradually. Further, the interacting propagator overshoots the massive propagators after about 0.6 fm. We also see that with increasing temperature the chirality flip component has a lower peak, and the position of the peak shifts towards larger distances, indicating the decrease of the dynamical mass with temperature. In the chirality non-flip part, the interacting propagator starts from 1, again consistent with the behaviour expected from asymptotic freedom. But at larger separation it falls rather fast, indicative of an effective mass of the order of 150 MeV. These features are qualitatively similar to those of the quark propagator at zero temperature [6,17], though quantitatively there are differences. Also, the non-flip component falls faster with increase of temperature.
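The zero temperature pieces of Eq. (13) involve only the modified Bessel functions, which scipy provides directly; a small sketch (ours, with $m_q$ and x in natural units):

```python
import numpy as np
from scipy.special import kv  # modified Bessel function of the second kind

def S0_vacuum_parts(mq, x):
    """T = 0 pieces of Eq. (13): the chirality flip part (~ Tr S0) and the
    coefficient of the gamma.x term (chirality non-flip)."""
    pref = 1.0 / ((2.0 * np.pi) ** 2 * x)
    flip = pref * mq * mq * kv(1, mq * x)
    nonflip = pref * mq * mq * kv(2, mq * x)
    return flip, nonflip

print(S0_vacuum_parts(mq=0.3, x=1.0))  # e.g. a 300 MeV quark at x = 1 GeV^-1
```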
IV. MESON CORRELATION FUNCTIONS
In our earlier work, we noted that the phenomenology of correlation functions necessitated the introduction of an irreducible four point structure (or fluctuations of the condensate fields) in the vacuum [11]. In fact, the meson correlation functions were different from the square of the two point function (propagator), and the difference could be expressed in terms of the four point function. The expression for the meson correlation function at zero temperature defined in our earlier work [11] can be extended to finite temperatures as
$$R(\vec x, T) = {\rm Tr}\big[S(\vec x, T)\,\Gamma'\,S(-\vec x, T)\,\Gamma\big] + {\rm Tr}\,\big\langle\Sigma(\vec x)\,\Gamma'\,\Sigma(-\vec x)\,\Gamma\big\rangle_T\tag{14}$$
where $J(x) = \bar\psi^i_\alpha(x)\,\Gamma_{\alpha\beta}\,\psi^j_\beta(x)$ is a generic meson current, with Γ a 4 × 4 matrix ($1$, $\gamma_5$, $\gamma_\mu$ or $\gamma_\mu\gamma_5$); x is a four vector, α and β are spinor indices, and i and j are flavour indices.
The field $\Sigma(\vec x)$ is the condensate fluctuation field introduced in Ref. [11] to include four point irreducible structures in the QCD vacuum. Thus, at finite temperature the correlator (Eq. 14) is now the square of the interacting equal time thermal propagator plus the four point contribution at finite temperature. The thermal quark propagator was obtained in the earlier section. We keep the structure of the fields Σ(x, T) the same as for zero temperature [11].
$$\Sigma_{\alpha\beta}(\vec x) = \Sigma^V_{\alpha\beta}(\vec x) + \Sigma^S_{\alpha\beta}(\vec x)\tag{15}$$
$$= \mu_1^2\,(\gamma^i\gamma^j)_{\alpha\beta}\,\epsilon^{ijk}\,\phi^k(\vec x) + \mu_2^2\,\delta_{\alpha\beta}\,\phi(\vec x),\tag{16}$$
where the first term corresponds to vector fluctuations and the second to scalar. µ 1 and µ 2 in the above equations are dimensional parameters which give the strength of the fluctuations and φ k ( x) and φ( x) are vector and scalar fields such that, with |Ω as the ground state of QCD, we have
$$\langle\Omega|\phi^i(\vec x)\,\phi^j(0)|\Omega\rangle = \delta^{ij}\,g_V(\vec x); \qquad \langle\Omega|\phi(\vec x)\,\phi(0)|\Omega\rangle = g_S(\vec x).\tag{17}$$
At finite temperature, the functions $g_V$ and $g_S$ will be temperature dependent. We do not know how to calculate them, except for the general property that the effect of the four point structure should decrease with temperature. We take here a simple ansatz for the temperature dependence of $g_V$ and $g_S$,
$$g_{S,V}(x, T) = \left(\frac{\langle\bar q q\rangle_T}{\langle\bar q q\rangle_{T=0}}\right)^2 g_{S,V}(x, T=0).\tag{18}$$
Similar to calculations at zero temperature, we shall consider the ratio of the physical correlation function to that of massless noninteracting quarks at zero temperature given as
$$R_0(x) = {\rm Tr}\big[S_0(x)\,\Gamma'\,S_0(-x)\,\Gamma\big].$$
The normalised correlation functions thus defined as
$$C(\vec x, T) = \frac{R(\vec x, T)}{R_0(\vec x)}\tag{19}$$
are plotted in Figure 3 for the pseudoscalar and vector channels.
As expected (on physical grounds) the amplitude of the correlator decreases with increasing temperature. The peak of the vector correlator shifts towards the right after T = 0.9T c .
We might remind ourselves that the position of the peak of the correlator is inversely proportional to the mass of the particle in the relevant channel [4].
The spatial hadronic correlators have been used to extract the hadronic screening masses and widths at finite temperature [9]. To extract the hadronic properties at finite temperature, we use a phenomenological parameterisation as is usually done in sum rule calculations [18,19]. We may note here however, that the phenomenological inputs are not available at finite temperature. The correlators are parameterised with the mass, decay width and the coupling of the particle to the vacuum, all three parameters being temperature dependent.
We first express the correlator in terms of the spectral density function.
$$R^{ph}(\vec x) = \int_0^\infty ds\,\frac{\sqrt{s}}{4\pi^2 x}\,K_1(\sqrt{s}\,x)\,\rho(s).\tag{20}$$
Then we use the following phenomenological parameterisation for the spectral density function [18,19],
$$\rho_V(s) = 3\lambda_\rho^2\,\delta(s - M_\rho^2) + \frac{3s}{4\pi^2}\tanh\!\left(\frac{\sqrt s}{4T}\right)\theta(s - s_0) + T^2 S_\rho\,\delta(s)\tag{21}$$
$$\rho_P(s) = \lambda_\pi^2\,\delta(s - M_\pi^2) + \frac{3s}{8\pi^2}\tanh\!\left(\frac{\sqrt s}{4T}\right)\theta(s - s_0)\tag{22}$$
where λ is the coupling of the bound state to the current, M is the mass of the bound state and $s_0$ is the threshold for continuum contributions. The last term in Eq. (21) is the scattering term for soft thermal dissociations (mainly through pions), which exists only at finite temperature [18]. This term is given as
$$S_\rho = \lim_{|\vec p|\to 0}\,\frac{1}{2\pi}\int_0^{|\vec p|^2} d\omega^2 \int_v^\infty dx\; x^2\left[n\!\left(\frac{|\vec p|x - \omega}{2T}\right) - n\!\left(\frac{|\vec p|x + \omega}{2T}\right)\right]\tag{23}$$
The derivation of the above expression is slightly tricky and we have given it in the appendix.
Following Ref. [18] we take $S_\rho \approx \frac{T^2}{9}$. The mass, threshold and coupling are then extracted such that the correlators obtained from Eq. (20) agree with the normalised correlation functions as calculated by us (Fig. 3) [11]. This is done for each temperature. The results are plotted in Fig. 4 for the pseudoscalar channel and in Fig. 5 for the vector channel. The results are also shown in Table I.
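A compact sketch of this fitting step (our construction, not the authors' code; natural units, with the continuum upper cutoff and the s → 0 regulator for the delta function terms as numerical placeholders):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import least_squares
from scipy.special import kv

def kernel(s, x):
    """Integrand weight of Eq. (20): sqrt(s) K1(sqrt(s) x) / (4 pi^2 x)."""
    return np.sqrt(s) / (4.0 * np.pi**2 * x) * kv(1, np.sqrt(s) * x)

def R_ph(x, M, lam, s0, T, channel="V"):
    """Eq. (20) with the spectral densities of Eqs. (21)/(22)."""
    pole_w, cont_w = (3.0, 3.0 / (4.0 * np.pi**2)) if channel == "V" \
                     else (1.0, 3.0 / (8.0 * np.pi**2))
    pole = pole_w * lam**2 * kernel(M**2, x)
    cont = quad(lambda s: kernel(s, x) * cont_w * s
                * np.tanh(np.sqrt(s) / (4.0 * T)), s0, 50.0 * s0)[0]
    # Scattering term T^2 S_rho delta(s) with S_rho ~ T^2/9; delta picks
    # out the kernel at s -> 0, approximated here with a tiny s.
    scat = (T**2) * (T**2 / 9.0) * kernel(1e-12, x) if channel == "V" else 0.0
    return pole + cont + scat

def fit_params(xs, data, T, p0, channel="V"):
    """Least-squares extraction of (M, lambda, s0) from computed correlators."""
    resid = lambda p: [R_ph(x, p[0], p[1], p[2], T, channel) - d
                       for x, d in zip(xs, data)]
    return least_squares(resid, p0, bounds=(0.0, np.inf)).x
```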
V. SUMMARY AND CONCLUSIONS
As can be seen from Fig. 3, with increasing temperature the correlation functions have a lower peak, indicating a weakening of correlations. In the vector channel the mass of the ρ meson appears to decrease beyond 120 MeV. The threshold for the continuum also decreases around the same temperature. The behaviour of these quantities with temperature is qualitatively similar to that found by Hatsuda et al. [19]. We have also plotted the temperature dependence of the coupling of the bound state to the current, which decreases with temperature, but rather slowly compared to the mass or the threshold for the continuum.
The temperature dependence of these parameters can be used to calculate the lepton pair production rate from the ρ in the context of ultrarelativistic heavy ion collision experiments, and to estimate the vector meson mass shift in the medium.
In the pseudoscalar channel the mass remains almost constant up to the critical temperature, whereas the threshold and the coupling decrease with temperature [20]. We may note here that in the pseudoscalar channel the contribution to the correlation function mostly comes from the fluctuating fields, and the temperature behaviour as taken in Eq. (18) essentially does not shift the position of the peak, whereas the magnitude of the correlator decreases. That is reflected in the above behaviour of the parameters in the pseudoscalar channel. Similar behaviour, with the pion mass becoming almost insensitive to temperature below the critical temperature, was also observed in Ref. [20], where correlation functions were calculated in a QCD motivated effective theory, namely the Nambu-Jona-Lasinio model.
We would like to add here that the present analysis will be valid for temperatures below the critical temperature. Above the critical temperature there have been calculations essentially using finite temperature perturbative QCD in random phase approximations [21]. However, in the region above $T_c$, nonperturbative features have been seen to exist from studies in lattice QCD simulations [7]. In view of this, one possibly has to do a hard thermal loop calculation where a partial resummation is done [22].
VI. ACKNOWLEDGEMENT
The present work was initiated when one of the authors (HM) was visiting the Department of Physics, University of Bielefeld. He would like to thank the Physics Department there for providing the facilities and the Alexander von Humboldt Foundation, Germany, for a fellowship during that period.
APPENDIX A:
Here we shall derive the scattering term $S_\rho$. This may be calculated by considering the imaginary part of the longitudinal correlator for space-like four momenta and can be written as
$$\rho^s_l(\omega, \vec p) = \frac{{\rm Im}\,\Pi^{00}}{|\vec p^{\,2}|}\tag{A1}$$
which is explicitly written as [24]
$$\rho^s_l(\omega, \vec p) = \frac{2\times(2\pi)^4}{|\vec p^{\,2}|}\int \frac{d^3k_1}{2E_1(2\pi)^3}\,\frac{d^3k_2}{2E_2(2\pi)^3}\,\big|\langle\pi(\vec k_1)|J^0|\pi(\vec k_2)\rangle\big|^2\tag{A2}$$
$$\times\;\delta(\omega - E_1 + E_2)\;\delta^{(3)}(\vec p - \vec k_1 + \vec k_2)\,(n_2 - n_1).\tag{A3}$$
Here $E_1 = \sqrt{\vec k_1^{\,2} + m_\pi^2}$, $E_2 = \sqrt{\vec k_2^{\,2} + m_\pi^2}$, and $n_i \equiv n(E_i)$ is the Bose distribution function for pions.
In general the expectation of a vector current with respect to a pion state is given as [23]
$$\langle\pi(k_1)|J^\mu|\pi(k_2)\rangle = (k_1 + k_2)^\mu\, G_\pi(p),\tag{A4}$$
where $p = k_1 - k_2$ and $G_\pi(p)$ is the pion form factor with $G_\pi(0) = 1$. Substituting this in Eq. (A3) and integrating over $\vec k_2$ we obtain
$$\rho^s_l(\omega,\vec p) = \frac{2\times 2}{4(2\pi)^2\,|\vec p^{\,2}|}\int \frac{d^3k_1}{E_1}\,\big(G_\pi(p)\big)^2\,\delta(\omega - E_1 + E_2)\,(2E_1 - \omega)^2\,(n_2 - n_1),\tag{A5}$$
with $\vec k_2 = \vec p - \vec k_1$. Next, since the delta function above contributes to the space-like ($p^2 < 0$) region, we write it as
$$\delta(\omega - E_1 + E_2) = 2E_2\,\delta\big((\omega - E_1)^2 - E_2^2\big)\,\theta(-p^2).$$
To simplify further, we may change the integration over the three-momentum $\vec k_1$ to an integration over the energy $E_1$ and the angle $\cos\theta_{\vec p,\vec k}$. Performing the integration over angles restricts the lower limit of the energy integral over $E_1$ to $E_{1\rm min} = \frac{1}{2}(\omega + |\vec p|v)$, where $v = \left(1 - \frac{4m_\pi^2}{p^2}\right)^{1/2}$. Thus we have
$$\rho^s_l(\omega,\vec p) = \frac{1}{4\pi|\vec p|^3}\int_{E_{1\rm min}}^\infty dE_1\; G^2_\pi(p)\,(2E_1 - \omega)^2\,(n_2 - n_1),\tag{A6}$$
with $E_2 = E_1 - \omega$. Next, defining the variable $x$ through $E_1 = \frac{1}{2}(\omega + |\vec p|x)$, this leads to
$$\rho^s_l(\omega,\vec p) = \frac{1}{8\pi}\int_v^\infty dx\; x^2\, G^2_\pi(p)\left[n\!\left(\frac{|\vec p|x - \omega}{2T}\right) - n\!\left(\frac{|\vec p|x + \omega}{2T}\right)\right].\tag{A7}$$
We shall consider the longitudinal spectral density $\rho^s_l(\omega,\vec p)$ in a frame which is at rest with respect to the medium, which implies that $\vec p \to 0$. In this limit the space-like constraint $0 < \omega < |\vec p|$ also forces $\omega$ to approach zero. However, the above integral becomes increasingly large as $\vec p \to 0$, such that the integral of $\rho^s_l(\omega,\vec p)$ over the phase space for $\omega$ remains finite. Thus we first integrate over this region with $\vec p$ finite and then take the limit $\vec p \to 0$.
Thus let
$$I = \lim_{|\vec p|\to 0}\int_0^{|\vec p|^2} d\omega^2\;\rho^s_l(\omega,\vec p) = \frac{S_\rho}{2\pi},\tag{A8}$$
so that $\rho^s_l(\omega,\vec p)$ effectively becomes a delta function. Thus the spectral density reduces to
$$\lim_{|\vec p|\to 0}\rho^s_l(\omega,\vec p) = \delta(\omega^2)\,\frac{S_\rho}{2\pi}.\tag{A9}$$
We also note that there arises no ambiguity from the pion form factor as G π (p = 0) = 1.
Now the integral I can be written as
$$I = \frac{1}{8\pi}\lim_{|\vec p|\to 0}\int_0^{|\vec p|^2} d\omega^2\int_v^\infty dx\; x^2\left[n\!\left(\frac{|\vec p|x - \omega}{2T}\right) - n\!\left(\frac{|\vec p|x + \omega}{2T}\right)\right].\tag{A10}$$
We change the integration variables [25] by putting $\omega = |\vec p|\lambda$ and $x = \sqrt{1 + \frac{y^2}{|\vec p|^2(1-\lambda^2)}}$. Hence the spectral density function can be written as
$$I = \frac{1}{4\pi}\lim_{|\vec p|\to 0}\int_0^1 d\lambda\;\lambda\int_{2m_\pi}^\infty \frac{xy\,dy}{(1-\lambda^2)^2}\left[n\!\left(\frac{|\vec p|x - \omega}{2T}\right) - n\!\left(\frac{|\vec p|x + \omega}{2T}\right)\right].\tag{A11}$$
In the limit of $|\vec p| \to 0$, we may Taylor expand the difference of the distribution functions in the square bracket of Eq. (A11) and have
$$n\!\left(\frac{|\vec p|x - \omega}{2T}\right) - n\!\left(\frac{|\vec p|x + \omega}{2T}\right) \approx -\frac{2x|\vec p|^2(1-\lambda^2)^2}{y^2}\,\frac{dn}{d\lambda}.$$
Substituting back in Eq. (A11) and performing an integration by parts in the $d\lambda$ integration, we have
$$I = \frac{1}{2\pi}\int_0^1 d\lambda\int_{2m_\pi}^\infty dy\; n\!\left(\frac{y}{2T\sqrt{1-\lambda^2}}\right) y.\tag{A12}$$
In the limit of vanishing pion mass we have $I = 2\pi T^2/9$, so that $S_\rho = T^2/9$.
FIG. 1. Figure (a) shows the quark condensate at finite temperature normalised to that at zero temperature, obtained from CHPT and lattice. Figure (b) shows R(0)/R(T) as determined from Fig. (a).
FIG. 2. The two components of the thermal quark propagator, (a) Tr(S) and (b) Tr[(γ·x̂)S], versus the distance x (in fm). The dotted, short dash-long dash and solid lines correspond to the massless interacting quark propagator S(x,T) at temperatures of 0, 100 and 135 MeV respectively. The short dashed, dot-short dashed and long dashed lines correspond to a massive free propagator with a mass of 100, 200 and 300 MeV, respectively, at T = 135 MeV.
FIG. 3. The ratio of the meson correlation functions at finite temperature to the correlation functions for noninteracting massless quarks at zero temperature, R(x,T)/R₀(x, T=0), vs. distance x (in fm). The solid, dashed, dotted and dot-dashed lines correspond to temperatures T = 0 MeV, T = 130 MeV, T = 140 MeV and T = 148 MeV respectively.
FIG. 4. The temperature dependence of mass, threshold (s₀) and coupling for the pseudoscalar channel. T_c = 150 MeV. The vertical lines represent the errors obtained while fitting.
TABLE I. Fitted parameters.
CHANNEL   Temp. (MeV)   M (GeV)         λ                        √s₀ (GeV)
Vector    0             0.780 ± 0.005   (0.420 ± 0.041 GeV)²     2.070 ± 0.035
          100           0.771 ± 0.001   (0.411 ± 0.062 GeV)²     1.897 ± 0.033
[1] E. V. Shuryak. The QCD vacuum, hadrons and the superdense matter. World Scientific, Singapore, 1988.
[2] M. A. Shifman, A. I. Vainshtein and V. I. Zakharov. Nucl. Phys. B147, 385, 448 and 519 (1979).
[3] E. V. Shuryak. Rev. Mod. Phys. 65, 1 (1993).
[4] M.-C. Chu, J. M. Grandy, S. Huang and J. W. Negele. Phys. Rev. D48, 3340 (1993); ibid., Phys. Rev. D49, 6039 (1994).
[5] A. Mishra, H. Mishra, S. P. Misra, P. K. Panda and Varun Sheel. Int. J. Mod. Phys. E5, 93 (1996).
[6] Varun Sheel, Hiranmaya Mishra and Jitendra C. Parikh. Int. J. Mod. Phys. E6, 275 (1997).
[7] E. Laermann. Nucl. Phys. A610, 1c (1996).
[8] E. V. Shuryak. Nucl. Phys. A544, 65c (1992).
[9] T. Schäfer and E. V. Shuryak. Phys. Rev. D54, 1099 (1996).
[10] T. Hatsuda, Y. Koike and S. H. Lee. Nucl. Phys. B394, 221 (1993).
[11] Varun Sheel, Hiranmaya Mishra and Jitendra C. Parikh. Phys. Lett. B382, 173 (1996).
[12] H. Umezawa, H. Matsumoto and M. Tachiki. Thermofield dynamics and condensed states. North Holland, Amsterdam, 1982; P. A. Henning, Phys. Rep. 253, 235 (1995).
[13] P. Gerber and H. Leutwyler. Nucl. Phys. B321, 387 (1989).
[14] H.-S. Roh and T. Matsui. nucl-th/9611050.
[15] K. Rajgopal and F. Wilczek. Nucl. Phys. B399, 395 (1993).
[16] F. Karsch. Phys. Rev. D49, 3791 (1994); F. Karsch and E. Laermann, Phys. Rev. D50, 6954 (1994).
[17] E. V. Shuryak and J. J. M. Verbaarschot. Nucl. Phys. B410, 37 (1993).
[18] C. Adami, T. Hatsuda and I. Zahed. Phys. Rev. D43, 921 (1991).
[19] Tetsuo Hatsuda, Yuji Koike and Su Houng Lee. Nucl. Phys. B394, 221 (1993).
[20] T. Hatsuda and T. Kunihiro. Prog. Theor. Phys. Suppl. 91, 284 (1987).
[21] Jitendra C. Parikh and Philip J. Siemens. Phys. Rev. D37, 3246 (1988); R. B. Thayyullathil and J. C. Parikh, Phys. Rev. D44, 3964 (1991).
[22] Eric Braaten and Robert D. Pisarski. Phys. Rev. Lett. 64, 1338 (1990); Nucl. Phys. B339, 310 (1990).
[23] J. F. Donoghue, E. Golowich and B. R. Holstein. Dynamics of the standard model. Cambridge University Press, 1992.
[24] A. A. Abrikosov, L. P. Gorkov and I. E. Dzyaloshinski. Methods of Quantum Field Theory in Statistical Physics. Prentice-Hall, Englewood Cliffs, N. J., 1963.
[25] S. Mallik and K. Mukherjee. hep-ph/9711297.
FIG. 5. The temperature dependence of mass, threshold (s₀) and coupling for the vector channel. T_c = 150 MeV. The vertical lines represent the errors obtained while fitting.
| []
|
[]
| [
"\nComputer Science and Artificial Intelligence Laboratory\nMassachusetts Institute of Technology\n02139CambridgeMAUSA\n"
]
| [
"Computer Science and Artificial Intelligence Laboratory\nMassachusetts Institute of Technology\n02139CambridgeMAUSA"
]
| []
| The sensitivity of image classifiers to small perturbations in the input is often viewed as a defect of their construction. We demonstrate that this sensitivity is a fundamental property of classifiers. For any arbitrary classifier over the set of n-by-n images, we show that for all but one class it is possible to change the classification of all but a tiny fraction of the images in that class with a perturbation of size O(n 1/ max (p,1) ) when measured in any p-norm for p ≥ 0. We then discuss how this phenomenon relates to human visual perception and the potential implications for the design considerations of computer vision systems. | null | [
"https://export.arxiv.org/pdf/2112.04033v2.pdf"
]
| 244,954,601 | 2112.04033 | 0fe609659ce6cbba611124f6ee20880e5f0dd82e |
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139, USA
Image classifiers can not be made robust to small perturbations
Keywords: Computer vision · Human visual system · Adversarial machine learning · Isoperimetry
The sensitivity of image classifiers to small perturbations in the input is often viewed as a defect of their construction. We demonstrate that this sensitivity is a fundamental property of classifiers. For any arbitrary classifier over the set of n-by-n images, we show that for all but one class it is possible to change the classification of all but a tiny fraction of the images in that class with a perturbation of size $O(n^{1/\max(p,1)})$ when measured in any p-norm for p ≥ 0. We then discuss how this phenomenon relates to human visual perception and the potential implications for the design considerations of computer vision systems.
Introduction
It has been observed that classifiers built on deep learning architectures are prone to misclassification given tiny perturbations on their inputs [20]. Because these perturbations are typically imperceptible, they are commonly thought of as adversarial [17]. The existence of small perturbations that alter classifier decisions has motivated a plethora of research into how such perturbations may be engineered or prevented [16,14,12,22,13]. While adversarial perturbations are defined to be imperceptible to humans, the concept of imperceptibility is difficult to formally define. Therefore, the size of a perturbation is often implicitly adopted as a surrogate for perceptibility [4].
Here we demonstrate that susceptibility to small perturbations is a fundamental property of any algorithm that partitions an image space into distinct classes. Specifically, we show that on any image space consisting of images with n-by-n pixels and finite bit depth, there exists some universal constant c (parametrized by the number of channels) such that most images in all but one class can have their classes changed with cn pixel changes, a vanishingly small number compared to the n² pixels within the entire image for sufficiently large n. Similarly, we show that a perturbation with a p-norm of size $c'n^{1/p}$ suffices as well, for some c′ dependent on p, the number of channels, and the bit depth. Thus, the creation of a classifier that is robust to perturbations of the sizes described above is impossible.
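To see the scaling concretely, the following snippet (illustrative only; the constants here are arbitrary stand-ins for the c and c′ established in the paper) compares the perturbation budget with the total image size as n grows:

```python
# Illustrative constants; the paper's actual constants depend on the
# number of channels and the bit depth.
c, c_prime = 2.0, 2.0

for n in (32, 256, 1024, 4096):
    total = n * n                      # pixels in an n-by-n image
    changed = c * n                    # O(n) pixel changes suffice
    frac = changed / total             # vanishes as n grows
    l2 = c_prime * n ** 0.5            # p = 2 budget: O(n^(1/2))
    print(f"n={n:5d}  changed={changed:8.0f}  fraction={frac:.5f}  l2={l2:7.1f}")
```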
Conversely, we also demonstrate that an upper bound on classifier robustness that applies universally to all image classifiers cannot be smaller than ours by more than a constant factor (parametrized by bit depth). Finally, we show how increasing the bit depth of our image space decreases classifier robustness under certain definitions of perturbation size.
Our bounds are unconditional, therefore they apply to classifiers based on human perception as well. We discuss the possible interpretations of this fact, and its potential implications for designing computer vision systems.
Related work
The sensitivity of neural networks to small perturbations was discovered in [20], where the authors remarked that perhaps adversarial examples form a dense low measure set analogous to the rationals. A serious effort to explain adversarial examples was undertaken in [6], which suggests that adversarial examples are a consequence of high dimensional dot products between weights and inputs. However, their argument is not formal, and it has been shown that high dimensional dot products are neither necessary nor sufficient to explain adversarial images [21]. Formal arguments bounding adversarial perturbations and robustness have been proven for specific instances [5,23]. However, the settings under which these theoretical results hold are usually highly idealized, and these arguments do not hold under more general settings.
The most general results for explaining adversarial examples come from universal non-robustness bounds achieved through the use of isoperimetric bounds. This is the approach we take in this work. Isoperimetric results bound the surface area of any given volume in some space, so they are highly generalizable. The work presented in [4] uses an isoperimetric bound to bound the fraction of the space of natural images that is susceptible to changing classes under a small perturbation for any arbitrary classifier. However, they only consider perturbations measured by the Euclidean distance (2-norm), while our analysis encompasses perturbations measured by any p-norm. Furthermore, our bound is of a different nature as it considers the space of all images and is therefore unconditional and universal, while their bounds focus on image manifolds defined by generative functions and therefore are parametrized by the generator.
Isoperimetric bounds are also applied to understanding adversarial perturbations in [2], where it is shown that for arbitrary classifiers over boolean inputs, most inputs can be pushed into a misclassification region with a small perturbation as long as the region occupies an asymptotically finite fraction of the input space. This work has since been extended to apply to a more general class of spaces in [15] using concentration bounds. Our work instead focuses on pushing images into different classification regions, rather than into a specific misclassification region, and is therefore of a slightly different nature. Also, unlike our analysis, their analysis does not preclude the existence of asymptotically infinitesimal classes of images that are robust to perturbations.
Our work also explores how these bounds apply to the human visual system due to their universality in contrast to prior work.
Studies attempting to understand adversarial perturbations in the human visual system usually do so by showing people adversarial images. This line of work has revealed that imperceptible adversarial perturbations may in fact be perceptible and influence human classifications [3,24]. This line of work is very different from the work presented here: our approach is more theoretical, and our subsequent interpretations focus on perturbations that are clearly visible to humans despite being small.
In the remainder of this paper we provide a precise exposition of all our results as well as our terminology (Section 2), interpret these results (Section 3), and provide concluding remarks (Section 4). Proofs are mostly omitted and can be found in Appendix A.
Results
In this section we state universal non-robustness results for classifiers over images that can be encoded with finite bit strings. We then state how these non-robustness results are asymptotically the best we can achieve up to a constant factor, and we conclude by stating some results on how bit depth influences some of these bounds.
Intuitively, our results are a consequence of the high dimensional geometric phenomenon where measure concentrates near the boundary of sets in high dimensions.
Preliminaries
Images consist of pixels on a two dimensional grid, with each pixel consisting of a set of channels (for example R, G, and B) of varying intensity. Therefore, we define an h-channel image of size $n \times n$ to be a real valued tensor of shape $(n, n, h)$, where each entry is restricted to the interval $[0, 1]$. The first two dimensions index the pixel, while the third indexes the channel. We use $I_{n,h,\infty}$ to denote the set of all such images.
Only a finite subset of these images can be represented with a finite number of bits. Therefore, we define the set of all h-channel images of size $n \times n$ with bit depth $b$, denoted $B_{n,h,b}$, as the set of all bit valued tensors with shape $(n, n, h, b)$. The additional fourth dimension indexes the positions of a bit string that encodes the intensity of a channel. We map elements of $B_{n,h,b}$ to $I_{n,h,\infty}$ by mapping each length-$b$ bit string to equally spaced values in $[0, 1]$, with the largest value being 1 and the smallest being 0. We will use $I_{n,h,b}$ to denote the image of $B_{n,h,b}$ under this map. We will sometimes refer to $I_{n,h,b}$ as discrete image spaces to disambiguate them from $I_{n,h,\infty}$, which we will refer to as the continuous image space.
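As a concrete illustration, the following is a minimal sketch (assuming NumPy is available; the function names are ours, not from the paper) of the decoding map from $B_{n,h,b}$ to $I_{n,h,b}$:

```python
# Sketch of the map from bit tensors B_{n,h,b} to discrete images I_{n,h,b}:
# each length-b bit string becomes one of 2^b equally spaced values in [0, 1].
import numpy as np

def decode_bits(bits: np.ndarray) -> np.ndarray:
    """bits: 0/1 array of shape (n, n, h, b); returns an (n, n, h) image."""
    b = bits.shape[-1]
    weights = 2 ** np.arange(b - 1, -1, -1)   # read each bit string as an integer
    levels = bits @ weights                   # values in {0, ..., 2^b - 1}
    return levels / (2 ** b - 1)              # smallest maps to 0, largest to 1

rng = np.random.default_rng(0)
img = decode_bits(rng.integers(0, 2, size=(4, 4, 3, 8)))  # a random element of I_{4,3,8}
assert 0.0 <= img.min() and img.max() <= 1.0
```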
Classifiers and Classes
A classifier $C$ is a function $I_{n,h,b} \to Y$, where $Y$ is some finite set of labels. For each $y \in Y$, we define the class of $y$ as the preimage of $y$, denoted as the set of images $C^{-1}(y)$. We say that such a class is induced by $C$. If a class takes up a large part of the image space, then it contains a lot of images that look like randomly sampled noise, since randomly sampling channel values from a uniform distribution yields a uniform distribution over the image space. Therefore, many images in these classes tend to be uninteresting, which motivates the following definition:

Definition 1. A class $C \subseteq I_{n,h,b}$ is interesting if it is not empty, and if it contains no more than half of the total number of images in $I_{n,h,b}$.
Note that if no class is empty, then no more than 1 class can be uninteresting. This is because classes are disjoint and so at most 1 class can contain more than half the total number of images.
Perturbations and Robustness

In order to discuss perturbations, we define addition and subtraction over tensors that are of the same shape to be element-wise, and we define the p-norm of a tensor $A$, denoted $\|A\|_p$, to be the $p$th root of the sum of the absolute values of the entries of $A$ raised to the $p$th power. $p$ is assumed to be a non-negative integer, and for the special case of $p = 0$ we let $\|A\|_0$ be the number of non-zero entries in $A$.
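For concreteness, a small helper (assuming NumPy; a hypothetical illustration, not code from the paper) implementing these tensor p-norms, including the $p = 0$ convention:

```python
# The p-norms used in this paper, with p = 0 counting non-zero entries.
import numpy as np

def p_norm(a: np.ndarray, p: int) -> float:
    if p == 0:
        return float(np.count_nonzero(a))
    return float(np.sum(np.abs(a) ** p) ** (1.0 / p))

delta = np.zeros((8, 8, 3))
delta[0, 0, 0] = 0.5                          # perturb a single channel
print(p_norm(delta, 0), p_norm(delta, 1), p_norm(delta, 2))  # -> 1.0 0.5 0.5
```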
We can then define what it means for an image to be robust to perturbations:
Definition 2. Let $C : I_{n,h,b} \to Y$ be a classifier. We say an image $I \in I_{n,h,b}$ is robust to $L_p$-perturbations of size $d$ if for all $I' \in I_{n,h,b}$, $\|I - I'\|_p \le d$ implies $C(I) = C(I')$.
We can then define what it means for a class to be robust to perturbations. Note that unless a class occupies the entire image space, it must contain some non-robust images, so the best we can hope for is to attain robustness for a large fraction of the images within a class. This is reflected in the following definition.
Definition 3. Let $C : I_{n,h,b} \to Y$ be a classifier, and let $C$ be a class induced by it. Then we say that a class $C$ is $r$-robust to $L_p$-perturbations of size $d$ if it is not empty, and the number of images $I \in C$ that are robust to $L_p$-perturbations of size $d$ is at least $r|C|$, where $|C|$ is the number of images in $C$.
Universal upper bound on classifier robustness
We can now state a universal non-robustness result that applies to all classifiers over discrete image spaces I n,h,b .
Theorem 1. Let $C : I_{n,h,b} \to Y$ be any classifier. Then for all real values $c > 0$, no interesting class is $2e^{-2c^2}$-robust to $L_p$-perturbations of size $(2 + c\sqrt{h}\,n)^{1/\max(p,1)}$.
Proof sketch. We can use the images in $I_{n,h,b}$ to form a graph where images are the vertices, and images are connected if and only if they differ at exactly one channel. In other words, the image tensors must differ at precisely one entry. Figure 1a illustrates the construction of this graph. Note that graph distance between vertices coincides with the Hamming distance between the images represented by the vertices. Such graphs are known as Hamming graphs, and they have a vertex expansion (or isoperimetry) property [7] which implies that for any sufficiently small set, if we add all vertices that are within a graph distance of $O(n)$ to that set, then the size of that set increases by at least some given factor (see Figure 1b for an example). We can then show that an interesting class $C$ cannot be too robust in the following way: suppose for contradiction that it is. Then there must be some set $C' \subseteq C$ that is pretty large, and has the property that all vertices within some graph distance of $C'$ are in $C$. We can then use the vertex expansion property to show that adding these vertices to $C'$ gives a set larger than $C$, which contradicts the assumption that all vertices within some graph distance of $C'$ are in $C$. Plugging explicit values into this argument yields the statement of the theorem.
We can then generalize to L p -perturbations for arbitrary p since each coordinate varies by at most 1 unit. The full proof can be found in Appendix A.1.
Intuitively, the above results state that we can change the class of most "interesting" images with small perturbations that are on the order of O(n) pixel changes. The implications of this are considered in the discussion.

Fig. 1: Interpreting image spaces as Hamming graphs. a) We show how we construct a Hamming graph using the elements of $I_{2,1,1}$, the space of binary images on four pixels. By construction, graph distance coincides exactly with Hamming distance. b) We demonstrate the expansion property of Hamming graphs on a Hamming graph constructed using $I_{3,1,1}$ as the vertex set. If we pick some initial set of vertices (in black), then the set of vertices that are a graph distance of at most 3 (3 being n in this case) from that initial set (in black and red) is much larger than that initial set. The nature of "much larger" is expanded on in Appendix A.1.
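The expansion behaviour in Fig. 1b can be reproduced by brute force on a tiny image space; the following sketch (pure Python, our own illustration) builds the Hamming graph over $I_{3,1,1}$ and iterates $\mathrm{Exp}(.)$:

```python
# Brute-force vertex expansion on the Hamming graph over 3x3 binary images.
from itertools import product

def neighbors(v):
    # all vertices that differ from v at exactly one coordinate
    return [v[:i] + (1 - v[i],) + v[i + 1:] for i in range(len(v))]

def expand(s):
    return s | {w for v in s for w in neighbors(v)}

dim = 9                                       # 3*3 pixels, one binary channel
vertices = set(product((0, 1), repeat=dim))   # |V| = 2^9 = 512
s = {v for v in vertices if sum(v) <= 2}      # a small initial set (46 vertices)
sizes = [len(s)]
for _ in range(3):                            # expand by graph distance 3 (= n)
    s = expand(s)
    sizes.append(len(s))
print(sizes, "of", len(vertices))             # the set rapidly covers the space
```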
The universal non-robustness results are asymptotically optimal up to a constant factor

Up to a constant factor, the bounds in Theorem 1 are the best possible for a universal non-robustness result that applies to arbitrary predictors if we only consider $n$ and hold the number of channels per pixel $h$ and bit depth $b$ constant. In other words, there exists no bound on robustness that applies universally to all classifiers that grows much more slowly in $n$ than the ones given in Theorem 1. Therefore, if we wish to show that the classes induced by some classifier are not robust to, for instance, $L_0$-perturbations of size $O(\log(n))$, more specific properties of that classifier would need to be considered.
To prove this, consider the classifier defined by Algorithm 1.
Algorithm 1: Robust Classifier
Input: An image $I \in I_{n,h,b}$
Result: A label belonging to $\{0, 1\}$
  $S \leftarrow 0$;
  for $x \leftarrow 1$ to $n$ do
    for $y \leftarrow 1$ to $n$ do
      for $a \leftarrow 1$ to $h$ do
        $S \leftarrow S + I_{x,y,a}$;
  if $S < n^2 h / 2$ then return 0;
  else return 1;
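For reference, a direct Python transcription of Algorithm 1 (assuming NumPy; the vectorized sum replaces the three nested loops):

```python
# Algorithm 1: label an image by thresholding its total channel mass.
import numpy as np

def robust_classifier(image: np.ndarray) -> int:
    n, _, h = image.shape
    s = float(image.sum())                    # S = sum of I_{x,y,a} over all x, y, a
    return 0 if s < n * n * h / 2 else 1
```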
Theorem 2. Let $C : I_{n,h,b} \to \{0, 1\}$ be the classifier described by Algorithm 1. Then there exists an interesting class $C$ induced by $C$ such that for all $c > 0$:

1. $C$ is $(1 - 4c)$-robust to $L_p$-perturbations of size $c\sqrt{h}\,n - 2$ for all $p \le 1$.
2. $C$ is $(1 - 4c)$-robust to $L_p$-perturbations of size $\frac{(c\sqrt{h}\,n - 2)^{1/p}}{2^b - 1}$ for all $p \ge 2$.
Proof sketch. Given an image $I$, let $S(I)$ be the sum of all its channel values subtracted by $n^2 h/2$. Then $I$ being robust to $L_1$-perturbations of size $x$ is approximately equivalent to $S(I) \notin [-x, x]$. By the central limit theorem, the fraction of images $I$ such that $S(I) \notin [-cn\sqrt{h}, cn\sqrt{h}]$ is some monotonic function of $c$ independent of $n$ and $h$ if $n^2 h$ is sufficiently large, which is our desired result. Appendix A.2 provides a more careful analysis of this that does not rely on limiting behaviour and extends the result to all p-norms.
Combining this statement with Theorem 1 then immediately yields the following statement, which implies that the statements in Theorem 1 are asymptotically optimal up to a constant factor:
Corollary 1. For all integers $h, b \ge 1$, $p \ge 0$, and $r \in (0, 1)$, there exist constants $c_1 \ge c_2 > 0$ and $n_0$ such that for any $n \ge n_0$ and labels $Y$:

1. No classifier $C : I_{n,h,b} \to Y$ induces an interesting class that is $r$-robust to $L_p$-perturbations of size $c_1 n^{1/\max(p,1)}$.
2. There exists a classifier $C : I_{n,h,b} \to Y$ which induces an interesting class that is $r$-robust to $L_p$-perturbations of size $c_2 n^{1/\max(p,1)}$.
We remark that the constant factor by which Theorem 1 misses optimality is dependent on the bit depth $b$ for p-norms where $p \ge 2$, so significant improvements in the bound may still be possible when we consider it. We make some progress towards this in Theorem 3.

Table 1: Bounds for attainable robustness. Rather than leaving the robustness and bound parametrized by a separate constant $c$, the bounds have been reparametrized in terms of the robustness $r$. The upper bound should be understood as "no classifier induces an interesting class that is $r$-robust to perturbations of these sizes" and the lower bound should be interpreted as "there exists a classifier that induces an interesting class that is $r$-robust to perturbations of these sizes".
| Perturbation | Upper bound | Lower bound |
| --- | --- | --- |
| $L_0$- and $L_1$-perturbation | $2 + \sqrt{\tfrac{h}{2}\ln(\tfrac{2}{r})}\, n$ | $-2 + \tfrac{1-r}{4}\sqrt{h}\, n$ |
| $L_p$-perturbation, $p \ge 2$ | $\min\left\{\left(2 + \sqrt{\tfrac{h}{2}\ln(\tfrac{2}{r})}\, n\right)^{1/p},\ \left(\sqrt{2\ln(\tfrac{2}{r})} + \tfrac{2\sqrt{h}}{2^b}\, n\right)^{2/p}\right\}$ | $\dfrac{\left(-2 + \tfrac{1-r}{4}\sqrt{h}\, n\right)^{1/p}}{2^b - 1}$ |
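The bounds in Table 1 are straightforward to evaluate numerically; the following sketch (standard library only; the parameter values are our own examples) prints the upper and lower bounds for a few image sizes:

```python
# Evaluate the Table 1 bounds for given n, h, b, robustness r, and norm p.
from math import sqrt, log

def table1_bounds(n, h, b, r, p):
    up_01 = 2 + sqrt(h / 2 * log(2 / r)) * n              # L0/L1 upper bound
    lo_01 = -2 + (1 - r) / 4 * sqrt(h) * n                # L0/L1 lower bound
    if p <= 1:
        return up_01, lo_01
    up_p = min(up_01 ** (1 / p),
               (sqrt(2 * log(2 / r)) + 2 * sqrt(h) / 2 ** b * n) ** (2 / p))
    lo_p = lo_01 ** (1 / p) / (2 ** b - 1)
    return up_p, lo_p

for n in (32, 256, 1024):
    print(n, table1_bounds(n, h=3, b=8, r=0.5, p=2))
```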
Classifier robustness to $L_2$-perturbations decreases with increasing bit depth

In this section we investigate the role played by the bit depth $b$. Theorem 2 has a dependency on $b$ when considering $L_p$-perturbations for $p \ge 2$. Is there an alternative construction by which we can remove any such dependency altogether to close the gap between Theorem 1 and 2? We demonstrate in this section that we cannot. Specifically, we can derive a universal upper bound on robustness that is dependent on $b$, such that as $b$ grows without bound, this bound approaches some constant independent of the number of pixels in the image.
Theorem 3. Let $C : I_{n,h,b} \to Y$ be any classifier. Then for all real values $c > 0$ and $p \ge 2$, no interesting class is $2e^{-c^2/2}$-robust to $L_p$-perturbations of size $\left(c + \frac{2n\sqrt{h}}{2^b}\right)^{2/p}$.
Proof sketch. We will focus on the 2-norm. Extension to higher p-norms is straightforward and is given as part of the full proof found in Appendix A.3. The main idea of the proof rests on the fact that if we extend the classifier to the continuous image space with something like a nearest neighbour approach, the measure of the images that are robust to perturbations of a constant size is small (the statement and proof may be found in Appendix A.4). Therefore, if we randomly jump from an image in the discrete image space to an image in the continuous image space, with high probability we will be within a constant distance of an image of a different class. The size of this random jump can be controlled with a factor that shrinks with increasing bit depth. Summing up the budget required for this jump, the perturbation required on the continuous image space, and the jump back to the discrete image space yields the desired bound.
We remark that this suggests that the bounds in Theorem 1 pertaining to L p -perturbations for p ≥ 2 can be improved to reflect its dependency on the bit depth b. However, additional work would need to be done to show that the component that shrinks with b scales with n 1/p rather than n 2/p .
Summary of bounds and their relation to average image distances
We conclude this section by recapitulating the bounds we derived and compare them to the average distances between images for context.
We summarize the bounds we derived in Table 1. For parsimony, we have reparametrized the bounds in terms of the robustness r in Table 1, although the equations look more complex as a result. In terms of image size n, the bounds stated for the 0-norm and 1-norm are asymptotically optimal up to a constant factor. The bounds for the other p-norms are also asymptotically optimal up to a constant factor, although the constant is parametrized by the bit depth b. We showed that the presence of b in our lower bound is not an artifact of our construction: robustness really does drop as b increases (Theorem 3).
To conclude this section, we contextualize the bounds derived in this section by comparing them to typical distances between random elements of the image space. We can show that for a pair of images $I, I' \in I_{n,h,b}$ that are sampled independently and uniformly, we have:

$$\mathbb{E}[\|I - I'\|_p] \ge k_{h,b,p}\, n^{2/\max(1,p)} \tag{1}$$

Where $k_{h,b,p}$ is some constant parametrized by $h$, $b$, and $p$. See Appendix A.5 for additional details.
Combining this with Corollary 1 shows that if $n$ is sufficiently large, for 99% (or some arbitrarily high percentage) of images $I$ within some interesting class, there exists some $c_{h,b,p}$ parametrized by $h$, $b$, and $p$ such that:

$$\frac{\min_{X \in I_{n,h,b},\, C(I) \neq C(X)} \|I - X\|_p}{\mathbb{E}[\|I - I'\|_p]} \le c_{h,b,p}\, n^{-\frac{1}{\max(p,1)}} \tag{2}$$
The right hand side approaches 0 as n grows without bound, so compared to typical distances one finds in an image space, the distance of an image to an image outside of its class is vanishingly small in any p-norm it is measured in.
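The scale separation in Equation 2 is easy to observe empirically; the sketch below (assuming NumPy; a single sample pair per size, for illustration only) compares the Theorem 1 budget to the distance between two random images:

```python
# The non-robustness budget (O(n) for p = 1) shrinks relative to the typical
# distance between random images (which grows like n^2 for p = 1).
import numpy as np

rng = np.random.default_rng(0)
h, b, c = 3, 8, 1.0
for n in (8, 32, 128):
    I1 = rng.integers(0, 2 ** b, size=(n, n, h)) / (2 ** b - 1)
    I2 = rng.integers(0, 2 ** b, size=(n, n, h)) / (2 ** b - 1)
    avg_dist = np.abs(I1 - I2).sum()          # one sample of ||I - I'||_1
    budget = 2 + c * np.sqrt(h) * n           # Theorem 1 budget for p = 1
    print(n, budget / avg_dist)               # ratio decays roughly like 1/n
```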
Human classification decisions are subject to universal bounds on robustness
Since the bounds in Table 1 apply universally to any image classifier, they must also apply to the human visual system. Although there are many nuances to consider when interpreting the human visual system as a classifier, we can abstract most of them out by considering the following system for classifying images: we imagine a room containing a person and a monitor that displays images of size n-by-n. The person then has access to a selection of labels to label images with. To classify an image, the image is first fed into a memoization subroutine that checks if the image has been seen before and returns the label it was previously labelled with if it has. If the image has not been seen before, it is then displayed on the monitor, and the person is allowed to select a single label (or no label at all) to apply to the image. We remark that this classifier can be concretely realized, so we cannot dismiss it as simply an abstract construction.
This system acts as a classifier which partitions the set of all images into disjoint classes, therefore the bounds in Table 1 must apply. To simplify the discussion, we make an assumption about the human based classifier: at least half the images in the image space are unlabelled. This condition is met if there is no label applicable to images that look like random static. Intuitively, we can interpret labelled images as ones that are "meaningful" and unlabelled images as ones that are "meaningless" if the label set is sufficiently large.
If the unlabelled images occupy at least half the image space, then the labelled images form an interesting class (as defined by Definition 1). Therefore, the bounds in Table 1 apply, which means that a large fraction of labelled images can be turned into unlabelled images with a small perturbation.
If we return to the intuition that labels formalize the notion of "meaning", this means that for most "meaningful" images, the meaning can be erased with only a tiny fluctuation. Conversely, the "meaning" present in most "meaningful" images arises from tiny fluctuations.
The bounds in Table 1 then state that such "meaning" can fit in a perturbation of size O(n) when measured using the 1-norm or via the Hamming distance. This can be interpreted as a statement about the saliency of line drawings. Figure 2 gives a demonstration of how line drawings are small perturbations that contain "meaning".
Table 1 also states that when we raise the bit depth of the image space to be arbitrarily high, "meaning" can fit in a perturbation of size O(1) when measured using a p-norm with p ≥ 2. Line drawings do not necessarily fulfill this criterion, so the interpretation of this fact is more difficult. The human visual system is known to be particularly sensitive to certain small cues [11], but a unified understanding remains elusive.
Understanding the nature of the small perturbations that humans are sensitive to is not merely of academic curiosity. The results summarized in Table 1 show that no computer vision system can be robust to small perturbations. However, a computer vision system that is aligned to the human visual system ought not be robust to small perturbations, since the human visual system is not robust either. Over the past decade we have learned that standard machine learning methodology does not automatically produce vision systems that are aligned to the human visual system with respect to small perturbations [20], and methodologies that seek to produce such vision systems still contain misalignments [22]. A deeper understanding of how small perturbations affect the human visual system may inform the development of such methodologies (for example we may wish to explicitly train computer vision systems on human sensitive small perturbations), which is becoming increasingly necessary as computer vision systems become increasingly deployed in safety and security critical applications, where the trustworthiness of the system is essential [18,12].

Fig. 2: Small perturbations form meaningful patterns. We show how a small perturbation (a) when overlaid on either a (b) natural image (sourced from [9]) or a (d) uniformly randomly drawn image is able to add meaningful information, in this case a parachute, to those images (c, e).
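To make the line-drawing interpretation concrete, the following sketch (assuming NumPy; our own toy example, not the actual Fig. 2 perturbation) overlays a one-pixel-wide stroke on an image and measures the resulting perturbation:

```python
# A "line drawing" touches O(n) of the n^2 pixels, so it is tiny in the
# 0-norm and 1-norm even though it is plainly visible.
import numpy as np

n = 128
base = np.random.default_rng(0).random((n, n))  # stand-in for any base image
drawing = np.zeros((n, n))
idx = np.arange(n)
drawing[idx, idx] = 1.0                         # a diagonal stroke: n pixels
overlaid = np.clip(base + drawing, 0.0, 1.0)
perturbation = overlaid - base
print(np.count_nonzero(perturbation), "of", n * n, "pixels changed;",
      "1-norm:", float(np.abs(perturbation).sum()))
```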
Conclusion
We have derived universal non-robustness bounds that apply to any arbitrary image classifier. We have further demonstrated that up to a constant factor, these are the best bounds attainable. These bounds reveal that most images in any interesting class can have their class changed with a perturbation that is asymptotically infinitesimal when compared to the average distance between images. We then discuss how these universal properties of classifiers relate to the human visual system. We show that part of our results can be interpreted as the sensitivity of the human visual system to line drawings, which are tiny signals when measured using the 1-norm or 0-norm. However, line drawings can still be "large" when measured using the 2-norm, so a full understanding remains the subject of future work.
Our results focus on image classifiers, which make hard decisions when labelling images. However, vision models underlying the classifiers can make soft decisions, which are then further processed into hard decisions. The applicability of our results to such underlying vision models will be the subject of future investigation.
Bibliography

[1] Franck Barthe and Bernard Maurey. Some remarks on isoperimetry of gaussian type. In Annales de l'Institut Henri Poincare (B) Probability and Statistics, volume 36, pages 419-434. Elsevier, 2000.
[2] Dimitrios I Diochnos, Saeed Mahloujifar, and Mohammad Mahmoody. Adversarial risk and robustness: General definitions and implications for the uniform distribution. arXiv preprint arXiv:1810.12272, 2018.
[3] Gamaleldin F Elsayed, Shreya Shankar, Brian Cheung, Nicolas Papernot, Alex Kurakin, Ian Goodfellow, and Jascha Sohl-Dickstein. Adversarial examples that fool both computer vision and time-limited humans. arXiv preprint arXiv:1802.08195, 2018.
[4] Alhussein Fawzi, Hamza Fawzi, and Omar Fawzi. Adversarial vulnerability for any classifier. arXiv preprint arXiv:1802.08686, 2018.
[5] Justin Gilmer, Luke Metz, Fartash Faghri, Samuel S Schoenholz, Maithra Raghu, Martin Wattenberg, and Ian Goodfellow. Adversarial spheres. arXiv preprint arXiv:1801.02774, 2018.
[6] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
[7] LH Harper. On an isoperimetric problem for hamming graphs. Discrete applied mathematics, 95(1-3):285-309, 1999.
[8] Wassily Hoeffding. Probability inequalities for sums of bounded random variables. In The collected works of Wassily Hoeffding, pages 409-426. Springer, 1994.
[9] Jeremy Howard. imagenette. URL https://github.com/fastai/imagenette/.
[10] Rob Kaas and Jan M Buhrman. Mean, median and mode in binomial distributions. Statistica Neerlandica, 34(1):13-18, 1980.
[11] Jiangang Liu, Jun Li, Lu Feng, Ling Li, Jie Tian, and Kang Lee. Seeing jesus in toast: neural and behavioral correlates of face pareidolia. Cortex, 53:60-77, 2014.
[12] Lei Ma, Felix Juefei-Xu, Minhui Xue, Qiang Hu, Sen Chen, Bo Li, Yang Liu, Jianjun Zhao, Jianxiong Yin, and Simon See. Secure deep learning engineering: A software quality assurance perspective. arXiv preprint arXiv:1810.04538, 2018.
[13] Gabriel Resende Machado, Eugênio Silva, and Ronaldo Ribeiro Goldschmidt. Adversarial machine learning in image classification: A survey toward the defender's perspective. ACM Computing Surveys (CSUR), 55(1):1-38, 2021.
[14] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
[15] Saeed Mahloujifar, Dimitrios I Diochnos, and Mohammad Mahmoody. The curse of concentration in robust learning: Evasion and poisoning attacks from concentration of measure. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 4536-4543, 2019.
[16] Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2574-2582, 2016.
[17] Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z Berkay Celik, and Ananthram Swami. The limitations of deep learning in adversarial settings. In 2016 IEEE European symposium on security and privacy (EuroS&P), pages 372-387. IEEE, 2016.
[18] Ana Pereira and Carsten Thomas. Challenges of machine learning applied to safety-critical cyber-physical systems. Machine Learning and Knowledge Extraction, 2(4):579-602, 2020.
[19] Herbert Robbins. A remark on stirling's formula. The American mathematical monthly, 62(1):26-29, 1955.
[20] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
[21] Thomas Tanay and Lewis Griffin. A boundary tilting persepective on the phenomenon of adversarial examples. arXiv preprint arXiv:1608.07690, 2016.
[22] Florian Tramer, Nicholas Carlini, Wieland Brendel, and Aleksander Madry. On adaptive attacks to adversarial example defenses. arXiv preprint arXiv:2002.08347, 2020.
[23] Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. Robustness may be at odds with accuracy. arXiv preprint arXiv:1805.12152, 2018.
[24] Zhenglong Zhou and Chaz Firestone. Humans can decipher adversarial images. Nature communications, 10(1):1-9, 2019.
A Proofs of Statements
A.1 Proof of Theorem 1
Properties of binomial coefficients

We will work with binomial coefficients extensively. To simplify some of our statements, we will extend the definition of a binomial coefficient to work with any $n > 0$ and arbitrary integer $k$:

$$\binom{n}{k} = \begin{cases} \frac{n!}{k!(n-k)!} & \text{if } 0 \le k \le n \\ 0 & \text{otherwise} \end{cases} \tag{3}$$
Binomial coefficients can be bounded in the following way:

Lemma 1. $\binom{n}{k} < \frac{2^n}{\sqrt{n}}$ when $n \ge 1$.
Proof. We first note that $n!$ is bounded by the following for all $n \ge 1$ [19]:

$$\sqrt{n}\,\frac{n^n}{e^n} < \frac{n!}{\sqrt{2\pi}} < \sqrt{n}\,\frac{n^n}{e^n}\, e^{1/(12n)} \tag{4}$$
Applying the appropriate inequalities for the numerator and denominator yields the following for when $n$ is even:

$$\binom{n}{k} \le \binom{n}{n/2} = \frac{n!}{((n/2)!)^2} < 2\,\frac{2^n}{\sqrt{n}}\,\frac{e^{1/(12n)}}{\sqrt{2\pi}} \tag{5}$$

When $n$ is odd, we have:

$$\binom{n}{k} \le \binom{n}{\lfloor n/2 \rfloor} \tag{6}$$
$$= \frac{1}{2}\binom{n+1}{(n+1)/2} \tag{7}$$
$$< 2\,\frac{2^n}{\sqrt{n+1}}\,\frac{e^{1/(12(n+1))}}{\sqrt{2\pi}} \tag{8}$$
$$< 2\,\frac{2^n}{\sqrt{n}}\,\frac{e^{1/(12n)}}{\sqrt{2\pi}} \tag{9}$$

Where the third comparison is an application of Equation 5. If $n \ge 1$, we have $\frac{e^{1/(12n)}}{\sqrt{2\pi}} < 0.5$, which proves the claim.
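A quick numeric sanity check of Lemma 1 (standard library only; our own test values):

```python
# Verify that the largest binomial coefficient stays below 2^n / sqrt(n).
from math import comb, sqrt

for n in (1, 2, 5, 10, 50, 200):
    assert comb(n, n // 2) < 2 ** n / sqrt(n), n
print("Lemma 1 bound holds on all tested values")
```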
It will also be useful to define the following cumulative sums (which are also the tails of binomial distributions):

$$U_{n,p}(k) = \begin{cases} \sum_{i=0}^{k} \binom{n}{i} p^i (1-p)^{n-i} & \text{if } k \ge 0 \\ 0 & \text{otherwise} \end{cases} \tag{10}$$
We can show that the ratio of these cumulative sums is monotonic increasing:

Lemma 2. Let $p \in (0, 1)$. Then $\frac{U_{n,p}(x-k)}{U_{n,p}(x)}$ is monotonic increasing in $x$, where $0 \le x \le n$ and $k$ is any positive integer.
Proof. First, we note that the ratio $\binom{n}{x-k} / \binom{n}{x}$ is monotonic increasing in $x$ when $x \ge 0$. This holds by definition if $x - k < 0$. Otherwise, we have the following:

$$\frac{\binom{n}{x-k} / \binom{n}{x}}{\binom{n}{x-k+1} / \binom{n}{x+1}} = \frac{(n - x)}{(n - x + k)} \cdot \frac{(x - k + 1)}{(x + 1)} \le 1 \tag{11}$$
We then claim the following holds for all x where 0 ≤ x ≤ n − 1:
$$\frac{U_{n,p}(x-k)}{U_{n,p}(x)} \le \frac{U_{n,p}(x-k+1)}{U_{n,p}(x+1)} \le \frac{\binom{n}{x-k+1}(1-p)^k}{\binom{n}{x+1}p^k} \tag{12}$$

The above holds with equality when $x - k + 1 < 0$. If $x - k + 1 = 0$, the above also holds: the leftmost ratio is 0. For the other two ratios, if we multiply the rightmost ratio by $(1-p)^{n-k}$ above we can see that the numerators are equal while the denominator of the rightmost ratio is smaller. Otherwise, by induction on $x$ we have:

$$\frac{U_{n,p}(x-k)}{U_{n,p}(x)} \le \frac{\binom{n}{x-k}(1-p)^k}{\binom{n}{x}p^k} \tag{13}$$
$$\le \frac{\binom{n}{x-k+1}(1-p)^k}{\binom{n}{x+1}p^k} \tag{14}$$
$$= \frac{\binom{n}{x-k+1}\, p^{x-k+1} (1-p)^{n-x+k-1}}{\binom{n}{x+1}\, p^{x+1} (1-p)^{n-x-1}} \tag{15}$$

Where the first inequality follows by induction, and the second inequality follows because $\binom{n}{x-k} / \binom{n}{x}$ is monotonic increasing in $x$.

For any positive numbers $a, c$ and strictly positive numbers $b, d$ where $\frac{a}{b} \le \frac{c}{d}$, we have $\frac{a}{b} \le \frac{a+c}{b+d} \le \frac{c}{d}$ because:

$$\frac{d}{d\lambda}\, \frac{a + \lambda c}{b + \lambda d} = \frac{bc - ad}{(b + \lambda d)^2} \ge 0 \tag{16}$$

Therefore, we have:

$$\frac{U_{n,p}(x-k)}{U_{n,p}(x)} \le \frac{U_{n,p}(x-k) + \binom{n}{x-k+1}\, p^{x-k+1} (1-p)^{n-x+k-1}}{U_{n,p}(x) + \binom{n}{x+1}\, p^{x+1} (1-p)^{n-x-1}} \tag{17}$$
$$= \frac{U_{n,p}(x-k+1)}{U_{n,p}(x+1)} \tag{18}$$
$$\le \frac{\binom{n}{x-k+1}(1-p)^k}{\binom{n}{x+1}p^k} \tag{19}$$
As claimed. Carrying on the induction up to x = n − 1 yields the statement.
Bounding the interior of a set over a Hamming graph

We will prove our main results by an application of isoperimetry bounds over a Hamming graph. Let $Q$ be a set of $q$ symbols. Then we define the $n$ dimensional Hamming graph over $q$ letters, denoted $H(n, q)$, as the graph with a vertex set $Q^n$ and an edge set containing all edges between vertices that differ at precisely one coordinate. For example, $H(n, 2)$ is isomorphic to the Boolean hypercube. We will use $V(H(n, q))$ to denote the vertex set of the Hamming graph. Let $S \subseteq H(n, q)$. We define the expansion of $S$, denoted $\mathrm{Exp}(S)$, as the set of vertices that are either in $S$ or have a neighbour in $S$. Since $\mathrm{Exp}(.)$ inputs and outputs sets of vertices, we can iterate it. We will use $\mathrm{Exp}^k(.)$ to denote $k$ applications of $\mathrm{Exp}(.)$.

We now adapt a result from [7] (Theorem 3 in the paper).
Lemma 3 (Isoperimetric Theorem on Hamming graphs). Let $S \subsetneq H(n, q)$. Then:

$$\frac{|\mathrm{Exp}^k(S)|}{|V(H(n, q))|} \ge \min\left\{ U_{n,p}(r + k) \;\middle|\; U_{n,p}(r) = \frac{|S|}{|V(H(n, q))|},\ p \in (0, 1),\ r \in [0, n - k) \right\} \tag{20}$$
To work with this we first obtain bounds for the expression on the right hand side of Lemma 3.
Lemma 4. Let $p$ be any value in $(0, 1)$. Let $n > r \ge k$ such that $U_{n,p}(r) \le \frac{1}{2}$. Then $\frac{U_{n,p}(r-k)}{U_{n,p}(r)} \le 2e^{-2(k-1)^2/n}$.

Proof. Let $X$ be a binomially distributed random variable with $n$ trials and probability of success $p$. Let $r$ be the median of $X$. We have $r \le np + 1$ because the median and mean differ by at most 1 [10].

$U_{n,p}(r - k)$ can be interpreted as $\Pr(X \le r - k)$. We can then apply Hoeffding's inequality [8]:

$$\Pr(X \le r - k) \le \Pr(X \le np + 1 - k) \tag{21}$$
$$\le e^{-2(k-1)^2/n} \tag{22}$$

Since $r$ is the median of $X$, we also have $U_{n,p}(r) \ge \frac{1}{2}$. Combining this with the above equation gives:

$$\frac{U_{n,p}(r-k)}{U_{n,p}(r)} \le 2e^{-2(k-1)^2/n} \tag{23}$$

Since $\frac{U_{n,p}(x-k)}{U_{n,p}(x)}$ is monotonically increasing via Lemma 2, this also implies that the above relation holds for all smaller $r$. This completes the proof.
We can then plug this into Lemma 3 to obtain a non-robustness result on Hamming graphs, which we will then apply to image spaces.
Theorem 4. Let $S \subsetneq V(H(n, q))$ such that $|S| \le |V(H(n, q))|/2$, and let $c > 0$ be any number. Let $S' \subseteq S$ be the set of vertices for which no path with $c\sqrt{n} + 2$ edges or less leads to a vertex not in $S$. Then $\frac{|S'|}{|S|} < 2e^{-2c^2}$.

Proof. Suppose for contradiction that $|S'| \ge 2e^{-2c^2}|S|$. Since for any vertex in $S'$ no path with $c\sqrt{n} + 2$ edges or less leads to a vertex outside of $S$, we have $\mathrm{Exp}^{c\sqrt{n}+2}(S') \subseteq S$. Then:

$$|\mathrm{Exp}^{c\sqrt{n}+2}(S')| \ge |V(H(n, q))| \min\left\{ U_{n,p}(r + c\sqrt{n} + 2) \;\middle|\; U_{n,p}(r) = \frac{|S'|}{|V(H(n, q))|},\ p \in (0, 1),\ r \in [0, n - c\sqrt{n} - 2) \right\} \tag{24}$$
$$\ge \frac{1}{2}\, e^{2(c\sqrt{n}+1)^2/n}\, |S'| \tag{25}$$
$$> \frac{1}{2}\, e^{2c^2}\, |S'| \tag{26}$$

The first relation follows from Lemma 3 and the second follows from Lemma 4. Lemma 4 applies since $\mathrm{Exp}^{c\sqrt{n}+2}(S') \subseteq S$, so $|\mathrm{Exp}^{c\sqrt{n}+2}(S')| \le |S| \le |V(H(n, q))|/2$. But then $|\mathrm{Exp}^{c\sqrt{n}+2}(S')| > \frac{1}{2} e^{2c^2} \cdot 2e^{-2c^2}|S| = |S|$, which implies that $\mathrm{Exp}^{c\sqrt{n}+2}(S') \not\subseteq S$. This is a contradiction, so we obtain our desired statement.
Proving Theorem 1

Let $C : I_{n,h,b} \to Y$ be a classifier and let $C \subseteq I_{n,h,b}$ be any interesting class induced by $C$.
Lemma 5. $C$ is not $2e^{-2c^2}$-robust to $L_0$-perturbations of size $c\sqrt{h}\,n + 2$.
Proof. Let $M : V(H(n^2 h, 2^b)) \to I_{n,h,b}$ be the following bijection: first let $Q$ be a set of $2^b$ equally spaced values between 0 and 1, where the largest value is 1 and the smallest is 0. Then the elements of $V(H(n^2 h, 2^b))$ can be viewed as $Q^{n^2 h}$. We then map elements from $Q^{n^2 h}$ to $I_{n,h,b}$ such that the inverse operation is a flattening of the image tensor. Note that such a mapping preserves graph distance on $V(H(n^2 h, 2^b))$ as Hamming distance on $I_{n,h,b}$.
Let $C' \subseteq C$ be the set of images that are robust to $L_0$-perturbations of size $c\sqrt{h}\,n + 2$. Let $S = M^{-1}(C)$ and $S' = M^{-1}(C')$. $S'$ is then the set of vertices for which no path with $c\sqrt{h}\,n + 2$ edges or less leads to a vertex outside of $S$.

$C$ is an interesting class and $M(.)$ preserves cardinality due to it being a bijection. Therefore $|S| \le |V(H(n^2 h, 2^b))|/2$, so by Theorem 4 we have $|S'|/|S| < 2e^{-2c^2}$. Again, since $M(.)$ preserves cardinality, this implies that $|C'|/|C| < 2e^{-2c^2}$, which means that $C$ is not $2e^{-2c^2}$-robust to $L_0$-perturbations of size $c\sqrt{h}\,n + 2$.
We remark that if the domain of $M(.)$ is changed to $H(n^2, h 2^b)$, the above argument also shows that $C$ is not $2e^{-2c^2}$-robust to $cn + 2$ pixel changes.
It is straightforward to generalize this to p-norms with larger p.
Lemma 6. $C$ is not $2e^{-2c^2}$-robust to $L_p$-perturbations of size $(c\sqrt{h}\,n + 2)^{1/p}$.
Proof. Let $S_1$ be the set of images that are robust to $L_0$-perturbations of size $d$, and let $S_2$ be the set of images that are robust to $L_p$-perturbations of size $d^{1/p}$. Suppose $I \notin S_1$. Then there exists some image $I'$ in a different class from $I$ such that $\|I - I'\|_0 \le d$. Therefore, for all $p > 0$, we have:

$$d \ge \|I - I'\|_0 \tag{27}$$
$$= \sum_{x,y,c} \lceil |I_{x,y,c} - I'_{x,y,c}| \rceil \tag{28}$$
$$\ge \sum_{x,y,c} |I_{x,y,c} - I'_{x,y,c}|^p \tag{29}$$
$$= (\|I - I'\|_p)^p \tag{30}$$

Where the second and third relations follow from the fact that channel values are contained in $[0, 1]$. Therefore, $I \notin S_2$ either, since $\|I - I'\|_p \le d^{1/p}$. Taking the contraposition yields $S_2 \subseteq S_1$. Setting $d = c\sqrt{h}\,n + 2$ and applying Lemma 5 gives the desired result.
A.2 Proof of Theorem 2
Anti-concentration inequalities

We first prove an anti-concentration lemma concerning the binomial distribution.

Lemma 7. Let $X$ be a random variable following the binomial distribution with $n$ trials and a probability of success of 0.5. Let $Y$ be a discrete random variable independent of $X$ whose distribution is symmetric about the origin. Then for any $t$ where $t < \mathbb{E}[X]$ and $t - \lfloor t \rfloor = 1/2$, we have:

$$\Pr(X + Y \le t) \ge \Pr(X < t) \tag{31}$$
Proof. We have the following:
$$\Pr(X + Y \le t) = \Pr(X + Y \le t,\ X < t) + \Pr(X + Y \le t,\ X > t) \tag{32}$$
$$\Pr(X < t) = \Pr(X + Y \le t,\ X < t) + \Pr(X + Y > t,\ X < t) \tag{33}$$

Therefore it suffices to show that $\Pr(X + Y \le t,\ X > t) \ge \Pr(X + Y > t,\ X < t)$. We have for any $r \ge 0$:

$$\Pr(X + Y \le t,\ X = t + r) = \Pr(Y \le -r)\Pr(X = t + r) \tag{34}$$
$$\ge \Pr(Y > r)\Pr(X = t + r) \tag{35}$$
$$\ge \Pr(Y > r)\Pr(X = t - r) \tag{36}$$
$$= \Pr(X + Y > t,\ X = t - r) \tag{37}$$
Where Equation 34 follows from the independence of $X$ and $Y$, Equation 35 follows from the symmetry of the distribution of $Y$, and Equation 36 follows from our assumption that $t < \mathbb{E}[X]$ and $t - \lfloor t \rfloor = 1/2$.
Summing over all positive $r$ for which $\Pr(X = t + r) > 0$ yields the desired result.
Lemma 8. Let $X_1, X_2, \ldots, X_n$ be independently and identically distributed random variables such that each $X_i$ is uniformly distributed on $2k$ evenly spaced real numbers $a = r_1 < r_2 < \ldots < r_{2k} = b$. Then for $t > 0$, we have:

$$\Pr\left(\sum_{i=1}^n X_i \le \left(\sum_{i=1}^n \mathbb{E}[X_i]\right) - t + (b - a)\right) > \frac{1}{2} - \frac{2t}{\sqrt{n}(b - a)} \tag{38}$$
Proof. Let $Y_1, Y_2, \ldots, Y_n$ be independently and identically distributed Bernoulli random variables with $p = 0.5$. Let $Z_1, Z_2, \ldots, Z_n$ be a set of independently and identically distributed random variables uniformly distributed on the integers between 1 and $k$ inclusive. If the $Y$s and $Z$s are independent of each other as well, we have:

$$\sum_{i=1}^n (X_i - \mathbb{E}[X_i]) = \frac{b - a}{2k - 1} \sum_{i=1}^n (kY_i + Z_i - \mathbb{E}[kY_i + Z_i]) \tag{39}$$
$$= k\,\frac{b - a}{2k - 1}\left[\left(\sum_{i=1}^n Y_i\right) + \left(\sum_{i=1}^n \frac{Z_i - \mathbb{E}[Z_i]}{k}\right) - \left(\sum_{i=1}^n \mathbb{E}[Y_i]\right)\right] \tag{40}$$

Let $\sum_{i=1}^n Y_i = B$, $\sum_{i=1}^n \frac{Z_i - \mathbb{E}[Z_i]}{k} = D$, and $k\,\frac{b-a}{2k-1} = c$. Then for any $t > 0$, we have:

$$\Pr\left(\sum_{i=1}^n (X_i - \mathbb{E}[X_i]) \le -t\right) = \Pr\left(B + D \le -\tfrac{t}{c} + \mathbb{E}[B]\right) \tag{41}$$
$$\ge \Pr\left(B + D \le -\tfrac{t}{c} + \mathbb{E}[B] - u\right) \tag{42}$$
$$\ge \Pr\left(B < -\tfrac{t}{c} + \mathbb{E}[B] - 1\right) \tag{43}$$
$$\ge \Pr\left(B - \mathbb{E}[B] < -\tfrac{2t}{b-a} - 1\right) \tag{44}$$
$$\ge \frac{1}{2} - \Pr\left(B - \mathbb{E}[B] \in \left[-\tfrac{2t}{b-a} - 1,\ 0\right]\right) \tag{45}$$
$$\ge \frac{1}{2} - \binom{n}{n/2} 2^{-n} \left(\tfrac{2t}{b-a} + 2\right) \tag{46}$$

Where $1 \ge u \ge 0$ is chosen such that $-\frac{t}{c} + \mathbb{E}[B] - u$ is the average of two adjacent integers. Equation 43 is then an application of Lemma 7 since $B$ is binomially distributed with $p = 0.5$ and $D$ has a distribution that is symmetric about the origin, and Equation 46 follows from the fact that no more than $x + 1$ values are supported on an interval of length $x$, and no supported value has probability greater than $\binom{n}{n/2} 2^{-n}$. Observing that $\binom{n}{n/2} 2^{-n} < \frac{1}{\sqrt{n}}$ due to Lemma 1 and substituting $t$ with $t - (b - a)$ yields the desired result.
Proving Theorem 2

Let $A : I_{n,h,b} \to \{0, 1\}$ be described by Algorithm 1. In other words, it is the classifier that inputs an image, sums all of its channels, and outputs 0 if the sum is less than $n^2 h/2$ and 1 otherwise. Let $Z$ be the class of images that $A$ outputs 0 on. Note that $Z$ is an interesting class since it cannot be larger than its complement, so it suffices to prove that $Z$ is robust.
Lemma 9. $Z$ is $(1 - 4c)$-robust to $L_1$-perturbations of size $c\sqrt{h}\,n - 2$.
Proof. Let $Z' \subseteq Z$ be the set of images in $Z$ that are robust to $L_1$-perturbations of size $c\sqrt{h}\,n - 2$. Let $I$ be a random image sampled uniformly. Then $|Z'| = \Pr(I \in Z')\, 2^{n^2 h b}$. We then have the following:

$$\Pr(I \in Z') = \Pr\left(\sum_{x,y,a} I_{x,y,a} + c\sqrt{h}\,n - 2 < n^2 h/2\right) \tag{47}$$
$$\ge \Pr\left(\sum_{x,y,a} I_{x,y,a} \le n^2 h/2 - c\sqrt{h}\,n + 1\right) \tag{48}$$
$$> \frac{1}{2} - 2c \tag{49}$$

Where the last inequality follows from Lemma 8 since each channel is sampled from a uniform distribution over a set of $2^b$ evenly spaced values between 0 and 1. Noting that $|Z| \le 2^{(n^2 h b) - 1}$ since it cannot be larger than its complement yields $\frac{|Z'|}{|Z|} \ge 1 - 4c$. Therefore, $Z$ is $(1 - 4c)$-robust to $L_1$-perturbations of size $c\sqrt{h}\,n - 2$.

Lemma 10. $Z$ is $(1 - 4c)$-robust to $L_0$-perturbations of size $c\sqrt{h}\,n - 2$.
Proof. It suffices to show that an image that is robust to L 1 -perturbations of size d is also robust to L 0 -perturbations of size d, since the statement then follows directly from Lemma 9.
Let $I$ be an image that is not robust to $L_0$-perturbations of size $d$, so there exists some $I'$ in a different class such that $\|I - I'\|_0 \le d$. Then:

$$d \ge \|I - I'\|_0 \tag{50}$$
$$= \sum_{(x,y,a)} \lceil |I_{x,y,a} - I'_{x,y,a}| \rceil \tag{51}$$
$$\ge \sum_{(x,y,a)} |I_{x,y,a} - I'_{x,y,a}| \tag{52}$$
$$= \|I - I'\|_1 \tag{53}$$

Where the second and third relations hold since channel values lie in $[0, 1]$. This implies that $I$ is not robust to $L_1$-perturbations of size $d$. Therefore any image that is not robust to $L_0$-perturbations of size $d$ is also not robust to $L_1$-perturbations of size $d$. The contraposition yields the desired statement.
Lemma 11. $Z$ is $(1 - 4c)$-robust to $L_p$-perturbations of size $\frac{(c\sqrt{h}\,n - 2)^{1/p}}{2^b - 1}$ for $p \ge 2$.

Proof. It suffices to show that any image that is robust to $L_0$-perturbations of size $d$ is also robust to $L_p$-perturbations of size $\frac{d^{1/p}}{2^b - 1}$ for any $p \ge 2$, since the statement then follows directly from Lemma 10. Let $I$ be an image that is robust to $L_0$-perturbations of size $d$. Let $I'$ be any image in a different class, so $\|I - I'\|_0 > d$. Then for any $p \ge 1$:

$$\|I - I'\|_p^p = \sum_{(x,y,a)} |I_{x,y,a} - I'_{x,y,a}|^p \tag{54}$$
$$\ge \sum_{(x,y,a)} \frac{\lceil |I_{x,y,a} - I'_{x,y,a}| \rceil}{(2^b - 1)^p} \tag{55}$$
$$= \frac{\|I - I'\|_0}{(2^b - 1)^p} \tag{56}$$
$$> \frac{d}{(2^b - 1)^p} \tag{57}$$

Where the second relation follows from the fact that if two channel values differ, they must differ by at least $\frac{1}{2^b - 1}$. Therefore, $\|I - I'\|_p > \frac{d^{1/p}}{2^b - 1}$ for any $I'$ whose class is different from $I$, so $I$ is robust to $L_p$-perturbations of size $\frac{d^{1/p}}{2^b - 1}$ for $p \ge 2$.
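The robustness fraction promised by Lemma 9 can also be checked by simulation; the following sketch (assuming NumPy; the parameter choices are our own) estimates the fraction of all images whose channel sum stays robustly below the Algorithm 1 threshold:

```python
# Monte Carlo check of Lemma 9: the fraction of uniformly random images that
# sit at least c*sqrt(h)*n - 2 below the threshold n^2 h / 2 (and hence are
# robust members of Z) should exceed 1/2 - 2c.
import numpy as np

rng = np.random.default_rng(0)
n, h, b, c = 32, 3, 8, 0.05
budget = c * np.sqrt(h) * n - 2
sums = rng.integers(0, 2 ** b, size=(2000, n * n * h)).sum(axis=1) / (2 ** b - 1)
frac = np.mean(sums + budget < n * n * h / 2)
print(float(frac), "vs predicted lower bound", 1 / 2 - 2 * c)
```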
A.3 Proof of Theorem 3
Let $C : I_{n,h,b} \to Y$ be any classifier, and let $C$ be any interesting class induced by it. Our objective is to show that $C$ is not robust to various perturbations.
Let $T = \{[x \cdot 2^{-b},\ (x+1) \cdot 2^{-b}) \mid x \in \mathbb{Z} \cap [0, 2^b - 2]\} \cup \{[1 - 2^{-b}, 1]\}$ be a set of $2^b$ equal length intervals whose union is the interval $[0, 1]$. Let $D_{2^b}(n^2 h) = T^{n^2 h}$ be their Cartesian power. Then the elements of $D_{2^b}(n^2 h)$ are disjoint, and their union is precisely the hypercube $[0, 1]^{n^2 h}$.
We can associate each element of $I_{n,h,b}$ with an element of $D_{2^b}(n^2 h)$ by first mapping $I_{n,h,b}$ to $[0, 1]^{n^2 h}$, which can be done by flattening the image tensor (we write $\mathrm{flat}(I)$ for the flattening of an image $I \in I_{n,h,b}$; the original symbol was lost in extraction). We then map that point to the element of $D_{2^b}(n^2 h)$ the point falls within. The overall mapping is bijective, and we will denote it by $F$.
Let $A : [0, 1]^{n^2 h} \times \mathbb{R} \to [0, 1]^{n^2 h} \cup \{\bot\}$ be a partial function that maps a point $p_1$ and a real value $c$ to a point $p_2$ such that the following hold:

1. $\|p_1 - p_2\|_2 \le c$.
2. Let $I_1, I_2 \in I_{n,h,b}$ such that $p_1 \in F(I_1)$ and $p_2 \in F(I_2)$. Then we require that $C(I_1) \neq C(I_2)$.

$A(.)$ returns $\bot$ if and only if no such $p_2$ exists. We can then define a procedure FindPerturbation for finding a perturbation given an image $I$, which is outlined in Algorithm 2.
Algorithm 2: Find Perturbation
Input: An image $I \in I_{n,h,b}$ and a real value $c$.
Result: An image $I' \in I_{n,h,b}$ such that $C(I) \neq C(I')$, or $\bot$.
  Sample $p_1$ from $F(I)$ uniformly at random;
  $p_2 \leftarrow A(p_1, c)$;
  if $p_2 = \bot$ then return $\bot$;
  else
    Find $I_2$ such that $p_2 \in F(I_2)$;
    return $I_2$;
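Algorithm 2 leaves the partial function $A(., c)$ abstract. The sketch below (assuming NumPy; `classify` is any classifier function, and the random search is our crude stand-in for $A$, so this illustrates the control flow only, not a faithful implementation):

```python
# Sketch of FindPerturbation: jump to a random point near the cell F(I),
# search the L2-ball of radius c for a point whose cell has a different
# class, and round back to the discrete image space.
import numpy as np

def find_perturbation(I, c, classify, b, rng, tries=1000):
    grid = 2 ** b - 1
    p1 = np.clip(I + rng.uniform(-0.5, 0.5, I.shape) / grid, 0.0, 1.0)
    for _ in range(tries):                     # crude stand-in for A(p1, c)
        step = rng.normal(size=I.shape)
        p2 = np.clip(p1 + c * step / np.linalg.norm(step), 0.0, 1.0)
        I2 = np.round(p2 * grid) / grid        # the discrete image whose cell holds p2
        if classify(I2) != classify(I):
            return I2
    return None                                # plays the role of the failure symbol
```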
Our proof strategy is to show that the perturbations found by FindPerturbation are guaranteed to be small, and that the probability of failure is low. This must then imply that most images are not robust.
Lemma 12. If $I' = \mathrm{FindPerturbation}(I, c)$ is not $\bot$, then $\|I - I'\|_2 \le c + \frac{2n\sqrt{h}}{2^b}$.

Proof. Each element of $D_{2^b}(n^2 h)$ has a diameter of $\frac{\sqrt{n^2 h}}{2^b}$, thus $p_1$ differs from $\mathrm{flat}(I)$ by at most that distance. Similarly, $p_2$ differs from $\mathrm{flat}(I_2) = \mathrm{flat}(I')$ by at most that distance. We also must have $\|p_1 - p_2\|_2 \le c$ since $I' \neq \bot$. Putting it altogether with the triangle inequality we get $\|\mathrm{flat}(I) - \mathrm{flat}(I')\|_2 \le c + \frac{2n\sqrt{h}}{2^b}$. Since $\mathrm{flat}(.)$ preserves distances, we get the desired statement.
Lemma 13. If $I$ is drawn uniformly from $C$, then $\Pr(\mathrm{FindPerturbation}(I, c) = \bot) < 2e^{-c^2/2}$.

Proof. Let $F(C)$ denote the image of $C$ under $F$. Let $\bigcup F(C)$ denote the union of all elements in $F(C)$. If the input $I$ is drawn uniformly from $C$, then $p_1$ is distributed uniformly over $\bigcup F(C)$. The procedure fails if and only if $A(p_1, c) = \bot$, which happens if and only if all points within a radius of $c$ from $p_1$ belong to $\bigcup F(C)$. Let $C''$ denote the set of all such points.

$$\Pr(A(p_1, c) = \bot) = \frac{\mu(C'')}{\mu(\bigcup F(C))} \tag{58}$$
$$< 2e^{-c^2/2} \tag{59}$$

Where $\mu(.)$ denotes the Lebesgue measure. The last inequality comes from Theorem 5, which is given in the next section. The statement applies for any set $S$ formed from a union of elements of $D_{2^b}(n^2 h)$ whose measure is no larger than $1/2$. $\bigcup F(C)$ satisfies these criteria since $C$ is an interesting class, so we attain the desired statement.
Lemma 14. $C$ is not $2e^{-c^2/2}$-robust to $L_2$-perturbations of size $c + \frac{2n\sqrt{h}}{2^b}$.

Proof. Let $I$ be drawn uniformly from $C$. Let $C_r$ be the set of images that are robust to $L_2$-perturbations of size $c + \frac{2n\sqrt{h}}{2^b}$. Let $I' = \mathrm{FindPerturbation}(I, c)$. Then $I'$ is randomly distributed over $I_{n,h,b} \cup \{\bot\}$. By Lemma 12, if $I' \in I_{n,h,b}$, then $\|I - I'\|_2 \le c + \frac{2n\sqrt{h}}{2^b}$, which implies that $I \notin C_r$. By contraposition, $I \in C_r$ implies that $\mathrm{FindPerturbation}(I, c) = \bot$. Therefore:

$$\Pr(I' = \bot) = \Pr(I \in C_r) + \Pr(I \notin C_r,\ I' = \bot) \ge \Pr(I \in C_r) = \frac{|C_r|}{|C|}$$

By Lemma 13, $\Pr(I' = \bot) < 2e^{-c^2/2}$. Thus, $\frac{|C_r|}{|C|} < 2e^{-c^2/2}$, which yields the desired statement.
Lemma 15. $C$ is not $2e^{-c^2/2}$-robust to $L_p$-perturbations of size $\left(c + \frac{2n\sqrt{h}}{2^b}\right)^{2/p}$ for $p \ge 2$.

Proof. We use the identical argument from Lemma 6.

Let $S_1$ be the set of images that are robust to $L_2$-perturbations of size $d$, and let $S_2$ be the set of images that are robust to $L_p$-perturbations of size $d^{2/p}$, where $p \ge 2$.

Suppose $I \notin S_1$. Then there exists some image $I'$ in a different class from $I$ such that $\|I - I'\|_2 \le d$. Therefore, for all $p \ge 2$, we have:

$$d^2 \ge \|I - I'\|_2^2 = \sum_{x,y,a} |I_{x,y,a} - I'_{x,y,a}|^2 \ge \sum_{x,y,a} |I_{x,y,a} - I'_{x,y,a}|^p = (\|I - I'\|_p)^p$$

Where the third relation follows from the fact that channel values are contained in $[0, 1]$. Therefore, $I \notin S_2$ either, since $\|I - I'\|_p \le d^{2/p}$. Taking the contraposition yields $S_2 \subseteq S_1$.

Setting $d = c + \frac{2n\sqrt{h}}{2^b}$ and applying Lemma 14 gives the desired result.
A.4 Proof of Theorem 5
Our objective in this section is to complete the proof of Theorem 3 by proving Theorem 5, stated below. We will use µ(.) to denote Lebesgue measure throughout this section.
Definition 4. We say a set $S \subseteq [0, 1]^n$ is a regular set if there is some $q$ and $T \subseteq D_q(n)$ such that $S = \bigcup_{t \in T} t$.

Theorem 5. Let $S \subseteq [0, 1]^n$ be a regular set such that $\mu(S) \le 1/2$. Let $S_r \subseteq S$ contain all the points $x$ in $S$ such that for all $y \in [0, 1]^n$, $\|x - y\|_2 \le r \implies y \in S$. Then $\frac{\mu(S_r)}{\mu(S)} < 2e^{-r^2/2}$.
Properties of the standard normal distribution

First, we define the cumulative distribution function for the standard normal distribution and its derivative.

$$\Phi(x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}}\, e^{-t^2/2}\, dt \tag{67}$$
$$\Phi'(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2} \tag{68}$$
Similarly to the discrete case, the ratio of the cumulative distribution functions is monotonic increasing.

Lemma 16. $\frac{\Phi(x-k)}{\Phi(x)}$ is monotonic increasing in $x$ for all $k \ge 0$.

Proof. Let $f(x) = \frac{d}{dx}\ln(\Phi(x)) = \frac{\Phi'(x)}{\Phi(x)}$. Up to a positive constant factor, its derivative is:

$$f'(x) \propto \frac{-x\, e^{-x^2/2} \int_{-\infty}^{x} e^{-t^2/2}\, dt - e^{-x^2/2}\, e^{-x^2/2}}{\left(\int_{-\infty}^{x} e^{-t^2/2}\, dt\right)^2} \tag{69}$$

When $x \ge 0$, this derivative is negative since both terms in the numerator are negative. If $x < 0$, we have the following:

$$-x \int_{-\infty}^{x} e^{-t^2/2}\, dt < -x \int_{-\infty}^{x} \left(e^{-t^2/2} + \frac{1}{t^2}\, e^{-t^2/2}\right) dt \tag{70}$$
$$= -x \left[-\frac{1}{t}\, e^{-t^2/2}\right]_{-\infty}^{x} \tag{71}$$
$$= e^{-x^2/2} \tag{72}$$

So the numerator is strictly smaller than $e^{-x^2/2}\, e^{-x^2/2} - (e^{-x^2/2})^2 = 0$. Therefore, the derivative is everywhere negative, so $f(x)$ is strictly decreasing. Therefore, we have the following for any non-negative $k$:

$$\frac{d}{dx} \ln\left(\frac{\Phi(x-k)}{\Phi(x)}\right) = f(x-k) - f(x) \ge 0 \tag{73}$$

Since $\ln(.)$ is a monotonic increasing function, $\frac{\Phi(x-k)}{\Phi(x)}$ must also be monotonic increasing.
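A quick numeric check of Lemma 16 (standard library only; the test grid is our own choice):

```python
# Check that Phi(x - k) / Phi(x) increases in x on a test grid.
from math import erf, sqrt

def Phi(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

k = 1.5
xs = [-3.0 + 0.5 * i for i in range(13)]
ratios = [Phi(x - k) / Phi(x) for x in xs]
assert all(a <= b for a, b in zip(ratios, ratios[1:]))
print("the ratio is monotonically increasing on the tested grid")
```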
Proving Theorem 5

Similarly to the discrete case, our main result relies on an isoperimetry statement, this time on the unit hypercube [1].
Lemma 17 (Isoperimetric Theorem on the Unit Hypercube). For any $n$, let $A \subset [0, 1]^n$ be a Borel set. Let $A_\epsilon = \{x \in [0, 1]^n \mid \exists x' \in A : \|x - x'\| \le \epsilon\}$. Then we have the following:

$$\liminf_{\epsilon \to 0^+} \frac{\mu(A_\epsilon) - \mu(A)}{\epsilon} \ge \sqrt{2\pi}\, \Phi'(\Phi^{-1}(\mu(A))) \tag{74}$$
Let $C \subseteq [0, 1]^n$ be a regular set such that $0 < \mu(C) \le 1/2$. Let $C_r \subseteq C$ denote the points $p_1$ in $C$ such that for any point $p_2 \in [0, 1]^n$, $\|p_1 - p_2\|_2 \le r \implies p_2 \in C$.

Lemma 18. $\mu(C_r) \le \Phi(\Phi^{-1}(\mu(C)) - r)$
Proof. Let $z = \Phi^{-1}(\mu(C))$ and let $f(x) = \Phi(x + z)$. Let $v(.)$ be a Lebesgue integrable function such that the following holds:

$$V(r) = \int_{(-\infty, r)} v(t)\, dt = \begin{cases} \mu(C_{-r}) & \text{if } r \le 0 \\ \mu(C_0) & \text{otherwise} \end{cases} \tag{75}$$

This exists since $C$ is a regular set. Since $V(x)$ results from integration, it is also a continuous function. It then suffices to show that $V(x) \le f(x)$ for all $x$, since $V(x)$ corresponds to the left hand side of the theorem statement and $f(x)$ corresponds to the right hand side. Suppose this is not the case. We know that $V(x) \le f(x)$ for all $x \ge 0$, so if this is violated it must happen when $x < 0$. Since $V(x)$ and $f(x)$ are both continuous, by the intermediate value theorem there must exist some interval $[a, b)$ where $V(x) > f(x)$ if $x \in [a, b)$, $V(b) = f(b)$, and $a < b \le 0$. This gives us the following:
Where $Z$ is the set of values where the limit in Equation 77 is not equal to $v(t)$, which by the Lebesgue differentiation theorem is a set of measure 0. Equation 79 is an application of Lemma 17, which is applicable since $C_{-t}$ is a Borel set due to $C$ being a regular set. Equation 80 follows from the definition of $f(.)$. This contradicts the above, so it must be the case that $V(x) \le f(x)$ for all $x$, which proves the lemma.

We can now prove Theorem 5:

$$\frac{\mu(C_r)}{\mu(C)} \le \frac{\Phi(\Phi^{-1}(\mu(C)) - r)}{\Phi(\Phi^{-1}(\mu(C)))} \le \frac{\Phi(-r)}{\Phi(0)} < 2e^{-r^2/2}$$

Where the first inequality follows from Lemma 18, the second inequality follows from Lemma 16 and the fact that $\mu(C) \le 1/2$, and the third inequality follows from the Gaussian tail bound $\Phi(x) < e^{-x^2/2}$ for all $x \le 1/2$.

A.5 Average distance between images

We wish to show that for a pair of images $I, I' \in I_{n,h,b}$ that are sampled independently and uniformly, there exists a $k_{h,b,p}$ such that:

$$\mathbb{E}[\|I - I'\|_p] \ge k_{h,b,p}\, n^{2/\max(1,p)}$$

First, we note that we have:

$$\mathbb{E}[\|I - I'\|_p^{\max(1,p)}] \ge n^2 h\, \mathbb{E}[|X - Y|^{\max(1,p)}]$$

Where $X$ and $Y$ are independent random variables that are both drawn uniformly from a set of $2^b$ equally spaced values, where the largest is 1 and the smallest is 0. For simplicity, we denote $k_{b,p} = \mathbb{E}[|X - Y|^{\max(1,p)}]$. $\|I - I'\|_p^{\max(1,p)}$ is non-negative and cannot be larger than $n^2 h$. Therefore, the probability that $\|I - I'\|_p^{\max(1,p)} \ge n^2 h\, k_{b,p}/2$ is at least $\frac{k_{b,p}}{2 - k_{b,p}}$. Via a monotonicity argument we can deduce that the probability that $\|I - I'\|_p \ge (h k_{b,p}/2)^{1/\max(p,1)}\, n^{2/\max(p,1)}$ is at least $\frac{k_{b,p}}{2 - k_{b,p}}$ as well. We can then apply Markov's inequality to get the following:

$$\mathbb{E}[\|I - I'\|_p] \ge \frac{k_{b,p}}{2 - k_{b,p}}\, (h k_{b,p}/2)^{1/\max(p,1)}\, n^{2/\max(p,1)}$$

By setting $k_{h,b,p}$ to be $\frac{k_{b,p}}{2 - k_{b,p}}\, (h k_{b,p}/2)^{1/\max(p,1)}$ we attain our desired result.
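The constant $k_{b,p}$ and the claimed growth rate can be estimated by simulation (assuming NumPy; one sample pair per size, our own illustration):

```python
# Estimate k_{b,p} = E|X - Y|^{max(1,p)} and check that ||I - I'||_p grows
# like n^{2/max(1,p)} for independent uniform images.
import numpy as np

rng = np.random.default_rng(0)
b, p, h = 8, 2, 1
x = rng.integers(0, 2 ** b, size=200_000) / (2 ** b - 1)
y = rng.integers(0, 2 ** b, size=200_000) / (2 ** b - 1)
print("k_{b,p} ~", float(np.mean(np.abs(x - y) ** max(1, p))))
for n in (8, 32, 128):
    I1 = rng.integers(0, 2 ** b, size=(n, n, h)) / (2 ** b - 1)
    I2 = rng.integers(0, 2 ** b, size=(n, n, h)) / (2 ** b - 1)
    dist = float((np.abs(I1 - I2) ** p).sum() ** (1 / p))
    print(n, dist / n ** (2 / max(1, p)))      # roughly constant across n
```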
"https://github.com/fastai/imagenette/."
]
|
[
"Stochastic Variance Reduction for Variational Inequality Methods",
"Stochastic Variance Reduction for Variational Inequality Methods"
]
| [
"Ahmet Alacaoglu ",
"Yura Malitsky "
]
| []
| []
| We propose stochastic variance reduced algorithms for solving convex-concave saddle point problems, monotone variational inequalities, and monotone inclusions. Our framework applies to extragradient, forward-backward-forward, and forward-reflected-backward methods both in Euclidean and Bregman setups. All proposed methods converge in the same setting as their deterministic counterparts and they either match or improve the best-known complexities for solving structured min-max problems. Our results reinforce the correspondence between variance reduction in variational inequalities and minimization. We also illustrate the improvements of our approach with numerical evaluations on matrix games. | null | [
"https://arxiv.org/pdf/2102.08352v2.pdf"
]
| 231,933,735 | 2102.08352 | 1d027f39491042f95f4894e7006145ab57261c2b |
Stochastic Variance Reduction for Variational Inequality Methods
Ahmet Alacaoglu
Yura Malitsky
Stochastic Variance Reduction for Variational Inequality Methods
We propose stochastic variance reduced algorithms for solving convex-concave saddle point problems, monotone variational inequalities, and monotone inclusions. Our framework applies to extragradient, forward-backward-forward, and forward-reflected-backward methods both in Euclidean and Bregman setups. All proposed methods converge in the same setting as their deterministic counterparts and they either match or improve the best-known complexities for solving structured min-max problems. Our results reinforce the correspondence between variance reduction in variational inequalities and minimization. We also illustrate the improvements of our approach with numerical evaluations on matrix games.
Introduction
In this paper, we focus on solving variational inequalities (VI):
$$\text{find } z^* \in \mathcal{Z} \text{ such that } \langle F(z^*), z - z^* \rangle + g(z) - g(z^*) \ge 0, \quad \forall z \in \mathcal{Z}, \tag{1}$$
where $F$ is a monotone operator and $g$ is a proper convex lower semicontinuous function. This formulation captures optimality conditions for minimization/saddle point problems, see [FP07, Section 1.4.1]. In the last decade there have been at least two surges of interest in VIs. Both were motivated by the need to solve min-max problems. The first surge came from the realization that many nonsmooth problems can be solved more efficiently if they are formulated as saddle point problems [Nes05; Nem04; CP11; EZC10]. The second has been started by the machine learning community, where solving nonconvex-nonconcave saddle point problems became of paramount importance [Gid+19; GM18; Mer+19]. Additionally, VIs have applications in game theory, control theory, and differential equations, see [FP07].
A common structure encountered in min-max problems is that the operator $F$ can be written as a finite sum: $F = F_1 + \cdots + F_N$, see Section 5 for concrete examples. Variance reduction techniques use this specific form to improve the complexity of deterministic methods in minimization. Existing results on variance reduction for saddle point problems show that these techniques improve the complexity for bilinear problems compared to deterministic methods. However, in general these methods require stronger assumptions to converge than the latter do (see Table 1). At the same time, stochastic methods that have been shown to converge under only monotonicity do not have complexity advantages over the deterministic methods.
Such a dichotomy does not exist in minimization: variance reduction comes with no extra assumptions. This points out to a fundamental lack of understanding for its use in saddle point problems. Our work shows that there is indeed a natural correspondence between variance reduction in variational inequalities and minimization. In particular, we propose stochastic variants of extragradient (EG), forward-backward-forward (FBF), and forward-reflected-backward (FoRB) methods which converge under mere monotonicity. For the bilinear case our results match the best-known complexities, while for the nonbilinear, we do not require bounded domains as in the previous work and we improve the best-known complexity by a logarithmic factor, using simpler algorithms. Recently, [HXZ21] established the optimality of our algorithms with matching lower bounds, for solving (potentially nonbilinear) convex-concave min-max problems with finite sum form.
We also show application of our techniques for solving monotone inclusions and strongly monotone problems. Our results for monotone inclusions potentially improve the rate of deterministic methods (depending on the Lipschitz constants) and they seem to be the first such result in the literature. We illustrate practical benefits of our new algorithms by comparing with deterministic methods and an existing variance reduction scheme in Section 6.
Table 1: Comparison of assumptions and complexities of related methods.

| Method | Assumptions | Complexity |
| --- | --- | --- |
| EG/MP, FBF, FoRB (deterministic)† | $F$ is monotone | $O\left(\frac{N L_F}{\varepsilon}\right)$ |
| EG/MP‡ | $F$ is monotone & $z \mapsto \langle F(z) + \nabla g(z), z - u \rangle$ is convex for any $u$ | $O\left(N + \frac{\sqrt{N} L}{\varepsilon}\right)$ |
| EG/MP‡ | $F$ is monotone & bounded domains | $\tilde{O}\left(N + \frac{\sqrt{N} L}{\varepsilon}\right)$ |
| FoRB* | $F$ is monotone | $O\left(N + \frac{N L}{\varepsilon}\right)$ |
| This paper (EG/MP, FBF, FoRB) | $F$ is monotone | $O\left(N + \frac{\sqrt{N} L}{\varepsilon}\right)$ |
Related works
Variational inequalities. The standard choices for solving VIs have been methods such as extragradient (EG)/Mirror-Prox (MP) [Kor76;Nem04], forward-backward-forward (FBF) [Tse00], dual extrapolation [Nes07] or reflected gradient/forward-reflected-backward (FoRB) [Mal15; MT20] 1 . These methods differ in the number of operator calls and projections (or proximal operators) used each iteration, and consequently, can be preferable to one another in different settings. The standard convergence results for these algorithms include global iterates' convergence, complexity O(ε −1 ) for monotone problems and linear rate of convergence for strongly monotone problems.
Variance reduction. Variance reduction has revolutionized stochastic methods in optimization. This technique applies to the finite sum minimization problem of the form $\min_x \frac{1}{N} \sum_{i=1}^{N} f_i(x)$. Instead of using a random sample $g^k = \nabla f_i(x^k)$ as SGD does, variance reduction methods use
$$g^k = \nabla f(w^k) + \nabla f_i(x^k) - \nabla f_i(w^k). \tag{2}$$
A good choice of $w^k$ decreases the "variance" $\mathbb{E}\|g^k - \nabla f(x^k)\|^2$ compared to $\mathbb{E}\|\nabla f_i(x^k) - \nabla f(x^k)\|^2$ that SGD has. A simple idea that is easy to explain to undergraduates, easy to implement, and most importantly that provably brings us a better convergence rate than pure SGD and GD in a wide range of scenarios. Classical works include [JZ13; DBL14]. For a more thorough list of references, see the recent review [Gow+20].
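A minimal sketch (assuming NumPy; the quadratic components and all names are our own toy example) of the estimator in (2):

```python
# The variance reduced gradient estimator g_k of Equation (2).
import numpy as np

def vr_gradient(x_k, w_k, full_grad_w, grad_i, i):
    # g_k = grad f(w_k) + grad f_i(x_k) - grad f_i(w_k); unbiased since the
    # last two terms cancel in expectation over a uniformly random index i.
    return full_grad_w + grad_i(x_k, i) - grad_i(w_k, i)

a = np.array([1.0, 2.0, 3.0])                  # toy components f_i(x) = 0.5 * a_i * x^2
grad_i = lambda x, i: a[i] * x
full_grad = lambda x: float(np.mean(a)) * x
x_k, w_k = 0.7, 0.5
print(vr_gradient(x_k, w_k, full_grad(w_k), grad_i, i=1),
      "vs true gradient", full_grad(x_k))
```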
Variance reduction and VIs. One does not need to be meticulous to quickly find finite sum problems where existing variance reduction methods do not work. In the convex world, the first issue that comes to mind is non-smoothness. As already mentioned, saddle point reformulations often come to the rescue. The work [BB16] was seminal in using variance reduction for saddle point problems and monotone inclusions in general. In particular, the authors studied stochastic variance reduced variants of the forward-backward algorithm and proved linear convergence under strong monotonicity. For bilinearly coupled problems, the complexity in [BB16] improves on the deterministic method in the strongly monotone setting. [Cha+19] developed an extragradient method with variance reduction and analyzed its convergence under a strong monotonicity assumption; unfortunately, the worst-case complexity in that work was less favorable than [BB16].
Strong monotonicity may seem like a benign assumption, analogous to strong convexity in minimization. While the analogy holds algorithmically, strong monotonicity is far less frequent in min-max applications. For instance, the operator F associated with a convex-concave saddle point problem is monotone, but not strongly monotone without further assumptions. Thus, it is crucial to remove this assumption.
An influential work in this direction is by [Car+19], where the authors proposed a randomized variant of Mirror-Prox. The authors focused primarily on matrix games, and for this important case they improved the complexity over deterministic methods. However, because of this specialization, more general cases required additional assumptions. In particular, for problems beyond matrix games, the authors assumed either that z ↦ ⟨F(z) + ∇g(z), z − u⟩ is convex for all u [Car+19, Corollary 1] or that the domain is bounded [Car+19, Algorithm 5, Corollary 2]; in the latter case, the domain diameter is used as a parameter of the algorithm. As one can check, the former might not hold even for convex minimization problems with F = ∇f. The latter, on the other hand, while already restrictive, requires a more complicated three-loop algorithm, which incurs an additional logarithmic factor in the total complexity.
There are other works that did not improve complexity but introduced new ideas. An algorithm similar in spirit to ours is due to [AMC21], where variance reduction is applied to FoRB. This algorithm was the first to converge under mere monotonicity, but it did not improve on the complexity of deterministic methods. Several works studied VI methods in the stochastic setting and showed slower rates with decreasing step sizes [Mis+20; Böh+20], increasing mini-batch sizes [Ius+17; Boţ+21; CS21], or extra assumptions [Gor+22].
Outline of results and comparisons
Throughout the paper, we assume access to a stochastic oracle F ξ such that E[F ξ (z)] = F (z).
Complexity and ε-accurate solution. A point z̄ is an ε-accurate solution if E[Gap(z̄)] ≤ ε, where Gap is defined in Section 2.3.1. The complexity of an algorithm is defined as the number of calls to F_ξ needed to reach an ε-accurate solution. In general, we suppose that an evaluation of F is N times more expensive than one of F_ξ. For specific problems with bilinear coupling, we measure the complexity in terms of arithmetic operations.
Nonbilinear finite-sum problems. We consider the problem (1) with F = Σ_{i=1}^N F_i, where F is monotone and L_F-Lipschitz and the stochastic oracle is L-Lipschitz in mean; our methods reach an ε-accurate solution with complexity O(N + √N L ε⁻¹) (see Table 1).
Bilinear problems. When we focus on bilinear problems (App. 5.1), the complexity of our methods is O(nnz(A) + √(nnz(A)(m + n)) L ε⁻¹), where L = ‖A‖_Frob in the Euclidean setup and L = ‖A‖_max with simplex constraints and the entropic setup. In contrast, the complexity of deterministic methods is Õ(nnz(A) L_F ε⁻¹), where L_F = ‖A‖ in the Euclidean setup and L_F = ‖A‖_max in the entropic setup. Our complexity shows strict improvements over deterministic methods when A is dense. Our variance reduced variants of FBF and FoRB enjoy similar guarantees and obtain the same complexities (Corollary 4.3, Corollary 4.8).
In both settings this complexity was first obtained by [Car+19]. Our results generalize the set of problems where this complexity applies due to less assumptions (for example, linearly constrained convex optimization) and also use more practical/simpler algorithms (see Section 6 for an empirical comparison). Note that our variance reduced Mirror-Prox in Alg. 2 is different from the Mirror-Prox variant in [Car+19, Alg. 1, Alg. 2].
How to read the paper? We summarize the main results in Table 2. We recommend that a reader who wants a quick grasp of the idea refer to Section 2; this should be sufficient for understanding our main technique. The extension to the Bregman case is technical in nature, and seeing the reason for using a double loop algorithm in that case requires a good understanding of the proposed analysis.
For the most general case with Bregman distances, a reader can skip Section 2 without losing much and go directly to Section 3. We kept Section 2 for a clearer exposition of the main ideas via a simpler algorithm and analysis. We tried to make the sections self-contained and the proofs isolated: convergence rate and convergence of iterates are separated.
Finally, one can read Section 4 right after Section 2 to see how the same ideas give rise to variance reduced FBF and FoRB algorithms with similar guarantees and ability to solve monotone inclusions. In this section, we also illustrate how to obtain linear rate of convergence with strong convexity. Section 5 clarifies how to apply our developments to specific problems such as matrix games and linearly constrained optimization.
(Table 2 collects, for each of our algorithms, pointers to its rate & complexity result and its convergence-of-iterates result.)
Most of the proofs are given with the corresponding results; remaining proofs are deferred to Section 8.
Practical guide.
We give the parameters recommended in practice in Remark 2.1, Remark 3.1, and Remark 4.4 for Algorithm 1, Algorithm 2, and Algorithm 4, respectively. These parameters are optimized to obtain the best complexity in terms of the dependence on problem dimensions (and not on constants), and we use them in our numerical experiments in Section 6. For convenience, we also spell out the updates in the important case of matrix games with the entropic setup in Section 5.1.2.
Euclidean setup
To illustrate our technique, we pick the extragradient method due to the simplicity of its analysis, its extension to Bregman distances, and its wide use in the literature.
Preliminaries
Let Z be a finite dimensional vector space with Euclidean inner product ⟨·, ·⟩ and norm ‖·‖. The notation [N] represents the set {1, . . . , N}. We say F : dom g → Z is monotone if ⟨F(x) − F(y), x − y⟩ ≥ 0 for all x, y. The proximal operator is defined as prox_g(x) = argmin_y {g(y) + ½‖y − x‖²}. For a proper convex lower semicontinuous (lsc) g, the domain is defined as dom g = {z : g(z) < +∞}, and the following prox-inequality is standard:
z̄ = prox_g(z) ⟺ ⟨z − z̄, u − z̄⟩ ≥ g(z̄) − g(u), ∀u ∈ Z.  (3)
We continue with our assumptions.
Assumption 1.
(i) The solution set Sol of (1) is nonempty.
(ii) The function g : Z → R ∪ {+∞} is proper convex lower semicontinuous.
(iii) The operator F is monotone.
(iv) The operator F has a stochastic oracle F_ξ that is unbiased, F(z) = E[F_ξ(z)], and L-Lipschitz in mean:
E‖F_ξ(u) − F_ξ(v)‖² ≤ L²‖u − v‖², ∀u, v ∈ Z.
Finite sum. Suppose F has a finite sum representation F = Σ_{i=1}^N F_i, where each F_i is L_i-Lipschitz and the full operator F is L_F-Lipschitz. By the triangle inequality it follows, of course, that L_F ≤ Σ_{i=1}^N L_i. On one hand, Σ_{i=1}^N L_i can be much larger than L_F. On the other, it might be the case that the L_i are easy to compute, but a true L_F is not. Then the latter inequality gives the most natural upper bound on L_F. The two simplest stochastic oracles can be defined as follows.
Algorithm 1 Extragradient with variance reduction
Input: Set p ∈ (0, 1], probability distribution Q, step size τ, α ∈ (0, 1), z⁰ = w⁰.
for k = 0, 1, . . . do
  z̄^k = αz^k + (1 − α)w^k
  z^{k+1/2} = prox_{τg}(z̄^k − τF(w^k))
  Draw an index ξ_k according to Q
  z^{k+1} = prox_{τg}(z̄^k − τ[F(w^k) + F_{ξ_k}(z^{k+1/2}) − F_{ξ_k}(w^k)])
  w^{k+1} = z^{k+1} with probability p; w^{k+1} = w^k with probability 1 − p
end for

1. Uniform sampling: F_ξ(z) = N F_i(z), q_i = Pr{ξ = i} = 1/N. In this case, L = √(N Σ_{i∈[N]} L_i²).
2. Importance sampling: F_ξ(z) = (1/q_i) F_i(z), q_i = Pr{ξ = i} = L_i / Σ_{j∈[N]} L_j. In this case, L = Σ_{i∈[N]} L_i.
This example is useful in several regards. First, it is one of the most general problems that the proposed algorithms can tackle, and for concreteness it is useful to keep it as a reference point. Second, even in this generality the problem already indicates possible pitfalls caused by non-optimal stochastic oracles: if the constant L of our stochastic oracle is much worse (that is, larger) than L_F, it may eliminate all the advantages of cheap stochastic oracles. In the sequel, for finite-sum problems, we assume that ξ ∈ [N], as in the two oracles described above.
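As a concrete sketch of these two oracles (our own illustrative code, not the paper's implementation), one can wrap a finite sum F = Σ_i F_i as follows:

```python
import numpy as np

def make_finite_sum_oracle(F_list, L_list, importance=True, seed=0):
    """Unbiased oracle for F = sum_i F_i.
    Uniform:    q_i = 1/N,            F_xi = F_i/q_i = N*F_i,  L = sqrt(N*sum_i L_i^2).
    Importance: q_i = L_i/sum_j L_j,  F_xi = F_i/q_i,          L = sum_i L_i."""
    rng = np.random.default_rng(seed)
    N = len(F_list)
    L = np.asarray(L_list, dtype=float)
    q = L / L.sum() if importance else np.full(N, 1.0 / N)

    def sample_xi():
        return rng.choice(N, p=q)

    def F_xi(i, z):
        return F_list[i](z) / q[i]   # the 1/q_i scaling keeps E[F_xi(z)] = F(z)

    return sample_xi, F_xi
```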
Extragradient with variance reduction
The classical stochastic variance reduced gradient (SVRG) [JZ13] uses a double loop structure ("looped"): the full gradients are computed in the outer loop, and the cheap variance reduced gradients (2) are used in the inner loop. The works [KHR20; Hof+15] proposed a loopless variant of SVRG, where the outer loop is eliminated and full gradients are instead computed once in a while according to a randomized rule. Both methods share similar guarantees, but the latter variant is slightly simpler to analyze and implement.
We present the loopless version of extragradient with variance reduction in Alg. 1. Every iteration requires two stochastic oracle calls F_ξ and one call to F with probability p. The parameter α is the key to establishing a favorable complexity. While convergence of (z^k) to a solution will be proven for any α ∈ [0, 1), a good total complexity requires a specific choice of α; therefore, the specific form of z̄^k is important. Later, we see that with α = 1 − p, Alg. 1 has the complexity claimed in Table 1. It is interesting to note that by eliminating all randomness, Alg. 1 reduces to extragradient.

Remark 2.1. For running Alg. 1 in practice, we suggest p = 2/N, α = 1 − p, and τ = 0.99√p/L. Specific problems may require a more careful examination of "optimal" parameters (see App. 5.1).
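Under the parameters of Remark 2.1, a compact sketch of Alg. 1 reads as follows (illustrative Python under our interface assumptions: F_full, the oracle pair from above, and a prox_g(v, tau) callable):

```python
import numpy as np

def eg_vr(F_full, sample_xi, F_xi, prox_g, z0, L, N, iters=10_000, seed=0):
    """Loopless extragradient with variance reduction (Alg. 1),
    with p = 2/N, alpha = 1 - p, tau = 0.99*sqrt(p)/L as in Remark 2.1."""
    rng = np.random.default_rng(seed)
    p = 2.0 / N
    alpha = 1.0 - p
    tau = 0.99 * np.sqrt(p) / L
    z = z0.copy(); w = z0.copy()
    Fw = F_full(w)                                  # full operator at the snapshot
    for _ in range(iters):
        z_bar = alpha * z + (1.0 - alpha) * w
        z_half = prox_g(z_bar - tau * Fw, tau)
        i = sample_xi()
        g = Fw + F_xi(i, z_half) - F_xi(i, w)       # variance-reduced estimate
        z = prox_g(z_bar - tau * g, tau)
        if rng.random() < p:                        # refresh snapshot w.p. p
            w = z.copy()
            Fw = F_full(w)
    return z
```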
Analysis
In Alg. 1, we have two sources of randomness at each iteration: the index ξ k which is used for computing z k+1 and the choice of w k (the snapshot point). We use the following notation for the conditional expectations:
E[·|σ(ξ 0 , . . . , ξ k−1 , w k )] = E k [·] and E[·|σ(ξ 0 , . . . , ξ k , w k )] = E k+1/2 [·].
For the iterates (z k ), (w k ) of Alg. 1 and any z ∈ dom g, we define
Φ_k(z) := α‖z^k − z‖² + ((1 − α)/p)‖w^k − z‖².
We see in the following lemma how Φ k naturally arises in our analysis as the Lyapunov function.
Lemma 2.2. Let Assumption 1 hold, α ∈ [0, 1), p ∈ (0, 1], and τ = √(1 − α)γ/L for γ ∈ (0, 1). Then for (z^k) generated by Alg. 1 and any z* ∈ Sol, it holds that
E_k[Φ_{k+1}(z*)] ≤ Φ_k(z*) − (1 − γ)[(1 − α)‖z^{k+1/2} − w^k‖² + E_k‖z^{k+1} − z^{k+1/2}‖²].
Moreover, it holds that
Σ_{k=0}^∞ [(1 − α)E‖z^{k+1/2} − w^k‖² + E‖z^{k+1} − z^{k+1/2}‖²] ≤ (1/(1 − γ)) Φ_0(z*).
Proof. A reader may find it simpler to follow the analysis by assuming that g is the indicator function of some convex set. Then since all iterates are feasible, we would have g(z k ) = 0.
Let us denote F̃(z^{k+1/2}) = F(w^k) + F_{ξ_k}(z^{k+1/2}) − F_{ξ_k}(w^k). By the prox-inequality (3) applied to the definitions of z^{k+1} and z^{k+1/2}, we obtain, for all z, the inequalities (4)–(10). By the definition of w^{k+1} and E_{k+1/2}, it follows that
((1 − α)/p) E_{k+1/2}‖w^{k+1} − z*‖² = (1 − α)‖z^{k+1} − z*‖² + (1 − α)(1/p − 1)‖w^k − z*‖².  (11)
We add (11) to (10) and apply the tower property E_k E_{k+1/2}[·] = E_k[·] to deduce
αE_k‖z^{k+1} − z*‖² + ((1 − α)/p) E_k‖w^{k+1} − z*‖² ≤ α‖z^k − z*‖² + ((1 − α)/p)‖w^k − z*‖² − (1 − γ)[(1 − α)‖z^{k+1/2} − w^k‖² + E_k‖z^{k+1} − z^{k+1/2}‖²].
Using the definition of Φ k (z), we obtain the first result. Applying total expectation and summing the inequality yields the second result.
To show the almost sure convergence of the sequence (z k ), we need F ξ to be continuous for all ξ. For a finite sum example it follows automatically from Assumption 1. The proof is given in Section 8.
Theorem 2.3. Let Assumption 1 hold, F_ξ be continuous for all ξ, α ∈ [0, 1), p ∈ (0, 1], and τ = √(1 − α)γ/L for γ ∈ (0, 1). Then, almost surely, there exists z* ∈ Sol such that (z^k) generated by Alg. 1 converges to z*.
Convergence rate and complexity for monotone case
In the general monotone case, the convergence measure is the gap function given by
Gap(w) = max_{z∈C} {⟨F(z), w − z⟩ + g(w) − g(z)},
where C is a compact subset of Z that we use to handle the possible unboundedness of dom g (see [Nes07, Lemma 1]). Since we work in a probabilistic setting, our convergence measure is naturally based on E[Gap(w)]. We start with a simple lemma for "switching" the order of maximum and expectation, which is required for showing convergence of the expected gap. This technique is standard for this purpose [Nem+09], and the proof is given in Section 8.
Lemma 2.4. Let F = (F_k)_{k≥0} be a filtration and (u_k) a stochastic process adapted to F with E[u_{k+1}|F_k] = 0. Then for any K ∈ N, x⁰ ∈ Z, and any compact set C ⊂ Z,
E[max_{x∈C} Σ_{k=0}^{K−1} ⟨u_{k+1}, x⟩] ≤ max_{x∈C} ½‖x⁰ − x‖² + ½ Σ_{k=0}^{K−1} E‖u_{k+1}‖².
We now continue with the main result of this section.
Theorem 2.5. Let Assumption 1 hold, p ∈ (0, 1], α = 1 − p, and τ = √(1 − α)γ/L for γ ∈ (0, 1). Then, for z̄^K = (1/K) Σ_{k=0}^{K−1} z^{k+1/2}, it follows that E[Gap(z̄^K)] = O(L/(√p K)). In particular, for τ = √p/(2L), the rate is E[Gap(z̄^K)] ≤ (17.5L/(√p K)) max_{z∈C} ‖z⁰ − z‖².
Recall that we measure complexity in terms of calls to the stochastic oracle F ξ (·) and we assumed that the cost of computing F (·) is N times that of F ξ (·). For a finite sum example, this is a natural assumption.
Remark 2.6. For Alg. 1, since per iteration cost is pN + 2 calls to F ξ in expectation, the result is "average" total complexity: expected number of calls to get a small expected gap.
Corollary 2.7. In the setting of Theorem 2.5, the average total complexity of Alg. 1 to reach ε-accuracy is O(N + (pN + 2)(1 + L/(√p ε))). In particular, for p = 2/N it is O(N + √N L/ε).
Proof of Theorem 2.5. As we have already mentioned, when all randomness is eliminated, that is, F_ξ = F and p = 1, Algorithm 1 reduces to the extragradient method. In that case, the convergence rate O(1/K) would follow almost immediately from the proof of Lemma 2.2. In the stochastic setting the proof is more subtle, and we have to rely on Lemma 2.4 to deal with the error terms caused by randomness. Let
Θ_{k+1/2}(z) = ⟨F(z^{k+1/2}), z^{k+1/2} − z⟩ + g(z^{k+1/2}) − g(z).
We proceed as in Lemma 2.2 before getting (10). In particular, using (6) and (7) in (5) gives
2τΘ_{k+1/2}(z) + ‖z^{k+1} − z‖² ≤ α‖z^k − z‖² + (1 − α)‖w^k − z‖² + 2τ⟨F_{ξ_k}(w^k) − F_{ξ_k}(z^{k+1/2}), z^{k+1} − z^{k+1/2}⟩ − (1 − α)‖z^{k+1/2} − w^k‖² − ‖z^{k+1} − z^{k+1/2}‖² + 2τ⟨F(z^{k+1/2}) − F_{ξ_k}(z^{k+1/2}) − F(w^k) + F_{ξ_k}(w^k), z^{k+1/2} − z⟩,  (12)
where we denote the last term by e₁(z, k).
Now, we set α = 1 − p. We want to rewrite (12) using Φ_k(z) = (1 − p)‖z^k − z‖² + ‖w^k − z‖². For this, we need to add ‖w^{k+1} − z‖² − ‖w^k − z‖² to both sides. Then, we define the error
e₂(z, k) = p‖w^k − z‖² + ‖w^{k+1} − z‖² − ‖w^k − z‖² − p‖z^{k+1} − z‖²
         = 2⟨pz^{k+1} + (1 − p)w^k − w^{k+1}, z⟩ − p‖z^{k+1}‖² − (1 − p)‖w^k‖² + ‖w^{k+1}‖².  (13)
With this at hand, we can cast (12) as
2τΘ_{k+1/2}(z) + Φ_{k+1}(z) ≤ Φ_k(z) + e₁(z, k) + e₂(z, k) + 2τ⟨F_{ξ_k}(w^k) − F_{ξ_k}(z^{k+1/2}), z^{k+1} − z^{k+1/2}⟩ − p‖z^{k+1/2} − w^k‖² − ‖z^{k+1} − z^{k+1/2}‖².
We sum this inequality over k = 0, . . . , K − 1, take the maximum of both sides over z ∈ C, and then take total expectation to obtain
2τK E[Gap(z̄^K)] ≤ max_{z∈C} Φ_0(z) + E[max_{z∈C} Σ_{k=0}^{K−1} (e₁(z, k) + e₂(z, k))] − E Σ_{k=0}^{K−1} [‖z^{k+1} − z^{k+1/2}‖² + p‖z^{k+1/2} − w^k‖²] + 2τ E Σ_{k=0}^{K−1} ⟨F_{ξ_k}(w^k) − F_{ξ_k}(z^{k+1/2}), z^{k+1} − z^{k+1/2}⟩,  (14)
where we used E[max_{z∈C} Σ_{k=0}^{K−1} Θ_{k+1/2}(z)] ≥ K E[Gap(z̄^K)], which follows from the monotonicity of F, the linearity of z^{k+1/2} ↦ ⟨F(z), z^{k+1/2} − z⟩, and the convexity of g.
The tower property, the estimate from (9), and 1 − α = p applied to (14) imply
2τK E[Gap(z̄^K)] ≤ max_{z∈C} Φ_0(z) + E[max_{z∈C} Σ_{k=0}^{K−1} (e₁(z, k) + e₂(z, k))].  (15)
Therefore, the proof will be complete upon deriving an upper bound for the second term on the right-hand side. We instantiate Lemma 2.4 twice for bounding this term. First, for e₁(z, k) we set in Lemma 2.4
F_k = σ(ξ_0, . . . , ξ_{k−1}, w^k), x⁰ = z⁰, u_{k+1} = 2τ([F_{ξ_k}(z^{k+1/2}) − F_{ξ_k}(w^k)] − [F(z^{k+1/2}) − F(w^k)]),
where by definition we set F_0 = σ(ξ_0, ξ_{−1}, w⁰) = σ(ξ_0). With this, we obtain the bound
E[max_{z∈C} Σ_{k=0}^{K−1} e₁(z, k)] = E[max_{z∈C} Σ_{k=0}^{K−1} ⟨u_{k+1}, z⟩] − E[Σ_{k=0}^{K−1} ⟨u_{k+1}, z^{k+1/2}⟩] = E[max_{z∈C} Σ_{k=0}^{K−1} ⟨u_{k+1}, z⟩]
  ≤ max_{z∈C} ½‖z⁰ − z‖² + ½ Σ_{k=0}^{K−1} E‖u_{k+1}‖² ≤ max_{z∈C} ½‖z⁰ − z‖² + 2τ²L² Σ_{k=0}^{K−1} E‖z^{k+1/2} − w^k‖²,  (16)
where the second equality follows from the tower property, E_k[u_{k+1}] = 0, and the F_k-measurability of z^{k+1/2}. The last inequality is due to
E‖u_{k+1}‖² = E[E_k‖u_{k+1}‖²] ≤ 4τ² E[E_k‖F_{ξ_k}(z^{k+1/2}) − F_{ξ_k}(w^k)‖²] ≤ 4τ²L² E‖z^{k+1/2} − w^k‖²,
where we use the tower property, E‖X − EX‖² ≤ E‖X‖², and Assumption 1(iv). Secondly, we set in Lemma 2.4
F_k = σ(ξ_0, . . . , ξ_k, w^k), x⁰ = z⁰, u_{k+1} = pz^{k+1} + (1 − p)w^k − w^{k+1},
and use E[E_{k+1/2}[‖w^{k+1}‖² − p‖z^{k+1}‖² − (1 − p)‖w^k‖²]] = 0 to obtain the bound
E[max_{z∈C} Σ_{k=0}^{K−1} e₂(z, k)] = 2E[max_{z∈C} Σ_{k=0}^{K−1} ⟨u_{k+1}, z⟩] ≤ max_{z∈C} ‖z⁰ − z‖² + Σ_{k=0}^{K−1} E‖u_{k+1}‖² = max_{z∈C} ‖z⁰ − z‖² + p(1 − p) Σ_{k=0}^{K−1} E‖z^{k+1} − w^k‖²,  (17)
where the inequality follows from Lemma 2.4 and the second equality from the derivation
E‖u_{k+1}‖² = E[E_{k+1/2}‖u_{k+1}‖²] = E[E_{k+1/2}‖E_{k+1/2}[w^{k+1}] − w^{k+1}‖²] = E[E_{k+1/2}‖w^{k+1}‖² − ‖E_{k+1/2}[w^{k+1}]‖²]
           = E[p‖z^{k+1}‖² + (1 − p)‖w^k‖² − ‖pz^{k+1} + (1 − p)w^k‖²] = p(1 − p) E‖z^{k+1} − w^k‖²,
which uses E‖X − EX‖² = E‖X‖² − ‖EX‖².
Combining (16), (17), and (15), we finally arrive at
2τK E[Gap(z̄^K)] ≤ max_{z∈C} Φ_0(z) + max_{z∈C} ½‖z⁰ − z‖² + 2τ²L² Σ_{k=0}^{K−1} E‖z^{k+1/2} − w^k‖² + max_{z∈C} ‖z⁰ − z‖² + p(1 − p) Σ_{k=0}^{K−1} E‖z^{k+1} − w^k‖².  (18)
We have to estimate the terms under the sums:
E Σ_{k=0}^{K−1} [2τ²L²‖z^{k+1/2} − w^k‖² + p(1 − p)‖z^{k+1} − w^k‖²] ≤ pE Σ_{k=0}^{K−1} [2‖z^{k+1/2} − w^k‖² + ‖z^{k+1} − w^k‖²]
  ≤ pE Σ_{k=0}^{K−1} [(2 + √2)‖z^{k+1/2} − w^k‖² + (2 + √2)‖z^{k+1} − z^{k+1/2}‖²] ≤ ((2 + √2)/(1 − γ)) Φ_0(z*) ≤ (3.5/(1 − γ)) max_{z∈C} Φ_0(z),  (19)
where the first inequality in (19) uses Lemma 2.2 and 1 − α = p. Now we use that w⁰ = z⁰ and, hence, Φ_0(z) = (2 − p)‖z⁰ − z‖² ≤ 2‖z⁰ − z‖² in (18). This yields
2τK E[Gap(z̄^K)] ≤ (2 + 3/2 + 7/(1 − γ)) max_{z∈C} ‖z⁰ − z‖² = 7(½ + 1/(1 − γ)) max_{z∈C} ‖z⁰ − z‖².
Finally, using τ = √p γ/L, we obtain
E[Gap(z̄^K)] ≤ (7L/(2√p γK))(½ + 1/(1 − γ)) max_{z∈C} ‖z⁰ − z‖² = O(L/(√p K)).
In particular, with the stepsize τ = √p/(2L), the right-hand side reduces to (17.5L/(√p K)) max_{z∈C} ‖z⁰ − z‖².
Proof of Corollary 2.7. On average, each iteration costs pN + 2 calls to F_ξ. To reach ε-accuracy we need O(L/(√p ε)) iterations. Hence, the total average complexity is O((pN + 2)L/(√p ε)). Finally, the optimal choice p = 2/N gives the complexity O(√N L/ε).
To see the justification for the choice α = 1 − p, consider the proof with an arbitrary α. The resulting bound is O(1/√(1 − α) + √(1 − α)/p), and α = 1 − p optimizes it in terms of the dependence on p. Our rate guarantee in Theorem 2.5 is for the averaged iterate z̄^K, which is shown to be necessary to get the O(1/K) rate even for deterministic extragradient in [Gol+20].
Bregman setup

Preliminaries

In this section, we assume that Z is a normed vector space with dual space Z* and primal-dual norm pair ‖·‖ and ‖·‖_*. Let h : Z → R ∪ {+∞} be a proper convex lsc function that satisfies (i) dom g ⊆ dom h, (ii) h is differentiable over dom ∂h, (iii) h is 1-strongly convex on dom g. Then we can define the Bregman distance D : dom g × dom ∂h → R₊ associated with h by
D(u, v) := h(u) − h(v) − ⟨∇h(v), u − v⟩.
Note that since h is 1-strongly convex with respect to the norm ‖·‖, we have D(u, v) ≥ ½‖u − v‖². Naturally, we say that F : dom g → Z* is L_F-Lipschitz if ‖F(u) − F(v)‖_* ≤ L_F‖u − v‖ for all u, v.
However, Lipschitzness for a stochastic oracle this time will be more involved. Evidently, we prefer stochastic oracles F ξ of F with as small L as possible. Moreover, the proof of Lemma 2.2 indicates that in k-th iteration we need Lipschitzness only for already known two iterates. Hence, following [GK95;Car+19], in contrast to Alg. 1, we will not fix distribution Q in the beginning, but allow it to vary from iteration to iteration. Formally, this amounts to the following definition.
Algorithm 2 Mirror-Prox with variance reduction
1: Input: Step size τ, α ∈ (0, 1), K > 0. Let z^{−1}_j = z^0_0 = w⁰ = z⁰, ∀j ∈ [K]
2: for s = 0, 1, . . . do
3:   for k = 0, 1, . . . , K − 1 do
4:     z^s_{k+1/2} = argmin_z {g(z) + ⟨F(w^s), z⟩ + (α/τ)D(z, z^s_k) + ((1 − α)/τ)D(z, w̄^s)}
5:     Fix distribution Q_{z^s_{k+1/2}, w^s} and sample ξ^s_k according to it
6:     F̃(z^s_{k+1/2}) = F(w^s) + F_{ξ^s_k}(z^s_{k+1/2}) − F_{ξ^s_k}(w^s)
7:     z^s_{k+1} = argmin_z {g(z) + ⟨F̃(z^s_{k+1/2}), z⟩ + (α/τ)D(z, z^s_k) + ((1 − α)/τ)D(z, w̄^s)}
8:   end for
9:   w^{s+1} = (1/K) Σ_{k=1}^K z^s_k
10:  ∇h(w̄^{s+1}) = (1/K) Σ_{k=1}^K ∇h(z^s_k)
11:  z^{s+1}_0 = z^s_K
12: end for

Definition 1. We say that F has a stochastic oracle F_ξ that is variable L-Lipschitz in mean if for any u, v ∈ dom g there exists a distribution Q_{u,v} such that (i) F is unbiased: F(z) = E_{ξ∼Q_{u,v}}[F_ξ(z)] for all z ∈ dom g; (ii) E_{ξ∼Q_{u,v}}‖F_ξ(u) − F_ξ(v)‖²_* ≤ L²‖u − v‖².
Note that the second condition holds only for the given u, v, but the constant L is universal for all u, v. Changing u, v also changes the distribution, hence the name "variable". Without loss of generality, we denote any distribution that realizes the above Lipschitz bound for given u, v by Q_{u,v}. This definition resembles [Car+19, Definition 2]. It is easy to see that when Q_{u,v} = Q for all u, v, we recover the earlier definition in Assumption 1.
We now introduce Assumption 2 which will replace and generalize Assumption 1(iv).
Assumption 2. The operator F : dom g → Z * has a stochastic oracle F ξ that is variable L-Lipschitz in mean (see Definition 1).
Mirror-Prox with variance reduction
In this setting, we could simply adjust the steps of Alg. 1 and, correspondingly, the analysis of Lemma 2.2. However, to show a convergence rate, the double randomization in Alg. 1 causes technical complications. For this reason, in the Bregman setup we propose a double loop variant of Alg. 1, similar to the classical SVRG [JZ13]. Our algorithm can be seen as a variant of Mirror-Prox [Nem04] with variance reduction. It should now be clear that Alg. 1 is a randomized version of Alg. 2 with p = 1/K and the particular choice D(z, z′) = ½‖z − z′‖²₂. The technical reason for this change is the calculation given in (13). In fact, all the other steps in the previous proofs would go through by using the three point identity, except this step, which inherently uses the properties of the ℓ₂-norm. By removing the double randomization and introducing a double loop instead, step (13) is not needed in the analysis of the Bregman case.
Compared to Alg. 1, w^s serves the same purpose as w^k: it is the snapshot point in the language of SVRG [JZ13]. Since we have two loops in this case, we obtain w^s by averaging, again similar to SVRG for non-strongly convex optimization [Red+16; AY16]. The difference due to the Bregman setup is the additional point w̄^s, which averages in the dual space. This operation does not incur additional cost.

Remark 3.1. For running the algorithm in practice, we suggest K = N/2, α = 1 − 1/K, and τ = 0.99√p/L (with p = 1/K).
Analysis
Similar to the Euclidean case, we define, for the iterates (z^s_k) of Alg. 2 and any z ∈ dom g,
Φ_s(z) := αD(z, z^s_0) + (1 − α) Σ_{j=1}^K D(z, z^{s−1}_j),
where Φ_0(z) = (α + K(1 − α))D(z, z⁰) due to the definition of z^{−1} in Alg. 2. Since we have two indices s, k in Alg. 2, we define F^s_k = σ(z^0_{1/2}, . . . , z^s_{1/2}, . . . , z^s_{k+1/2}) and E_{s,k}[·] = E[·|F^s_k].
To analyze z^s_{k+1} and z^s_{k+1/2}, we introduce the next lemma; its proof is given in Section 8.

Lemma 3.2. Let g be proper convex lsc, and
z⁺ = argmin_z {g(z) + ⟨u, z⟩ + αD(z, z₁) + (1 − α)D(z, z₂)}.
Then, for any z,
g(z) − g(z⁺) + ⟨u, z − z⁺⟩ ≥ D(z, z⁺) + α[D(z⁺, z₁) − D(z, z₁)] + (1 − α)[D(z⁺, z₂) − D(z, z₂)].
We now introduce some definitions to be used in the proofs of this section:
Θ^s_{k+1/2}(z) = ⟨F(z^s_{k+1/2}), z^s_{k+1/2} − z⟩ + g(z^s_{k+1/2}) − g(z),  (20)
e(z, s, k) = τ⟨F(z^s_{k+1/2}) − F_{ξ^s_k}(z^s_{k+1/2}) − F(w^s) + F_{ξ^s_k}(w^s), z^s_{k+1/2} − z⟩,  (21)
δ(s, k) = τ⟨F_{ξ^s_k}(w^s) − F_{ξ^s_k}(z^s_{k+1/2}), z^s_{k+1} − z^s_{k+1/2}⟩ − ½‖z^s_{k+1} − z^s_{k+1/2}‖² − ((1 − α)/2)‖z^s_{k+1/2} − w^s‖².  (22)
The first expression will be needed for deriving the rate; the second term e(z, s, k) controls the error caused by the fact that max_{z∈C} E[·] ≠ E[max_{z∈C} ·]; and the third term δ(s, k) will be nonpositive after taking expectations.
Lemma 3.3. Let Assumption 1 hold, α ∈ [0, 1), and τ = √(1 − α)γ/L for γ ∈ (0, 1). We have the following:
(i) For any z ∈ Z and s, K ∈ N, it holds that
Σ_{k=0}^{K−1} τΘ^s_{k+1/2}(z) + αD(z, z^{s+1}_0) + (1 − α) Σ_{j=1}^K D(z, z^s_j) ≤ αD(z, z^s_0) + (1 − α) Σ_{j=1}^K D(z, z^{s−1}_j) + Σ_{k=0}^{K−1} [e(z, s, k) + δ(s, k)].
(ii) For any solution z*, it holds that
E_{s,0}[Φ_{s+1}(z*)] ≤ Φ_s(z*) − ((1 − α)(1 − γ²)/2) Σ_{k=0}^{K−1} E_{s,0}‖z^s_{k+1/2} − w^s‖².
(iii) It holds that Σ_{s=0}^∞ Σ_{k=0}^{K−1} E‖z^s_{k+1/2} − w^s‖² ≤ (2/((1 − α)(1 − γ²))) Φ_0(z*).
Remark 3.4. We use Lemma 3.3(i) and Lemma 3.3(iii) for proving the convergence rate. On the other hand, Lemma 3.3(ii) can be used to derive subsequential convergence, which we do not include for brevity.
Proof of Lemma 3.3. Applying Lemma 3.2 to the z^s_{k+1/2} update, with z = z^s_{k+1}, we have
τ[g(z^s_{k+1}) − g(z^s_{k+1/2})] + τ⟨F(w^s), z^s_{k+1} − z^s_{k+1/2}⟩ ≥ D(z^s_{k+1}, z^s_{k+1/2}) + α[D(z^s_{k+1/2}, z^s_k) − D(z^s_{k+1}, z^s_k)] + (1 − α)[D(z^s_{k+1/2}, w̄^s) − D(z^s_{k+1}, w̄^s)].  (23)
Applying Lemma 3.2 to the z^s_{k+1} update with a general z ∈ Z, we have
τ[g(z) − g(z^s_{k+1})] + τ⟨F̃(z^s_{k+1/2}), z − z^s_{k+1}⟩ ≥ D(z, z^s_{k+1}) + α[D(z^s_{k+1}, z^s_k) − D(z, z^s_k)] + (1 − α)[D(z^s_{k+1}, w̄^s) − D(z, w̄^s)].  (24)
Note that for any u, v, the expression D(u, w̄^s) − D(v, w̄^s) is linear in ∇h(w̄^s); that is,
D(u, w̄^s) − D(v, w̄^s) = (1/K) Σ_{j=1}^K [D(u, z^{s−1}_j) − D(v, z^{s−1}_j)].  (25)
Summing up (23) and (24) and using (25) with the definition of F̃(z^s_{k+1/2}), we obtain
τ[g(z) − g(z^s_{k+1/2})] + τ⟨F(z^s_{k+1/2}), z − z^s_{k+1/2}⟩ ≥ D(z, z^s_{k+1}) − αD(z, z^s_k) + ((1 − α)/K) Σ_{j=1}^K D(z^s_{k+1/2}, z^{s−1}_j) − ((1 − α)/K) Σ_{j=1}^K D(z, z^{s−1}_j) + D(z^s_{k+1}, z^s_{k+1/2}) + τ⟨F_{ξ^s_k}(z^s_{k+1/2}) − F_{ξ^s_k}(w^s), z^s_{k+1} − z^s_{k+1/2}⟩.  (26)
By D(u, v) ≥ ½‖u − v‖² and Jensen's inequality, we have
((1 − α)/K) Σ_{j=1}^K D(z^s_{k+1/2}, z^{s−1}_j) ≥ ((1 − α)/K) Σ_{j=1}^K ½‖z^s_{k+1/2} − z^{s−1}_j‖² ≥ ((1 − α)/2)‖z^s_{k+1/2} − w^s‖²,  (27)
D(z^s_{k+1}, z^s_{k+1/2}) ≥ ½‖z^s_{k+1} − z^s_{k+1/2}‖².  (28)
By using (20), (27), and (28) in (26), we deduce
τΘ^s_{k+1/2}(z) + D(z, z^s_{k+1}) ≤ αD(z, z^s_k) + ((1 − α)/K) Σ_{j=1}^K D(z, z^{s−1}_j) + τ⟨F_{ξ^s_k}(w^s) − F_{ξ^s_k}(z^s_{k+1/2}), z^s_{k+1} − z^s_{k+1/2}⟩ − ½‖z^s_{k+1} − z^s_{k+1/2}‖² − ((1 − α)/2)‖z^s_{k+1/2} − w^s‖² + τ⟨F(z^s_{k+1/2}) − F̃(z^s_{k+1/2}), z^s_{k+1/2} − z⟩,
where the last term is e(z, s, k) (see (21)). We sum this inequality over k to obtain the result in (i).
Next, similar to (9), we estimate by Assumption 2 and Young's inequality
τE_{s,k}⟨F_{ξ^s_k}(w^s) − F_{ξ^s_k}(z^s_{k+1/2}), z^s_{k+1} − z^s_{k+1/2}⟩ ≤ E_{s,k}[(τ²/2)‖F_{ξ^s_k}(w^s) − F_{ξ^s_k}(z^s_{k+1/2})‖²_* + ½‖z^s_{k+1} − z^s_{k+1/2}‖²] ≤ ((1 − α)γ²/2)‖z^s_{k+1/2} − w^s‖² + ½E_{s,k}‖z^s_{k+1} − z^s_{k+1/2}‖²,  (29)
since τ²L² = (1 − α)γ². We take the expectation of (26), plug in z = z*, and use (8), (29), (27), and (28) to get
E_{s,k}[D(z*, z^s_{k+1})] ≤ αD(z*, z^s_k) + ((1 − α)/K) Σ_{j=1}^K D(z*, z^{s−1}_j) + ((1 − α)(γ² − 1)/2)‖z^s_{k+1/2} − w^s‖².  (30)
Taking the expectation E_{s,0} of (30) gives
E_{s,0}[D(z*, z^s_{k+1})] ≤ E_{s,0}[αD(z*, z^s_k) + ((1 − α)/K) Σ_{j=1}^K D(z*, z^{s−1}_j) − ((1 − α)(1 − γ²)/2)‖z^s_{k+1/2} − w^s‖²].  (31)
Summing this inequality over k = 0, . . . , K − 1 and using the definition of Φ_s(z*) together with z^{s+1}_0 = z^s_K, we derive (ii). Finally, we take the total expectation of (ii) and sum the inequality over s to obtain (iii).
In order to prove the convergence rate, we need the Bregman version of Lemma 2.4; its proof is given in Section 8.

Lemma 3.5. Let F = (F^s_k)_{s≥0, k∈[0,K−1]} be a filtration and (u^s_k) a stochastic process adapted to F with E[u^s_{k+1}|F^s_k] = 0. Given x⁰ ∈ Z, for any S ∈ N and any compact set C ⊂ dom g,
E[max_{x∈C} Σ_{s=0}^{S−1} Σ_{k=0}^{K−1} ⟨u^s_{k+1}, x⟩] ≤ max_{x∈C} D(x, x⁰) + ½ Σ_{s=0}^{S−1} Σ_{k=0}^{K−1} E‖u^s_{k+1}‖²_*.
We now continue with the main result of this section.
Theorem 3.6. Let Assumption 1(i,ii,iii) and Assumption 2 hold, α ∈ [0, 1), and τ = √(1 − α)γ/L for γ ∈ (0, 1). Then, for z̄^S = (1/(KS)) Σ_{s=0}^{S−1} Σ_{k=0}^{K−1} z^s_{k+1/2}, it follows that
E[Gap(z̄^S)] ≤ (1/(τKS))[1 + (1 + 8γ²)/(1 − γ²)](α + K(1 − α)) max_{z∈C} D(z, z⁰).

Proof. We start with the result of Lemma 3.3 and proceed similarly to Theorem 2.5. Since z^{s+1}_0 = z^s_K, we use the definition of Φ_s(z) and sum the inequality in Lemma 3.3(i) over s to obtain
Σ_{s=0}^{S−1} Σ_{k=0}^{K−1} τΘ^s_{k+1/2}(z) + Φ_S(z) ≤ Φ_0(z) + Σ_{s=0}^{S−1} Σ_{k=0}^{K−1} [e(z, s, k) + δ(s, k)].
We take the maximum and expectation and use E[max_{z∈C} Σ_{s=0}^{S−1} Σ_{k=0}^{K−1} τΘ^s_{k+1/2}(z)] ≥ τKS E[Gap(z̄^S)] to deduce
τKS E[Gap(z̄^S)] ≤ max_{z∈C} Φ_0(z) + E[max_{z∈C} Σ_{s=0}^{S−1} Σ_{k=0}^{K−1} e(z, s, k)] + E[Σ_{s=0}^{S−1} Σ_{k=0}^{K−1} δ(s, k)].
The term E[ΣΣ δ(s, k)] is nonpositive by the tower property, Lipschitzness, Young's inequality, and τ ≤ √(1 − α)γ/L with γ < 1 (the same arguments as in (29) apply, with δ(s, k) defined in (22)). Therefore,
τKS E[Gap(z̄^S)] ≤ max_{z∈C} Φ_0(z) + E[max_{z∈C} Σ_{s=0}^{S−1} Σ_{k=0}^{K−1} e(z, s, k)].
To bound the last term, we set in Lemma 3.5
x⁰ = z⁰, u^s_{k+1} = τ[F̃(z^s_{k+1/2}) − F(z^s_{k+1/2})] = τ[F(w^s) − F_{ξ^s_k}(w^s) − F(z^s_{k+1/2}) + F_{ξ^s_k}(z^s_{k+1/2})],
which helps us write
E[max_{z∈C} ΣΣ e(z, s, k)] = E[max_{z∈C} ΣΣ τ⟨F(z^s_{k+1/2}) − F̃(z^s_{k+1/2}), z^s_{k+1/2} − z⟩] = E[max_{z∈C} ΣΣ ⟨u^s_{k+1}, z⟩] − ΣΣ E⟨u^s_{k+1}, z^s_{k+1/2}⟩ = E[max_{z∈C} ΣΣ ⟨u^s_{k+1}, z⟩],
where the last equality is due to the tower property, the F^s_k-measurability of z^s_{k+1/2}, and E_{s,k}[u^s_{k+1}] = 0. We apply Lemma 3.5 with the specified F^s_k, u^s_{k+1} to obtain
E[max_{z∈C} ΣΣ e(z, s, k)] ≤ max_{z∈C} D(z, z⁰) + ΣΣ τ² E‖F_{ξ^s_k}(z^s_{k+1/2}) − F_{ξ^s_k}(w^s) + F(w^s) − F(z^s_{k+1/2})‖²_*
  ≤ max_{z∈C} D(z, z⁰) + ΣΣ 4τ² E‖F_{ξ^s_k}(z^s_{k+1/2}) − F_{ξ^s_k}(w^s)‖²_*  (32)
  ≤ max_{z∈C} D(z, z⁰) + ΣΣ 4τ²L² E‖z^s_{k+1/2} − w^s‖²  (33)
  ≤ max_{z∈C} D(z, z⁰) + (8τ²L²/((1 − α)(1 − γ²))) Φ_0(z*),  (34)
where (32) is due to the tower property and E‖X − EX‖²_* ≤ 2E‖X‖²_* + 2‖EX‖²_* ≤ 4E‖X‖²_*, which follows from the triangle, Young's, and Jensen's inequalities; (33) is by the variable Lipschitzness of F_ξ; and the last step is by Lemma 3.3. Consequently, by Φ_0(z*) ≤ max_{z∈C} Φ_0(z) = (α + K(1 − α)) max_{z∈C} D(z, z⁰) and τ²L² = (1 − α)γ², we have
τKS E[Gap(z̄^S)] ≤ max_{z∈C} D(z, z⁰) + [1 + 8τ²L²/((1 − α)(1 − γ²))] max_{z∈C} Φ_0(z) ≤ [1 + (1 + 8γ²)/(1 − γ²)](α + K(1 − α)) max_{z∈C} D(z, z⁰),
which gives the result.
Corollary 3.7. Let K = N/2, α = 1 − 1/K = 1 − 2/N, and τ = √(1 − α)γ/L for γ ∈ (0, 1). Then the total complexity of Alg. 2 to reach ε-accuracy is O(N + L√N/ε). In particular, for γ = 1/3,
E[Gap(z̄^S)] ≤ (15√2 L/(√N S)) max_{z∈C} D(z, z⁰).

Proof of Corollary 3.7. As α = 1 − 1/K, it holds that α + K(1 − α) = 1 − 1/K + 1 ≤ 2. With this, from Theorem 3.6 it follows that
E[Gap(z̄^S)] ≤ (1/(τKS))[1 + (1 + 8γ²)/(1 − γ²)](α + K(1 − α)) max_{z∈C} D(z, z⁰) ≤ (L/(√K γS))[3 + 16γ²/(1 − γ²)] max_{z∈C} D(z, z⁰) = O(L/(√N S)).  (35)
Now, by setting γ = 1/3 in (35), we get specific constants. In particular, we have
E[Gap(z̄^S)] ≤ (15L/(√K S)) max_{z∈C} D(z, z⁰) = (15√2 L/(√N S)) max_{z∈C} D(z, z⁰).
Consequently, since 30√2 < 43, the final complexity is 2N + 43(√N L/ε) max_{z∈C} D(z, z⁰).
Algorithm 3 FBF with variance reduction
1: Input: Probability p ∈ (0, 1], probability distribution Q, step size τ, α ∈ (0, 1). Let z⁰ = w⁰
2: for k = 0, 1, . . . do
3:   z̄^k = αz^k + (1 − α)w^k
4:   z^{k+1/2} = J_{τG}(z̄^k − τF(w^k))
5:   Draw an index ξ_k according to Q
6:   z^{k+1} = z^{k+1/2} − τ(F_{ξ_k}(z^{k+1/2}) − F_{ξ_k}(w^k))
7:   w^{k+1} = z^{k+1} with probability p; w^{k+1} = w^k with probability 1 − p
8: end for

Remark 3.8. Because we work with general norms, we had to use in (34) the crude inequality E‖X − EX‖²_* ≤ 4E‖X‖²_*. Of course, in the Euclidean case with D(z, z′) = ½‖z − z′‖² this factor 4 is redundant. It is easy to see that setting τ = …
Extensions
In this section, we show how to obtain the variance reduced versions of two other operator splitting methods, forward-backward-forward (FBF) [Tse00] and forward-reflected-backward (FoRB) [MT20], for monotone inclusions. We also show how to obtain linear convergence with Algorithm 1 when g in (1) is strongly convex. Formally, the monotone inclusion problem is to find
z* ∈ Z such that 0 ∈ (F + G)(z*),  (36)
where Z is a finite dimensional vector space with Euclidean inner product and the rest of the assumptions are summarized in Assumption 3.
Assumption 3.
(i) The solution set Sol of (36) is nonempty: (F + G) −1 (0) = ∅.
(ii) The operators G : Z ⇒ Z and F : Z → Z are maximally monotone.
(iii) The operator F has an oracle F_ξ that is unbiased, F(z) = E_ξ[F_ξ(z)], and L-Lipschitz in mean:
E_ξ‖F_ξ(u) − F_ξ(v)‖² ≤ L²‖u − v‖², ∀u, v ∈ Z.
We remark that one can use the variable Lipschitz assumption from Assumption 2 instead of standard Lipschitzness, but we choose the latter for simplicity. Let us also recall the conditional expectation definitions based on the iterates of the algorithms: E[·|σ(ξ_0, . . . , ξ_{k−1}, w^k)] = E_k[·] and E[·|σ(ξ_0, . . . , ξ_k, w^k)] = E_{k+1/2}[·]. Next, the resolvent of an operator G is given by J_G = (I + G)⁻¹, where I is the identity operator. It is easy to see that when G = ∂g for a proper convex lsc function g, the inclusion (36) becomes the VI in (1) and J_G = prox_g.
Forward-Backward-Forward with variance reduction
The forward-backward-forward (FBF) algorithm was introduced by Tseng in [Tse00]. On one hand, it is a modification of the forward-backward algorithm that does not require assumptions stronger than mere monotonicity. On the other, it is a modification of the extragradient method that works for general monotone inclusions and not just for variational inequalities. FBF reads as
z^{k+1/2} = J_{τG}(z^k − τF(z^k)),
z^{k+1} = z^{k+1/2} − τF(z^{k+1/2}) + τF(z^k).
It is easy to see that FBF is equivalent to extragradient when G is absent. When it is not, FBF applied to the VI requires one proximal operator per iteration, whereas extragradient requires two. This advantage can be important in cases where the proximal operator is computationally expensive [Böh+20]. We keep the same notation as in Section 2.3 and recall the definition of Φ_k for convenience:
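For readers who prefer code, one iteration of the variance-reduced FBF (Alg. 3) can be sketched in a few lines (illustrative, under the same interface assumptions as before, with J_tauG the resolvent; the snapshot update w^{k+1} is the same randomized rule as in Alg. 1):

```python
def fbf_vr_step(z, w, Fw, J_tauG, F_xi, sample_xi, tau, alpha):
    """One iteration of Alg. 3. Only one resolvent call per iteration;
    the second 'forward' correction is an explicit step."""
    z_bar = alpha * z + (1.0 - alpha) * w
    z_half = J_tauG(z_bar - tau * Fw)
    i = sample_xi()
    return z_half - tau * (F_xi(i, z_half) - F_xi(i, w))
```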
Φ_k(z) = α‖z^k − z‖² + ((1 − α)/p)‖w^k − z‖².
We now continue with the main result for FBF.
Theorem 4.2. Let Assumption 3 hold, α ∈ [0, 1), p ∈ (0, 1], and τ = √(1 − α)γ/L for γ ∈ (0, 1). Then for (z^k) generated by Alg. 3 and any z* ∈ Sol, it holds that
E_k[Φ_{k+1}(z*)] ≤ Φ_k(z*).
Moreover, if F ξ is continuous for all ξ, then (z k ) converges to some z * ∈ Sol a.s.
Proof. Let z = z* ∈ Sol, which gives −F(z) ∈ G(z). Next, by the definition of z^{k+1/2} and of the resolvent, z̄^k − τF(w^k) ∈ z^{k+1/2} + τG(z^{k+1/2}). Combining these estimates with the monotonicity of G leads to
⟨z^{k+1/2} − z̄^k + τF(w^k), z − z^{k+1/2}⟩ − τ⟨F(z), z − z^{k+1/2}⟩ ≥ 0.
We plug the definition of z^{k+1} into this inequality to obtain
⟨z^{k+1} − z̄^k + τ[F_{ξ_k}(z^{k+1/2}) − F_{ξ_k}(w^k) + F(w^k)], z − z^{k+1/2}⟩ − τ⟨F(z), z − z^{k+1/2}⟩ ≥ 0.  (37)
We estimate the term with z̄^k as in (6):
2⟨z^{k+1} − z̄^k, z − z^{k+1/2}⟩ = 2⟨z^{k+1} − z^{k+1/2}, z − z^{k+1/2}⟩ + 2⟨z^{k+1/2} − z̄^k, z − z^{k+1/2}⟩
  = ‖z^{k+1} − z^{k+1/2}‖² + ‖z − z^{k+1/2}‖² − ‖z − z^{k+1}‖² + 2⟨z^{k+1/2} − z̄^k, z − z^{k+1/2}⟩
  = ‖z^{k+1} − z^{k+1/2}‖² − ‖z − z^{k+1}‖² + α‖z − z^k‖² + (1 − α)‖w^k − z‖² − α‖z^{k+1/2} − z^k‖² − (1 − α)‖z^{k+1/2} − w^k‖².  (38)
By taking the conditional expectation and using that z^{k+1/2} is F_k-measurable, we deduce
2τE_k⟨F_{ξ_k}(z^{k+1/2}) − F_{ξ_k}(w^k) + F(w^k), z − z^{k+1/2}⟩ = 2τE_k⟨F(z^{k+1/2}), z − z^{k+1/2}⟩.  (39)
We use (38) and (39) in (37) to obtain
2τ⟨F(z) − F(z^{k+1/2}), z − z^{k+1/2}⟩ + E_k‖z^{k+1} − z‖² ≤ α‖z^k − z‖² + (1 − α)‖w^k − z‖² + E_k‖z^{k+1} − z^{k+1/2}‖² − α‖z^{k+1/2} − z^k‖² − (1 − α)‖z^{k+1/2} − w^k‖².
Note that the first term on the left-hand side is nonnegative by the monotonicity of F. Then we add (11) to this inequality and use E_k‖z^{k+1} − z^{k+1/2}‖² ≤ τ²L²‖z^{k+1/2} − w^k‖² to obtain
αE_k‖z^{k+1} − z‖² + ((1 − α)/p)E_k‖w^{k+1} − z‖² ≤ α‖z^k − z‖² + ((1 − α)/p)‖w^k − z‖² − α‖z^{k+1/2} − z^k‖² − [(1 − α) − τ²L²]‖z^{k+1/2} − w^k‖².
This derives the first result, which is the analogue of Lemma 2.2. To show almost sure convergence, we basically follow the proof of Theorem 2.3. First, using the Robbins–Siegmund theorem and [CP15, Proposition 2.3] as in Theorem 2.3, we obtain that there exists a probability 1 set Ξ of random trajectories such that ∀θ ∈ Ξ and ∀z ∈ Sol, α‖z^k(θ) − z‖² + ((1 − α)/p)‖w^k(θ) − z‖² converges and z^{k+1/2}(θ) − z^k(θ) → 0; the rest of the argument is identical to Theorem 2.3.

Corollary 4.3. In the setting of Theorem 4.2, let z̄^K = (1/K) Σ_{k=0}^{K−1} z^{k+1/2}. Then, the total complexity to get an ε-accurate solution to (1) is O(N + √N L/ε).
Forward-reflected-backward with variance reduction: revisited
In a similar spirit to FBF, but using a different idea, [MT20] proposed the FoRB method
z^{k+1} = J_{τG}(z^k − τ[F(z^k) + F(z^k) − F(z^{k−1})]).
This scheme generalizes optimistic gradient descent [RS13; Das+18] and in some particular cases is equivalent to Popov's method [Pop80]. Later, in [AMC21], the authors suggested the most straightforward variance reduction modification of FoRB by combining FoRB and loopless SVRG [KHR20]. That algorithm had the drawback of small step sizes, which led to complexity bounds that do not improve upon deterministic methods. As highlighted in the experiments of [AMC21], the small step size τ ∼ 1/n seemed to be unimprovable for the given method. One possible speculation for this phenomenon is that the method is too aggressive and therefore prohibits large step sizes. We instead use the retracted iterate z̄^k = αz^k + (1 − α)w^k in place of the latest iterate z^k in the update to improve the complexity; a one-step sketch is given below.
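Reading off the resolvent inclusion used in the proof of Theorem 4.5, one step of the variance-reduced FoRB (Alg. 4) can be sketched as follows; this is our reconstruction from the analysis, not a verbatim listing:

```python
def forb_vr_step(z, w, w_prev, Fw, J_tauG, F_xi, sample_xi, tau, alpha):
    """One FoRB step with variance reduction:
    z_{k+1} = J_{tau G}( z_bar - tau*[F(w_k) + F_xi(z_k) - F_xi(w_{k-1})] )."""
    z_bar = alpha * z + (1.0 - alpha) * w
    i = sample_xi()
    g = Fw + F_xi(i, z) - F_xi(i, w_prev)   # 'reflected' variance-reduced estimate
    return J_tauG(z_bar - tau * g)
```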
The advantage of FoRB compared to extragradient is similar to that of FBF: applied to a VI, FoRB needs only one proximal operator. Compared to FBF, FoRB has a simpler update rule and, unlike FBF, is easy to adapt to the Bregman setting; see [AMC21; Zha22]. The Lyapunov function here is slightly more complicated than the ones in the previous sections:
Φ_{k+1}(z) := α‖z^{k+1} − z‖² + ((1 − α)/p)‖w^{k+1} − z‖² + 2τ⟨F(z^{k+1}) − F(w^k), z − z^{k+1}⟩ + (1 − α)‖z^{k+1} − w^k‖².
Theorem 4.5. Let Assumption 3 hold, α ∈ [0, 1), p ∈ (0, 1], and τ = √(α(1 − α))γ/L for γ ∈ (0, 1). Then for (z^k) generated by Alg. 4 and any z* ∈ Sol, it holds that Φ_k(z*) is nonnegative and
E_k[Φ_{k+1}(z*)] ≤ Φ_k(z*).
Moreover, if F_ξ is continuous for all ξ, then (z^k) converges to some z* ∈ Sol a.s.

Remark 4.6. Note that again when all randomness is eliminated, F_ξ = F and p = 1, Alg. 4 reduces to the original FoRB algorithm. Moreover, with α = 1/2 we recover the result in [MT20].

Proof of Theorem 4.5. Nonnegativity of Φ_k(z*) is straightforward to prove by using the Lipschitzness of F and τL ≤ √(α(1 − α)).
Let z = z* ∈ Sol, which gives −F(z) ∈ G(z). Next, by the definitions of z^{k+1} and of the resolvent,
z̄^k − τ[F(w^k) − F_{ξ_k}(w^{k−1}) + F_{ξ_k}(z^k)] ∈ z^{k+1} + τG(z^{k+1}).
Combining these estimates and the monotonicity of G leads to
⟨z^{k+1} − z̄^k + τ[F(w^k) − F_{ξ_k}(w^{k−1}) + F_{ξ_k}(z^k)], z − z^{k+1}⟩ − τ⟨F(z), z − z^{k+1}⟩ ≥ 0.  (41)
We split the first inner product and work with each term separately. First,
τ⟨F(w^k) − F_{ξ_k}(w^{k−1}) + F_{ξ_k}(z^k), z − z^{k+1}⟩
  = τ⟨F(w^k) − F(z^{k+1}), z − z^{k+1}⟩ − τ⟨F_{ξ_k}(w^{k−1}) − F_{ξ_k}(z^k), z − z^{k+1}⟩ + τ⟨F(z^{k+1}), z − z^{k+1}⟩
  = τ⟨F(w^k) − F(z^{k+1}), z − z^{k+1}⟩ − τ⟨F_{ξ_k}(w^{k−1}) − F_{ξ_k}(z^k), z − z^k⟩ − τ⟨F_{ξ_k}(w^{k−1}) − F_{ξ_k}(z^k), z^k − z^{k+1}⟩ + τ⟨F(z^{k+1}), z − z^{k+1}⟩.
Second, as we derived in (6),
2⟨z^{k+1} − z̄^k, z − z^{k+1}⟩ = α‖z^k − z‖² − ‖z^{k+1} − z‖² + (1 − α)‖w^k − z‖² − α‖z^{k+1} − z^k‖² − (1 − α)‖z^{k+1} − w^k‖².
Substituting the last two estimates into (41), we obtain
‖z^{k+1} − z‖² + 2τ⟨F(z^{k+1}) − F(w^k), z − z^{k+1}⟩ + 2τ⟨F(z) − F(z^{k+1}), z − z^{k+1}⟩
  ≤ α‖z^k − z‖² + (1 − α)‖w^k − z‖² + 2τ⟨F_{ξ_k}(z^k) − F_{ξ_k}(w^{k−1}), z − z^k⟩ + 2τ⟨F_{ξ_k}(z^k) − F_{ξ_k}(w^{k−1}), z^k − z^{k+1}⟩ − α‖z^{k+1} − z^k‖² − (1 − α)‖z^{k+1} − w^k‖².  (42)
We take the expectation conditioned on the knowledge of z^k, w^k, use E_k F_{ξ_k}(z^k) = F(z^k), E_k F_{ξ_k}(w^{k−1}) = F(w^{k−1}), and the monotonicity of F for the third term on the left-hand side. This yields
E_k[‖z^{k+1} − z‖² + 2τ⟨F(z^{k+1}) − F(w^k), z − z^{k+1}⟩ + (1 − α)‖z^{k+1} − w^k‖²]
  ≤ α‖z^k − z‖² + (1 − α)‖w^k − z‖² + 2τ⟨F(z^k) − F(w^{k−1}), z − z^k⟩ + 2τE_k⟨F_{ξ_k}(z^k) − F_{ξ_k}(w^{k−1}), z^k − z^{k+1}⟩ − αE_k‖z^{k+1} − z^k‖².  (43)
Using Assumption 1(iv) and the Cauchy–Schwarz and Young inequalities, we can bound the last line above as
E_k[2τ⟨F_{ξ_k}(z^k) − F_{ξ_k}(w^{k−1}), z^k − z^{k+1}⟩ − α‖z^{k+1} − z^k‖²]
  ≤ E_k[(τ²/(αγ))‖F_{ξ_k}(z^k) − F_{ξ_k}(w^{k−1})‖² + αγ‖z^{k+1} − z^k‖²] − αE_k‖z^{k+1} − z^k‖²
  ≤ ((1 − α)γ/L²) E_k‖F_{ξ_k}(z^k) − F_{ξ_k}(w^{k−1})‖² − (1 − γ)αE_k‖z^{k+1} − z^k‖²
  ≤ (1 − α)γ‖z^k − w^{k−1}‖² − (1 − γ)αE_k‖z^{k+1} − z^k‖².  (44)
Adding (11) and (44) to (43), we obtain
E_k[Φ_{k+1}(z)] ≤ Φ_k(z) − (1 − α)(1 − γ)‖z^k − w^{k−1}‖² − (1 − γ)αE_k‖z^{k+1} − z^k‖².
The rest of the proof is the same as for Theorem 4.2. The only difference is that instead of (40), we have
z^{k+1}(θ) − z̄^k(θ) + τ(F_{ξ_k}(z^k(θ)) − F_{ξ_k}(w^{k−1}(θ))) + τ(F(z^{k+1}(θ)) − F(w^k(θ))) ∈ τ(F + G)(z^{k+1}(θ)),  (45)
which gives the same conclusion, since F_ξ is continuous for all ξ and z^{k+1} − z̄^k → 0, z^{k+1} − w^k → 0 almost surely.
Remark 4.7. Even though we set the parameters α, p, τ by optimizing the complexity, we observe that the requirements in Theorem 4.5 allow step sizes arbitrarily close to 1/(2L). This already shows the flexibility of the analysis, compared to the strict requirement τ = p/(4L) in [AMC21]. The improvement in the step size choice is due to using z̄^k, which allows us to use tighter estimations, whereas the analysis in [AMC21] needs multiple Young's inequalities. In particular, we use z* as an anchor point in (11), whereas [AMC21] uses z^k as the anchor point, which requires Young's inequalities to transform to z^{k−1} and obtain a telescoping sum. Finally, as with Corollary 4.3, we give the complexity of the algorithm for solving the VI in the spirit of Section 2.3.1.

Corollary 4.8. In the setting of Theorem 4.5, the total complexity of Alg. 4 to reach an ε-accurate solution to (1) is O(N + √N L/ε).
Linear convergence
In this section, we illustrate how to obtain linear convergence of Alg. 1 for solving VI (1) when g is µ-strongly convex. Alternatively, one can replace this assumption with strong monotonicity of F , which we omit for brevity. One can use the same arguments for FBF and FoRB variants in the previous sections to show linear convergence for solving strongly monotone inclusions.
Theorem 4.9. Let Assumption 1 hold, g be µ-strongly convex, and z* be the solution of (1). If we set α = 1 − p and τ = √p/(2L) in Alg. 1, then it holds that
E‖z^k − z*‖² ≤ (1 + c/3)^{−k} (2/(1 − p))‖z⁰ − z*‖², with c = min{3p/8, √p µ/(2L)}.
Proof. In (4), we use the strong convexity of g to obtain an additional term (τµ/2)‖z^{k+1} − z‖² on the right-hand side of the first inequality. Next, we continue as in the proof of Lemma 2.2 to obtain, instead of (10),
(1 + τµ) E_k‖z^{k+1} − z*‖² ≤ α‖z^k − z*‖² + (1 − α)‖w^k − z*‖² − (1 − α)(1 − γ)‖z^{k+1/2} − w^k‖² − (1 − γ)E_k‖z^{k+1} − z^{k+1/2}‖².
We add (11) to this inequality after using the tower property, to deduce
(α + τµ) E_k‖z^{k+1} − z*‖² + ((1 − α)/p) E_k‖w^{k+1} − z*‖² ≤ α‖z^k − z*‖² + ((1 − α)/p)‖w^k − z*‖² − (1 − γ)[(1 − α)‖z^{k+1/2} − w^k‖² + E_k‖z^{k+1} − z^{k+1/2}‖²].
Since we set α = 1 − p and γ = ½, we can rewrite this as
(1 − p + τµ) E_k‖z^{k+1} − z*‖² + E_k‖w^{k+1} − z*‖² ≤ (1 − p)‖z^k − z*‖² + ‖w^k − z*‖² − ½[p‖z^{k+1/2} − w^k‖² + E_k‖z^{k+1} − z^{k+1/2}‖²].  (46)
Next, by 2‖u‖² + 2‖v‖² ≥ ‖u + v‖² applied two times,
(2c/3) E_k‖z^{k+1} − z*‖² ≥ (c/3) E_k‖w^{k+1} − z*‖² − (2c/3) E_k E_{k+1/2}‖z^{k+1} − w^{k+1}‖²
  = (c/3) E_k‖w^{k+1} − z*‖² − (2c(1 − p)/3) E_k‖z^{k+1} − w^k‖²
  ≥ (c/3) E_k‖w^{k+1} − z*‖² − (4c/3) E_k‖z^{k+1} − z^{k+1/2}‖² − (4c/3)‖z^{k+1/2} − w^k‖².
Using this inequality in (46) and c ≤ √p µ/(2L) = τµ gives us
(1 − p + c/3) E_k‖z^{k+1} − z*‖² + (1 + c/3) E_k‖w^{k+1} − z*‖² ≤ (1 − p)‖z^k − z*‖² + ‖w^k − z*‖²
  − ½[p‖z^{k+1/2} − w^k‖² + E_k‖z^{k+1} − z^{k+1/2}‖²] + (4c/3)[‖z^{k+1/2} − w^k‖² + E_k‖z^{k+1} − z^{k+1/2}‖²].  (47)
By our choice of c, we have 4c/3 ≤ p/2 and, therefore, the second line of (47) is nonpositive. Using 1 − p + c/3 > (1 − p)(1 + c/3) and taking the total expectation yields
(1 + c/3) E[(1 − p)‖z^{k+1} − z*‖² + ‖w^{k+1} − z*‖²] ≤ E[(1 − p)‖z^k − z*‖² + ‖w^k − z*‖²].
By iterating this inequality, we obtain
(1 − p) E‖z^k − z*‖² ≤ (1 + c/3)^{−k}(2 − p)‖z⁰ − z*‖²,
which gives the result.
For the total complexity with p = 2/N, note that
max{8/p, 6L/(√p µ)}(pN + 2) ≤ 32/p + 24L/(√p µ) = 16N + 12√(2N) L/µ.
We lastly multiply the last estimate by log ε⁻¹.
Remark 4.11. In this case, Alg. 1 has complexity O((N + √N L/µ) log(1/ε)), compared to O(N L_F/µ log(1/ε)) for deterministic methods. This recovers the previously obtained results in [BB16] and [Car+19, Section 5.4]; our advantages are algorithmic parameters independent of µ and more general assumptions.
Applications
Bilinear min-max problems
In this section, we analyze the overall complexity of our method compared to deterministic extragradient and show the complexity improvements.
Notation. For a vector x we use x_i to denote its i-th coordinate, and for an indexed vector x_k it is x_{k,i}. For a matrix A ∈ R^{m×n} we denote the number of its non-zero entries by nnz(A); it is exactly the complexity of computing Ax or Aᵀy. We use the spectral, Frobenius, and max norms of A, defined as ‖A‖ = σ_max(A), ‖A‖_Frob = √(Σ_{i,j} A_{ij}²) = √(Σ_{i=1}^{rank(A)} σ_i(A)²), and ‖A‖_max = max_{i,j} |A_{ij}|. For the i-th row and j-th column of A we use the convenient notation A_{i:} and A_{:j}. Here, for simplicity, we measure complexity in terms of arithmetic operations.
Problem. The general problem that we consider is min_{x∈R^n} max_{y∈R^m} ⟨Ax, y⟩ + g₁(x) − g₂(y), where g₁, g₂ are proper convex lsc functions. We can formulate this problem as a VI by setting
F(z) = F(x, y) = (Aᵀy, −Ax), g(z) = g₁(x) + g₂(y).  (48)
Linearly constrained minimization
A classical example of a bilinear saddle point problem is linearly constrained minimization, min_{x∈R^n} {f(x) : Ax = b}, where f is proper convex lsc. The equivalent min-max formulation corresponds to (48) with g₁(x) = f(x) and g₂(y) = ⟨b, y⟩.
We will instantiate Alg. 1 for this problem. To make the presentation clearer, we consider only the most common scenario, nnz(A) > m + n. In this setting, deterministic methods (extragradient, FBF, FoRB, etc.) have total complexity O(nnz(A)‖A‖ε⁻¹). As we see in the sequel, variance reduced methods achieve total complexity O(nnz(A) + √(nnz(A)(m + n))‖A‖_Frob ε⁻¹). We now describe the definition of F_ξ with two oracle choices. The first choice is the version of "importance" sampling described in Section 2.1.
Oracle 1. The fixed distribution (the same in every iteration) is defined via
F_ξ(z) = ((1/r_i) A_{i:}ᵀ y_i, −(1/c_j) A_{:j} x_j), Pr{ξ = (i, j)} = r_i c_j, r_i = ‖A_{i:}‖₂²/‖A‖_Frob², c_j = ‖A_{:j}‖₂²/‖A‖_Frob².
In view of Assumption 1, the Lipschitz constant of F_ξ can be computed via
E‖F_ξ(z)‖₂² = E_{i∼r}[(1/r_i²)‖A_{i:}ᵀ y_i‖₂²] + E_{j∼c}[(1/c_j²)‖A_{:j} x_j‖₂²] = Σ_{i=1}^m (1/r_i)‖A_{i:}‖₂² y_i² + Σ_{j=1}^n (1/c_j)‖A_{:j}‖₂² x_j² = ‖A‖_Frob² ‖z‖₂².  (49)
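In code, Oracle 1 amounts to sampling one row and one column per call; a sketch (our naming, for illustration):

```python
import numpy as np

def make_oracle1(A, seed=0):
    """Oracle 1 for F(x, y) = (A^T y, -A x): sample (i, j) with probabilities
    r_i = ||A_i:||_2^2/||A||_F^2, c_j = ||A_:j||_2^2/||A||_F^2; each call costs
    O(m + n) and E[F_xi] = F with L = ||A||_Frob by (49)."""
    rng = np.random.default_rng(seed)
    fro2 = np.sum(A ** 2)
    r = np.sum(A ** 2, axis=1) / fro2   # row probabilities
    c = np.sum(A ** 2, axis=0) / fro2   # column probabilities

    def F_xi(x, y):
        i = rng.choice(A.shape[0], p=r)
        j = rng.choice(A.shape[1], p=c)
        gx = A[i, :] * (y[i] / r[i])    # unbiased estimate of A^T y
        gy = -A[:, j] * (x[j] / c[j])   # unbiased estimate of -A x
        return gx, gy

    return F_xi
```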
Oracle 2. The second stochastic oracle is slightly more complicated, since it is iteration-dependent, as in [Car+19]. We use the setting of Assumption 2. Given u = (u^x, u^y) and v = (v^x, v^y), for z = (x, y) we define
F_ξ(z) = ((1/r_i) A_{i:}ᵀ y_i, −(1/c_j) A_{:j} x_j), Pr{ξ = (i, j)} = r_i c_j, r_i = |u^y_i − v^y_i|²/‖u^y − v^y‖², c_j = |u^x_j − v^x_j|²/‖u^x − v^x‖²,
and call the described distribution Q(u, v). Similarly, in every iteration of Alg. 2 we define a distribution Q(z^s_{k+1/2}, w^s) and sample ξ according to it. Clearly, as before, F_ξ is unbiased. It is easy to show that this oracle is variable ‖A‖_Frob-Lipschitz; the proof is similar to the variable Lipschitz derivation that we include for matrix games with Bregman distances in Section 5.1.2.
Complexity. We suppose that the proximal operators prox_{g₁}, prox_{g₂} can be computed efficiently, in Õ(m + n) complexity. Our result in Theorem 2.5 states that Alg. 1 has the rate O(L/(√p K)). Given that the expected cost of each iteration is O(p nnz(A) + m + n), setting p = (m + n)/nnz(A) gives the average total complexity
Õ(nnz(A) + √(nnz(A)(m + n)) ‖A‖_Frob ε⁻¹).  (50)
It is easy to see that Alg. 2 has the same complexity if we set K = nnz(A)/(m + n). Compared to deterministic methods, the improvement depends on the relation between ‖A‖_Frob and ‖A‖. In particular, when A is a square dense matrix, due to ‖A‖_Frob ≤ √(rank(A))‖A‖, the bound in (50) improves on that of deterministic VI methods. In (50) we suppress ‖z⁰ − z*‖², which is common to all methods considered in this paragraph.
Finally, we remark that the analysis in [Car+19, Section 5.2] requires the additional assumption that z ↦ ⟨F(z) + ∇̃f(z), z − u⟩ is convex for all u to apply to this case, where ∇̃f denotes a subgradient of f. This assumption requires more structure on f.
Matrix games
The problem in this case is written as
min_{x∈X} max_{y∈Y} ⟨Ax, y⟩,  (51)
where A ∈ R^{m×n} and X ⊂ R^n, Y ⊂ R^m are closed convex sets onto which projections are easy to compute. In view of (48), we have g(z) = δ_X(x) + δ_Y(y). As we shall see, our complexities in this case recover the ones in [Car+19]. We refer to Section 1.1 for a detailed comparison.
In the Euclidean setup, we suppose that the underlying space Z = R^n × R^m has a Euclidean structure with the norm ‖·‖₂ and, hence, coincides with its dual Z*. In this case, we can use Oracle 1 and Oracle 2 from Section 5.1.1, and we obtain the same complexity as (50). The same discussion as in Section 5.1.1 applies.
Bregman setup
Let X = Δ^n = {x ∈ R^n : Σ_{i=1}^n x_i = 1, x_i ≥ 0} and Y = Δ^m.
With this, problem (51) is known as a zero-sum game. In this case, deterministic algorithms formulated with a specific Bregman distance (given below) have O(nnz(A)‖A‖_max ε⁻¹) total complexity. These settings are standard, and we recall them only for the reader's convenience.
For Z = R^{m+n} and z = (x, y) ∈ Z we define ‖z‖ = √(‖x‖₁² + ‖y‖₁²). Correspondingly, Z* = (R^{m+n}, ‖·‖_*) is the dual space with ‖z*‖_* = √(‖x*‖_∞² + ‖y*‖_∞²) for z* = (x*, y*). For z = (x, y) ∈ Δ^n × Δ^m we use the negative entropy h₁(x) = Σ_{i=1}^n x_i log x_i, h₂(y) = Σ_{i=1}^m y_i log y_i, and set h(z) = h₁(x) + h₂(y) = Σ_{i=1}^{m+n} z_i log z_i. Then we define the Bregman distance as
D(z, z′) = h(z) − h(z′) − ⟨∇h(z′), z − z′⟩ = Σ_i z_i log(z_i/z′_i).
Of course, this definition requires z′ to be in the relative interior of Δ^n × Δ^m; normally this is satisfied automatically for the iterates of the algorithm (including our Alg. 2). If we choose z⁰ = (x⁰, y⁰) with x⁰ = (1/n)1_n and y⁰ = (1/m)1_m, it is easy to see that max_{z∈Δ^n×Δ^m} D(z, z⁰) ≤ log n + log m = log(mn).
We know that D satisfies D(z, z′) ≥ ½‖z − z′‖² for all z, z′ ∈ Δ^n × Δ^m. Deterministic algorithms have the constant ‖A‖_max in their complexity, since F defined in (48) is ‖A‖_max-Lipschitz:
‖F(z)‖²_* = ‖Aᵀy‖²_∞ + ‖Ax‖²_∞ ≤ ‖A‖²_max(‖x‖²₁ + ‖y‖²₁) = ‖A‖²_max ‖z‖².
Oracle. The stochastic oracle here is similar to Oracle 2 in Section 5.1.1 for the Euclidean case, but adjusted to the ℓ₁-norm. Again we are in the setting of Assumption 2. Given u = (u^x, u^y) and v = (v^x, v^y), for z = (x, y) we define
F_ξ(z) = ((1/r_i) A_{i:}ᵀ y_i, −(1/c_j) A_{:j} x_j), Pr{ξ = (i, j)} = r_i c_j, r_i = |u^y_i − v^y_i|/‖u^y − v^y‖₁, c_j = |u^x_j − v^x_j|/‖u^x − v^x‖₁,
and call the described distribution Q(u, v). We show that F_ξ is variable ‖A‖_max-Lipschitz in view of Definition 1. Indeed, we have
E_{ξ∼Q(u,v)}‖F_ξ(u) − F_ξ(v)‖²_* = E_{ξ∼Q(u,v)}‖F_ξ(u − v)‖²_*
  = E_{i∼r}[(1/r_i²)‖A_{i:}ᵀ(u^y_i − v^y_i)‖²_max] + E_{j∼c}[(1/c_j²)‖A_{:j}(u^x_j − v^x_j)‖²_max]
  = Σ_{i=1}^m (1/r_i)‖A_{i:}‖²_max |u^y_i − v^y_i|² + Σ_{j=1}^n (1/c_j)‖A_{:j}‖²_max |u^x_j − v^x_j|²
  ≤ Σ_{i=1}^m ‖A‖²_max |u^y_i − v^y_i| ‖u^y − v^y‖₁ + Σ_{j=1}^n ‖A‖²_max |u^x_j − v^x_j| ‖u^x − v^x‖₁
  = ‖A‖²_max (‖u^y − v^y‖²₁ + ‖u^x − v^x‖²₁) = ‖A‖²_max ‖u − v‖².
Similarly, in every iteration of Alg. 2 we define a distribution Q(z^s_{k+1/2}, w^s) and sample ξ^s_k according to it. This stochastic oracle was already used in [GK95] and extensively thereafter; see [NN13; CHW12] and references therein. In [Car+19] this oracle was called "sampling from the difference".
With this oracle, arguing as in Section 5.1.1, the total complexity becomes
O(nnz(A) + √(nnz(A)(m + n)) ‖A‖_max ε⁻¹),
which, in the square dense case, improves the deterministic complexity by √n.
Updates. For concreteness, we specify the updates in lines 4–7 of Alg. 2. Let w^s = (u, v) and w̄^s = (ū^s, v̄^s). Line 4 reads
∇h₁(x^s_{k+1/2}) = α∇h₁(x^s_k) + (1 − α)∇h₁(ū^s) − τAᵀv,
∇h₂(y^s_{k+1/2}) = α∇h₂(y^s_k) + (1 − α)∇h₂(v̄^s) + τAu.
Then we form the distribution Q(z^s_{k+1/2}, w^s),
Pr{ξ = (i, j)} = r_i c_j, r_i = |y^s_{k+1/2,i} − v_i| / ‖y^s_{k+1/2} − v‖₁, c_j = |x^s_{k+1/2,j} − u_j| / ‖x^s_{k+1/2} − u‖₁,
and sample ξ^s_k = (i, j) according to Q(z^s_{k+1/2}, w^s). Finally, we update x^s_{k+1} and y^s_{k+1} as
∇h₁(x^s_{k+1}) = α∇h₁(x^s_k) + (1 − α)∇h₁(ū^s) − τAᵀv − (τ/r_i) A_{i:}ᵀ(y^s_{k+1/2,i} − v_i)
             = ∇h₁(x^s_{k+1/2}) − τ A_{i:}ᵀ ‖y^s_{k+1/2} − v‖₁ sign(y^s_{k+1/2,i} − v_i),
∇h₂(y^s_{k+1}) = ∇h₂(y^s_{k+1/2}) + τ A_{:j} ‖x^s_{k+1/2} − u‖₁ sign(x^s_{k+1/2,j} − u_j).
Switching from the dual variables ∇h₁(x) to the primal x is elementary by duality:
X = ∇h₁(x) ⟺ x = ∇h₁*(X) = (e^{X₁}, . . . , e^{X_n}) / Σ_{i=1}^n e^{X_i},
and similarly for y.
and similarly for y. Updates for w and ∇h(w) are straightforward by means of incremental averaging.
Nonbilinear min-max problems
An important example of nonbilinear min-max problems is constrained optimization
min_{x∈X} f(x) subject to h_i(x) ≤ 0 for i ∈ [N],
where f, h_i are smooth convex functions. We can map this problem to the VI template (1) by setting
F(z) = (∇f(x) + Σ_{i=1}^N y_i ∇h_i(x), −(h₁(x), . . . , h_N(x))), g(z) = δ_X(x) + δ_{R₊^N}(y).
One possible choice for stochastic oracles is to set
F_i(z) = (∇f(x) + N y_i ∇h_i(x), −N h_i(x) e_i),  (52)
where e_i is the i-th standard basis vector. Of course, this form of the oracle will not necessarily be a good choice for specific applications.
In particular, as discussed in Section 1.2 and in the corollaries of our main theorems, our results apply in their full generality, and they improve on the deterministic complexity as long as L ≤ √N L_F, where L is the Lipschitz constant of the stochastic oracle in the sense of Assumption 1 and L_F is that of the full operator. However, it is not clear that the generic choice in (52) satisfies this requirement. Therefore, one should design suitable oracles depending on the particular structure of the problem to ensure complexity improvements. We refer to Section 1.1 for a detailed comparison with related works.
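For completeness, the generic oracle (52) in code (illustrative; grad_f, grad_h, and h are assumed callables under our naming):

```python
import numpy as np

def F_i(i, x, y, grad_f, grad_h, h, N):
    """Oracle (52): F_i(x, y) = (grad f(x) + N*y_i*grad h_i(x), -N*h_i(x)*e_i).
    Sampling i uniformly gives E_i[F_i] = F for the operator of this section."""
    gx = grad_f(x) + N * y[i] * grad_h(i, x)
    gy = np.zeros(N)
    gy[i] = -N * h(i, x)
    return gx, gy
```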
Numerical experiments
In this section, we provide preliminary empirical evidence on how variance reduced methods for VIs perform in practice (code can be found at https://github.com/ymalitsky/vr_for_vi). By no means is this report exhaustive; it is only an illustration showing that (i) variance reduction helps in practice compared to deterministic methods and (ii) our approach is not only more general in theory but also offers practical advantages compared to the previous approach in [Car+19].
We focus on matrix games with simplex constraints in the Euclidean and entropic setups. In the Euclidean setup, we use the projection onto the simplex from [Con16]. We compare deterministic extragradient (EG), the existing variance-reduced method [Car+19] (EG-Car+19), and the proposed Alg. 1 and Alg. 2. To distinguish from the Euclidean case, we write 'MP' instead of 'EG' for all algorithms in the entropic setup. We have chosen three test problems used in the literature [Nem13; Nem+09] and fixed m = n = 500.
For all problems, we use the largest step sizes allowed by theory. In particular, EG uses 1/L_F, where L_F is the Lipschitz constant of the overall operator F. We also use the reported parameters from [Car+19] for EG-Car+19. In the Euclidean case, by tracing the proof of [Car+19, Proposition 2], we observed that one can improve the step size from η = α/(10L²) to η = α/(4L²), where α is defined to be L therein; therefore, we use the improved step size for EG-Car+19 in the experiments with the Euclidean setup. However, in the Bregman setup we did not find a way to improve the step size of EG-Car+19, so we use the reported one.
In our methods, we use the parameters from Remarks 2.1 and 3.1. As a performance measure we use the duality gap, which, due to the simplex constraints, can be computed simply as max_i (Ax)_i − min_j (Aᵀy)_j. The cost of computing one F is counted as an epoch, and the cost of the stochastic oracles is counted accordingly to match the overall cost.
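The duality gap used here is one line of code; a sketch:

```python
import numpy as np

def duality_gap(A, x, y):
    """Gap of (x, y) for min-max over simplices of <Ax, y>:
    best response of the y-player minus best response of the x-player."""
    return float(np.max(A @ x) - np.min(A.T @ y))
```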
We report the results in Figures 1 and 2. We see that the variance reduced variants consistently outperform deterministic EG in all cases, as predicted by the theory. Among the variance reduced methods, due to the small step sizes of EG-Car+19, our algorithms also outperform EG-Car+19, except on the first dataset in the Euclidean setup. The difference is especially noticeable in the Bregman setting, since there the analysis of EG-Car+19 requires smaller step sizes.
Conclusions
We conclude by discussing a few potential directions that our results could pave the way for.
Sparsity. An important consideration in practice is to adapt to sparsity of the data. The recent work by [Car+20] built on the algorithm in [Car+19] and improved the complexity for matrix games in Euclidean setup, for sparse data, by using specialized data structures. We suspect that these techniques can also be used in our algorithms.
Stochastic oracles. As we have seen for bilinear and nonbilinear problems, harnessing the structure is very important for devising suitable stochastic oracles with small Lipschitz constants. On top of our algorithms, an interesting direction is to study important nonbilinear min-max problems and devise particular Bregman distances and stochastic oracles to obtain complexity improvements.
New algorithms. For brevity, we only showed the application of our techniques for extragradient, FBF, and FoRB methods. However, for more structured problems other extensions might be more suitable. Such structured problems arise, for example, when only partial strong convexity is present or when F is the sum of a skew-symmetric matrix and a gradient of a convex function.
Let Ξ′ be the probability-1 set such that for all θ ∈ Ξ′, z^{k+1}(θ) − z^{k+1/2}(θ) → 0, z^{k+1/2}(θ) − z^k(θ) → 0, and z^{k+1/2}(θ) − w^k(θ) → 0. Pick θ ∈ Ξ ∩ Ξ′ and let z̄(θ) be a cluster point of the bounded sequence (z^k(θ)). From z^{k+1/2}(θ) − z^k(θ) → 0 and z^{k+1/2}(θ) − w^k(θ) → 0 it follows that z̄(θ) is also a cluster point of (w^k(θ)).
By the prox-inequality (3) applied to the definition of z^{k+1},

⟨z^{k+1}(θ) − z̄^k(θ) + τF(w^k(θ)) − τF_{ξ_k}(z^{k+1/2}(θ)) + τF_{ξ_k}(w^k(θ)), z − z^{k+1}(θ)⟩ + τg(z) − τg(z^{k+1}(θ)) ≥ 0, ∀z ∈ Z. (55)
By extracting a subsequence of (z^k(θ)) if needed, taking the limit along that subsequence, and using the lower semicontinuity of g, we deduce that z̄(θ) ∈ Sol. In doing so, we also used that (z^{k+1}(θ)) is bounded and F_ξ is continuous for all ξ to deduce τ⟨F_{ξ_k}(w^k(θ)) − F_{ξ_k}(z^{k+1/2}(θ)), z − z^{k+1}(θ)⟩ → 0. Moreover, since z^{k+1}(θ) − z^k(θ) → 0 and z^{k+1}(θ) − w^k(θ) → 0, it follows that z^{k+1}(θ) − z̄^k(θ) → 0. Hence, all cluster points of (z^k(θ)) and (w^k(θ)) belong to Sol. We have shown that at least on one subsequence α‖z^k(θ) − z̄(θ)‖² + ((1−α)/p)‖w^k(θ) − z̄(θ)‖² converges to 0. Then, by (54) we deduce α‖z^k(θ) − z̄(θ)‖² + ((1−α)/p)‖w^k(θ) − z̄(θ)‖² → 0 and consequently ‖z^k(θ) − z̄(θ)‖² → 0. This shows that (z^k) converges almost surely to a point in Sol.
Proof of Lemma 3.2. By optimality of z⁺,

0 ∈ ∂g(z⁺) + u + α(∇h(z⁺) − ∇h(z₁)) + (1 − α)(∇h(z⁺) − ∇h(z₂)).

This implies, by convexity of g,

g(z) − g(z⁺) ≥ ⟨u + α(∇h(z⁺) − ∇h(z₁)) + (1 − α)(∇h(z⁺) − ∇h(z₂)), z⁺ − z⟩.

By applying the three point identity twice, we deduce

g(z) − g(z⁺) + ⟨u, z − z⁺⟩ ≥ α[D(z, z⁺) + D(z⁺, z₁) − D(z, z₁)] + (1 − α)[D(z, z⁺) + D(z⁺, z₂) − D(z, z₂)],
and by a simple rearrangement we obtain the result.
Proof of Lemma 2.4. First, we define the sequence x^{k+1} = x^k + u^{k+1}. It is easy to see that x^k is F_k-measurable. Next, by using the definition of (x^k), we have
‖x^{k+1} − x‖² = ‖x^k − x‖² + 2⟨u^{k+1}, x^k − x⟩ + ‖u^{k+1}‖².
Summing over k = 0, . . . , K − 1, we obtain
Σ_{k=0}^{K−1} 2⟨u^{k+1}, x − x^k⟩ ≤ ‖x^0 − x‖² + Σ_{k=0}^{K−1} ‖u^{k+1}‖².
Next, we take the maximum of both sides and then the expectation:

E[max_{x∈C} Σ_{k=0}^{K−1} ⟨u^{k+1}, x⟩] ≤ max_{x∈C} (1/2)‖x^0 − x‖² + (1/2) Σ_{k=0}^{K−1} E‖u^{k+1}‖² + Σ_{k=0}^{K−1} E[⟨u^{k+1}, x^k⟩].
We use the tower property, the F_k-measurability of x^k, and E[u^{k+1} | F_k] = 0 to finish the proof, since

Σ_{k=0}^{K−1} E[⟨u^{k+1}, x^k⟩] = Σ_{k=0}^{K−1} E[⟨E[u^{k+1} | F_k], x^k⟩] = 0.

Proof of Lemma 3.5. Define for each s ≥ 0 and k ∈ {0, …, K−1}, x^s_{k+1} = argmin_{x∈dom g} {⟨−u^s_{k+1}, x⟩ + D(x, x^s_k)}, and let x^{s+1}_0 = x^s_K. First, we observe that x^s_k is F^s_k-measurable. By the definition of x^s_{k+1}, we have for all x ∈ dom g,

⟨∇h(x^s_{k+1}) − ∇h(x^s_k) − u^s_{k+1}, x − x^s_{k+1}⟩ ≥ 0.
We apply the three point identity to obtain

D(x, x^s_k) − D(x, x^s_{k+1}) − D(x^s_{k+1}, x^s_k) − ⟨u^s_{k+1}, x − x^s_{k+1}⟩ ≥ 0.
We manipulate the inner product by using Hölder's and Young's inequalities and the strong convexity of h:

⟨u^s_{k+1}, x − x^s_{k+1}⟩ = ⟨u^s_{k+1}, x − x^s_k⟩ + ⟨u^s_{k+1}, x^s_k − x^s_{k+1}⟩
                         ≤ ⟨u^s_{k+1}, x − x^s_k⟩ + (1/2)‖u^s_{k+1}‖²_* + (1/2)‖x^s_k − x^s_{k+1}‖²
                         ≤ ⟨u^s_{k+1}, x − x^s_k⟩ + (1/2)‖u^s_{k+1}‖²_* + D(x^s_{k+1}, x^s_k).
Remark 2.1. For running Alg. 1 in practice, we suggest p = 2/N, α = 1 − p, and τ = 0.99√p/L.
Remark 3.1. For running the algorithm in practice, we suggest K = N/2, α = 1 − 1/K, and τ = 0.99√p/L.
Lemma 3.5. Let F = (F^s_k)_{s≥0, k∈[0,K−1]} be a filtration and (u^s_k) a stochastic process adapted to F with E[u^s_{k+1} | F^s_k] = 0. Given x^0 ∈ Z, for any S ∈ ℕ and any compact set C ⊂ dom g,

E[max_{x∈C} Σ_{s=0}^{S−1} Σ_{k=0}^{K−1} ⟨u^s_{k+1}, x⟩] ≤ max_{x∈C} D(x, x^0) + (1/2) Σ_{s=0}^{S−1} Σ_{k=0}^{K−1} E‖u^s_{k+1}‖²_*.
Corollary 3.7. Let K = N/2, α = 1 − 1/K = 1 − 2/N, and τ = γ√(1−α)/L for γ ∈ (0, 1). Then the total complexity of Alg. 2 to reach ε-accuracy, measured via the restricted gap over a compact set C with max_{z∈C} D(z, z^0) < ∞, is O(N + L√N/ε).

Proof of Theorem 3.6. We start with the result of Lemma 3.3 and proceed similarly to Theorem 2.5. Since z^{s+1}_0 = z^s_K, we use the definition of Φ_s(z) and sum the inequality in Lemma 3.3(i) over s to obtain a telescoping bound with the error terms δ(s, k).
We bound the second term on the RHS similarly to the proof of Theorem 2.5. For s ∈ {0, …, S−1} and k ∈ {0, …, K−1}, set F^s_k = σ(z^0_{1/2}, …, z^0_{K−1/2}, …, z^s_{1/2}, …, z^s_{k+1/2}) and let u^s_{k+1} be the corresponding martingale-difference term.
Together with max_{z∈C} ‖z − z^0‖², this yields the stated total complexity for the Euclidean setting.
Remark 4.1. For running Alg. 3 in practice, we suggest p = 2/N, α = 1 − p, and τ = 0.99√p/L.
Remark 4.4. For running Alg. 4 in practice, we suggest p = 2/N, α = 1 − p, and τ = 0.99√p/(2L).
Corollary 4.8. Let α = 1 − p = 1 − 2/N and let z̄^K be the averaged iterate. Then, the total complexity to get an ε-accurate solution to (1) is O(N + √N L ε⁻¹).
Corollary 4.10. Let p = 2/N and τ = √p/(2L). The total average complexity is O((N + √N L/µ) log(1/ε)).

Proof. The ε-accuracy is reached after O(log(1/ε)/log(1 + c/3)) iterations. This yields a factor (pN + 2)/log(1 + c/3) ≈ (3/c)(pN + 2) in the total complexity. Using our choice for c, we obtain the stated total average complexity.
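A quick numerical sanity check of the iteration count and of the small-c approximation used in this proof; the numbers (c, ε, p, N) are hypothetical:

```python
import math

c, eps, p, N = 0.05, 1e-6, 2 / 1000, 1000
iters = math.log(1 / eps) / math.log(1 + c / 3)   # exact count
approx = (3 / c) * math.log(1 / eps)              # uses log(1 + t) ~ t
factor_exact = (p * N + 2) / math.log(1 + c / 3)
factor_approx = (3 / c) * (p * N + 2)
print(iters, approx)                 # ~836 vs ~829: the approximation is tight
print(factor_exact, factor_approx)   # the (3/c)(pN + 2) factor in the text
```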
Complexity. In this case, the complexity of deterministic algorithms (Mirror-Prox, FoRB) is O(nnz(A) ‖A‖_max ε⁻¹). Our result in Corollary 3.7 stated that Alg. 2 has the rate O(L/(√K S)). Given that the cost of each epoch of Alg. 2 is O(nnz(A) + K(m + n)), setting K = nnz(A)/(m + n) gives us the total complexity Õ(nnz(A) + √(nnz(A)(m + n)) L ε⁻¹).
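The epoch-cost balancing behind this choice of K can be illustrated with a tiny calculation (hypothetical sizes):

```python
# One epoch of Alg. 2 costs O(nnz(A) + K * (m + n)); choosing
# K = nnz(A) / (m + n) makes the K inner steps as cheap as the full pass.
nnz, m, n = 10**6, 500, 500
K = nnz // (m + n)
print(K, nnz, K * (m + n))   # K = 1000; both cost terms are ~1e6
```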
Figure 2: Entropic setup. The same matrices as in Figure 1, in the same arrangement.
Table 1: Table of algorithms with F(z) = Σ_{i=1}^N F_i(z). EG: extragradient, MP: Mirror-Prox, FBF: forward-backward-forward, FoRB: forward-reflected-backward. ∇g denotes a subgradient of g. † [Kor76; Tse00; Nem04; MT20], ‡ [Car+19], * [AMC21].
and it is L-Lipschitz in mean, in view of Assumption 1(iv). In this setting, our variance-reduced variants of EG, FBF, and FoRB (Corollary 2.7, Corollary 4.3, Corollary 4.8) have complexity O(N + √N L ε⁻¹), compared to the deterministic methods with O(N L_F ε⁻¹). Our methods improve over the deterministic variants as long as L ≤ √N L_F. This is a similar improvement over deterministic complexity as accelerated variance reduction achieves for minimization problems [WS16; All17]. To our knowledge, the only precedent with a result similar to ours is the work [Car+19], where spurious assumptions were required (see Section 1.1 and Table 1), the complexity had additional logarithmic terms, and a complicated three-loop algorithm was needed.
Table 2: Structure of the paper.
One epoch requires one evaluation of F and 2K evaluations of F_ξ; therefore, in total we have N + 2K = 2N. To reach ε-accuracy, we need O(L/(√N ε)) epochs. Hence, the final complexity is O(N + L√N/ε).
Figure 1: Euclidean setup. Left: policeman and burglar matrix [Nem13]; middle and right: two test matrices given in [Nem+09, Section 4.5]. Each panel plots the duality gap against epochs for EG, EG-Car+19, EG-Alg2, and EG-Alg1.
In the unconstrained setting, this method is also known as Optimistic Mirror Descent (OMD) or Optimistic Gradient Descent Ascent (OGDA) [RS13; Das+18] and is also equivalent to the classical Popov's method [Pop80].
Acknowledgments

Appendix

Proof of Theorem 2.3. By the proof of Lemma 2.2, without removing the term −α‖z^{k+1/2} − z^k‖² in (7), we obtain the corresponding inequality (53). By the Robbins-Siegmund theorem [RS71, Theorem 1], we have that Φ_k(z*) converges a.s. and that ‖z^{k+1/2} − z^k‖ and ‖z^{k+1/2} − w^k‖ converge to 0 a.s. Hence, we can construct Ξ with P(Ξ) = 1 such that for all θ ∈ Ξ and all z* ∈ Sol, ‖Z^k(θ) − Z*‖_Q converges. Moreover, by taking total expectation in (53), we get Σ_{k=1}^∞ E‖z^{k+1} − z^{k+1/2}‖² < ∞. By the Fubini-Tonelli theorem, we have E[Σ_{k=1}^∞ ‖z^{k+1} − z^{k+1/2}‖²] < ∞, and since Σ_{k=1}^∞ ‖z^{k+1} − z^{k+1/2}‖² is nonnegative, it is finite a.s.; thus z^{k+1} − z^{k+1/2} converges to 0 a.s.
A. Alacaoglu, Y. Malitsky, and V. Cevher. "Forward-reflected-backward method with variance reduction". In: Computational Optimization and Applications 80.2 (2021), pp. 321-346.
Z. Allen-Zhu. "Katyusha: The first direct acceleration of stochastic gradient methods". In: Journal of Machine Learning Research 18.1 (2017), pp. 8194-8244. DOI: 10.1145/3055399.3055448.
Z. Allen-Zhu and Y. Yuan. "Improved SVRG for non-strongly-convex or sum-of-non-convex objectives". In: International Conference on Machine Learning. PMLR, 2016, pp. 1080-1089.
P. Balamurugan and F. Bach. "Stochastic variance reduction methods for saddle-point problems". In: Advances in Neural Information Processing Systems. 2016, pp. 1416-1424.
A. Böhm, M. Sedlmayer, E. R. Csetnek, and R. I. Boţ. "Two steps at a time - taking GAN training in stride with Tseng's method". In: arXiv:2006.09033 (2020).
R. I. Boţ, P. Mertikopoulos, M. Staudigl, and P. T. Vuong. "Minibatch forward-backward-forward methods for solving stochastic variational inequalities". In: Stochastic Systems 11.2 (2021), pp. 112-139. URL: https://pubsonline.informs.org/doi/abs/10.1287/stsy.2019.0064.
Y. Carmon, Y. Jin, A. Sidford, and K. Tian. "Variance reduction for matrix games". In: Advances in Neural Information Processing Systems. 2019, pp. 11377-11388.
Y. Carmon, Y. Jin, A. Sidford, and K. Tian. "Coordinate methods for matrix games". In: IEEE 61st Annual Symposium on Foundations of Computer Science. IEEE, 2020, pp. 283-293.
A. Chambolle and T. Pock. "A first-order primal-dual algorithm for convex problems with applications to imaging". In: Journal of Mathematical Imaging and Vision 40.1 (2011), pp. 120-145. DOI: 10.1007/s10851-010-0251-1.
T. Chavdarova, G. Gidel, F. Fleuret, and S. Lacoste-Julien. "Reducing noise in GAN training with variance reduced extragradient". In: Advances in Neural Information Processing Systems. 2019, pp. 391-401.
K. L. Clarkson, E. Hazan, and D. P. Woodruff. "Sublinear optimization for machine learning". In: Journal of the ACM 59.5 (2012), pp. 1-49. DOI: 10.1145/2371656.2371658.
P. L. Combettes and J.-C. Pesquet. "Stochastic quasi-Fejér block-coordinate fixed point iterations with random sweeping". In: SIAM Journal on Optimization 25.2 (2015), pp. 1221-1248.
L. Condat. "Fast projection onto the simplex and the l1 ball". In: Mathematical Programming 158.1 (2016), pp. 575-585. DOI: 10.1007/s10107-015-0946-6.
S. Cui and U. V. Shanbhag. "On the analysis of variance-reduced and randomized projection variants of single projection schemes for monotone stochastic variational inequality problems". In: Set-Valued and Variational Analysis 29.2 (2021), pp. 453-499.
C. Daskalakis, A. Ilyas, V. Syrgkanis, and H. Zeng. "Training GANs with Optimism". In: International Conference on Learning Representations. 2018.
A. Defazio, F. Bach, and S. Lacoste-Julien. "SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives". In: Advances in Neural Information Processing Systems. 2014, pp. 1646-1654.
E. Esser, X. Zhang, and T. F. Chan. "A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science". In: SIAM Journal on Imaging Sciences 3.4 (2010), pp. 1015-1046. DOI: 10.1137/09076934X.
F. Facchinei and J.-S. Pang. Finite-dimensional variational inequalities and complementarity problems. Springer Science & Business Media, 2007.
I. Gemp and S. Mahadevan. "Global convergence to the equilibrium of GANs using variational inequalities". In: arXiv:1808.01531 (2018).
G. Gidel, H. Berard, G. Vignoud, P. Vincent, and S. Lacoste-Julien. "A Variational Inequality Perspective on Generative Adversarial Networks". In: International Conference on Learning Representations. 2019.
N. Golowich, S. Pattathil, C. Daskalakis, and A. Ozdaglar. "Last iterate is slower than averaged iterate in smooth convex-concave saddle point problems". In: Conference on Learning Theory. PMLR, 2020, pp. 1758-1784.
E. Gorbunov, H. Berard, G. Gidel, and N. Loizou. "Stochastic extragradient: General analysis and improved rates". In: International Conference on Artificial Intelligence and Statistics. PMLR, 2022, pp. 7865-7901.
R. M. Gower, M. Schmidt, F. Bach, and P. Richtárik. "Variance-reduced methods for machine learning". In: Proceedings of the IEEE 108.11 (2020), pp. 1968-1983. DOI: 10.1109/JPROC.2020.3028013.
M. D. Grigoriadis and L. G. Khachiyan. "A sublinear-time randomized approximation algorithm for matrix games". In: Operations Research Letters 18.2 (1995), pp. 53-58. DOI: 10.1016/0167-6377(95)00032-0.
Y. Han, G. Xie, and Z. Zhang. "Lower complexity bounds of finite-sum optimization problems: The results and construction". In: arXiv:2103.08280 (2021).
T. Hofmann, A. Lucchi, S. Lacoste-Julien, and B. McWilliams. "Variance reduced stochastic gradient descent with neighbors". In: Advances in Neural Information Processing Systems. 2015, pp. 2305-2313.
A. N. Iusem, A. Jofré, R. I. Oliveira, and P. Thompson. "Extragradient method with variance reduction for stochastic variational inequalities". In: SIAM Journal on Optimization 27.2 (2017), pp. 686-724. DOI: 10.1137/15M1031953.
R. Johnson and T. Zhang. "Accelerating stochastic gradient descent using predictive variance reduction". In: Advances in Neural Information Processing Systems. 2013, pp. 315-323.
G. M. Korpelevich. "The extragradient method for finding saddle points and other problems". In: Ekon. Mat. Metody 12 (1976), pp. 747-756.
D. Kovalev, S. Horvath, and P. Richtárik. "Don't Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop". In: International Conference on Algorithmic Learning Theory. 2020, pp. 451-467.
Y. Malitsky. "Projected reflected gradient methods for monotone variational inequalities". In: SIAM Journal on Optimization 25.1 (2015), pp. 502-520. DOI: 10.1137/14097238X.
Y. Malitsky and M. K. Tam. "A forward-backward splitting method for monotone inclusions without cocoercivity". In: SIAM Journal on Optimization 30.2 (2020), pp. 1451-1472. DOI: 10.1137/18M1207260.
P. Mertikopoulos, B. Lecouat, H. Zenati, C.-S. Foo, V. Chandrasekhar, and G. Piliouras. "Optimistic mirror descent in saddle-point problems: Going the extra(-gradient) mile". In: International Conference on Learning Representations. 2019.
K. Mishchenko, D. Kovalev, E. Shulgin, P. Richtárik, and Y. Malitsky. "Revisiting stochastic extragradient". In: International Conference on Artificial Intelligence and Statistics. PMLR, 2020, pp. 4573-4582.
A. Nemirovski. "Prox-method with rate of convergence O(1/t) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems". In: SIAM Journal on Optimization 15.1 (2004), pp. 229-251. DOI: 10.1137/S1052623403425629.
A. Nemirovski. Mini-Course on Convex Programming Algorithms. Lecture notes. 2013.
A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. "Robust stochastic approximation approach to stochastic programming". In: SIAM Journal on Optimization 19.4 (2009), pp. 1574-1609. DOI: 10.1137/070704277.
Y. Nesterov. "Smooth minimization of non-smooth functions". In: Mathematical Programming 103.1 (2005), pp. 127-152. DOI: 10.1007/s10107-004-0552-5.
Y. Nesterov. "Dual extrapolation and its applications to solving variational inequalities and related problems". In: Mathematical Programming 109.2-3 (2007), pp. 319-344. DOI: 10.1007/s10107-006-0034-z.
Y. Nesterov and A. Nemirovski. "On first order algorithms for l1/nuclear norm minimization". In: Acta Numerica 22 (2013), pp. 509-575. DOI: 10.1017/S096249291300007X.
L. D. Popov. "A modification of the Arrow-Hurwicz method for search of saddle points". In: Mathematical Notes of the Academy of Sciences of the USSR 28.5 (1980), pp. 845-848. DOI: 10.1007/BF01141092.
A. Rakhlin and K. Sridharan. "Online learning with predictable sequences". In: Conference on Learning Theory. PMLR, 2013, pp. 993-1019.
S. J. Reddi, A. Hefny, S. Sra, B. Poczos, and A. Smola. "Stochastic variance reduction for nonconvex optimization". In: International Conference on Machine Learning. PMLR, 2016, pp. 314-323.
H. Robbins and D. Siegmund. "A convergence theorem for non negative almost supermartingales and some applications". In: Optimizing Methods in Statistics. 1971, pp. 233-257. DOI: 10.1016/B978-0-12-604550-5.50015-8.
P. Tseng. "A modified forward-backward splitting method for maximal monotone mappings". In: SIAM Journal on Control and Optimization 38.2 (2000), pp. 431-446. DOI: 10.1137/S0363012998338806.
B. Woodworth and N. Srebro. "Tight complexity bounds for optimizing composite objectives". In: Advances in Neural Information Processing Systems. 2016, pp. 3646-3654.
H. Zhang. "Extragradient and extrapolation methods with generalized Bregman distances for saddle point problems". In: Operations Research Letters 50.3 (2022), pp. 329-334.
| [
"https://github.com/ymalitsky/vr_for_vi"
]
|
[
"Forward volume magnetoacoustic spin wave excitation with micron-scale spatial resolution",
"Forward volume magnetoacoustic spin wave excitation with micron-scale spatial resolution"
]
| [
"M Küß \nExperimental Physics I\nInstitute of Physics\nUniversity of Augsburg\n86135AugsburgGermany\n",
"F Porrati \nInstitute of Physics\nGoethe University\n60438Frankfurt am MainGermany\n",
"A Hörner \nExperimental Physics I\nInstitute of Physics\nUniversity of Augsburg\n86135AugsburgGermany\n",
"M Weiler \nFachbereich Physik and Landesforschungszentrum OPTIMAS\nTechnische Universität Kaiserslautern\n67663KaiserslauternGermany\n",
"M Albrecht \nExperimental Physics IV\nInstitute of Physics\nUniversity of Augsburg\n86135AugsburgGermany\n",
"M Huth \nInstitute of Physics\nGoethe University\n60438Frankfurt am MainGermany\n",
"A Wixforth \nExperimental Physics I\nInstitute of Physics\nUniversity of Augsburg\n86135AugsburgGermany\n"
]
| [
"Experimental Physics I\nInstitute of Physics\nUniversity of Augsburg\n86135AugsburgGermany",
"Institute of Physics\nGoethe University\n60438Frankfurt am MainGermany",
"Experimental Physics I\nInstitute of Physics\nUniversity of Augsburg\n86135AugsburgGermany",
"Fachbereich Physik and Landesforschungszentrum OPTIMAS\nTechnische Universität Kaiserslautern\n67663KaiserslauternGermany",
"Experimental Physics IV\nInstitute of Physics\nUniversity of Augsburg\n86135AugsburgGermany",
"Institute of Physics\nGoethe University\n60438Frankfurt am MainGermany",
"Experimental Physics I\nInstitute of Physics\nUniversity of Augsburg\n86135AugsburgGermany"
]
| []
| The interaction between surface acoustic waves (SAWs) and spin waves (SWs) in a piezoelectric-magnetic thin film heterostructure yields potential for the realization of novel microwave devices and applications in magnonics. In the present work, we characterize magnetoacoustic waves in three adjacent magnetic microstripes made from CoFe+Ga, CoFe, and CoFe+Pt with a single pair of tapered interdigital transducers (TIDTs). The magnetic micro-stripes were deposited by focused electron beam-induced deposition (FEBID) and focused ion beam-induced deposition (FIBID) direct-writing techniques. The transmission characteristics of the TIDTs are leveraged to selectively address the individual micro-stripes. Here, the external magnetic field is continuously rotated out of the plane of the magnetic thin film and the forward volume SW geometry is probed with the external magnetic field along the film normal. Our experimental findings are well explained by an extended phenomenological model based on a modified Landau-Lifshitz-Gilbert approach that considers SWs with nonzero wave vectors. Magnetoelastic excitation of forward volume SWs is possible because of the vertical shear strain ε xz of the Rayleigh-type SAW. arXiv:2208.05205v1 [cond-mat.mtrl-sci] | 10.1063/5.0101526 | [
"https://export.arxiv.org/pdf/2208.05205v1.pdf"
]
| 251,467,844 | 2208.05205 | 3c1a116d5da842db9d45981923b2a03f491e4e65 |
Forward volume magnetoacoustic spin wave excitation with micron-scale spatial resolution
M Küß
Experimental Physics I
Institute of Physics
University of Augsburg
86135AugsburgGermany
F Porrati
Institute of Physics
Goethe University
60438Frankfurt am MainGermany
A Hörner
Experimental Physics I
Institute of Physics
University of Augsburg
86135AugsburgGermany
M Weiler
Fachbereich Physik and Landesforschungszentrum OPTIMAS
Technische Universität Kaiserslautern
67663KaiserslauternGermany
M Albrecht
Experimental Physics IV
Institute of Physics
University of Augsburg
86135AugsburgGermany
M Huth
Institute of Physics
Goethe University
60438Frankfurt am MainGermany
A Wixforth
Experimental Physics I
Institute of Physics
University of Augsburg
86135AugsburgGermany
Forward volume magnetoacoustic spin wave excitation with micron-scale spatial resolution
The interaction between surface acoustic waves (SAWs) and spin waves (SWs) in a piezoelectric-magnetic thin film heterostructure yields potential for the realization of novel microwave devices and applications in magnonics. In the present work, we characterize magnetoacoustic waves in three adjacent magnetic microstripes made from CoFe+Ga, CoFe, and CoFe+Pt with a single pair of tapered interdigital transducers (TIDTs). The magnetic micro-stripes were deposited by focused electron beam-induced deposition (FEBID) and focused ion beam-induced deposition (FIBID) direct-writing techniques. The transmission characteristics of the TIDTs are leveraged to selectively address the individual micro-stripes. Here, the external magnetic field is continuously rotated out of the plane of the magnetic thin film and the forward volume SW geometry is probed with the external magnetic field along the film normal. Our experimental findings are well explained by an extended phenomenological model based on a modified Landau-Lifshitz-Gilbert approach that considers SWs with nonzero wave vectors. Magnetoelastic excitation of forward volume SWs is possible because of the vertical shear strain ε xz of the Rayleigh-type SAW. arXiv:2208.05205v1 [cond-mat.mtrl-sci]
I. INTRODUCTION
Over the last decade, increasing attention has been paid to the resonant coupling between surface acoustic waves (SAWs) and spin waves (SWs) 1-3. On the one hand, magnetoacoustic interaction opens up a route toward energy-efficient SW excitation and manipulation in the field of magnonics 4. On the other hand, magnetoacoustic interaction greatly affects the properties of the SAW, which in turn can be used to devise new types of microwave devices such as magnetoacoustic sensors 5,6 or microwave acoustic isolators 7-14. High flexibility in the design of these devices is possible since the properties of the SWs can be varied over a wide range of parameters. For instance, the SW dispersion can be reprogrammed by external magnetic fields or electrical currents 15,16, and more complex designs of the magnet geometry 17,18 or the use of multilayers 14,19-21 allow for multiple dispersion branches with potentially large nonreciprocal behavior. Vice versa, SAW-SW interaction can also be used as an alternative method to characterize magnetic thin films, SWs, and SAWs 12,20,22,23. The design of future magnetoacoustic devices can benefit from the fact that SAW technology is well developed and already employed in manifold ways in our daily life 24-27. Efficient excitation and detection of SAWs with metallic comb-shaped electrodes, so-called interdigital transducers (IDTs), is possible on piezoelectric substrates. For example, acoustic delay lines with low insertion losses of about 6 dB at 4 GHz have been realized 28. Fundamental limitations in the SAW excitation efficiency are mainly given by interaction with thermal phonons, spurious excitation of longitudinal acoustic waves in the air, and non-linear effects at high input power 27,29.

a) Electronic mail: [email protected]
FIG. 1. Optical micrograph of the fabricated device. Rayleigh-type SAWs are excited on the piezoelectric substrate LiNbO3 by a tapered IDT (TIDT) within a wide range of frequencies f0 − ∆f_TIDT/2, …, f0 + ∆f_TIDT/2. Depending on the applied frequency, SWs can be magnetoacoustically excited in one of the three different magnetic micro-stripes, which were deposited by FEBID and FIBID. Magnetoacoustic transmission measurements are performed with a pair of TIDTs.
So far, IDTs which excite SAWs homogeneously over the whole aperture have been used in resonant magnetoacoustic experiments. Apart from Refs. 30,31, these studies have been performed with an external magnetic field that was exclusively oriented in the plane of the magnetic thin film.
Here, we experimentally demonstrate targeted magnetoacoustic excitation and characterization of SWs in the forward volume SW geometry with micron-scale spatial resolution. To do so, magnetoacoustic transmission measurements are performed with one pair of tapered interdigital transducers (TIDTs) on three different magnetic micro-stripes, as shown in Fig. 1.

FIG. 2. Relation between the coordinate systems employed. The (x, y, z) frame of reference is defined by the SAW propagation direction and the surface normal. We employ the (1, 2, 3) coordinate system to solve the LLG equation. Here, the 3-direction corresponds to the equilibrium magnetization orientation and the 2-direction is always aligned in the plane of the magnetic film. The inset shows the precession cone of the magnetization, with the transverse magnetization components m1 and m2. The coordinate system is taken from Ref. 30.

This study is carried out in different geometries in which the external magnetic field is tilted out of the plane of the magnetic thin film. We demonstrate that magnetoelastic excitation of SWs is possible even if the static magnetization is parallel to the magnetic film normal, the so-called forward volume spin wave (FVSW) geometry, thanks to the vertical shear strain component ε_xz of the Rayleigh-type SAW. The experimental results are simulated with an extended phenomenological model that takes the arbitrary orientation of the external magnetic field and magnetization into account.
The magnetic micro-stripes with lateral dimensions of about 20 µm × 40 µm and different magnetic properties were deposited by focused electron beam-induced deposition (FEBID) and focused ion beam-induced deposition (FIBID). One particular advantage of using the direct-write approach 32,33 to fabricate the micro-stripes is the ease with which the magnetic properties can be tailored, such as the saturation magnetization 34 . Moreover, direct-write capabilities make the fabrication of complex 3D magnetic structures on the nano-scale possible. Applications in magnonics are, for instance, 3D nanovolcanoes with tunable higher-frequency eigenmodes 35 , 2D and 3D magnonic crystals with SW bandgaps 36,37 , SW beam steering via graded refractive index, and frustrated 3D magnetic lattices 38,39 .
II. THEORY
A surface acoustic wave is a sound wave propagating along the surface of a solid material with evanescent displacement normal to the surface. Density, surface boundary conditions, and the elastic, dielectric, and potentially piezoelectric properties of the material mainly determine whether and which SAW mode can be launched. Typical SAW modes on homogeneous substrates show a linear dispersion with a constant propagation velocity of about c_SAW = 3500 m/s 27. We use a standard Y-cut Z-propagation LiNbO3 substrate, which gives rise to a Rayleigh-type SAW. On the substrate surface, this SAW mode causes a retrograde elliptical lattice motion in a plane defined by the SAW propagation direction and the surface normal 27,40.
An optical micrograph of the fabricated magnetoacoustic device is shown in Fig. 1. Different excitation frequencies correspond to different positions of the TIDT along the length of its aperture W. To describe the magnetoacoustic transmission of the three different magnetic thin films, we extend the phenomenological model of Dreher et al. 30 and Küß et al. 12 in terms of magnetoacoustically excited SWs with nonzero wave vector and arbitrary orientation of the equilibrium magnetization direction, as is detailed next.
A. Magnetoacoustic driving fields and SAW transmission
In the following, we use the (x, y, z) coordinate system shown in Fig. 2 (Ref. 30). The x- and z-axes are parallel to the wave vector k_SAW = k x̂ of the SAW and normal to the plane of the magnetic micro-stripes, respectively. The equilibrium direction of the magnetization M and the orientation of the external magnetic field H are specified by the angles (θ_0, φ_0) and (θ_H, φ_H). Here, θ_0 and φ_0 are calculated by minimization of the static free energy. For that, we take into account the external magnetic field H, the thin-film shape anisotropy M_s ẑ with saturation magnetization M_s, and a small uniaxial in-plane anisotropy H_ani, which encloses an angle φ_ani with the x-axis 12,30. Because the characterized magnetic thin films are relatively thick 12 (d ≥ 24 nm), we neglect the surface anisotropy. The SAW-SW interaction can be described by effective dynamic magnetoacoustic driving fields, which exert a torque on the static magnetization 41. The resulting damped precession of M is then determined by the Landau-Lifshitz-Gilbert equation for small precession amplitudes. To this end, we introduce the rotated (1, 2, 3) Cartesian coordinate system in Fig. 2. The 3-axis is parallel to M and the 2-axis is aligned in the film plane 41. In this phenomenological model, it is assumed that the frequencies f and wave vectors k of SAW and SW are identical 12,42. Furthermore, only magnetic films with small thicknesses |k|d ≪ 1 and homogeneous strain in the z-direction of the magnetic film are considered 12,30.
h(x, t) = (h̃_1, h̃_2)^T k sqrt(R P_SAW(x) / (c_SAW W)) e^{i(kx − ωt)}.  (1)

Here, ω = 2πf and c_SAW are the angular frequency and propagation velocity of the SAW, W is the width of the aperture of the TIDT, and the constant R = 1.4 × 10^11 J/m³ (Ref. 43). The normalized effective magnetoelastic driving fields h̃_1 and h̃_2 of a Rayleigh wave with strain components ε_{kl=xx,zz,xz} ≠ 0 are 12,30
(h̃_1, h̃_2)^T = (2/µ_0) [ b_1 ã_xx (−sinθ_0 cosθ_0 cos²φ_0, sinθ_0 sinφ_0 cosφ_0)^T
                + b_1 ã_zz (sinθ_0 cosθ_0, 0)^T
                + b_2 ã_xz (−cos(2θ_0) cosφ_0, cosθ_0 sinφ_0)^T ],  (2)
where b_{1,2} are the magnetoelastic coupling constants for cubic symmetry of the ferromagnetic layer 7,30, ã_kl = ε_{kl,0}/(|k||u_{z,0}|) are the normalized amplitudes of the strain, and ε_{kl,0} are the complex amplitudes of the strain. Furthermore, u_{z,0} is the amplitude of the lattice displacement in the z-direction. For the sake of simplicity, we neglect non-magnetoelastic interactions, like magneto-rotation coupling 12,22,44, spin-rotation coupling 45-47, or gyromagnetic coupling 48. In contrast to previous magnetoacoustic studies 10,12,20,22,23,42,49, where the equilibrium magnetization direction was aligned in the plane of the magnetic film (θ_0 = 90°), the strain component ε_zz results in a modified driving field for geometries with θ_0 ≠ 90°.
In the experiments, we characterize SAW-SW interaction for the three geometries depicted in Fig. 3. The oop0-, oop45-, and oop90-geometries are defined by the polar angle φ_H of the external magnetic field H. Since the symmetry of the magnetoacoustic driving field h essentially determines the magnitude of the magnetoacoustic interaction, we now discuss the orientation dependence of |µ_0 h̃(θ_0)| for the Rayleigh wave strain components ε_xx, ε_zz, and ε_xz separately, setting all other strain components equal to zero 30. In Fig. 4 we show a polar plot of the normalized magnitude of the driving field |µ_0 h̃(θ_0)|, using 2 b_{1,2} ã_kl = 1 T and assuming no in-plane anisotropy (H_ani = 0, φ_0 = φ_H). First, it is interesting that magnetoelastic excitation of SWs in the FV-geometry (θ_0 = 0°) can be solely mediated by the driving fields of the shear component ε_xz. Second, finite element method (FEM) eigenmode simulations reveal 50 that the strain component ε_zz is phase shifted by π with respect to ε_xx. Thus, the magnetoacoustic driving fields of ε_xx and ε_zz show a constructive superposition. Third, the SAW-SW helicity mismatch effect arises because of a ±π/2 phase shift of ε_xz with respect to ε_xx 8-12,23,30. Under an inversion of the SAW propagation direction (k → −k, or k_S21 → k_S12), the phase shift changes its sign (π/2 → −π/2). For measurements in the in-plane geometry, the SAW-SW helicity mismatch effect is attributed to a superposition of driving fields caused by ε_xx and ε_xz. This is in contrast to the oop90-geometry (φ_0 = 90°), where the SAW-SW helicity mismatch effect is mediated by the strain components ε_zz and ε_xz.
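A minimal numerical sketch of Eq. (2), evaluating |µ_0 h̃(θ_0)| separately for each strain component with the normalization 2 b_{1,2} ã_kl = 1 T used in Fig. 4; the phase factors between the components are dropped here, since only magnitudes are compared.

```python
import numpy as np

def mu0_h_magnitude(theta0_deg, phi0_deg, component):
    """|mu0 * h_tilde| from Eq. (2) for a single strain component,
    with 2 * b * a_kl = 1 T (all other strain components set to zero)."""
    t = np.deg2rad(theta0_deg)
    p = np.deg2rad(phi0_deg)
    if component == "xx":
        h1 = -np.sin(t) * np.cos(t) * np.cos(p) ** 2
        h2 = np.sin(t) * np.sin(p) * np.cos(p)
    elif component == "zz":
        h1, h2 = np.sin(t) * np.cos(t), 0.0
    elif component == "xz":
        h1 = -np.cos(2 * t) * np.cos(p)
        h2 = np.cos(t) * np.sin(p)
    else:
        raise ValueError(component)
    return np.hypot(h1, h2)

# FVSW geometry (theta0 = 0): only the shear strain eps_xz drives the SWs.
for comp in ("xx", "zz", "xz"):
    print(comp, mu0_h_magnitude(0.0, 0.0, comp))   # 0, 0, 1 (in T)
```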
The magnetoacoustic driving field causes the excitation of SWs in the magnetic film.
FIG. 3. The magnetoacoustic transmission is studied in the three geometries oop0, oop45, and oop90, which are defined by the polar angle φ_H of the external magnetic field H. Here, H is tilted with respect to the z-axis by the azimuthal angle θ_H.
FIG. 4. Polar plot of the normalized driving field's magnitude |µ_0 h̃(θ_0)| for the relevant strain components ε_xx, ε_zz, and ε_xz and for the different geometries oop0, oop45, and oop90, assuming φ_0 = φ_H. The distance from the origin indicates, for all panels, the normalized magnitude of the driving field. The driving field was calculated by Eq. (2) with 2 b_{1,2} ã_kl = 1 T. This diagram extends Fig. 4 of Ref. 30 by panels (c), (d), (e), (f), and (i).

Thus, the power of the traveling SAW decays exponentially while propagating through the magnetic film with length l_f and thickness d. With respect to the initial power P_0, the absorbed power of the SAW is
P_abs = P_0 [1 − exp(−C Im(h̃* χ̃ h̃))], with C = µ_0 l_f d k² / (2R).  (3)
The magnetic susceptibility tensor χ̃ describes the magnetic response to small time-varying magnetoacoustic fields and is calculated as described by Dreher et al. 30 for arbitrary equilibrium magnetization directions (θ_0, φ_0). Besides the external magnetic field, exchange coupling, and uniaxial in-plane anisotropy, we additionally take into account the dipolar fields for SWs with k ≠ 0, which are given in Eq. (A1) in Appendix A. Finally, to directly simulate the experimentally determined relative change of the SAW transmission ∆S_ij on the logarithmic scale, we use

∆S_ij = 10 lg((P_0 − P_abs)/P_0), with ij = 21 for k ≥ 0 and ij = 12 for k < 0,  (4)

for SAWs propagating parallel (k ≥ 0) and antiparallel (k < 0) to the x-axis.
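As a sketch of Eqs. (3) and (4), given a susceptibility tensor χ̃ and a driving field (h̃_1, h̃_2) at one field/frequency point (both assumed to be computed elsewhere), the transmission change follows in a few lines. The prefactor C below uses the form reconstructed above, and the numerical values of chi and h are placeholders.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def delta_S(h, chi, l_f, d, k, R=1.4e11):
    """Relative SAW transmission change (dB) from Eqs. (3) and (4).

    h   : complex driving-field vector (h1, h2)
    chi : complex 2x2 magnetic susceptibility at the given (H, f)
    The prefactor C = mu0 * l_f * d * k**2 / (2 * R) is the form
    assumed in the reconstruction of Eq. (3) above.
    """
    C = MU0 * l_f * d * k**2 / (2.0 * R)
    p_abs_frac = 1.0 - np.exp(-C * np.imag(np.conj(h) @ (chi @ h)))
    return 10.0 * np.log10(1.0 - p_abs_frac)   # = 10 lg((P0 - Pabs)/P0)

# Toy numbers only; chi and h are illustrative placeholders.
chi = np.array([[2.0 + 0.3j, 0.1j], [-0.1j, 1.5 + 0.2j]])
h = np.array([1.0e3 + 0.0j, 0.5e3j])           # A/m, illustrative only
print(delta_S(h, chi, l_f=40e-6, d=70e-9, k=5.9e6))
```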
B. Spin wave dispersion
Resonant SAW-SW excitation is possible if the dispersion relations of SAW and SW intersect in the uncoupled state. The SW dispersion is obtained by setting det χ̃⁻¹ = 0 and taking the real part of the solution for small SW damping constants α. If we neglect the uniaxial in-plane anisotropy (H_ani = 0, φ_0 = φ_H), we obtain 51

f = (γ µ_0 / 2π) sqrt(H_11 H_22 − H_12²)  (5)

with

H_11 = H cos(θ_0 − θ_H) + D k² − M_s cos(2θ_0) + M_s (1 − G_0) [cos(2θ_0) − sin²φ_0 cos²θ_0],
H_22 = H cos(θ_0 − θ_H) + D k² − M_s cos²θ_0 + M_s (1 − G_0) sin²φ_0,
H_12 = M_s (1 − G_0) sinφ_0 cosφ_0 cosθ_0.  (6)
Here, γ is the gyromagnetic ratio, G_0 = (1 − e^{−|k|d})/(|k|d), and D = 2A/(µ_0 M_s) with the magnetic exchange constant A. We exemplarily calculated the SW resonance frequency f in Fig. 5(a) for the oop0-geometry as a function of the external magnetic field magnitude µ_0 H. The corresponding azimuthal angle θ_0 of the equilibrium magnetization orientation is shown in Fig. 5(b). For the simulation, we use, besides φ_0 = 0°, k = 5.9 µm⁻¹, µ_0 M_s = 1 T, and H_ani = 0, the parameters of the CoFe+Ga thin film in Table II. Additionally, the resonance frequency f = 3 GHz of a SAW with k = 5.9 µm⁻¹ is depicted by the dashed line in Fig. 5(a). The dispersion f(µ_0 H) changes strongly with the azimuthal angle θ_H of the applied external magnetic field. For the FVSW-geometry θ_H = 0°, the magnetic thin film is saturated (θ_0 = 0°) when the magnetic field overcomes the magnetic shape anisotropy, µ_0 H > µ_0 M_s, and resonant SAW-SW interaction is only possible at µ_0 H = 1.06 T. In contrast, for θ_H = 0.9°, we expect magnetoacoustic interaction in a wide range µ_0 H ≈ 0.7, …, 1.0 T, where the dispersions of SAW and SW intersect. For this geometry and µ_0 H ≤ 1.5 T, the magnetic film is not fully saturated (θ_0 ≠ 0.9°).
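A direct numerical transcription of Eqs. (5) and (6) may be useful; the bracketing follows the reconstruction above, and g = 2.18 and A_ex = 2.47 × 10⁻¹¹ J/m are rough assumed values, so the printed number is only indicative.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def sw_frequency(H, theta_h, theta0, phi0, k, Ms, d, A_ex, g=2.18):
    """Spin-wave frequency from Eqs. (5) and (6). Angles in radians,
    H and Ms in A/m, k in 1/m, d in m, exchange constant A_ex in J/m."""
    gamma = g * 9.274e-24 / 1.0546e-34      # g * muB / hbar, rad s^-1 T^-1
    G0 = (1 - np.exp(-abs(k) * d)) / (abs(k) * d)
    D = 2 * A_ex / (MU0 * Ms)               # so that D * k**2 is in A/m
    c, s = np.cos, np.sin
    H11 = (H * c(theta0 - theta_h) + D * k**2 - Ms * c(2 * theta0)
           + Ms * (1 - G0) * (c(2 * theta0) - s(phi0)**2 * c(theta0)**2))
    H22 = (H * c(theta0 - theta_h) + D * k**2 - Ms * c(theta0)**2
           + Ms * (1 - G0) * s(phi0)**2)
    H12 = Ms * (1 - G0) * s(phi0) * c(phi0) * c(theta0)
    return gamma * MU0 / (2 * np.pi) * np.sqrt(H11 * H22 - H12**2)

# FVSW example in the spirit of Fig. 5(a): theta_H = theta_0 = 0, phi_0 = 0.
f = sw_frequency(H=1.06 / MU0, theta_h=0.0, theta0=0.0, phi0=0.0,
                 k=5.9e6, Ms=1.0 / MU0, d=24e-9, A_ex=2.47e-11)
print(f / 1e9, "GHz")   # of the order of the 3 GHz SAW frequency
```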
III. EXPERIMENTAL SETUP
In contrast to previous magnetoacoustic studies performed with conventional IDTs 10,12,20,22,23,31,42,49, here we use "tapered" or "slanted" interdigital transducers (TIDTs) 52-55 to characterize SAW-SW interaction in three different magnetic micro-stripes in one run. Although the fingers of the TIDT are slanted, the SAW propagates dominantly parallel to the x-axis in Fig. 1 because of the strong beam steering effect of the Y-cut Z-propagation LiNbO3 substrate 27,52. The linear change of the periodicity p(y) along the transducer aperture W results in a spatial dependence of the SAW resonance frequency f(y) = c_SAW/p(y) 52. Thus, a TIDT has a wide transmission band and can be thought of as consisting of multiple conventional IDTs that are connected electrically in parallel 54. To a good approximation, the frequency bandwidth of a conventional IDT is given by ∆f_IDT = 0.9 f_0/N and is constant for higher harmonic resonance frequencies. From the bandwidth ∆f_TIDT of the TIDT, the width of the acoustic beam w at constant frequency can be estimated 55 with

w = W ∆f_IDT / ∆f_TIDT.  (7)

The TIDTs are fabricated out of Ti(5)/Al(70) (all thicknesses are given in units of nm), have an aperture of W = 100 µm, the number of finger-pairs is N = 22, and the periodicity p(y) changes from 3.08 µm to 3.72 µm. As shown in Fig. 6(a), we operate the TIDT at the third harmonic resonance, which corresponds to a transmission band and SAW wavelength in the ranges 2.69 GHz < f < 3.22 GHz and 1.06 µm < λ < 1.27 µm. According to Eq. (7), we expect for the width of the acoustic beam at constant frequency w = 100 µm · (41 MHz/530 MHz) ≈ 7.7 µm. Moreover, Streibel et al. argue that internal acoustic reflections in the single-electrode structure used additionally lower w by about a factor of four 55. Since λ is in the range of w, diffraction effects can be expected. These beam spreading losses are partly compensated by the beam steering effect and the frequency selectivity of the receiving transducer, which filters out the diffracted portions of the SAW 55.
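A tiny numerical check of Eq. (7) and the numbers above; the fundamental frequency f0 is inferred here as roughly one third of the third-harmonic band centre and is therefore an assumption.

```python
W = 100e-6                 # TIDT aperture (m)
f0, N = 0.99e9, 22         # assumed fundamental resonance (Hz) and finger pairs
df_idt = 0.9 * f0 / N      # ~41 MHz, constant also at higher harmonics
df_tidt = 3.22e9 - 2.69e9  # ~530 MHz transmission band of the TIDT
w = W * df_idt / df_tidt
print(w * 1e6, "um")       # ~7.6 um, matching the estimate in the text
```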
The three different magnetic micro-stripes in Fig. 1 were deposited by direct-writing techniques between the two TIDTs, which are 800 µm apart. For details we refer to Appendix B. The compositions of the deposited magnetic films were characterized by energy-dispersive X-ray spectroscopy (EDX). The results are summarized in Table I. More details about the microstructure and magnetic properties of CoFe can be found in Refs. 34,56. For the microstructure of mixed CoFe-Pt deposits we refer to Ref. 57, in which results of a detailed investigation of the microstructural and magnetic properties of fully analogous Co-Pt deposits are presented. We determined the thicknesses d and the root-mean-square roughness of the samples CoFe+Ga (24 ± 2), CoFe (72 ± 2), and CoFe+Pt (70 ± 2) by atomic force microscopy (AFM). The lengths and widths of all micro-stripes are identical, with l_f = 40 µm and w_f = 20 µm, except w_f^{CoFe+Ga} = 26 µm. The SAW transmission of our delay line device was characterized by a vector network analyzer. Based on the
∆S ij (µ 0 H) = S ij (µ 0 H) − S ij (2 T)(8)
to characterize SAW-SW coupling. Here ∆S ij is the magnitude of the complex transmission signal with ij ∈ {21, 12}. In all measurements, the magnetic field is swept from −2 T to 2 T.
IV. DISCUSSION
A. Experimental results
In Fig. 6(b), we show the magnetoacoustic transmission ∆S 21 as a function of external magnetic field magnitude and frequency for the FVSW-geometry (θ H ≈ 0°). Within the wide transmission band of the TIDT, the magnetoacoustic transmission ∆S 21 (µ 0 H) clearly differs for the three different frequency sub-bands, each of which spatially addresses one of the three different magnetic micro-stripes. Both, the maximum change of the transmission with Max(∆S CoFe ) and the resonance fields are different for the three films. The small signals ∆S 21 = 0 at frequencies corresponding to the gaps between the magnetic structures are attributed to diffraction effects. The apparent signal ∆S 21 at the edges of the transmission band is attributed to measurement noise. From Fig. 6(b) we identify the frequencies which correspond to the centers of the three magnetic films CoFe+Ga, CoFe, and CoFe+Pt as 2.78 GHz, 2.96 GHz, and 3.17 GHz, respectively. Further analysis is performed at these fixed frequencies.
In Fig. 7, we show the magnetoacoustic transmission ∆S 21 (µ 0 H, θ H ) of all three films in the oop0-, oop45-, and oop90-geometry (see Fig. 3) as a function of external magnetic field magnitude µ 0 H and orientation θ H in a range of −90°≤ θ H ≤ 90°with an increment of ∆θ H = 3.6°. For almost all geometries, the magnetoacoustic response ∆S 21 (µ 0 H, θ H ) has a star shape symmetry, which was already observed by Dreher et al. for Ni(50) thin films 30 . This symmetry results from magnetic shape anisotropy. The sharp resonances in Fig. 7 around θ H = 0°are studied in Fig. 8 in the range of −3.6°≤ θ H ≤ 3.6°with ∆θ H = 0.225°in more detail. For all three magnetic micro-stripes SWs can be magnetoacoustically excited in the FVSW-geometry (θ H = 0°) and the resonance fields µ 0 H res (θ H = 0°) differ. Additionally, the symmetry of the magnetoacoustic resonances µ 0 H res (θ H ) changes for the geometries oop0, oop45, and oop90 and the different magnetic micro-stripes. In general, the resonance fields |µ 0 H res | decrease if |φ H | is increased from 0°to 90°(oop0 to oop90). Moreover, the line symmetry with respect to θ H = 0°is broken, in particular for the oop45-, and oop90-geometry.
B. Simulation and Interpretation
To simulate the experimental results in Figs Table II. The complex amplitudes of the normalized strainã kl = ε kl,0 /|k||u z,0 | are estimated from a COMSOL 50 finite element method (FEM) simulation. Since we do not know the elastic constants and density of the magnetic micro-stripes, we assume a pure LiNbO 3 substrate with a perfectly conducting overlayer of zero thickness. Thus, the real values ofã kl might deviate from the assumed ones 12 . Furthermore, the normalized strain of the simulation was averaged over the thickness 0 ≤ z ≤ −d. The values for the SW effective damping α, magnetoelastic coupling for polycrystalline films 30 b 1 = b 2 and small phenomenological uniaxial in-plane anisotropy (H ani , φ ani ) were adjusted to obtain a good agreement between experiment and simulation. Thereby, α includes Gilbert damping and inhomogeneous line broadening 12 . The phenomenological uniaxial in-plane anisotropy could be caused by substrate clamping effects or the patterning strategy of the FEBID / FIBID direct-write process. Note that the values of all these parameters listed in Table II are very reasonable. For all three magnetic micro-stripes, the qualitative agreement between simulation and experiment in Figs. 7 and 8 is good. For magnetoelastic interaction, SWs can be excited in the FVSW-geometry (θ H = 0°) solely due to the vertical shear strain ε xz which causes a nonzero magnetoacoustic driving field, as discussed in Fig. 4. According to Eq. (2) the driving field mediated by ε xx,zz contributes for θ H = 0°. In Fig. 8, the intensity of the resonances for θ H = 0°is therefore more pronounced than for θ H = 0°. Because the driving fields, which are mediated by the strain ε xx and ε zz , are in phase, SW excitation in one of the out-of-plane geometries can be even more efficient than in the in-plane geometry.
The magnetoacoustic resonance fields of the three magnetic micro-stripes mainly differ, due to differences in M s and d, which strongly affect the corresponding dipolar fields of a SW. As expected from the SW dispersion in Fig. 5(a), we observe for the CoFe+Ga film in Fig. 8(a,b) for θ H = 0 a resonance at µ 0 H = 1.06 T with a narrow linewidth and for θ H = 0.9°a wide resonance between µ 0 H ≈ 0.7, ..., 1.0 T. The symmetry of the magnetoacoustic resonances µ 0 H res (θ H ) changes with the geometries oop0, oop45 and oop90 since the magnetic dipolar fields of the SW dispersion Eq. (5) depend on φ 0 . For CoFe+Pt, two resonances are observed in the oop00-geometry, whereas in the oop45-and oop90geometry confined oval-shaped resonances show up. This behavior can be modeled by assuming an uniaxial inplane anisotropy with φ ani ≈ 90°. In the oop00-geometry, the resonance with the lower resonant fields can be attributed to the switching of the in-plane direction of the equilibrium magnetization direction. In the oop45-and oop90-geometries, the resonance frequencies of the SWs are higher than the excitation frequency of the SAW for |θ H | > 0.7°. Thus, the magnetoacoustic response ∆S 21 is low for |θ H | > 0.7°in Figs. 8(o)-(r). We attribute discrepancies between experiment and simulation to the following effects: The phenomenological model solely considers an in-plane uniaxial anisotropy. Additional in-and out-of-plane anisotropies would result in a shift of the resonance fields. Furthermore, the strain is estimated by a simplified FEM simulation and assumed to be homogeneous along the thick-
FIG. 7. The magnetoacoustic transmission ∆S_21(µ_0 H, θ_H) of the magnetic micro-stripes CoFe+Ga (2.78 GHz), CoFe (2.96 GHz), and CoFe+Pt (3.17 GHz) is shown in the oop0-, oop45-, and oop90-geometry (see Fig. 3). Resonances are observed around θ_H = 0°, which are studied in more detail in Fig. 8. Simulation and experiment show good qualitative agreement.
C. Nonreciprocal behavior
The nonreciprocal behavior of the magnetoacoustic wave in the oop0-, oop45-, and oop90-geometries is exemplarily shown for CoFe+Ga in Fig. 9. If the magnetoacoustic wave propagates in inverted directions k_S21 and k_S12 (k and −k), the magnetoacoustic transmissions ∆S_21(µ_0 H, θ_H) and ∆S_12(µ_0 H, θ_H) differ for the oop45- and oop90-geometry. The qualitative agreement between experiment and simulation is also good with respect to the nonreciprocity. The SAW-SW helicity mismatch effect, discussed in the theory section, causes ∆S_21(µ_0 H, θ_H) ≠ ∆S_12(µ_0 H, θ_H) in Fig. 9 and the broken line symmetry with respect to θ_H = 0° in Figs. 8 and 9. So far, nonreciprocal magnetoacoustic transmission has only been observed in studies where the external magnetic field was aligned in the plane of the magnetic film (θ_H = 90°) 8-12,23,30. The magnetoacoustic driving field in Eq. (2) is linearly polarized along the 1-axis for φ_0 = 0. Thus, no nonreciprocity due to the SAW-SW helicity mismatch effect is observed in the oop0-geometry. In contrast, the driving field has a helicity in the oop45- and oop90-geometry. Since this helicity is inverted under inversion of the propagation direction of the SAW (ε_{xz,0} → −ε_{xz,0}), nonreciprocal behavior shows up in the oop45- and oop90-geometry. In comparison to the experimental results, the simulation slightly underestimates the nonreciprocity. This is mainly attributed to magneto-rotation coupling 12,22,44, which can be modeled by a modulated effective coupling constant b_{2,eff} and can result in an enhancement of the SAW-SW helicity mismatch effect 12,22.
V. CONCLUSIONS
In conclusion, we have demonstrated magnetoacoustic excitation and characterization of SWs with micron-scale spatial resolution using TIDTs. The magnetoacoustic response at different frequencies, which lie within the wide transmission band of the TIDT, can be assigned to the spatially separated CoFe+Ga, CoFe, and CoFe+Pt magnetic micro-stripes. SAW-SW interaction with micron-scale spatial resolution can be interesting for future applications in magnonics and for the realization of new types of microwave devices such as magnetoacoustic sensors 5,6,60 or microwave acoustic isolators 14,19-21. For instance, giant nonreciprocal SAW transmission was observed in magnetic bilayers, which were proposed for building re-configurable acoustic isolators 14,19-21. In combination with TIDTs, acoustic isolators that show different nonreciprocal behavior in adjacent frequency bands could be realized. Furthermore, if two orthogonal delay lines are combined in a cross-shaped structure, resolution of the magnetoacoustic interaction of different magnetic micro-structures in two dimensions can potentially be achieved 55,61.
In addition, we extended the theoretical model of magnetoacoustic wave transmission 12,30 in terms of SWs with nonzero wave vector and arbitrary out-of-plane orientation of the static magnetization direction. This phenomenological model describes the experimental results for the CoFe+Ga, CoFe, and CoFe+Pt magnetic micro-stripes in different geometries of the external magnetic field, including the FVSW-geometry, in a good qualitative way. We find that FVSWs can be magnetoelastically excited by Rayleigh-type SAWs due to the shear strain component ε_xz. Also magneto-rotation coupling 12,22,44, spin-rotation coupling 45-47, or gyromagnetic coupling 48 may contribute to the excitation of FVSWs. Since the SAW-SW helicity mismatch effect, which is related to ε_xz and the effective coupling constant b_{2,eff}, is low in Ni thin films 9,30,42,62,63, we expect a low excitation efficiency for FVSWs in Ni. In contrast to the previously discussed in-plane geometry, the strain component ε_zz of Rayleigh-type waves plays an important role in the out-of-plane geometries and can result in an enhanced SAW-SW coupling efficiency and SAW-SW helicity mismatch effect.

FIG. 9. Nonreciprocal magnetoacoustic waves are characterized by different transmission amplitudes ∆S_21 and ∆S_12 for oppositely propagating SAWs with wave vectors k_S21 and k_S12. The nonreciprocal transmission is exemplarily shown for the magnetic micro-stripe CoFe+Ga (2.78 GHz) in the oop0-, oop45-, and oop90-geometry for an almost out-of-plane oriented external magnetic field (θ_H = −3.6°, …, 3.6°). Nonreciprocal behavior can solely be observed in the oop45- and oop90-geometry, which is nicely reproduced by the simulation.
Here, m1,2 are the precession amplitudes of the normalized magnetization m = M/Ms.
Appendix B: Details about the deposition of the magnetic thin films

FEBID and FIBID are direct-write lithographic techniques for the fabrication of samples of various dimensions, shapes and compositions 33. In FEBID/FIBID, the adsorbed molecules of a precursor gas injected into a SEM/FIB chamber dissociate through the interaction with the electron/ion beam, forming the sample during the rastering process 32. In the present work, the samples were fabricated in a dual beam SEM/FIB microscope (FEI, Nova NanoLab 600) equipped with a Schottky electron emitter. FEBID was employed to fabricate the CoFe and CoFe+Pt samples with the following electron beam parameters: 5 kV acceleration voltage, 1.6 nA beam current, 20 nm pitch, and 1 µs dwell time. The number of passes, i.e., the number of rastering cycles, was 1500. FIBID was used to prepare the CoFe+Ga sample with the following ion beam parameters: 30 kV acceleration voltage, 10 pA ion beam current, 12 nm pitch, 200 ns dwell time, and 500 passes. The precursor HFeCo3(CO)12 was employed to fabricate the CoFe and the CoFe+Ga samples 64, while HFeCo3(CO)12 and (CH3)3(CH3C5H4)Pt were simultaneously used to grow CoFe+Pt 65. Standard FEI gas-injection systems (GIS) were used to flow the precursor gases into the SEM via capillaries with 0.5 mm inner diameter. The distance between capillary and substrate surface was about 100 µm and 1000 µm for the HFeCo3(CO)12 and (CH3)3(CH3C5H4)Pt GIS, respectively. The temperatures of the precursors were 64 °C and 44 °C for HFeCo3(CO)12 and (CH3)3(CH3C5H4)Pt, respectively. The base pressure of the SEM was 5 × 10⁻⁷ mbar, which rose to about 6 × 10⁻⁷ mbar during CoFe and CoFe+Ga deposition, and to about 2 × 10⁻⁶ mbar during CoFe+Pt deposition.
FIG. 2. Relation between the coordinate systems employed.
... Eq. (2) with 2b1,2ãkl = 1 T. This diagram extends Fig. 4 of Ref. 30 by panels (c), (d), (e), (f), and (i).
FIG. 5. (a) The SW resonance frequency f is calculated with Eq. (5) for the oop0-geometry as a function of the external magnetic field magnitude µ0H and azimuthal angle θH. The corresponding azimuthal angle θ0 of the equilibrium magnetization orientation is shown in (b). For the simulation, we use φ0 = 0°, k = 5.9 µm⁻¹, µ0Ms = 1 T, and zero in-plane anisotropy. The remaining parameters are taken from the CoFe+Ga thin film in Table II. (c) The saturation magnetizations Ms of the three different magnetic thin films (colored dots) are calculated from the experimentally determined resonance field µ0Hres of the FVSW in Fig. 8. The general dependence µ0Ms(µ0Hres) is shown by the lines for the different magnetic films.
FIG. 6. (a) The transmission characteristics of the fabricated device show the expected wide-band behavior. (b) Within this transmission band, the magnetoacoustic transmission ∆S21(µ0H) differs for the three frequency sub-bands, which correspond to the three different magnetic films.
... CoFe+Pt (70±2) by atomic force microscopy (AFM). The length and widths of all micro-stripes are identical, with lf = 40 µm and wf = 20 µm, except wf = 26 µm for CoFe+Ga. The SAW transmission of our delay line device was characterized by a vector network analyzer. Based on Eq. (4), we first have to determine the saturation magnetizations Ms of the different magnetic thin films. For this purpose, we compute Eq. (5) for the FVSW geometry (θH = 0°, θ0 = 0°). The relation Ms(H ≡ Hres) is shown in Fig. 5(c) for all three magnetic films. Thereby, frequency f and wave vector k of the SW are determined by the SAW, and we assume cSAW = 3200 m/s (Ref. 59), g = 2.1834, and D = 24.7 × 10⁻¹² Am (Ref. 34). Since the in-plane anisotropy Hani is expected to be small compared to the shape anisotropy, the impact on the resonance in the FVSW geometry is small, and we use Hani = 0. Under these assumptions, the relations Ms(Hres) are almost identical for the three magnetic films. Together with the experimentally determined µ0Hres(θH = 0°) in Fig. 8, the saturation magnetizations of CoFe+Ga, CoFe, and CoFe+Pt are determined to be 772 kA/m, 1296 kA/m, and 677 kA/m. For the simulations in Figs. 7 and 8, we use the parameters summarized in Table II.
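The inversion behind Fig. 5(c) can be sketched in a few lines, assuming the forward-volume dispersion reduces to f = (gμB/h)·μ0(Hres − Ms + Dk²) once anisotropy is neglected. This is our reading of Eq. (5) in this limit; the function name and the example resonance field of 1.72 T are illustrative, not values quoted in the text.

```python
# Minimal sketch: saturation magnetization from the FVSW resonance field,
# neglecting in-plane anisotropy as in the text (H_ani = 0).
import numpy as np
import scipy.constants as sc

def Ms_from_Hres(mu0_Hres, f=2.96e9, g=2.1834, D=24.7e-12, c_saw=3200.0):
    k = 2 * np.pi * f / c_saw                  # SW wave vector fixed by the SAW
    gamma_2pi = g * sc.physical_constants["Bohr magneton"][0] / sc.h   # Hz/T
    H_res = mu0_Hres / sc.mu_0                 # A/m
    # FVSW resonance: f = (gamma/2pi) * mu_0 * (H_res - M_s + D k^2)
    return H_res + D * k**2 - f / (gamma_2pi * sc.mu_0)

print(Ms_from_Hres(1.72))   # ~1.29e6 A/m, close to the CoFe value in Table II
```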
FIG. 8. The magnetoacoustic transmission ∆S21(µ0H, θH) of the magnetic micro-stripes CoFe+Ga (2.78 GHz), CoFe (2.96 GHz), and CoFe+Pt (3.17 GHz) is shown in the oop0-, oop45-, and oop90-geometries (see Fig. 3) for an almost out-of-plane oriented external magnetic field (θH = −3.6°, ..., 3.6°). Simulation and experiment show good qualitative agreement.
FIG. 9. Nonreciprocal magnetoacoustic waves are characterized by different transmission amplitudes ∆S21 and ∆S12 for oppositely propagating SAWs with wave vectors kS21 and kS12. The nonreciprocal transmission is exemplarily shown for the magnetic micro-stripe CoFe+Ga (2.78 GHz) in the oop0-, oop45-, and oop90-geometries for an almost out-of-plane oriented external magnetic field (θH = −3.6°, ..., 3.6°). Nonreciprocal behavior can solely be observed in the oop45- and oop90-geometries, which is nicely reproduced by the simulation.
Rayleigh-type SAWs can be excited in a frequency range between f0 − ∆fTIDT/2, ..., f0 + ∆fTIDT/2.
TABLE I. Compositional EDX analysis of test samples with size 1.5 µm × 1.5 µm. The electron beam voltage was 5 keV for the FEBID samples and 3 keV for the FIBID sample.

Sample     C      O      Fe     Co     Ga     Pt
CoFe+Pt    61.8   6.5    4.2    20.1   -      7.4
CoFe       26.2   6.9    12.4   54.5   -      -
CoFe+Ga    16.9   16.5   7.7    37.5   21.4   -
TABLE II. Parameters to simulate the magnetoacoustic transmission ∆S21 (k > 0) of the Rayleigh-type SAW in Figs. 7-9. For the simulation of ∆S12 (k < 0), the sign of the normalized strain ãxz is inverted. For all micro-stripes, we assume g = 2.1834 and D = 24.7 × 10⁻¹² Am (Ref. 34).

                CoFe+Ga   CoFe    CoFe+Pt
d (nm)          24        72      70
f (GHz)         2.78      2.96    3.17
Ms (kA/m)       772       1296    677
α               0.04      0.1     0.05
φani (°)        -10       0       88
µ0Hani (mT)     1         5       10
ãxx             0.49      0.40    0.40
ãzz             -0.15     -0.10   -0.10
ãxz             0.13i     0.17i   0.17i
|b1| (T)        4         15      6
Magnon-phonon interactions in magnon spintronics (review article). D. A. Bozhko, V. I. Vasyuchka, A. V. Chumak, and A. A. Serga, 10.1063/10.0000872, Low Temp. Phys. 46, 383 (2020).
ACKNOWLEDGMENTS

This work is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), project numbers 391592414 and 492421737. M.H. acknowledges support by the Deutsche Forschungsgemeinschaft (DFG) through the trans-regional collaborative research center TRR 288 (project A04) and through project No. HU 752/16-1.

Appendix A: Effective dipolar fields

The effective dipolar fields in the (1,2,3) coordinate system for arbitrary equilibrium magnetization directions (θ0, φ0) are taken from Ref. 51.
Advances in coherent coupling between magnons and acoustic phonons. Y Li, C Zhao, W Zhang, A Hoffmann, V Novosad, 10.1063/5.0047054APL Mater. 960902Y. Li, C. Zhao, W. Zhang, A. Hoffmann, and V. Novosad, "Advances in coherent coupling between magnons and acoustic phonons," APL Mater. 9, 060902 (2021).
Acoustic control of magnetism toward energy-efficient applications. W.-G Yang, H Schmidt, 10.1063/5.0042138Appl. Phys. Rev. 821304W.-G. Yang and H. Schmidt, "Acoustic control of magnetism toward energy-efficient applications," Appl. Phys. Rev. 8, 021304 (2021).
YIG magnonics. A A Serga, A V Chumak, B Hillebrands, 10.1088/0022-3727/43/26/264002J. Phys. D. 43264002A. A. Serga, A. V. Chumak, and B. Hillebrands, "YIG magnon- ics," J. Phys. D 43, 264002 (2010).
Magneto-surfaceacoustic-waves microdevice using thin film technology: design and fabrication process. H Chiriac, M Pletea, E Hristoforou, 10.1016/S0924-4247(01)00500-3Sens. Actuators A: Phys. 91107H. Chiriac, M. Pletea, and E. Hristoforou, "Magneto-surface- acoustic-waves microdevice using thin film technology: design and fabrication process," Sens. Actuators A: Phys. 91, 107 (2001).
Wide band low noise love wave magnetic field sensor system. A Kittmann, P Durdaut, S Zabel, J Reermann, J Schmalz, Be, D Spetzler, N X Meyners, J Sun, M Mccord, G Gerken, M Schmidt, R Höft, F Knöchel, E Faupel, Quandt, 10.1038/s41598-017-18441-4Sci. Rep. 81A. Kittmann, P. Durdaut, S. Zabel, J. Reermann, J. Schmalz, Be. Spetzler, D. Meyners, N. X. Sun, J. McCord, M. Gerken, G. Schmidt, M. Höft, R. Knöchel, F. Faupel, and E. Quandt, "Wide band low noise love wave magnetic field sensor system," Sci. Rep. 8, 1 (2018).
Interaction of spin waves and ultrasonic waves in ferromagnetic crystals. C Kittel, 10.1103/PhysRev.110.836Phys. Rev. 110836C. Kittel, "Interaction of spin waves and ultrasonic waves in fer- romagnetic crystals," Phys. Rev. 110, 836 (1958).
Acoustic-surface-wave isolator. M F Lewis, E Patterson, 10.1063/1.1654147Appl. Phys. Lett. 20276M. F. Lewis and E. Patterson, "Acoustic-surface-wave isolator," Appl. Phys. Lett. 20, 276 (1972).
Nonreciprocal propagation of surface acoustic wave in Ni/LiNbO 3. R Sasaki, Y Nii, Y Iguchi, Y Onose, 10.1103/PhysRevB.95.020407Phys. Rev. B. 95R20407R. Sasaki, Y. Nii, Y. Iguchi, and Y. Onose, "Nonreciprocal prop- agation of surface acoustic wave in Ni/LiNbO 3 ," Phys. Rev. B 95, 020407(R) (2017).
Large nonreciprocal propagation of surface acoustic waves in epitaxial ferromagnetic/semiconductor hybrid structures. A Hernández-Mínguez, F Macià, J M Hernàndez, J Herfort, P V Santos, 10.1103/PhysRevApplied.13.044018Phys. Rev. Applied. 1344018A. Hernández-Mínguez, F. Macià, J. M. Hernàndez, J. Herfort, and P. V. Santos, "Large nonreciprocal propagation of surface acoustic waves in epitaxial ferromagnetic/semiconductor hybrid structures," Phys. Rev. Applied 13, 044018 (2020).
Highly nonreciprocal spin waves excited by magnetoelastic coupling in a Ni/Si bilayer. S Tateno, Y Nozaki, 10.1103/PhysRevApplied.13.034074Phys. Rev. Applied. 1334074S. Tateno and Y. Nozaki, "Highly nonreciprocal spin waves ex- cited by magnetoelastic coupling in a Ni/Si bilayer," Phys. Rev. Applied 13, 034074 (2020).
Nonreciprocal Dzyaloshinskii-Moriya magnetoacoustic waves. M Küß, M Heigl, L Flacke, A Hörner, M Weiler, M Albrecht, A Wixforth, 10.1103/PhysRevLett.125.217203Phys. Rev. Lett. 125217203M. Küß, M. Heigl, L. Flacke, A. Hörner, M. Weiler, M. Albrecht, and A. Wixforth, "Nonreciprocal Dzyaloshinskii-Moriya magne- toacoustic waves," Phys. Rev. Lett. 125, 217203 (2020).
Nonreciprocal surface acoustic waves in multilayers with magnetoelastic and interfacial Dzyaloshinskii-Moriya interactions. R Verba, I Lisenkov, I Krivorotov, V Tiberkevich, A Slavin, 10.1103/PhysRevApplied.9.064014Phys. Rev. Applied. 964014R. Verba, I. Lisenkov, I. Krivorotov, V. Tiberkevich, and A. Slavin, "Nonreciprocal surface acoustic waves in multilayers with magnetoelastic and interfacial Dzyaloshinskii-Moriya inter- actions," Phys. Rev. Applied 9, 064014 (2018).
Wide-band nonreciprocity of surface acoustic waves induced by magnetoelastic coupling with a synthetic antiferromagnet. R Verba, V Tiberkevich, A Slavin, 10.1103/PhysRevApplied.12.054061Phys. Rev. Applied. 1254061R. Verba, V. Tiberkevich, and A. Slavin, "Wide-band nonre- ciprocity of surface acoustic waves induced by magnetoelastic coupling with a synthetic antiferromagnet," Phys. Rev. Applied 12, 054061 (2019).
Reconfigurable spin-wave nonreciprocity induced by dipolar interaction in a coupled ferromagnetic bilayer. R A Gallardo, T Schneider, A K Chaurasiya, A Oelschlägel, S S P K Arekapudi, A Roldán-Molina, R Hübner, K Lenz, A Barman, J Fassbender, J Lindner, O Hellwig, P Landeros, 10.1103/PhysRevApplied.12.034012Phys. Rev. Applied. 1234012R.A. Gallardo, T. Schneider, A.K. Chaurasiya, A. Oelschlägel, S.S.P.K. Arekapudi, A. Roldán-Molina, R. Hübner, K. Lenz, A. Barman, J. Fassbender, J. Lindner, O. Hellwig, and P. Landeros, "Reconfigurable spin-wave nonreciprocity induced by dipolar in- teraction in a coupled ferromagnetic bilayer," Phys. Rev. Applied 12, 034012 (2019).
Switchable giant nonreciprocal frequency shift of propagating spin waves in synthetic antiferromagnets. M Ishibashi, Y Shiota, T Li, S Funada, T Moriyama, T Ono, 10.1126/sciadv.aaz6931Sci. Adv. 66931M. Ishibashi, Y. Shiota, T. Li, S. Funada, T. Moriyama, and T. Ono, "Switchable giant nonreciprocal frequency shift of prop- agating spin waves in synthetic antiferromagnets," Sci. Adv. 6, eaaz6931 (2020).
Review and prospects of magnonic crystals and devices with reprogrammable band structure. M Krawczyk, D Grundler, 10.1088/0953-8984/26/12/123202J. Phys.: Condens. Matter. 26123202M. Krawczyk and D. Grundler, "Review and prospects of magnonic crystals and devices with reprogrammable band struc- ture," J. Phys.: Condens. Matter 26, 123202 (2014).
Towards ultraefficient nanoscale straintronic microwave devices. M Jaris, W Yang, C Berk, H Schmidt, 10.1103/PhysRevB.101.214421Phys. Rev. B. 101214421M. Jaris, W. Yang, C. Berk, and H. Schmidt, "Towards ultraef- ficient nanoscale straintronic microwave devices," Phys. Rev. B 101, 214421 (2020).
Giant nonreciprocity of surface acoustic waves enabled by the magnetoelastic interaction. P J Shah, D A Bas, I Lisenkov, A Matyushov, N X Sun, M R Page, 10.1126/sciadv.abc5648Sci. Adv. 65648P. J. Shah, D. A. Bas, I. Lisenkov, A. Matyushov, N. X. Sun, and M. R. Page, "Giant nonreciprocity of surface acoustic waves enabled by the magnetoelastic interaction," Sci. Adv. 6, eabc5648 (2020).
Nonreciprocal magnetoacoustic waves in dipolar-coupled ferromagnetic bilayers. M Küß, M Heigl, L Flacke, A Hörner, M Weiler, A Wixforth, M Albrecht, 10.1103/PhysRevApplied.15.034060Phys. Rev. Applied. 1534060M. Küß, M. Heigl, L. Flacke, A. Hörner, M. Weiler, A. Wix- forth, and M. Albrecht, "Nonreciprocal magnetoacoustic waves in dipolar-coupled ferromagnetic bilayers," Phys. Rev. Applied 15, 034060 (2021).
Large surface acoustic wave nonreciprocity in synthetic antiferromagnets. H Matsumoto, T Kawada, M Ishibashi, M Kawaguchi, M Hayashi, 10.35848/1882-0786/ac6da1Appl. Phys. Express. 1563003H. Matsumoto, T. Kawada, M. Ishibashi, M. Kawaguchi, and M. Hayashi, "Large surface acoustic wave nonreciprocity in syn- thetic antiferromagnets," Appl. Phys. Express 15, 063003 (2022).
Nonreciprocal surface acoustic wave propagation via magnetorotation coupling. M Xu, K Yamamoto, J Puebla, K Baumgaertl, B Rana, K Miura, H Takahashi, D Grundler, S Maekawa, Y Otani, 10.1126/sciadv.abb1724Sci. Adv. 61724M. Xu, K. Yamamoto, J. Puebla, K. Baumgaertl, B. Rana, K. Miura, H. Takahashi, D. Grundler, S. Maekawa, and Y. Otani, "Nonreciprocal surface acoustic wave propagation via magneto- rotation coupling," Sci. Adv. 6, eabb1724 (2020).
Symmetry of the magnetoelastic interaction of Rayleigh and shear horizontal magnetoacoustic waves in nickel thin films on LiTaO 3. M Küß, M Heigl, L Flacke, A Hefele, A Hörner, M Weiler, M Albrecht, A Wixforth, 10.1103/PhysRevApplied.15.034046Phys. Rev. Applied. 1534046M. Küß, M. Heigl, L. Flacke, A. Hefele, A. Hörner, M. Weiler, M. Albrecht, and A. Wixforth, "Symmetry of the magnetoelas- tic interaction of Rayleigh and shear horizontal magnetoacoustic waves in nickel thin films on LiTaO 3 ," Phys. Rev. Applied 15, 034046 (2021).
Surface acoustic wave devices for mobile and wireless communications. C K Campbell, Academic PressSan Diego, CAC. K. Campbell, Surface acoustic wave devices for mobile and wireless communications (Academic Press, San Diego, CA, 1998).
Surface acoustic wave biosensors: a review. K Länge, B E Rapp, M Rapp, 10.1007/s00216-008-1911-5Anal. Bioanal. Chem. 3911509K. Länge, B. E. Rapp, and M. Rapp, "Surface acoustic wave biosensors: a review," Anal. Bioanal. Chem. 391, 1509 (2008).
Surface acoustic wave (SAW) directed droplet flow in microfluidics for PDMS devices. T Franke, A R Abate, D A Weitz, A Wixforth, 10.1039/B906819HLab Chip. 92625T. Franke, A. R. Abate, D. A. Weitz, and A. Wixforth, "Surface acoustic wave (SAW) directed droplet flow in microfluidics for PDMS devices," Lab Chip 9, 2625 (2009).
D P Morgan, Surface Acoustic Wave Filters: With Applications to Electronic Communications and Signal Processing. AmsterdamElsevier2nd ed.D. P. Morgan, Surface Acoustic Wave Filters: With Applications to Electronic Communications and Signal Processing, 2nd ed. (Elsevier, Amsterdam, 2007).
Ghz-range low-loss wide band filter using new floating electrode type unidirectional transducers. K Yamanouchi, C Lee, K Yamamoto, T Meguro, H Odagawa, 10.1109/ULTSYM.1992.276049IEEE Ultrason. Symp. 1K. Yamanouchi, C. Lee, K. Yamamoto, T. Meguro, and H. Oda- gawa, "Ghz-range low-loss wide band filter using new float- ing electrode type unidirectional transducers," IEEE Ultrason. Symp. 1, 139-142 (1992).
Problems encountered in high-frequency surface-wave devices. R C Williamson, Proc. IEEE Ultrason. Symp. IEEE Ultrason. Symp321R. C. Williamson, "Problems encountered in high-frequency surface-wave devices," Proc. IEEE Ultrason. Symp. , 321 (1974).
Surface acoustic wave driven ferromagnetic resonance in nickel thin films: Theory and experiment. L Dreher, M Weiler, M Pernpeintner, H Huebl, R Gross, M S Brandt, S T B Goennenwein, 10.1103/PhysRevB.86.134415Phys. Rev. B. 86134415L. Dreher, M. Weiler, M. Pernpeintner, H. Huebl, R. Gross, M. S. Brandt, and S. T. B. Goennenwein, "Surface acoustic wave driven ferromagnetic resonance in nickel thin films: Theory and experiment," Phys. Rev. B 86, 134415 (2012).
Surface-acoustic-wave-driven ferromagnetic resonance in (Ga,Mn)(As,P) epilayers. L. Thevenard, C. Gourdon, J. Y. Prieur, H. J. von Bardeleben, S. Vincent, L. Becerra, L. Largeau, and J.-Y. Duquesne, 10.1103/PhysRevB.90.094401, Phys. Rev. B 90, 094401 (2014).
Focused electron beam induced deposition meets materials science. M Huth, F Porrati, O V Dobrovolskiy, 10.1016/j.mee.2017.10.012Microelectron. Eng. 185-186. 9M. Huth, F. Porrati, and O. V. Dobrovolskiy, "Focused electron beam induced deposition meets materials science," Microelec- tron. Eng. 185-186, 9 (2018).
Living up to its potentialdirect-write nanofabrication with focused electron beams. M Huth, F Porrati, S Barth, 10.1063/5.0064764J. Appl. Phys. 130170901M. Huth, F. Porrati, and S. Barth, "Living up to its potential- direct-write nanofabrication with focused electron beams," J. Appl. Phys. 130, 170901 (2021).
Engineered magnetization and exchange stiffness in direct-write Co-Fe nanoelements. S A Bunyaev, B Budinska, R Sachser, Q Wang, K Levchenko, S Knauer, A V Bondarenko, M Urbánek, K Y Guslienko, A V Chumak, M Huth, G N Kakazei, O V Dobrovolskiy, 10.1063/5.0036361Appl. Phys. Lett. 11822408S. A. Bunyaev, B. Budinska, R. Sachser, Q. Wang, K. Levchenko, S. Knauer, A. V. Bondarenko, M. Urbánek, K. Y. Guslienko, A. V. Chumak, M. Huth, G. N. Kakazei, and O. V. Dobrovolskiy, "Engineered magnetization and exchange stiffness in direct-write Co-Fe nanoelements," Appl. Phys. Lett. 118, 022408 (2021).
Spinwave eigenmodes in direct-write 3d nanovolcanoes. O V Dobrovolskiy, N R Vovk, A V Bondarenko, S A Bunyaev, S Lamb-Camarena, N Zenbaa, R Sachser, S Barth, K Y Guslienko, A V Chumak, M Huth, G N Kakazei, 10.1063/5.0044325Appl. Phys. Lett. 118132405O. V. Dobrovolskiy, N. R. Vovk, A. V. Bondarenko, S. A. Bun- yaev, S. Lamb-Camarena, N. Zenbaa, R. Sachser, S. Barth, K. Y. Guslienko, A. V. Chumak, M. Huth, and G. N. Kakazei, "Spin- wave eigenmodes in direct-write 3d nanovolcanoes," Appl. Phys. Lett. 118, 132405 (2021).
Plane-wave theory of threedimensional magnonic crystals. M Krawczyk, H Puszkarski, 10.1103/PhysRevB.77.054437Phys. Rev. B. 7754437M. Krawczyk and H. Puszkarski, "Plane-wave theory of three- dimensional magnonic crystals," Phys. Rev. B 77, 054437 (2008).
G Gubbiotti, Three-dimensional magnonics: Layered, micro-and nanostructures. SingaporeJenny Stanford PublishingG. Gubbiotti, ed., Three-dimensional magnonics: Layered, micro-and nanostructures (Jenny Stanford Publishing, Singa- pore, 2019).
Realisation of a frustrated 3d magnetic nanowire lattice. A May, M Hunt, A Van Den, A Berg, S Hejazi, Ladak, 10.1038/s42005-018-0104-6Commun. Phys. 213A. May, M. Hunt, A. van den Berg, A. Hejazi, and S. Ladak, "Realisation of a frustrated 3d magnetic nanowire lattice," Com- mun. Phys. 2, 13 (2019).
Writing 3d nanomagnets using focused electron beams. A Fernández-Pacheco, L Skoric, J M De Teresa, J Pablo-Navarro, M Huth, O V Dobrovolskiy, 10.3390/ma13173774Materials. 133774A. Fernández-Pacheco, L. Skoric, J. M. de Teresa, J. Pablo- Navarro, M. Huth, and O. V. Dobrovolskiy, "Writing 3d nano- magnets using focused electron beams," Materials 13, 3774 (2020).
On waves propagated along the plane surface of an elastic solid. L Rayleigh, 10.1112/plms/s1-17.1.4Proc. London Math. Soc. 1L. Rayleigh, "On waves propagated along the plane surface of an elastic solid," Proc. London Math. Soc. 1, 4-11 (1885).
Elastically driven ferromagnetic resonance in nickel thin films. M Weiler, L Dreher, C Heeg, H Huebl, R Gross, M S Brandt, S T B Goennenwein, 10.1103/PhysRevLett.106.117601Phys. Rev. Lett. 106117601M. Weiler, L. Dreher, C. Heeg, H. Huebl, R. Gross, M. S. Brandt, and S. T. B. Goennenwein, "Elastically driven ferromagnetic res- onance in nickel thin films," Phys. Rev. Lett. 106, 117601 (2011).
Traveling surface spin-wave resonance spectroscopy using surface acoustic waves. P G Gowtham, T Moriyama, D C Ralph, R A Buhrman, 10.1063/1.4938390J. Appl. Phys. 118233910P. G. Gowtham, T. Moriyama, D. C. Ralph, and R. A. Buhrman, "Traveling surface spin-wave resonance spectroscopy using sur- face acoustic waves," J. Appl. Phys. 118, 233910 (2015).
A simple method of approximating surface acoustic wave power densities. W P Robbins, 10.1109/T-SU.1977.30956IEEE Trans. Son. Ultrason. 24339W. P. Robbins, "A simple method of approximating surface acoustic wave power densities," IEEE Trans. Son. Ultrason. 24, 339 (1977).
Surface acoustic attenuation due to surface spin wave in ferro-and antiferromagnets. S Maekawa, M Tachiki, AIP Conf. Proc. 29542S. Maekawa and M. Tachiki, "Surface acoustic attenuation due to surface spin wave in ferro-and antiferromagnets," AIP Conf. Proc. 29, 542 (1976).
Effects of mechanical rotation on spin currents. M Matsuo, J Ieda, E Saitoh, S Maekawa, 10.1103/PhysRevLett.106.076601Phys. Rev. Lett. 10676601M. Matsuo, J. Ieda, E. Saitoh, and S. Maekawa, "Effects of me- chanical rotation on spin currents," Phys. Rev. Lett. 106, 076601 (2011).
Mechanical generation of spin current by spin-rotation coupling. M Matsuo, J Ieda, K Harii, E Saitoh, S Maekawa, 10.1103/PhysRevB.87.180402Phys. Rev. B. 87180402M. Matsuo, J. Ieda, K. Harii, E. Saitoh, and S. Maekawa, "Me- chanical generation of spin current by spin-rotation coupling," Phys. Rev. B 87, 180402(R) (2013).
Spin current generation using a surface acoustic wave generated via spin-rotation coupling. D Kobayashi, T Yoshikawa, M Matsuo, R Iguchi, S Maekawa, E Saitoh, Y Nozaki, 10.1103/PhysRevLett.119.077202Phys. Rev. Lett. 11977202D. Kobayashi, T. Yoshikawa, M. Matsuo, R. Iguchi, S. Maekawa, E. Saitoh, and Y. Nozaki, "Spin current generation using a sur- face acoustic wave generated via spin-rotation coupling," Phys. Rev. Lett. 119, 077202 (2017).
Observation of gyromagnetic spin wave resonance in NiFe films. Y Kurimune, M Matsuo, Y Nozaki, 10.1103/PhysRevLett.124.217205Phys. Rev. Lett. 124217205Y. Kurimune, M. Matsuo, and Y. Nozaki, "Observation of gy- romagnetic spin wave resonance in NiFe films," Phys. Rev. Lett. 124, 217205 (2020).
Surfaceacoustic-wave induced ferromagnetic resonance in Fe thin films and magnetic field sensing. J.-Y Duquesne, P Rovillain, C Hepburn, M Eddrief, P Atkinson, A Anane, R Ranchal, M Marangolo, 10.1103/PhysRevApplied.12.024042Phys. Rev. Applied. 1224042J.-Y. Duquesne, P. Rovillain, C. Hepburn, M. Eddrief, P. Atkin- son, A. Anane, R. Ranchal, and M. Marangolo, "Surface- acoustic-wave induced ferromagnetic resonance in Fe thin films and magnetic field sensing," Phys. Rev. Applied 12, 024042 (2019).
COMSOL Multiphysics® v. 5.4, www.comsol.com, COMSOL AB, Stockholm, Sweden.
Influence of the Dzyaloshinskii-Moriya interaction on the spin-wave spectra of thin films. D Cortés-Ortuño, P Landeros, 10.1088/0953-8984/25/15/156001J. Phys.: Condens. Matter. 25156001D. Cortés-Ortuño and P. Landeros, "Influence of the Dzyaloshinskii-Moriya interaction on the spin-wave spectra of thin films," J. Phys.: Condens. Matter 25, 156001 (2013).
Use of rotated electrodes for amplitude weighting in interdigital surface-wave transducers. A. P. van den Heuvel, 10.1063/1.1654378, Appl. Phys. Lett. 21, 280 (1972).
Design techniques for SAW filters using slanted finger interdigital transducers. H Yatsuda, IEEE transactions on ultrasonics, ferroelectrics, and frequency control. 44H. Yatsuda, "Design techniques for SAW filters using slanted fin- ger interdigital transducers," IEEE transactions on ultrasonics, ferroelectrics, and frequency control 44, 453-459 (1997).
Tapered transducers-design and applications. L Solie, IEEE Ultrasonics Symposium. Proceedings. 1IEEEL. Solie, "Tapered transducers-design and applications," in 1998 IEEE Ultrasonics Symposium. Proceedings, Vol. 1 (IEEE, 1998) pp. 27-37.
Saw tomography-spatially resolved charge detection by saw in semiconductor structures for imaging applications. M Streibl, F Beil, A Wixforth, C Kadow, A C Gossard, 1999 IEEE Ultrasonics Symposium. Proceedings. International Symposium (Cat. No. 99CH37027). IEEE111M. Streibl, F. Beil, A. Wixforth, C. Kadow, and A. C. Gos- sard, "Saw tomography-spatially resolved charge detection by saw in semiconductor structures for imaging applications," in 1999 IEEE Ultrasonics Symposium. Proceedings. International Symposium (Cat. No. 99CH37027), Vol. 1 (IEEE, 1999) p. 11.
Direct-write of free-form building blocks for artificial magnetic 3D lattices. L Keller, M K I Mamoori, J Pieper, C Gspan, I Stockem, C Schröder, S Barth, R Winkler, H Plank, M Pohlit, J Müller, M Huth, 10.1038/s41598-018-24431-xSci. Rep. 86160L. Keller, M. K. I. Al Mamoori, J. Pieper, C. Gspan, I. Stockem, C. Schröder, S. Barth, R. Winkler, H. Plank, M. Pohlit, J. Müller, and M. Huth, "Direct-write of free-form building blocks for arti- ficial magnetic 3D lattices," Sci. Rep. 8, 6160 (2018).
Room temperature L1 0 phase transformation in binary CoPt nanostructures prepared by focused-electron-beam-induced deposition. F Porrati, E Begun, M Winhold, C H Schwalb, R Sachser, A S Frangakis, M Huth, 10.1088/0957-4484/23/18/185702Nanotechnology. 23185702F. Porrati, E. Begun, M. Winhold, C. H. Schwalb, R. Sachser, A. S. Frangakis, and M. Huth, "Room temperature L1 0 phase transformation in binary CoPt nanostructures prepared by focused-electron-beam-induced deposition," Nanotechnology 23, 185702 (2012).
M Hiebel, Grundlagen der vektoriellen Netzwerkanalyse. MünchenRohde & Schwarz3rd ed.M. Hiebel, Grundlagen der vektoriellen Netzwerkanalyse, 3rd ed. (Rohde & Schwarz, München, 2011).
The propagation velocity of a Rayleigh-type SAW on a pure Y-cut Z-propagation LiNbO3 substrate with a perfectly conducting overlayer of zero thickness is cSAW = 3404 m/s (Ref. 27). We assume that cSAW in the real piezoelectric-ferromagnetic heterostructure is slightly lowered (Ref. 20) because of mass loading and different elastic constants of LiNbO3 and the magnetic films.
Imaging of love waves and their interaction with magnetic domain walls in magnetoelectric magnetic field sensors. C Müller, P Durdaut, R B Holländer, A Kittmann, V Schell, D Meyners, M Höft, E Quandt, J Mccord, 10.1002/aelm.202200033Adv. Electron. Mater. 82200033C. Müller, P. Durdaut, R. B. Holländer, A. Kittmann, V. Schell, D. Meyners, M. Höft, E. Quandt, and J. McCord, "Imaging of love waves and their interaction with magnetic domain walls in magnetoelectric magnetic field sensors," Adv. Electron. Mater. 8, 2200033 (2022).
Fast surface acoustic wave-based sensors to investigate the kinetics of gas uptake in ultra-microporous frameworks. B Paschke, A Wixforth, D Denysenko, D Volkmer, 10.1021/acssensors.7b00014ACS Sens. 2740B. Paschke, A. Wixforth, D. Denysenko, and D. Volkmer, "Fast surface acoustic wave-based sensors to investigate the kinetics of gas uptake in ultra-microporous frameworks," ACS Sens. 2, 740 (2017).
Voltage controlled inversion of magnetic anisotropy in a ferromagnetic thin film at room temperature. M Weiler, A Brandlmaier, S Geprägs, M Althammer, M Opel, C Bihler, H Huebl, M S Brandt, R Gross, S T B Goennenwein, 10.1088/1367-2630/11/1/013021New J. Phys. 1113021M. Weiler, A. A Brandlmaier, S. Geprägs, M. Althammer, M. Opel, C. Bihler, H. Huebl, M. S. Brandt, R. Gross, and S. T. B. Goennenwein, "Voltage controlled inversion of magnetic anisotropy in a ferromagnetic thin film at room temperature," New J. Phys. 11, 013021 (2009).
Power absorption in acoustically driven ferromagnetic resonance. D Labanowski, A Jung, S Salahuddin, 10.1063/1.4939914Appl. Phys. Lett. 10822905D. Labanowski, A. Jung, and S. Salahuddin, "Power absorp- tion in acoustically driven ferromagnetic resonance," Appl. Phys. Lett. 108, 022905 (2016).
Direct writing of CoFe alloy nanostructures by focused electron beam induced deposition from a heteronuclear precursor. F Porrati, M Pohlit, J Müller, S Barth, F Biegger, C Gspan, H Plank, M Huth, 10.1088/0957-4484/26/47/475701Nanotechnology. 26475701F. Porrati, M. Pohlit, J. Müller, S. Barth, F. Biegger, C. Gspan, H. Plank, and M. Huth, "Direct writing of CoFe alloy nanos- tructures by focused electron beam induced deposition from a heteronuclear precursor," Nanotechnology 26, 475701 (2015).
Granular hall sensors for scanning probe microscopy. R Sachser, J Hütner, C H Schwalb, M Huth, 10.3390/nano11020348Nanomaterials. 11348R. Sachser, J. Hütner, C. H. Schwalb, and M. Huth, "Granular hall sensors for scanning probe microscopy," Nanomaterials 11, 348 (2021).
| []
|
[
"String Corrected Supergravity; A Complete and Consistent Non-Minimal Solution",
"String Corrected Supergravity; A Complete and Consistent Non-Minimal Solution"
]
| [
"D O'reilly to:[email protected] \nPhysics Department\nThe Graduate School and University Center\n365 Fifth Avenue10016-4309New YorkNY\n"
]
| [
"Physics Department\nThe Graduate School and University Center\n365 Fifth Avenue10016-4309New YorkNY"
]
| []
| We complete the solution to string corrected (deformed), D=10, N=1 Supergravity as the non-minimal low energy limit of string theory. We reaffirm a previously given solution, and we make important corrections to that solution. We solve what was an apparently intractable Bianchi identity in superspace, and we introduce a new important modification to the known first order results. In so doing we show that this approach to string corrected supergravity is indeed a consistent approach and we pave the way for many applications of the results. | null | [
"https://export.arxiv.org/pdf/hep-th/0611068v9.pdf"
]
| 118,917,664 | hep-th/0611068 | de091e76135248e9bbe6f6bdf637fbc496a0c273 |
String Corrected Supergravity; A Complete and Consistent Non-Minimal Solution
15 Dec 2006
D O'reilly to:[email protected]
Physics Department
The Graduate School and University Center
365 Fifth Avenue10016-4309New YorkNY
String Corrected Supergravity; A Complete and Consistent Non-Minimal Solution
15 Dec 2006
We complete the solution to string corrected (deformed), D=10, N=1 Supergravity as the non-minimal low energy limit of string theory. We reaffirm a previously given solution, and we make important corrections to that solution. We solve what was an apparently intractable Bianchi identity in superspace, and we introduce a new important modification to the known first order results. In so doing we show that this approach to string corrected supergravity is indeed a consistent approach and we pave the way for many applications of the results.
Introduction
The route to finding a manifestly supersymmetric theory of D=10, N=1 supergravity at second order in the string slope parameter has encountered many difficulties over the years. Some years ago a solution to D=10, N=1 supergravity as the low energy limit of string theory was given at first order in the string slope parameter [1]. It was recently re-calculated [1] (2004). In a sense this was a minimal solution. This approach was founded on what we now choose to call the scenario of Gates and collaborators (see [1], [2], and references therein). Other varied approaches are nowadays pursued; however, the power of this older approach is currently being vindicated [1]. A partial second order solution was recently given in [3] and [4]. It was incomplete and therefore in doubt due to an unsatisfactory assumption in the curvature sector, as well as a calculational error.
Here we reaffirm that that solution is correct up to a curvature. We then show that the results obtained satisfy the problem curvature, equation (3). We achieve this through introducing a new and important condition on $R^{(1)}_{ab\alpha}{}^{\gamma}$, a quantity previously undefined. This result also modifies the old first order case. The difficulties that prevented completely closing the Bianchi identities at second order are fully overcome. We complete the set of equations that consistently satisfy all Bianchi identities. As the work in itself is lengthy, we leave finding the equations of motion and other applications for another paper. We do not list results which are explicitly solved by Bianchi identities, such as $H^{(2)}_{abc}$. For this approach it is required that we solve the Bianchi identities for D=10, N=1 supergravity in superspace at second order in the slope parameter, in the presence of the Lorentz Chern-Simons form and the so-called beta function favored (βFF) constraints. This approach has been detailed to first order in [1], and to second order in [3] and [4], so we will not recount it here. We show that all results fall neatly into place in a very elegant way, therefore further vindicating the whole original scenario. We note here that this approach appears also to work consistently at third order, as we have proceeded to that order; that is left for another letter.
Review of Solution and Notation
The Bianchi identities in Superspace are as follows
$$[[\nabla_{[A}, \nabla_{B}\}, \nabla_{C)}\} = 0 \qquad (1)$$
Here we have switched off Yang-Mills fields, and the commutator is given by

$$[\nabla_A, \nabla_B\} = T_{AB}{}^{C}\,\nabla_C + \frac{1}{2}\, R_{ABd}{}^{e}\, M_{e}{}^{d} \qquad (2)$$
This generates many identities, and a solution must be found in such a way that all of them are satisfied simultaneously. A small alteration in one solution will change the whole picture. Most of the resulting identities are listed in [1] and [4], so we will not list them here. The second order solution given in parts in [3] and [4] was, to some extent, based on an ansatz for the so-called X tensor, as well as extensive algebraic manipulations. The necessity for introducing the X tensor was predicted by Gates et al. [1]. In [3] and [4], the following Bianchi identity was not properly solved.
$$T_{(\alpha\beta|}{}^{\lambda}\, R_{|\gamma)\lambda de} - T_{(\alpha\beta|}{}^{g}\, R_{|\gamma)g de} - \nabla_{(\alpha|}\, R_{|\beta\gamma)de} = 0 \qquad (3)$$
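For readability we spell out the symmetrization shorthand used in identities such as (3). This is our reading of the standard convention in this literature (the paper does not define it explicitly here); with all three indices spinorial, the graded sum reduces to a cyclic one:

```latex
% Cyclic symmetrization over three spinor indices, as used in Eq. (3):
% the bars delimit the indices that do not participate in the sum.
\begin{equation*}
  X_{(\alpha\beta|}\, Y_{|\gamma)}
  \;\equiv\;
  X_{\alpha\beta}\, Y_{\gamma}
  + X_{\beta\gamma}\, Y_{\alpha}
  + X_{\gamma\alpha}\, Y_{\beta} .
\end{equation*}
```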
It is crucial to show that the torsions and curvature already found, (7), (11) and (20), satisfy this identity; otherwise the whole set of equations is in doubt. Also, $R^{(2)}_{\gamma g de}$ is required to complete the set. Various ideas, such as finding a new X tensor, imposing constraints on the spinor derivative $\nabla_\alpha \chi$ at second order, or adjusting the super current $A_{abc}$, were previously fruitlessly considered.
In this paper we find a consistent solution. We also point out that equation (58) in reference [3] (or equation (115) in reference [4]) is wrong.
In order to avoid a proliferation of terms we maintain the same notation and conventions as in [1] and [4], but to avoid relisting the first order results we denote all quantities by order in the slope parameter as follows
$$R_{ABde} = R^{(0)}_{ABde} + R^{(1)}_{ABde} + R^{(2)}_{ABde} + \ldots$$
$$T_{AD}{}^{G} = T^{(0)}_{AD}{}^{G} + T^{(1)}_{AD}{}^{G} + T^{(2)}_{AD}{}^{G} + \ldots$$
The numerical superscript refers to the order of the quantity. In this work we make some improvements to the notation in references [3] and [4]. For convenience we also have the following quantity
$$\Omega^{(1)}_{gef} = L^{(1)}_{gef} - \frac{1}{4}\, A^{(1)}_{gef} \qquad (4)$$
and its spinor derivative
$$\Omega^{(1)}_{\alpha gef} = \nabla_{\alpha}\Big\{ L^{(1)}_{gef} - \frac{1}{4}\, A^{(1)}_{gef}\Big\} \qquad (5)$$
A crucial input at first order is that for the super-current $A^{(1)}_{gef}$. The choice made for on-shell conditions in [1], and hence also in [3] and [4], is as follows
$$A^{(1)}_{gef} = +i\gamma\,\sigma_{gef}{}^{\epsilon\tau}\, T_{mn\epsilon}\, T^{mn}{}_{\tau} \qquad (6)$$
In [3] and [4], we proposed the form of the X tensor to be as follows
$$T^{(2)}_{\alpha\beta}{}^{d} = \sigma^{pqref}_{\alpha\beta}\, X_{pqref}{}^{d} = -\frac{i\gamma}{6}\,\sigma^{pqref}_{\alpha\beta}\, H^{(0)d}{}_{ef}\, A^{(1)}_{pqr} \qquad (7)$$
Coupled with this we also have a conventional constraint which may or may not be imposed to all orders. We have
$$T_{\alpha b}{}^{\delta} = -\frac{1}{48}\,\sigma_{b\alpha\lambda}\,\sigma^{pqr\lambda\delta}\, A_{pqr} \qquad (8)$$
If we impose this at second order we have a result that relates this torsion at second order to the super current.
$$T^{(2)}_{\alpha b}{}^{\delta} = -\frac{1}{48}\,\sigma_{b\alpha\lambda}\,\sigma^{pqr\lambda\delta}\, A^{(2)}_{pqr} \qquad (9)$$
However we may relax this constraint also. We will consider this option in reconsidering the solution to equation (13).
A fundamental result which was used in every Bianchi identity and which is very lengthy to derive is the following
$$T^{(0)}_{(\alpha\beta|}{}^{\lambda}\,\sigma^{pqref}_{|\gamma)\lambda}\, A^{(1)}_{pqr}\, H^{(0)}_{def} - \sigma^{pqref}_{(\alpha\beta|}\, H^{(0)}_{def}\,\nabla_{|\gamma)} A^{(1)}_{pqr} = -24\,\sigma^{g}_{(\alpha\beta|}\, H^{(0)}{}_{d}{}^{ef}\,\big[\Omega^{(1)}_{|\gamma)gef}\big] \qquad (10)$$
We note however in this paper that this result can be arrived at indirectly by using the first order results found in [1], in conjunction with the Bianchi identity (3).
Torsions Solutions
We found from the H sector Bianchi identities that the following dimension one-half torsion is given uniquely by
$$T^{(2)}_{\alpha\beta}{}^{\lambda} = -\frac{i\gamma}{12}\,\sigma^{pqref}_{\alpha\beta}\, A^{(1)}_{pqr}\, T_{ef}{}^{\lambda} \qquad (11)$$
It was then shown that, together with the proposed X tensor ansatz (7), as well as equation (8) and other observations and results, the H sector Bianchi identities as listed in [1], [2] could be solved. Also solved was the torsion identity (12), below.
$$T_{(\alpha\beta|}{}^{\lambda}\, T_{|\gamma)\lambda}{}^{d} - T_{(\alpha\beta|}{}^{g}\, T_{|\gamma)g}{}^{d} - \nabla_{(\alpha|}\, T_{\beta\gamma)}{}^{d} = 0 \qquad (12)$$
These results also offer a solution to the following
$$T_{(\alpha\beta|}{}^{\lambda}\, T_{|\gamma)\lambda}{}^{\delta} - T_{(\alpha\beta|}{}^{g}\, T_{|\gamma)g}{}^{\delta} - \nabla_{(\alpha|}\, T_{|\beta\gamma)}{}^{\delta} - \frac{1}{4}\, R_{(\alpha\beta|de}\,\sigma^{de}{}_{|\gamma)}{}^{\delta} = 0 \qquad (13)$$
However consideration must be given here as to whether or not to impose the constraint (9). Either way we can solve the identity. It is important to note that imposing this constraint results in a null term in (13),
$$T^{(0)}_{(\alpha\beta|}{}^{g}\, T^{(2)}_{|\gamma)g}{}^{\delta} = i\sigma^{g}_{(\alpha\beta|}\Big\{-\frac{1}{48}\,\sigma_{g|\gamma)\lambda}\,\sigma^{pqr\lambda\delta}\, A^{(2)}_{pqr}\Big\} \qquad (14)$$
This is due to the fact that
$$\sigma^{g}_{(\alpha\beta|}\,\sigma_{g|\gamma)\lambda} = 0 \qquad (15)$$
We find the second order solutions to (12) to be given by (7) and the following
$$\sigma^{g}_{(\alpha\beta|}\, T^{(2)}_{|\gamma)gd} = 4\gamma\,\sigma^{g}_{(\alpha\beta|}\,\Omega_{|\gamma)gef}\, H^{(0)}{}_{d}{}^{ef} - \frac{i\gamma}{6}\,\sigma^{g}_{(\alpha\beta|}\,\sigma^{pqre}{}_{g|\gamma)\varphi}\, A^{(1)}_{pqr}\, T^{(0)}_{de}{}^{\varphi} \qquad (16)$$
The lengthy extracted and symmetrized equation is listed in [3] and [4]. In equation (13), we notice the occurrence of the term

$$-\nabla_{(\alpha|}\, T^{(0)}_{|\beta\gamma)}{}^{\delta}\,\big[\mathrm{Order}(2)\big] = \big[2\delta_{(\alpha|}{}^{\delta}\,\delta_{|\beta)}{}^{\lambda} + \sigma^{g}_{(\alpha\beta|}\,\sigma_{g}{}^{\delta\lambda}\big]\,\nabla_{|\gamma)}\,\chi_{\lambda}^{(2)} \qquad (17)$$
If we impose the constraint (14), then this term must be retained as being non-zero. If we relax the constraint (14), then we may include this as an extra constraint, as follows:
$$\big[2\delta_{(\alpha|}{}^{\delta}\,\delta_{|\beta)}{}^{\lambda} + \sigma^{g}_{(\alpha\beta|}\,\sigma_{g}{}^{\delta\lambda}\big]\,\nabla_{|\gamma)}\,\chi_{\lambda}^{(2)} = 0 \qquad (18)$$
Here we chose to relax the constraint. In so doing, we find for the solution of (13), after some algebra and neat cancellations,

$$T^{(2)}_{\gamma g}{}^{\delta} = 2\gamma\, T^{(0)ef\delta}\,\Omega^{(1)}_{\gamma gef} \qquad (19)$$

and

$$R^{(2)}_{\alpha\beta de} = -\frac{i\gamma}{12}\,\sigma^{pqref}_{\alpha\beta}\, A^{(1)}_{pqr}\, R^{(0)}_{ef de} \qquad (20)$$
We must now show that all of the results found above satisfy (3).
New Solution for $R^{(2)}_{\lambda g de}$
We must show that we can close equation (3) using the results (7), (11), and (20). As mentioned, in references [3] and [4] the curvature (3) was not properly solved. In fact, there existed terms which seemed at first to predict serious problems for the entire scenario. The various approaches mentioned did not work, nor was there any way to manipulate the terms using the sigma matrix algebra. Eventually the following procedure provided a confident and elegant solution. At second order the Bianchi identity (3) becomes
$$T^{(0)}_{(\alpha\beta|}{}^{\lambda}\, R^{(2)}_{|\gamma)\lambda de} + T^{(2)}_{(\alpha\beta|}{}^{\lambda}\, R^{(0)}_{|\gamma)\lambda de} - T^{(0)}_{(\alpha\beta|}{}^{g}\, R^{(2)}_{|\gamma)g de} - T^{(2)}_{(\alpha\beta|}{}^{g}\, R^{(0)}_{|\gamma)g de} - \nabla_{(\alpha|}\big[R^{(0)\,\mathrm{Order}(2)}_{|\beta\gamma)\,de} + R^{(1)\,\mathrm{Order}(2)}_{|\beta\gamma)\,de} + R^{(2)\,\mathrm{Order}(2)}_{|\beta\gamma)\,de}\big] = 0 \qquad (21)$$
Using the results we found, (7), (11) and (20), we arrive at
$$-i\sigma^{g}_{(\alpha\beta|}\, R^{(2)}_{|\gamma)gde} + T^{(0)}_{(\alpha\beta|}{}^{\lambda}\Big[-\frac{i\gamma}{12}\,\sigma^{pqrab}_{|\gamma)\lambda}\, A^{(1)}_{pqr}\, R^{(0)}_{abde}\Big] - \frac{i\gamma}{12}\,\sigma^{pqrab}_{(\alpha\beta|}\, A^{(1)}_{pqr}\, T_{ab}{}^{\lambda}\, R^{(0)}_{|\gamma)\lambda de} + \frac{i\gamma}{6}\,\sigma^{pqrab}_{(\alpha\beta|}\, H^{(0)g}{}_{ab}\, A^{(1)}_{pqr}\, R^{(0)}_{|\gamma)gde}$$
$$-\nabla_{(\gamma|}\Big\{-2i\sigma^{g}_{|\alpha\beta)}\,\Pi^{(0)+(1)}_{gde} + \frac{i\gamma}{24}\,\sigma^{pqr}{}_{de|\alpha\beta)}\, A^{(1)}_{pqr} - \frac{i\gamma}{12}\,\sigma^{pqrab}_{|\alpha\beta)}\, A^{(1)}_{pqr}\, R^{(0)}_{abde}\Big\} = 0 \qquad (22)$$
Here we encounter second order contributions from zeroth order parts but in solvable form. We define
$$\Pi_{g}{}^{ef} = L_{g}{}^{ef} - \frac{1}{8}\, A_{g}{}^{ef} \qquad (23)$$
Now, again using our key relation (10), we obtain
$$-i\sigma^{g}_{(\alpha\beta|}\, R^{(2)}_{|\gamma)gde} + 2i\gamma\,\sigma^{g}_{(\alpha\beta|}\, R^{(0)}_{abde}\,\big[\Omega^{(1)}_{|\gamma)g}{}^{ab}\big] - \nabla_{(\gamma|}\big\{-2i\sigma^{g}_{|\alpha\beta)}\,\Pi^{(0)+(1)}_{gde}\big\} - \frac{i\gamma}{12}\,\sigma^{pqrab}_{(\alpha\beta|}\, A^{(1)}_{pqr}\, T_{ab}{}^{\lambda}\, R^{(0)}_{|\gamma)\lambda de}$$
$$+ \frac{i\gamma}{6}\,\sigma^{pqrab}_{(\alpha\beta|}\, H^{(0)g}{}_{ab}\, A^{(1)}_{pqr}\, R^{(0)}_{|\gamma)gde} + \frac{i\gamma}{12}\,\sigma^{pqrab}_{(\alpha\beta|}\, A^{(1)}_{pqr}\,\big[\nabla_{|\gamma)} R^{(0)}_{abde}\big] - \frac{i}{24}\,\sigma^{pqr}{}_{de(\alpha\beta|}\,\big[\nabla_{|\gamma)} A^{(1)\,\mathrm{Order}(2)}_{pqr}\big] \qquad (24)$$
Of particular concern and interest is the last term in (22). One possible approach to eliminating this term is that taken in [3] and [4]. However, here we now disagree with that approach. Hence the problem terms will still remain. It was thought that a possible modification of $A^{(1)}_{pqr}$, or a contribution from $A^{(2)}_{pqr}$, would be necessary. These approaches are now seen also to be unnecessary.
In advance we anticipate that the solution will be as follows
$$+i\sigma^{g}_{(\alpha\beta|}\, R^{(2)}_{|\gamma)gde} = 2i\gamma\,\sigma^{g}_{(\alpha\beta|}\, R^{(0)}_{abde}\,\big[\Omega^{(1)}_{|\gamma)g}{}^{ab}\big] + \nabla_{(\gamma|}\big\{2i\sigma^{g}_{|\alpha\beta)}\,\Pi^{(0)+(1)}_{gde}\big\}^{\mathrm{Order}(2)} \qquad (25)$$
And
$$-\frac{i\gamma}{12}\,\sigma^{pqrab}_{(\alpha\beta|}\, A^{(1)}_{pqr}\, T_{ab}{}^{\lambda}\, R^{(0)}_{|\gamma)\lambda de} + \frac{i\gamma}{6}\,\sigma^{pqrab}_{(\alpha\beta|}\, H^{(0)g}{}_{ab}\, A^{(1)}_{pqr}\, R^{(0)}_{|\gamma)gde} + \frac{i\gamma}{12}\,\sigma^{pqrab}_{(\alpha\beta|}\, A^{(1)}_{pqr}\,\big[\nabla_{|\gamma)} R^{(0)}_{abde}\big] - \frac{i}{24}\,\sigma^{pqr}{}_{de(\alpha\beta|}\,\big[\nabla_{|\gamma)} A^{(1)\,\mathrm{Order}(2)}_{pqr}\big] = 0 \qquad (26)$$
We need to show that (26) does in fact vanish. We must begin with the Bianchi identity that gives the spinor derivative of $T_{kl}{}^{\tau}$:

$$\nabla_{\gamma} T_{kl}{}^{\tau} = T_{\gamma[k|}{}^{\lambda}\, T_{\lambda|l]}{}^{\tau} + T_{\gamma[k}{}^{g}\, T_{g|l]}{}^{\tau} + T_{kl}{}^{\lambda}\, T_{\lambda\gamma}{}^{\tau} + T_{kl}{}^{g}\, T_{g\gamma}{}^{\tau} - \nabla_{[k|}\, T_{|l]\gamma}{}^{\tau} - R_{kl\gamma}{}^{\tau} \qquad (27)$$
At first order this simplifies to
$$\nabla_{\gamma} T_{kl}{}^{\tau}\big|_{\mathrm{Order}(1)} = -R^{(1)}_{kl\gamma}{}^{\tau} + T^{(1)}_{kl}{}^{\lambda}\, T^{(0)}_{\lambda\gamma}{}^{\tau} + \frac{1}{48}\big[2 H^{(0)}_{klg}\,\sigma^{g}{}_{\gamma\lambda}\,\sigma^{pqr\lambda\tau}\, A^{(1)}_{pqr} - \sigma_{[k|\gamma\lambda}\,\sigma^{pqr\lambda\tau}\,(\nabla_{|l]} A^{(1)}_{pqr})\big] \qquad (28)$$
We now write the last term in (26) using the ten-dimensional metric, so that the unsolved part becomes

$$-\frac{i}{12}\,\sigma^{pqrab}_{(\alpha\beta|}\Big\{\gamma A^{(1)}_{pqr}\big[T_{ab}{}^{\lambda}\, R^{(0)}_{|\gamma)\lambda de} + T^{(0)}_{ab}{}^{g}\, R^{(0)}_{|\gamma)g de} - \nabla_{|\gamma)} R^{(0)}_{abde}\big] + \frac{1}{2}\,\eta_{ad}\,\eta_{be}\,\nabla_{|\gamma)} A^{(1)\,(\mathrm{Order}(2))}_{pqr}\Big\} = 0 \qquad (29)$$
Using the definition of $A^{(1)}_{pqr}$, (6), this therefore gives

$$+\frac{\gamma}{12}\,\sigma^{pqrab}_{(\alpha\beta|}\,\sigma_{pqr\epsilon\tau}\, T^{kl\epsilon}\Big\{\gamma T_{kl}{}^{\tau}\big[T_{ab}{}^{\lambda}\, R^{(0)}_{|\gamma)\lambda de} + T^{(0)}_{ab}{}^{g}\, R^{(0)}_{|\gamma)g de} - \nabla_{|\gamma)} R^{(0)}_{abde}\big] + \eta_{ad}\,\eta_{be}\,\nabla_{|\gamma)} T_{kl}{}^{\tau}\Big\} = 0 \qquad (30)$$
We now use equation (25) and the properties of the sigma matrices. After some algebra, we obtain an extremely interesting condition on $R^{(1)}_{kl\gamma}{}^{\tau}$. We find

$$R^{(1)}_{kl\gamma}{}^{\tau} = \Big\{+\frac{\gamma}{100}\, T_{kl}{}^{\tau}\big[T_{mn}{}^{\lambda}\, R^{(0)}_{\gamma\lambda}{}^{mn} + T^{(0)}_{mn}{}^{g}\, R^{(0)}_{\gamma g}{}^{mn} - \nabla_{\gamma} R^{(0)}_{mn}{}^{mn}\big] + T^{(1)}_{kl}{}^{\lambda}\, T^{(0)}_{\lambda\gamma}{}^{\tau} + 4i\gamma\big[T_{mn}{}^{\lambda}\, T^{mn\tau}\, H^{(0)}_{klg}\,\sigma^{g}{}_{\gamma\lambda} - \sigma_{[k|\gamma\lambda}\, T_{mn}{}^{\lambda}\,\nabla_{|l]} T^{mn\tau}\big]\Big\} \qquad (31)$$
This can now be added to the list of first order results quoted in [1]. It assumes a correction $T^{(1)}_{kl}{}^{\lambda}$, which may itself be complicated. $R^{(1)}_{kl\gamma}{}^{\tau}$ was not defined in [1]. Furthermore, the curvature (3) is neatly solved. We obtain

$$R^{(2)}_{\gamma g de} = 2\gamma\, R^{(0)}_{abde}\,\big[\Omega^{(1)}_{\gamma g}{}^{ab}\big] + \nabla_{\gamma}\{\Pi^{(0)+(1)}_{gde}\}^{\mathrm{Order}(2)} \qquad (32)$$
The following Bianchi identity also includes $R^{(2)}_{\alpha b de}$:

$$\frac{1}{4}\, R_{(\alpha|amn}\,\sigma^{mn}{}_{|\beta)}{}^{\gamma} + T_{\alpha\beta}{}^{g}\, T_{ga}{}^{\gamma} + T_{\alpha\beta}{}^{\lambda}\, T_{\lambda a}{}^{\gamma} + T_{a(\alpha|}{}^{\lambda}\, T_{|\beta)\lambda}{}^{\gamma} - T_{a(\alpha|}{}^{g}\, T_{|\beta)g}{}^{\gamma} - \nabla_{(\alpha|}\, T_{|\beta)a}{}^{\gamma} - \nabla_{a}\, T_{\alpha\beta}{}^{\gamma} = 0 \qquad (33)$$
Although not yet simplified, this identity predicts the same term that we found to exist in $R^{(2)}_{\alpha\,amn}$. However, it includes a great deal more information, which we have included in another letter.
Conclusions
We have found a consistent solution to the manifestly supersymmetric equations of D=10, N=1 Supergravity, with string corrections to second order in the string slope parameter. We have reaffirmed the results of [3] and [4], and we have solved the remaining previously intractable curvature. We find a new and important modification to the first order case, as in equation (31). We gave more careful consideration to the imposition of the constraint (9), and we note that imposing this constraint will modify the solution. However, a solution can also be found by correspondingly modifying the constraint (18). This solution allows for flexibility in finding a suitable candidate for the supercurrent $A^{(2)}_{pqr}$. Otherwise it is tied to the torsion $T^{(2)}_{\alpha b}{}^{\delta}$.

Acknowledgement

I would like to acknowledge S. Bellucci for introducing me to the method of Bianchi identities and to recognize the founding work done in this area by S. J. Gates, Jr.

Appendix

Here for convenience we list the torsions, curvatures and H sector results to second order. Other first order results listed in [1] also form part of the set. Or symmetrized,

$$\cdots - \frac{i\gamma}{12}\, A^{(1)}_{pqr}\,\sigma^{pqrg}{}_{[a|\varphi\lambda}\, T^{(0)}_{|b]g}{}^{\lambda} - \frac{i\gamma}{72}\,\sigma_{ab\,\gamma}{}^{\varphi}\,\sigma^{pqreg}{}_{\varphi\lambda}\, A^{(1)}_{pqr}\, T^{(0)}_{eg}{}^{\lambda} + \frac{i\gamma}{144}\, A^{(1)}_{pqr}\,\sigma_{[a|}{}^{g}{}_{\gamma}{}^{\varphi}\big[\sigma^{pqre}{}_{|b]\varphi\lambda}\, T^{(0)}_{eg}{}^{\lambda} + \sigma^{pqre}{}_{g\varphi\lambda}\, T^{(0)}_{e|b]}{}^{\lambda}\big] + \mathrm{Order}(\gamma^{3}) + \ldots \qquad (41)$$

$$R_{\alpha\beta de} = -2i\,\sigma^{g}_{\alpha\beta}\,\Pi^{(1)}_{gde} + \frac{i}{24}\,\sigma^{pqr}{}_{de\,\alpha\beta}\, A^{(1)}_{pqr} - \frac{i\gamma}{12}\,\sigma^{pqref}_{\alpha\beta}\, A^{(1)}_{pqr}\, R_{efde} + \mathrm{Order}(\gamma^{3}) \qquad (42)$$

where

$$\Pi^{(1)}_{g}{}^{ef} = L^{(1)}_{g}{}^{ef} - \frac{1}{8}\, A^{(1)}_{g}{}^{ef} \qquad (43)$$

$$R_{\alpha gde} = -i\,\sigma_{[d|\alpha\varphi}\, T_{g|e]}{}^{\varphi} + i\gamma\,\sigma_{[g|\alpha}{}^{\varphi}\, T^{kl}{}_{\varphi}\, R_{kl|de]} + 2\gamma\, R^{(0)}_{abde}\,\big[\Omega^{(1)}_{\alpha g}{}^{ab}\big] + \nabla_{\alpha}\{\Pi^{(0)+(1)}_{gde}\}^{\mathrm{Order}(2)} + \mathrm{Order}(\gamma^{3}) \qquad (44)$$

The spinor derivative of $L_{abc}$ is solved and available from a Bianchi identity. We will list it in a later paper.
. S Bellucci, D A Depireaux, S J GatesJr, Phys. Lett. 238315S. Bellucci, D.A. Depireaux and S.J. Gates, Jr., Phys. Lett. B238 (1990) 315 ;
. S J GatesJr, A Kiss, W Merrell, JHEP. 041247S.J. Gates, Jr., A. Kiss, W. Merrell, JHEP 0412 (2004) 047.
. S J Gates, Jr , H Nishino, Nucl. Phys. 29152Phys. Lett.S.J. Gates, Jr. and H. Nishino, Nucl. Phys. B291 (1987) 205; ibid. Phys. Lett. B173 (1986) 52 ;
. S J Gates, Jr , S Vashakidze, Nucl. Phys. 291172S.J. Gates, Jr. and S. Vashakidze, Nucl. Phys. B291 (1987) 172.
. S Bellucci, D O'reilly, hep-th/0603033Phys. Rev. D. 7365009S. Bellucci and D. O'Reilly, Phys. Rev. D 73, 065009 (2006); hep-th/0603033
The Graduate Center. D O'reilly, hep-th/0601184December. City University of New YorkPhD. ThesisD. O'Reilly, PhD. Thesis, The Graduate Center, City University of New York, De- cember 2005; hep-th/0601184
| []
|
[
"Convolutional 3D to 2D Patch Conversion for Pixel-wise Glioma Segmentation in MRI Scans",
"Convolutional 3D to 2D Patch Conversion for Pixel-wise Glioma Segmentation in MRI Scans"
]
| [
"Mohammad Hamghalam [email protected] \nSchool of Biomedical Engineering\nGuangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging\nNational-Regional Key Technology Engineering Laboratory for Medical Ultrasound\nHealth Science Center\nShenzhen University\n518060ShenzhenChina\n\nFaculty of Electrical\nBiomedical and Mechatronics Engineering\nQazvin Branch\nIslamic Azad University\nQazvinIran\n",
"] ",
"Baiying Lei \nSchool of Biomedical Engineering\nGuangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging\nNational-Regional Key Technology Engineering Laboratory for Medical Ultrasound\nHealth Science Center\nShenzhen University\n518060ShenzhenChina\n",
"Tianfu Wang [email protected] \nSchool of Biomedical Engineering\nGuangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging\nNational-Regional Key Technology Engineering Laboratory for Medical Ultrasound\nHealth Science Center\nShenzhen University\n518060ShenzhenChina\n"
]
| [
"School of Biomedical Engineering\nGuangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging\nNational-Regional Key Technology Engineering Laboratory for Medical Ultrasound\nHealth Science Center\nShenzhen University\n518060ShenzhenChina",
"Faculty of Electrical\nBiomedical and Mechatronics Engineering\nQazvin Branch\nIslamic Azad University\nQazvinIran",
"School of Biomedical Engineering\nGuangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging\nNational-Regional Key Technology Engineering Laboratory for Medical Ultrasound\nHealth Science Center\nShenzhen University\n518060ShenzhenChina",
"School of Biomedical Engineering\nGuangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging\nNational-Regional Key Technology Engineering Laboratory for Medical Ultrasound\nHealth Science Center\nShenzhen University\n518060ShenzhenChina"
]
| []
| Structural magnetic resonance imaging (MRI) has been widely utilized for analysis and diagnosis of brain diseases. Automatic segmentation of brain tumors is a challenging task for computer-aided diagnosis due to low-tissue contrast in the tumor subregions. To overcome this, we devise a novel pixel-wise segmentation framework through a convolutional 3D to 2D MR patch conversion model to predict class labels of the central pixel in the input sliding patches. Precisely, we first extract 3D patches from each modality to calibrate slices through the squeeze and excitation (SE) block. Then, the output of the SE block is fed directly into subsequent bottleneck layers to reduce the number of channels. Finally, the calibrated 2D slices are concatenated to obtain multimodal features through a 2D convolutional neural network (CNN) for prediction of the central pixel. In our architecture, both local inter-slice and global intra-slice features are jointly exploited to predict class label of the central voxel in a given patch through the 2D CNN classifier. We implicitly apply all modalities through trainable parameters to assign weights to the contributions of each sequence for segmentation. Experimental results on the segmentation of brain tumors in multimodal MRI scans (BraTS'19) demonstrate that our proposed method can efficiently segment the tumor regions. | 10.1007/978-3-030-46640-4_1 | [
"https://arxiv.org/pdf/2010.10612v1.pdf"
]
| 218,688,872 | 2010.10612 | 3c45d971cbb2d289c052695e3c6ddd7a94dff9d1 |
Convolutional 3D to 2D Patch Conversion for Pixel-wise Glioma Segmentation in MRI Scans
Mohammad Hamghalam [email protected]
School of Biomedical Engineering
Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging
National-Regional Key Technology Engineering Laboratory for Medical Ultrasound
Health Science Center
Shenzhen University
518060ShenzhenChina
Faculty of Electrical
Biomedical and Mechatronics Engineering
Qazvin Branch
Islamic Azad University
QazvinIran
]
Baiying Lei
School of Biomedical Engineering
Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging
National-Regional Key Technology Engineering Laboratory for Medical Ultrasound
Health Science Center
Shenzhen University
518060ShenzhenChina
Tianfu Wang [email protected]
School of Biomedical Engineering
Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging
National-Regional Key Technology Engineering Laboratory for Medical Ultrasound
Health Science Center
Shenzhen University
518060ShenzhenChina
Convolutional 3D to 2D Patch Conversion for Pixel-wise Glioma Segmentation in MRI Scans
Pixel-wise segmentation · CNN · 3D to 2D conversion · Brain tumor · MRI
Structural magnetic resonance imaging (MRI) has been widely utilized for analysis and diagnosis of brain diseases. Automatic segmentation of brain tumors is a challenging task for computer-aided diagnosis due to low-tissue contrast in the tumor subregions. To overcome this, we devise a novel pixel-wise segmentation framework through a convolutional 3D to 2D MR patch conversion model to predict class labels of the central pixel in the input sliding patches. Precisely, we first extract 3D patches from each modality to calibrate slices through the squeeze and excitation (SE) block. Then, the output of the SE block is fed directly into subsequent bottleneck layers to reduce the number of channels. Finally, the calibrated 2D slices are concatenated to obtain multimodal features through a 2D convolutional neural network (CNN) for prediction of the central pixel. In our architecture, both local inter-slice and global intra-slice features are jointly exploited to predict class label of the central voxel in a given patch through the 2D CNN classifier. We implicitly apply all modalities through trainable parameters to assign weights to the contributions of each sequence for segmentation. Experimental results on the segmentation of brain tumors in multimodal MRI scans (BraTS'19) demonstrate that our proposed method can efficiently segment the tumor regions.
Introduction
Among brain tumors, glioma is the most aggressive and prevalent tumor; it originates in the glial tissue of the brain and typically does not spread to other parts of the body. Glioma can be classified into low-grade glioma (LGG) and high-grade glioma (HGG).

LGGs are primary brain tumors and usually affect younger people compared to HGGs. Multimodal MR sequences comprised of FLAIR, T1, T1c, and T2 are usually used to segment internal parts of the tumor, i.e., whole tumor (WT), tumor core (TC), and enhancing tumor (ET), as depicted in Fig. 1. Since the shape and location of tumors are unpredictable, it is difficult to identify the exact type of brain tumor by studying the brain scans. On the other hand, the low tissue contrast in the lesion regions makes tumor segmentation a challenging task. Moreover, manual annotation of these tumors is a time-consuming and often biased task. Thus, automatic segmentation approaches are crucial for diagnosis, analysis, and treatment planning.
Many segmentation methods have been proposed to segment tissues of interest based on traditional [5,6,20,19] and modern machine learning methods [7] in medical applications. Brain tumor segmentation methods [8,14] can be roughly categorized into pixel-wise [15,9] and region-wise [11,21,16,18,12] techniques. The former predicts only the central pixel of each input patch, while the latter predicts labels of most pixels inside the input patches. The region-wise methods are usually based on 3D [21,11,16] and 2D [18,12] fully convolutional networks (FCNs). Wang et al. [21] applied a cascaded framework with three stages to segment WT, TC, and ET in each stage, respectively. Isensee et al. [11] employed a U-Net-like architecture [17] that was trained on the BraTS training dataset [13,3] along with a private clinical dataset with some augmentations. In another work, Pereira et al. [16] introduced two new blocks to extract discriminative feature maps: recombination-recalibration (RR) and segmentation squeeze-and-excitation (SegSE) blocks. In 2D structures, Shen et al. [18] utilized a multi-task FCN framework to segment tumor regions. Additionally, Le et al. [12] introduced a deep recurrent level set (DRLS) based on VGG-16 with three layers: convolutional, deconvolutional, and LevelSet layers. In the pixel-wise networks [15,9], the authors established 2D CNN-based models to predict a single class label for the central pixel in the 2D multimodal patches. However, intra-slice features are not used in their segmentation frameworks.
Although 3D FCN models can capture 3D information from MRI scans, 3D architectures are computationally expensive because of their complicated network structure, including the 3D kernels, 3D input patches, and input dimensions. Notably, the size of the image patches is the most notable memory factor in convolutional nets, especially for the multimodal BraTS scans with four sequences. In the case of multimodal 3D scans, we have 5-dimensional tensors, comprising batch size, width, length, depth, and the number of concatenated modalities. These tensors require much more memory for training and testing compared to a 2D FCN.
The focus of the current study is to develop a 3D to 2D conversion network for pixel-wise segmentation. The conversion block employs a squeeze-and-excitation (SE) block to adaptively calibrate slices in the input patch sequence by explicitly modeling the interdependencies between these slices. The bottleneck layer is applied to encode the 3D patches as 2D ones, decreasing the number of input channels to the following feature extraction block. We use the multimodal 2D output patches for segmentation through the 2D-CNN network. In particular, we utilize the 3D features between consecutive slices while using convolutional layers with 2D kernels in our framework. The rest of our paper is organized as follows. In Section 2, we describe the 3D to 2D conversion method. Section 3 explains the databases used for evaluation and the experimental results. Some conclusions are drawn in Section 4.
Method
Our goal is to segment an input MR volume, $I \in \mathbb{R}^{H\times W\times D}$, according to manual labels $S \in \{1, 2, \ldots, c\}^{H\times W\times D}$, where $c$ is the number of output classes. Also, $H$, $W$, and $D$ are the spatial height, width, and depth, respectively. Let $x \in \mathbb{R}^{\omega\times\omega\times L}$ denote the cropped 3D input patch centered on the voxel $x_{\frac{\omega}{2},\frac{\omega}{2},\frac{L}{2}}$.
We need to predict the label of the central voxel of each extracted 3D patch via the 2D CNN. Fig. 2 gives an overview of the proposed method. We first introduce the adaptive 3D-to-2D conversion module, and then the 2D CNN architecture is discussed.
Convolutional 3D to 2D Patch Conversion
We extend the SE block [10] to deal with the calibration of input 3D patches. Our model squeezes the global spatial information in each slice by computing the average over that slice:
$z_l = F_{sq}(x_l) = \frac{1}{\omega\times\omega}\sum_{i=1}^{\omega}\sum_{j=1}^{\omega} x_l(i, j),$  (1)
where $z_l$ is the globally embedded information of slice $l$. The second operation, called 'excitation', is applied to capture slice-wise dependencies with a sigmoid ($\sigma$) and a ReLU ($\delta$) activation, respectively. Thus we have:
$u = F_{ex}(z, W) = \sigma(W_2\,\delta(W_1 z)),$  (2)
where $W_1 \in \mathbb{R}^{r\times w^2}$ and $W_2 \in \mathbb{R}^{w^2\times r}$ are the weight matrices of two fully-connected layers with reduction ratio $r$. At last, the scalar $u_l$ and the input slice $x_l$ are multiplied to obtain the calibrated 3D patch, $x' \in \mathbb{R}^{\omega\times\omega\times L}$. Our bottleneck layer is a block that contains one convolutional layer with a kernel size of $1\times1$ to represent the calibrated 3D slices as 2D through nonlinear dimensionality reduction, $x''$. Each 2D patch thus forms a 3D-like representation of a part ($n$ consecutive slices) of the MR volume. This model allows incorporating some 3D information while bypassing the high computational and memory requirements of the 3D CNN.
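As an illustration, a minimal Keras sketch of this conversion block is given below. The layer arrangement (global average pooling for the squeeze of Eq. (1), two fully-connected layers for the excitation of Eq. (2), and a 1x1 convolution as the bottleneck) follows the description above, but the function name, the reduction ratio, and the ReLU on the bottleneck are illustrative assumptions rather than the authors' exact configuration.

```python
from tensorflow.keras import layers

def conversion_3d_to_2d(x, reduction=2):
    """3D-to-2D patch conversion: slice calibration (SE-style) + 1x1 bottleneck.

    x: tensor of shape (batch, w, w, L), the L consecutive slices of one
       modality stored in the channel dimension.
    """
    L = int(x.shape[-1])
    # Squeeze (Eq. 1): global average of each slice.
    z = layers.GlobalAveragePooling2D()(x)                      # (batch, L)
    # Excitation (Eq. 2): two fully-connected layers, ReLU then sigmoid.
    u = layers.Dense(max(L // reduction, 1), activation="relu")(z)
    u = layers.Dense(L, activation="sigmoid")(u)
    u = layers.Reshape((1, 1, L))(u)
    # Calibrate: scale each slice by its excitation weight.
    x_cal = layers.Multiply()([x, u])
    # Bottleneck: a 1x1 convolution encodes the L calibrated slices as one 2D map.
    return layers.Conv2D(1, kernel_size=1, activation="relu")(x_cal)
```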
Classifier Block for Pixel-wise Prediction
The output slices from the four 3D-to-2D blocks are concatenated and fed into the classifier block to predict the label of the voxel located at the center of its cropped patch. The proposed network allows jointly capturing contextual features from the FLAIR, T1, T1c, and T2 modalities. For feature extraction, we rely on a CNN block to learn from the ground-truth scores. Our feature extractor consists of two levels of $3\times3$ convolutions along with max-pooling layers. The number of kernels in each level is 32, 32, 32, 64, 64, and 64, respectively. The fully-connected layers are composed of 64 and 32 hidden neurons, respectively, followed by a final Softmax layer. Finally, we optimize the cross-entropy loss between the predicted score, $F_{seg}(x_{FLAIR}, x_{T1}, x_{T1c}, x_{T2}; \mathcal{W})$, and the ground-truth label, $s_{\frac{\omega}{2},\frac{\omega}{2},\frac{L}{2}}$, with the ADADELTA optimizer [22] as:
$\arg\min_{\mathcal{W}} -\sum_{i}^{c} s_{\frac{\omega}{2},\frac{\omega}{2},\frac{L}{2}} \cdot \log\left(F_{seg}(x_{FLAIR}, x_{T1}, x_{T1c}, x_{T2}; \mathcal{W})\right)$  (3)
where $c$ is the class number and $\mathcal{W}$ denotes the trainable parameters of the model.
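For concreteness, the classifier block described above can be sketched in Keras as follows, reusing the `conversion_3d_to_2d` function from the previous sketch. The filter counts (32, 32, 32, 64, 64, 64), the 64/32 fully-connected layers, the Softmax output, the cross-entropy loss, and the ADADELTA settings follow the text; the exact placement of the two max-pooling layers, the padding, the dropout position, and the number of output classes are assumptions.

```python
from tensorflow.keras import layers, models, optimizers

def build_pixelwise_classifier(w=33, L=7, n_classes=5):
    # One input per modality: FLAIR, T1, T1c, T2.
    inputs = [layers.Input((w, w, L)) for _ in range(4)]
    # Convert each 3D patch to a single calibrated 2D map, then concatenate.
    planes = [conversion_3d_to_2d(x) for x in inputs]
    h = layers.Concatenate()(planes)
    # Level 1: three 3x3 convolutions with 32 kernels, then max-pooling.
    for f in (32, 32, 32):
        h = layers.Conv2D(f, 3, activation="relu")(h)
    h = layers.MaxPooling2D()(h)
    # Level 2: three 3x3 convolutions with 64 kernels, then max-pooling.
    for f in (64, 64, 64):
        h = layers.Conv2D(f, 3, activation="relu")(h)
    h = layers.MaxPooling2D()(h)
    h = layers.Flatten()(h)
    h = layers.Dense(64, activation="relu")(h)
    h = layers.Dropout(0.5)(h)
    h = layers.Dense(32, activation="relu")(h)
    out = layers.Dense(n_classes, activation="softmax")(h)

    model = models.Model(inputs, out)
    model.compile(
        optimizer=optimizers.Adadelta(learning_rate=1.0, rho=0.95, epsilon=1e-6),
        loss="categorical_crossentropy",  # Eq. (3)
        metrics=["accuracy"],
    )
    return model
```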
Experimental Results
Implementation Details
We implement the proposed method using Keras and TensorFlow on a 12 GB NVIDIA TITAN X GPU. We have experimentally found that input volumes of seven slices offer the best compromise between accuracy and complexity. Thus, the input MR volumes are partitioned into $33\times33\times7$ patches centered at each labeled voxel, and the concatenated patches from the four modalities are used as training data. For efficient training under the class imbalance typical of brain tumors, we augment the number of patches for the small-sample-size classes. The model is trained using the ADADELTA [22] optimizer (learning rate = 1.0, $\rho$ = 0.95, epsilon = 1e-6) and cross-entropy as the loss function. Dropout is employed to avoid over-fitting during the training process ($p_{drop} = 0.5$).
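The patch sampling described here can be sketched as below; the border handling and the class-balancing strategy are illustrative assumptions that mimic, rather than reproduce, the augmentation step.

```python
import numpy as np

def extract_patch(volume, center, w=33, L=7):
    """Crop a w x w x L patch centered on a labeled voxel.

    volume: one MR modality of shape (H, W, D); center: (i, j, k).
    Assumes the volume is zero-padded so the crop never leaves the array.
    """
    i, j, k = center
    r, h = w // 2, L // 2
    return volume[i - r:i + r + 1, j - r:j + r + 1, k - h:k + h + 1]

def balanced_centers(labels, per_class):
    """Sample an equal number of center voxels per class, drawing with
    replacement for small-sample-size classes (the augmentation step)."""
    rng = np.random.default_rng(0)
    centers = []
    for c in np.unique(labels):
        voxels = np.argwhere(labels == c)
        idx = rng.choice(len(voxels), size=per_class,
                         replace=len(voxels) < per_class)
        centers.extend(map(tuple, voxels[idx]))
    return centers
```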
Datasets
The performance of the proposed pixel-wise method is evaluated on the BraTS [4,2,1,3,13] datasets to compare with other pixel-based segmentation methods. BraTS'13 contains few subjects, i.e., 30 cases for training and 10 cases for the Challenge. We additionally evaluate the proposed technique on BraTS'19, which has two publicly available datasets of multi-institutional pre-operative MRI sequences: Training (335 cases) and Validation (125 cases). Each patient contributes $155\times240\times240$ volumes with four sequences: T1, T2, T1c, and FLAIR. In BraTS'19, the annotation identifies three tumor regions: non-enhancing tumor, enhancing tumor, and edema. Evaluation is performed for the WT, TC, and ET. The results are assessed by the SMIR³ and CBICA IPP⁴ online platforms. The metrics computed by the online evaluation platforms in BraTS'19 are the Dice Similarity Coefficient (DSC) and the 95th percentile of the Hausdorff Distance (HD95), whereas in BraTS'13 the online platform calculates DSC, Sensitivity, and Positive Predictive Value (PPV). DSC measures the overlap of the automatic and manual segmentations. It is calculated as $DSC = \frac{2TP}{FP + 2TP + FN}$, where TP, FP, and FN are the numbers of true positive, false positive, and false negative detections, respectively.
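These metrics can be computed from binary masks as in the short NumPy sketch below (our own illustration of the formulas above, not the online platforms' code):

```python
import numpy as np

def dsc_sensitivity_ppv(pred, truth):
    """pred, truth: boolean masks for one tumor region (e.g., WT)."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    dsc = 2 * tp / (fp + 2 * tp + fn)   # overlap of automatic/manual masks
    sensitivity = tp / (tp + fn)        # fraction of tumor voxels found
    ppv = tp / (tp + fp)                # precision of the prediction
    return dsc, sensitivity, ppv
```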
Segmentation Results on BRATS'13
Ablation Study. To investigate the effect of the proposed adaptive 3D-to-2D block, we perform experiments with and without the 3D-to-2D block. For the latter, we directly apply the multimodal 3D volume to a plain 3D CNN model. We train both models with 320K patches, with an equal number of patches in each group, and validate on ten unseen subjects. Dropout is again employed to avoid over-fitting during the training process ($p_{drop} = 0.5$). As presented in Table 1, the results with the 3D-to-2D block improve the segmentation accuracy in terms of the standard evaluation metrics compared to the 3D baseline.
Comparison with the State of the Art. We also compare the performance of the proposed method with the well-known pixel-wise approaches [15,9] and 2D region-wise ones [18] on the BraTS'13 Challenge. Table 2 shows the DSC (%), Sensitivity, and PPV for EN, WT, and TC, respectively. It can be seen that the proposed method outperforms the others in DSC for WT.
Segmentation Results on BRATS'19
One limitation of pixel-wise methods is their time complexity at inference, due to pixel-by-pixel prediction. Specifically, we have to process about 9M voxels per channel for each patient. Although we eliminate voxels with a value of zero at testing time, pixel-wise prediction still needs a longer time compared to region-wise methods. This issue limits the evaluation of our method on BraTS'19, with its 125 validation samples. To decrease the inference time, we use a plain 3D U-Net model to first predict only the WT as an initial segmentation, which further allows us to compute a bounding box around the tumor region for our pixel-wise method. In this way, the segmentation of the internal parts of the tumor is performed inside the bounding box. The results in Table 3 show that our method achieves competitive performance on automatic brain tumor segmentation; results are reported on the online processing platform by the BraTS'19 organizers. Moreover, Fig. 3 shows examples of glioma segmentation on validation slices of BraTS'19. For simplicity of visualization, only the FLAIR image is shown in the axial and sagittal views along with our segmentation results. The subject IDs in each column correspond to the validation set.
Conclusion
This paper provides a framework that adaptively converts 3D patches into 2D ones to highlight discriminative pixels for the label prediction of central voxels. The converted 2D images are fed into the classifier block with 2D kernels for the prediction. This conversion enables incorporating 3D features while bypassing the high computational and memory requirements of a fully 3D CNN. We provided an ablation study to examine the effect of our proposed conversion block on the segmentation performance. Results on the BraTS'13 and BraTS'19 datasets confirm that inter- and intra-slice features effectively improve the performance while using 2D convolutional kernels. Although pixel-wise methods have limitations in inference time, we can take advantage of pre-trained networks for classification purposes through fine-tuning with the MRI training set. Future work will concentrate on 3D-to-2D patch conversion with an attention mechanism.
Fig. 1. Structural MRI provides a non-invasive method to determine abnormal changes in the brain for clinical purposes. Four MRI modalities (FLAIR, T1, T1c, and T2) are shown along with the brain lesion: WT (all internal parts), TC (all except edema), and ET (enhancing tumor).
Table 1. Impact of the 3D-to-2D conversion block in segmentation: we perform experiments using the same setting to evaluate performance with and without the proposed block.
Table 2. Comparison of the proposed 3D-to-2D method with others on the BraTS'13 Challenge dataset.

Method            DSC (EN/WT/TC)    Sensitivity (EN/WT/TC)   PPV (EN/WT/TC)
Shen [18]         0.76/0.88/0.83    0.81/0.90/0.81           0.73/0.87/0.87
Pereira [15]      0.77/0.88/0.83    0.81/0.89/0.83           0.74/0.88/0.87
Havaei [9]        0.73/0.88/0.79    0.80/0.87/0.79           0.68/0.89/0.79
Proposed method   0.74/0.89/0.80    0.78/0.86/0.86           0.73/0.92/0.76

Table 3. DSCs and HD95 of the proposed method on the BraTS'19 Validation set (training on 335 cases of the BraTS'19 training set).

Statistic     Dice (ET/WT/TC)      Sensitivity (ET/WT/TC)   Specificity (ET/WT/TC)   HD95, mm (ET/WT/TC)
Mean          72.48/89.65/79.56    73.25/90.60/79.57        99.87/99.45/99.69        5.4/7.8/8.7
Std.          29.47/8.968/21.62    26.61/8.91/24.77         0.23/0.58/0.36           9.2/15.5/13.5
Median        84.46/92.19/89.17    83.20/93.66/91.14        99.94/99.64/99.82        2.2/3.1/3.8
25 quantile   70.99/88.31/74.63    67.73/87.83/72.88        99.84/99.26/99.56        1.4/2.0/2.0
75 quantile   89.22/94.72/93.39    88.71/96.58/96.04        99.98/99.81/99.93        4.2/5.3/10.2
Fig. 3. Segmentation results overlaid on FLAIR axial and sagittal slices of the BraTS'19 Validation Data. The yellow label is edema, blue means enhancing tumor, and green presents the necrotic and non-enhancing tumor core. Each column displays one slice of a different BraTS'19 Subject ID: MDA_959, MDA_1060, WashU_S040, WashU_W053, CBICA_AQE, CBICA_ARR, TCIA10_195, TCIA10_220.
3 https://www.smir.ch/BRATS/Start2013
4 https://ipp.cbica.upenn.edu
Acknowledgment
Bakas, S., Akbari, H., Sotiras, A., Bilello, M., Rozycki, M., Kirby, et al.: Segmentation labels and radiomic features for the pre-operative scans of the TCGA-GBM collection. The Cancer Imaging Archive (2017). https://doi.org/10.7937/K9/TCIA.2017.KLXWJJ1Q
Bakas, S., Akbari, H., Sotiras, A., Bilello, M., Rozycki, M., Kirby, et al.: Segmentation labels and radiomic features for the pre-operative scans of the TCGA-LGG collection. The Cancer Imaging Archive (2017). https://doi.org/10.7937/K9/TCIA.2017.GJQ7R0EF
Advancing the cancer genome atlas glioma MRI collections with expert segmentation labels and radiomic features. S Bakas, H Akbari, A Sotiras, M Bilello, M Rozycki, J S Kirby, J B Freymann, K Farahani, C Davatzikos, 10.1038/sdata.2017.117Nature Scientific Data. 4Bakas, S., Akbari, H., Sotiras, A., Bilello, M., Rozycki, M., Kirby, J.S., Freymann, J.B., Farahani, K., Davatzikos, C.: Advancing the cancer genome atlas glioma MRI collections with expert segmentation labels and radiomic features. Nature Scientific Data 4, 170-117 (2017). https://doi.org/10.1038/sdata.2017.117
Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the BRATS challenge. S Bakas, M Reyes, A Jakab, S Bauer, M Rempfler, A Crimi, arXiv:1811.02629arXiv preprintBakas, S., Reyes, M., Jakab, A., Bauer, S., Rempfler, M., Crimi, A., et al.: Identify- ing the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the BRATS challenge. arXiv preprint arXiv:1811.02629 (2018)
Automatic counting of leukocytes in giemsastained images of peripheral blood smear. M Hamghalam, A Ayatollahi, 10.1109/ICDIP.2009.92009 International Conference on Digital Image Processing. Hamghalam, M., Ayatollahi, A.: Automatic counting of leukocytes in giemsa- stained images of peripheral blood smear. In: 2009 International Conference on Digital Image Processing. pp. 13-16 (2009). https://doi.org/10.1109/ICDIP.2009.9
Leukocyte segmentation in giemsa-stained image of peripheral blood smears based on active contour. M Hamghalam, M Motameni, A E Kelishomi, 10.1109/ICSPS.2009.36, 2009 International Conference on Signal Processing Systems. Hamghalam, M., Motameni, M., Kelishomi, A.E.: Leukocyte segmentation in giemsa-stained image of peripheral blood smears based on active contour. In: 2009 International Conference on Signal Processing Systems. pp. 103-106 (2009). https://doi.org/10.1109/ICSPS.2009.36
Brain tumor synthetic segmentation in 3d multimodal mri scans. M Hamghalam, arXiv:1909.13640arXiv preprintHamghalam, M., et al.: Brain tumor synthetic segmentation in 3d multimodal mri scans. arXiv preprint arXiv:1909.13640 (2019)
A machine learning approach to brain tumors segmentation using adaptive random forest algorithm. T Hatami, 10.1109/KBEI.2019.87350722019 5th Conference on Knowledge Based Engineering and Innovation (KBEI). Hatami, T., et al.: A machine learning approach to brain tumors segmen- tation using adaptive random forest algorithm. In: 2019 5th Conference on Knowledge Based Engineering and Innovation (KBEI). pp. 076-082 (2019). https://doi.org/10.1109/KBEI.2019.8735072
Brain tumor segmentation with deep neural networks. M Havaei, A Davy, D Warde-Farley, A Biard, A C Courville, Y Bengio, C Pal, P Jodoin, H Larochelle, Medical Image Analysis. 35Havaei, M., Davy, A., Warde-Farley, D., Biard, A., Courville, A.C., Bengio, Y., Pal, C., Jodoin, P., Larochelle, H.: Brain tumor segmentation with deep neural networks. Medical Image Analysis 35, 18-31 (2017)
Squeeze-and-excitation networks. J Hu, L Shen, G Sun, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Hu, J., Shen, L., Sun, G.: Squeeze-and-excitation networks. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 7132-7141 (2018)
Brain tumor segmentation and radiomics survival prediction: Contribution to the brats 2017 challenge. F Isensee, P Kickingereder, W Wick, M Bendszus, K H Maier-Hein, Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. SpringerIsensee, F., Kickingereder, P., Wick, W., Bendszus, M., Maier-Hein, K.H.: Brain tumor segmentation and radiomics survival prediction: Contribution to the brats 2017 challenge. In: Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. pp. 287-297. Springer (2018)
Deep recurrent level set for segmenting brain tumors. T H N Le, R Gummadi, M Savvides, Medical Image Computing and Computer Assisted Intervention. SpringerLe, T.H.N., Gummadi, R., Savvides, M.: Deep recurrent level set for segmenting brain tumors. In: Medical Image Computing and Computer Assisted Intervention. pp. 646-653. Springer (2018)
The multimodal brain tumor image segmentation benchmark (BRATS). B H Menze, A Jakab, S Bauer, J Kalpathy-Cramer, K Farahani, 10.1109/TMI.2014.2377694IEEE Transactions on Medical Imaging. 3410Menze, B.H., Jakab, A., Bauer, S., Kalpathy-Cramer, J., Farahani, K., et al.: The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Transactions on Medical Imaging 34(10), 1993-2024 (2015). https://doi.org/10.1109/TMI.2014.2377694
Diagnosis of astrocytoma and globalastom using machine vision. D Najrabi, 10.1109/CFIS.2018.83366616th Iranian Joint Congress on Fuzzy and Intelligent Systems (CFIS). Najrabi, D., et al.: Diagnosis of astrocytoma and globalastom using machine vision. In: 2018 6th Iranian Joint Congress on Fuzzy and Intelligent Systems (CFIS). pp. 152-155 (2018). https://doi.org/10.1109/CFIS.2018.8336661
Brain tumor segmentation using convolutional neural networks in mri images. S Pereira, A Pinto, V Alves, C A Silva, IEEE Transactions on Medical Imaging. 355Pereira, S., Pinto, A., Alves, V., Silva, C.A.: Brain tumor segmentation using con- volutional neural networks in mri images. IEEE Transactions on Medical Imaging 35(5), 1240-1251 (2016)
Adaptive feature recombination and recalibration for semantic segmentation: application to brain tumor segmentation in mri. S Pereira, V Alves, C A Silva, Medical Image Computing and Computer Assisted Intervention. SpringerPereira, S., Alves, V., Silva, C.A.: Adaptive feature recombination and recalibra- tion for semantic segmentation: application to brain tumor segmentation in mri. In: Medical Image Computing and Computer Assisted Intervention. pp. 706-714. Springer (2018)
U-net: Convolutional networks for biomedical image segmentation. O Ronneberger, P Fischer, T Brox, Medical Image Computing and Computer-Assisted Intervention. SpringerRonneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomed- ical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention. pp. 234-241. Springer (2015)
Boundary-aware fully convolutional network for brain tumor segmentation. H Shen, R Wang, J Zhang, S J Mckenna, Medical Image Computing and Computer-Assisted Intervention. SpringerShen, H., Wang, R., Zhang, J., McKenna, S.J.: Boundary-aware fully convolu- tional network for brain tumor segmentation. In: Medical Image Computing and Computer-Assisted Intervention. pp. 433-441. Springer (2017)
A novel random-valued impulse noise detector based on mlp neural network classifier. S Soleimany, 10.1109/RIOS.2017.79564612017 Artificial Intelligence and Robotics (IRA-NOPEN). Soleimany, S., et al.: A novel random-valued impulse noise detector based on mlp neural network classifier. In: 2017 Artificial Intelligence and Robotics (IRA- NOPEN). pp. 165-169 (2017). https://doi.org/10.1109/RIOS.2017.7956461
Segmentation of whole tumor using localized active contour and trained neural network in boundaries. M Soleymanifard, 10.1109/KBEI.2019.87350502019 5th Conference on Knowledge Based Engineering and Innovation (KBEI). Soleymanifard, M., et al.: Segmentation of whole tumor using localized active contour and trained neural network in boundaries. In: 2019 5th Conference on Knowledge Based Engineering and Innovation (KBEI). pp. 739-744 (2019). https://doi.org/10.1109/KBEI.2019.8735050
Automatic brain tumor segmentation using cascaded anisotropic convolutional neural networks. In: Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. G Wang, W Li, S Ourselin, T Vercauteren, SpringerWang, G., Li, W., Ourselin, S., Vercauteren, T.: Automatic brain tumor segmen- tation using cascaded anisotropic convolutional neural networks. In: Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. pp. 178-190. Springer (2018)
ADADELTA: an adaptive learning rate method. M D Zeiler, CoRR abs/1212.5701Zeiler, M.D.: ADADELTA: an adaptive learning rate method. CoRR abs/1212.5701 (2012)
| []
|
[
"Can we hear physical and social space together through prosody?",
"Can we hear physical and social space together through prosody?"
]
| [
"Ambre Davat [email protected] \nGIPSA-lab\nUniv. Grenoble Alpes\nCNRS\nGrenoble INP*\nGrenobleFrance\n",
"Véronique Aubergé [email protected] \nUniv. Grenoble Alpes\nCNRS\nGrenoble INP*, LIG\nGrenobleFrance\n",
"Gang Feng [email protected] \nUniv. Grenoble Alpes\nCNRS\nGrenoble INP*, LIG\nGrenobleFrance\n"
]
| [
"GIPSA-lab\nUniv. Grenoble Alpes\nCNRS\nGrenoble INP*\nGrenobleFrance",
"Univ. Grenoble Alpes\nCNRS\nGrenoble INP*, LIG\nGrenobleFrance",
"Univ. Grenoble Alpes\nCNRS\nGrenoble INP*, LIG\nGrenobleFrance"
]
| []
| When human listeners try to guess the spatial position of a speech source, they are influenced by the speaker's production level, regardless of the intensity level reaching their ears. Because the perception of distance is a very difficult task, they rely on their own experience, which tells them that a whispering talker is close to them, and that a shouting talker is far away. This study aims to test if similar results could be obtained for prosodic variations produced by a human speaker in an everyday life environment. It consists in a localization task, during which blindfolded subjects had to estimate the incoming voice direction, speaker orientation and distance of a trained female speaker, who uttered single words, following instructions concerning intensity and social-affect to be performed. This protocol was implemented in two experiments. First, a complex pretext task was used in order to distract the subjects from the strange behavior of the speaker. On the contrary, during the second experiment, the subjects were fully aware of the prosodic variations, which allowed them to adapt their perception. Results show the importance of the pretext task, and suggest that the perception of the speaker's orientation can be influenced by voice intensity. | 10.21437/speechprosody.2020-146 | [
"https://export.arxiv.org/pdf/2305.13021v1.pdf"
]
| 219,467,369 | 2305.13021 | 5631d96631fe253b152f62e90d4cbaa8ec6d396d |
Can we hear physical and social space together through prosody?
Ambre Davat [email protected]
GIPSA-lab
Univ. Grenoble Alpes
CNRS
Grenoble INP*
GrenobleFrance
Véronique Aubergé [email protected]
Univ. Grenoble Alpes
CNRS
Grenoble INP*, LIG
GrenobleFrance
Gang Feng [email protected]
Univ. Grenoble Alpes
CNRS
Grenoble INP*, LIG
GrenobleFrance
Can we hear physical and social space together through prosody?
Index Terms: speech localization, acoustic proxemics, social-affective prosody, social space, ecological experimentation
When human listeners try to guess the spatial position of a speech source, they are influenced by the speaker's production level, regardless of the intensity level reaching their ears. Because the perception of distance is a very difficult task, they rely on their own experience, which tells them that a whispering talker is close to them, and that a shouting talker is far away. This study aims to test if similar results could be obtained for prosodic variations produced by a human speaker in an everyday life environment. It consists in a localization task, during which blindfolded subjects had to estimate the incoming voice direction, speaker orientation and distance of a trained female speaker, who uttered single words, following instructions concerning intensity and social-affect to be performed. This protocol was implemented in two experiments. First, a complex pretext task was used in order to distract the subjects from the strange behavior of the speaker. On the contrary, during the second experiment, the subjects were fully aware of the prosodic variations, which allowed them to adapt their perception. Results show the importance of the pretext task, and suggest that the perception of the speaker's orientation can be influenced by voice intensity.
Introduction
In the last century, several technologies have been developed to make humans ubiquitous. First, the telephone allowed people to speak to each other in real time, no matter their physical distance. The videophone further enhanced ubiquity, by also transmitting the image of its user. Nowadays, telepresence robots represent a new step in remote and ubiquitous immersion. This time, the goal is to bring the body of a person from one place to another, by using a remote controlled robot which embodies its user.
These technologies are currently developed for business or medical applications, where high social immersion of the remote user is needed. It is therefore interesting to reconsider the fidelity of voice transmission. The vocal artefacts produced by these robots have yet to be integrated into the ways we use them. In particular, it was observed that people interacting with a telepresence robot are reluctant to tune its volume, while tuning the volume of a phone when it's too high or too low is a perfectly common behavior. Unlike phones, correcting the volume of the robot would require physically interacting with their interlocutor's substitute body.
To improve the illusion of the remote user's presence, and to allow them to conform to social rules, they need to be able to adapt their speech to the local environment. Previous research on this subject mostly consists in ensuring that the loudness of the robot is convenient for the local interlocutors [1]-[3]. This implies artificially increasing or lowering the speaker's voice intensity to keep the voice intelligible. However, the impact of these variations on the interaction has yet to be studied.
Voice intensity depends on multiple elements of context, including the acoustic properties of the environment and the distance of hearing, as well as the speaker's social role [4]. Audio technologies enable these elements to be dissociated, leading to what [5] referred to as "schizophony". In particular, intimate voices with a small earshot can be amplified in order to be audible by a large audience [6], [7]. Some results in psychoacoustics suggest that these artefacts could affect distance perception. In [8]-[12], subjects were asked to estimate the position of a sound source, and gave closer distances when hearing whispers and further distances when hearing shouts.
Tuning the volume of a telepresence robot may therefore affect the users' acoustic proxemics. This effect should be evaluated in realistic conditions. In this paper, we present a first study aiming at assessing if the perception of spatial information can be affected by prosodic variations. It consists in two experiments. One has already been described in [13], and used a complex scenario, so that the subjects were focused on a task totally different from localization. In the second experiment, our aim was clear for the subjects, who were also asked to recognize the prosodic patterns. In this article, we will briefly summarize the methodology used, and compare the results of both experiments.
Method
The localization test took place in a reverberant room (reverberation time around 0.8 s). The subjects (S) were blindfolded and sat in the middle of a square space (Figure 1). As part of the pretext task, eight tables with plastic cups were placed around them. One experimenter (E1) moved between twelve predefined positions in the room. She uttered single words from a list of 40 scents with varying numbers of syllables (e.g., rose, eucalyptus). A second experimenter (E2) sat next to the subject. Loudspeakers were set up behind the subject, and rhythmic music was played between utterances in order to mask the speaker's footsteps when she changed position. Five parameters varied during the experiment:
• the speaker's direction: left, right, behind or in front of the subject;
• her orientation: she was either facing the subject (face) or turning her back to them (back); these first two parameters represent basic spatial information that subjects should be able to guess while driving a telepresence robot, and they were included in anticipation of future experiments;
• her distance: 1.7 m (close), 2.5 m (middle) or 3.3 m away from the subject (far); these distances cover both the close and far phases of social space according to Hall's proxemics theory [14], while being close enough to be difficult to distinguish [15];
• the social-affect she expressed: polite doubt (intended to bring the listener socially closer) or authoritative confidence (intended to push back the listener) (see Section 3 for prosodic analyses);
• her voice intensity: low or loud.
Further details concerning the choice of these parameters can be found in [13]. In order to shorten the duration of the experiment, the orientation varied only at the middle distance; by default, the speaker was always facing the subject. For each test, a list of the 64 combinations of these 5 variables was randomly selected. It is worth noticing that the speaker E1 is a phonetician and native French speaker who has been studying audio-visual prosodic attitudes for thirty years. She is able to produce consistent French vocal attitudes, which were perceptually validated in previous studies (see, for instance, [16]).
During the first experiment, 10 subjects (all native French speaker) were convinced that they were participating in a study about the interferences between olfaction and taste during social interactions. E1 pretended to be a professional Nose, and the words she pronounced were supposed to be the flavor of the pills she had to identify during the experiment. The subject S wore a blindfolding mask, supposedly for preventing both of them from reading emotion on the face of the other. S was asked to localize E1's position in the room, so we could monitor if s/he was still focused on the interaction task. Then, E1 gave to S a smelling jar, and s/he was able to have a short discussion with E1 to express their views on the flavor. After the task was completed, the subject was informed of the real aim of the experiment, and had the choice to ask to delete the data. If s/he agreed with the use of these data, s/he signed a new consent form canceling the one they signed initially.
During the second experiment, we didn't use the pretext task. This time, 8 new subjects (7 native French speakers) were asked to guess the speaker's distance, orientation and direction. Moreover, they had to label the words they heard as "low doubt", "loud doubt", "low confidence" or "loud confidence". They signed only one consent form, at the beginning of the experiment.
Before analyzing the results of both experiments, we need to validate the speaker's productions.
Validation
The speaker's performances were evaluated a posteriori, using recordings obtained with a Sennheiser HSP4 wireless headworn microphone. Every key-word was extracted by hand and labeled with the instructions given to the speaker. For brevity, the four classes of stimuli are labeled as "doubt", "DOUBT", "confidence" and "CONFIDENCE", uppercase letters corresponding to loud intensity.
First, the variations in intensity were checked (Table 1). Low stimuli clearly differ from loud stimuli, as they are 8.8 dB lower on average. This means that the speaker managed to respect the instructions she was given. Standard deviations are quite high, because the intensity also varies depending on the word spoken. Furthermore, the social-affect appears to have an impact on the intensity. Low doubt is indeed 2.4 dB lower than low confidence, while loud confidence is 2.3 dB louder than loud doubt. The productions of social-affects are also interesting to analyze. The average intensity and pitch curves for each class of stimuli are shown in Figure 2. Intensity curves are consistent with the previous measures, being lower for the low stimuli and higher for the loud stimuli. Pitch curves are also very specific: ascending for doubtful stimuli versus descending for confident stimuli. Moreover, the word duration varies significantly, as shown in Table 2. On average, doubtful stimuli are 320 ms longer than confident stimuli. The duration also depends on intensity, as loud doubt is 87 ms longer than low doubt, and low confidence 98 ms shorter than loud confidence. Voice quality was also considered, as shown in Table 3. Breathiness and laxness are strongly correlated with doubt, while tenseness is correlated with confidence. In particular, 95% of doubt stimuli are labeled as breathy or lax, and 95% of confidence stimuli are labeled as tense. On the contrary, only 8% of doubt stimuli are labeled as tense, and 5% of loud confidence stimuli are labeled as breathy or lax.
It is worth noticing that the speaker found loud doubt and low confidence harder to produce. This initial feeling is confirmed by the analyses. As shown above, low confidence is not as low as low doubt, while loud doubt is not as loud as loud confidence. In order to produce these counter-intuitive stimuli, she exaggerated the word duration: loud doubt is therefore longer, and low confidence shorter. Moreover, the percentages in voice quality labels are more extreme when the intensity is coherent with the social-affect.
Results
The answers of the subjects were compared with the actual productions of the speaker. Recognition scores were computed for each variable in the different conditions. They are shown in Figure 3. Each plot is accompanied by a p-value obtained from an ANOVA implemented in R.
Direction
The speaker's direction was well perceived by the subjects, who obtained very high recognition scores (> 90%) in both experiments. Most of the errors were due to front/back confusions, which are common in localization tests [18]. It is worth noticing that these confusions generally occurred in one direction: when the speaker was behind the subjects, in 17% of cases they answered that she was in front of them, while the opposite occurred in only 2% of cases. Neither the social-affect nor the intensity seems to have any influence on the perception of direction, as the recognition scores are approximately constant across conditions.
Orientation
The orientation was more difficult to perceive, with recognition rates of 79% and 85% for experiments 1 and 2, respectively. Moreover, the variations across conditions differ significantly between the experiments. When the subjects were not aware of the aim of the experiment, their results were particularly poor in the low-intensity condition, because they tended to perceive that the speaker had her back to them. In experiment 1, the number of "back" stimuli perceived as "front" is three times higher in the low-intensity condition than in the loud-intensity condition, while in experiment 2 both counts are approximately equal. There is also a visible effect of the social-affect in the first experiment, but considering the high inter-subject variability, it is not strong enough to be statistically significant.
Distance
The most difficult variable to estimate was the distance: In both experiments, the subjects were right only for 58 % of the stimuli. The close distance was generally well recognized (only 9% of the errors). Most of the errors (62 %) occurred for the middle distance, which was perceived as far in 70 % of cases. Generally, when the subjects were wrong, they chose the adjacent distance. Therefore, close was perceived as far only in 10 % of the wrong recognitions, and far was perceived as close only in 15 % of the wrong recognitions. Surprisingly, there is no improvement between the two experiments. Again, there is no clear effect of the social-affect or the intensity in the second experiment. However, the social-affect seems to have a little effect in the first experiment, the recognition scores being lower when the speaker was confident. During the second experiment, the subjects were asked to identify the labels of each stimulus. Their results are Table 4. The scores were very high when the social-affect was coherent with the intensity, i.e. for low doubt and loud confidence. In particular, loud confidence was never mistaken as doubt, and low doubt was never mistaken as loud confidence. Ambiguous stimuli were more difficult to classify and the subjects were right only half of the time. On average, social-affect was more easily perceived by the subject than intensity, as the recognition rates on both variables are respectively 89 % and 76 %.
Social-affect and intensity
Discussion
The aim of this study was to test if the perception of a speaker's spatial position could be affected by prosodic variations in a non-anechoic environment. First, a complex scenario was designed in order to evaluate the subjects' localization skills when their attention is diverted from the real aim of the experiment by a pretext task. In these conditions, the subjects were fully focused on the localization task, but were not able to guess that the speaker's prosodic variations were part of the experiment. Then, a simplified version of this experiment without pretext task was implemented, for comparison purposes.
In the first experiment, we observed some effects of the prosody on the localization skills of the listeners, despite a strong inter-subject variability. In particular, subjects tended to perceive that the speaker was facing away from them when she pronounced words with a low intensity. Distance was also harder to perceive when the speaker expressed confidence. None of these effects could be obtained in the second experiment. This seems to validate our initial choice of designing a complex protocol in order to divert the subjects' attention from the aim of the experiment. When they are aware of the prosodic variations, they probably adapt better to them, and it is no longer possible to observe an influence of social-affect and intensity.
However, the first experiment was difficult to set up, due to the availability of the speaker, and it was long and stressful for the experimenters. Therefore, only a small number of subjects took part. We tried to contact these first subjects for the second experiment, but most of them, being students in their last year of university, were no longer in the city at the time of the tests.
Another issue of this experimental protocol is the question of reproducibility. It was not possible to use pre-recorded sounds, as doing so would have shattered the pretext of the experiment: the subjects needed to believe that the speaker was in the same room with them. The use of a portable loudspeaker was considered, but the subjects could have heard a difference between the pre-recorded words and the spontaneous talking generated by the experiment. Instead, the speaker's productions were analyzed a posteriori. The analyses showed four different prosodic patterns, i.e., one for each combination of social-affect and intensity. The speaker was better at following the instructions in intensity and voice quality when the social-affect was coherent with the intensity. Coherent stimuli were also better classified by the subjects in the second experiment. This is further evidence that voice intensity is strongly linked to social-affect.
The results are therefore positive, but they need to be confirmed by further replications. Our next step consists in an online listening test. This time, the stimuli have been recorded with our telepresence robot. It will be presented as a test to evaluate the quality of acoustic immersion for telepresence.
Figure 1: Photo and top view of the experimental setting. The crosses correspond to the speaker's spatial positions.
Figure 2: Average pitch and intensity curves for each class of stimuli in both experiments.
Figure 3: Localization recognition rates. Comparison between two conditions: with or without a pretext task.
Table 1: Intensity measures for each class. Procedure: an A-weighting was applied to the numerical recordings; the intensity was then measured on 20 ms frames with the algorithm used in Praat [17] in order to obtain readable values.

Class        Mean (dB)   Std. dev. (dB)   Number of stimuli
doubt        47.9        5.1              291
DOUBT        56.6        5.4              281
confidence   50.2        5.0              281
CONFIDENCE   58.9        4.7              298
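A minimal NumPy sketch of this frame-level intensity measure is given below; it assumes the A-weighting has already been applied to the signal, and it approximates Praat's intensity computation with a simple RMS per non-overlapping 20 ms frame (window shape and calibration offset are deliberate simplifications).

```python
import numpy as np

def frame_intensity_db(signal, sr, frame_ms=20):
    """Return one intensity value (dB) per non-overlapping 20 ms frame.

    signal: 1-D float array, assumed already A-weighted; sr: sampling rate.
    """
    n = int(sr * frame_ms / 1000)
    frames = signal[: len(signal) // n * n].reshape(-1, n)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    return 20 * np.log10(rms + 1e-12)   # small constant avoids log(0)
```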
Table 2: Duration measures for each class.

Class        Mean (ms)   Std. dev. (ms)   Number of stimuli
doubt        752         177              291
DOUBT        839         207              281
confidence   423         120              281
CONFIDENCE   521         134              298
Table 3: Voice quality for each class.

Class        Lax (%)   Breathy (%)   Modal (%)   Tense (%)
doubt        95.9      96.5          4.5         0.7
DOUBT        81.0      89.6          66.0        16.1
confidence   6.8       8.5           41.6        91.5
CONFIDENCE   0.7       0.7           11.3        98.3
Table 4: Confusion matrix for the perception of intensity and social-affect in experiment 2 (columns: production; rows: perception).

Perception \ Production   doubt   DOUBT   conf.   CONF.
doubt                     0.67    0.19    0.11    0
DOUBT                     0.23    0.48    0.01    0
conf.                     0.10    0.10    0.50    0.07
CONF.                     0       0.22    0.38    0.93
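From this confusion matrix, the marginal recognition rates for social-affect and intensity can be recomputed with a few lines of NumPy (our own illustration). Averaging the rounded column entries with equal class weights gives roughly 86% for social-affect and 75% for intensity, confirming that the affect dimension is recognized better than the intensity dimension; exact values depend on the per-stimulus counts, which the table does not show.

```python
import numpy as np

# Table 4: rows = perceived class, columns = produced class,
# order: doubt, DOUBT, conf., CONF.
P = np.array([[0.67, 0.19, 0.11, 0.00],
              [0.23, 0.48, 0.01, 0.00],
              [0.10, 0.10, 0.50, 0.07],
              [0.00, 0.22, 0.38, 0.93]])

affect = np.array([0, 0, 1, 1])      # 0 = doubt, 1 = confidence
intensity = np.array([0, 1, 0, 1])   # 0 = low, 1 = loud

def marginal_rate(labels):
    # For each produced class, sum the perception probabilities whose
    # label matches, then average over the four classes.
    return np.mean([P[labels == labels[c], c].sum() for c in range(4)])

print(marginal_rate(affect), marginal_rate(intensity))   # ~0.86, ~0.75
```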
« A case study of an automatic volume control interface for a telepresence system. M Takahashi, M Ogata, M Imai, K Nakamura, K Nakadai, 10.1109/ROMAN.2015.73336052015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). Kobe, JapanM. Takahashi, M. Ogata, M. Imai, K. Nakamura, et K. Nakadai, « A case study of an automatic volume control interface for a telepresence system », in 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Kobe, Japan, 2015, p. 517-522, doi: 10.1109/ROMAN.2015.7333605.
« Yelling in the hall: using sidetone to address a problem with mobile remote presence systems. A Paepcke, B Soto, L Takayama, F Koenig, B Gassend, 10.1145/2047196.2047209Proceedings of the 24th annual ACM symposium on User interface software and technology -UIST '11. the 24th annual ACM symposium on User interface software and technology -UIST '11Santa Barbara, California, USA107A. Paepcke, B. Soto, L. Takayama, F. Koenig, et B. Gassend, « Yelling in the hall: using sidetone to address a problem with mobile remote presence systems », in Proceedings of the 24th annual ACM symposium on User interface software and technology -UIST '11, Santa Barbara, California, USA, 2011, p. 107, doi: 10.1145/2047196.2047209.
A Kimura, M Ihara, M Kobayashi, Y Manabe, K Chihara, Visual Feedback: Its Effect on Teleconferencing. Heidelberg; Berlin HeidelbergSpringer4553J. A. Jacko, Éd. BerlinA. Kimura, M. Ihara, M. Kobayashi, Y. Manabe, et K. Chihara, « Visual Feedback: Its Effect on Teleconferencing », in Human- Computer Interaction. HCI Applications and Services, vol. 4553, J. A. Jacko, Éd. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007, p. 591-600.
« The listening talker: A review of human and algorithmic context-induced modifications of speech. M Cooke, S King, M Garnier, V Aubanel, 10.1016/j.csl.2013.08.003Computer Speech & Language. 282M. Cooke, S. King, M. Garnier, et V. Aubanel, « The listening talker: A review of human and algorithmic context-induced modifications of speech », Computer Speech & Language, vol. 28, n o 2, p. 543-571, 2014, doi: 10.1016/j.csl.2013.08.003.
The tuning of the world. R M Schafer, Alfred A. KnopfR. M. Schafer, The tuning of the world. Alfred A. Knopf, 1977.
« The Proxemics of the Mediated Voice: An Analytical Framework for Understanding Sound Space in Mediated Talk. A Maasø, Lowering the Boom: Critical Studies in Film Sound. Jay Beck and Anthony GrajedaUrbana and ChicagoUniversity of Illinois pressPreprint of chapterA. Maasø, « The Proxemics of the Mediated Voice: An Analytical Framework for Understanding Sound Space in Mediated Talk », Preprint of chapter in Lowering the Boom: Critical Studies in Film Sound (2008), edited by Jay Beck and Anthony Grajeda, pp 36-50. Urbana and Chicago: University of Illinois press.
« Sonic Proxemics and the Art of Persuasion: An Analytical Framework. K Collins, R Dockwray, 10.1162/LMJ_a_00935Leonardo Music Journal. 25K. Collins et R. Dockwray, « Sonic Proxemics and the Art of Persuasion: An Analytical Framework », Leonardo Music Journal, vol. 25, n o 25, p. 53-56, déc. 2015, doi: 10.1162/LMJ_a_00935.
« Distance Estimation of 0° or Apparent 0°-Oriented Speech Signals in Anechoic Space. M B Gardner, 10.1121/1.1911372The Journal of the Acoustical Society of America. 451M. B. Gardner, « Distance Estimation of 0° or Apparent 0°-Oriented Speech Signals in Anechoic Space », The Journal of the Acoustical Society of America, vol. 45, n o 1, p. 47-53, janv. 1969, doi: 10.1121/1.1911372.
« Phenomenal Geometry and the measure-ment of perceived auditory distance. D H Mershon, Binaural and Spatial Hearing in Real and Virtual Environments. Mahwah, New JerseyPsychology PressD. H. Mershon, « Phenomenal Geometry and the measure-ment of perceived auditory distance », in Binaural and Spatial Hearing in Real and Virtual Environments, Mahwah, New Jersey: Psychology Press, 1997, p. 257-274.
Informational and energetic masking effects in the perception of two simultaneous talkers. D S Brungart, 10.1121/1.1345696The Journal of the Acoustical Society of America. 1093D. S. Brungart, « Informational and energetic masking effects in the perception of two simultaneous talkers », The Journal of the Acoustical Society of America, vol. 109, n o 3, p. 1101-1109, mars 2001, doi: 10.1121/1.1345696.
« Knowledge about typical source output influences perceived auditory distance. J W Philbeck, D H Mershon, 10.1121/1.1471899The Journal of the Acoustical Society of America. 1115J. W. Philbeck et D. H. Mershon, « Knowledge about typical source output influences perceived auditory distance », The Journal of the Acoustical Society of America, vol. 111, n o 5, p. 1980, 2002, doi: 10.1121/1.1471899.
« Perception of vocal effort and distance from the speaker on the basis of vowel utterances. A Eriksson, H Traunmüller, 10.3758/BF03194562Perception & Psychophysics. 641A. Eriksson et H. Traunmüller, « Perception of vocal effort and distance from the speaker on the basis of vowel utterances », Perception & Psychophysics, vol. 64, n o 1, p. 131-139, janv. 2002, doi: 10.3758/BF03194562.
« Integrating Socio-Affective Information in Physical Perception aimed to Telepresence Robots. A Davat, V Aubergé, G Feng, 2018 International Conference on Behavioral, Economic and Socio-cultural Computing (BESC). Kaohsiung, TaiwanA. Davat, V. Aubergé, et G. Feng, « Integrating Socio-Affective Information in Physical Perception aimed to Telepresence Robots », in 2018 International Conference on Behavioral, Economic and Socio-cultural Computing (BESC), Kaohsiung, Taiwan, 2018.
E. T. Hall et al., « Proxemics [and Comments and Replies] », Current Anthropology, vol. 9, no. 2/3, p. 83-108, 1968.
« Auditory Distance Perception in Humans: A Summary of Past and Present Research. P Zahorik, ACTA ACUSTICA UNITED WITH ACUSTICA. 9112P. Zahorik, « Auditory Distance Perception in Humans: A Summary of Past and Present Research », ACTA ACUSTICA UNITED WITH ACUSTICA, vol. 91, p. 12, 2005.
T Shochi, A Rilliard, V Aubergé, D Erickson, The Role of Prosody in Affective Speech. Peter Lang97« Intercultural Perception of EnglishT. Shochi, A. Rilliard, V. Aubergé, et D. Erickson, « Intercultural Perception of English, French and Japanese Social Affective Prosody », in The Role of Prosody in Affective Speech, Sylvie Hancil., vol. 97, Peter Lang, 2009, p. 31-60.
P Boersma, D Weenink, Praat: doing phonetics by computer. P. Boersma et D. Weenink, Praat: doing phonetics by computer. 2019.
« Sound Localization by Human Listeners. J C Middlebrooks, D M Green, 10.1146/annurev.ps.42.020191.001031Annual Review of Psychology. 421J. C. Middlebrooks et D. M. Green, « Sound Localization by Human Listeners », Annual Review of Psychology, vol. 42, n o 1, p. 135-159, jan. 1991, doi: 10.1146/annurev.ps.42.020191.001031.
| []
|
[
"On the variable timing behavior of PSR B0540−69: an almost excellent example to study pulsar braking mechanism",
"On the variable timing behavior of PSR B0540−69: an almost excellent example to study pulsar braking mechanism"
]
| [
"F F Kou \nXinjiang Astronomical Observatory\nChinese Academy of Sciences\n830011UrumqiChina\n\nUniversity of Chinese Academy of Sciences\n19A Yuquan RoadBeijingChina\n",
"Z W Ou \nXinjiang Astronomical Observatory\nChinese Academy of Sciences\n830011UrumqiChina\n",
"H Tong \nXinjiang Astronomical Observatory\nChinese Academy of Sciences\n830011UrumqiChina\n\nUniversity of Chinese Academy of Sciences\n19A Yuquan RoadBeijingChina\n"
]
| [
"Xinjiang Astronomical Observatory\nChinese Academy of Sciences\n830011UrumqiChina",
"University of Chinese Academy of Sciences\n19A Yuquan RoadBeijingChina",
"Xinjiang Astronomical Observatory\nChinese Academy of Sciences\n830011UrumqiChina",
"Xinjiang Astronomical Observatory\nChinese Academy of Sciences\n830011UrumqiChina",
"University of Chinese Academy of Sciences\n19A Yuquan RoadBeijingChina"
]
| [
"Astron. Astrophys"
]
| PSR B0540−69 has braking index measurement in its persistent state: n = 2.129 ± 0.012. Recently, it is reported to have spin-down state changes: a suddenly 36% increase in the spin-down rate. Combining the persistent state braking index measurement and different spin-down states, PSR B0540−69 is more powerful than intermittent pulsars in constraining pulsar spin-down models. The pulsar wind model is applied to explain the variable timing behavior of PSR B0540−69. The persistent state braking index of PSR B0540−69 is the combined effect of magnetic dipole radiation and particle wind. The particle density reflects the magnetospheric activity in real-time and may be responsible for the changing spin-down behavior. Corresponding to the 36% increase in the spindown rate of PSR B0540−69, the relative increase in the particle density is 88% in the vacuum gap model. And the model calculated braking index in the new state is n = 1.79. Future braking index observation of PSR B0540−69 in the new spin-down state will be very powerful in distinguishing between different pulsar spin-down models and different particle acceleration models in the wind braking scenario. The variable timing behavior of PSR J1846−0258 is also understandable in the pulsar wind model. | 10.1088/1674-4527/16/5/079 | [
"https://arxiv.org/pdf/1507.00643v2.pdf"
]
| 118,467,031 | 1507.00643 | f3555ff79c4a1aee0fad9551bd19282d8b09839d |
On the variable timing behavior of PSR B0540−69: an almost excellent example to study pulsar braking mechanism
F F Kou
Xinjiang Astronomical Observatory
Chinese Academy of Sciences
830011UrumqiChina
University of Chinese Academy of Sciences
19A Yuquan RoadBeijingChina
Z W Ou
Xinjiang Astronomical Observatory
Chinese Academy of Sciences
830011UrumqiChina
H Tong
Xinjiang Astronomical Observatory
Chinese Academy of Sciences
830011UrumqiChina
University of Chinese Academy of Sciences
19A Yuquan RoadBeijingChina
On the variable timing behavior of PSR B0540−69: an almost excellent example to study pulsar braking mechanism
Astron. Astrophys
Research in Astronomy and Astrophysics. Key words: pulsars: general - pulsars: individual (PSR B0540−69; PSR J1846−0258) - stars: neutron - wind
PSR B0540−69 has braking index measurement in its persistent state: n = 2.129 ± 0.012. Recently, it is reported to have spin-down state changes: a suddenly 36% increase in the spin-down rate. Combining the persistent state braking index measurement and different spin-down states, PSR B0540−69 is more powerful than intermittent pulsars in constraining pulsar spin-down models. The pulsar wind model is applied to explain the variable timing behavior of PSR B0540−69. The persistent state braking index of PSR B0540−69 is the combined effect of magnetic dipole radiation and particle wind. The particle density reflects the magnetospheric activity in real-time and may be responsible for the changing spin-down behavior. Corresponding to the 36% increase in the spindown rate of PSR B0540−69, the relative increase in the particle density is 88% in the vacuum gap model. And the model calculated braking index in the new state is n = 1.79. Future braking index observation of PSR B0540−69 in the new spin-down state will be very powerful in distinguishing between different pulsar spin-down models and different particle acceleration models in the wind braking scenario. The variable timing behavior of PSR J1846−0258 is also understandable in the pulsar wind model.
INTRODUCTION
PSR B0540−69, known as the "Crab Twin", is a young radio pulsar with spin-down parameters $\nu \approx 19.727$ Hz and $\dot{\nu} \approx -1.86\times10^{-10}$ Hz s$^{-1}$ (Marshall et al. 2015) and braking index $n = 2.129 \pm 0.012$ (Ferdman et al. 2015). Its characteristic magnetic field is about $10^{13}$ G at the magnetic poles. Only two glitches, with relatively small changes in the spin-down parameters, were reported (Zhang et al. 2001; Cusumano et al. 2003; Livingstone et al. 2005; Ferdman et al. 2015). Recently, a persistent and unprecedented increase in the spin-down rate of PSR B0540−69 was observed: the relative increase in the spin-down rate is 36%, which is orders of magnitude larger than the changes induced by glitches (Marshall et al. 2015). Another pulsar, PSR J1846−0258, was also reported to show variable timing behaviors: a net decrease in the spin frequency ($\Delta\nu \approx -10^{-4}$ Hz) after the large glitch (Livingstone et al. 2010) and a lower braking index, $n = 2.19 \pm 0.03$ (Livingstone et al. 2011; Archibald et al. 2015b), than its persistent-state value $n = 2.65 \pm 0.01$ (Livingstone et al. 2006).
Notes to Table 1: (a) From Kramer et al. (2006); the on state has a larger spin-down rate than the off state. (b) From Marshall et al. (2015); "low" denotes the previous spin-down state and "high" the new spin-down state with a higher spin-down rate.

The spin-down behavior of pulsars can be described by the power law:
$\dot{\nu} = -C\nu^{n},$  (1)
where $\nu$ and $\dot{\nu}$ are respectively the spin frequency and frequency derivative, $C$ is usually taken as a constant, and $n$ is the braking index. The braking index is defined accordingly:
$n = \frac{\nu\ddot{\nu}}{\dot{\nu}^{2}},$  (2)
where $\ddot{\nu}$ is the second derivative of the spin frequency. The braking index reflects the pulsar braking mechanism (Tong 2015). In the magneto-dipole braking model, a pulsar rotates uniformly in vacuum and $\dot{\nu} \propto \nu^{3}$; the expected braking index is three, which is not consistent with observations (Lyne et al. 2015). Like intermittent pulsars (Kramer et al. 2006), PSR B0540−69 also has two different spin-down states. For the intermittent pulsar PSR B1931+24, people tried to measure its braking index during the on and off states (Young et al. 2013). Now this aim has been partially fulfilled by PSR B0540−69, which has not only different spin-down states but also a braking index measurement for the persistent ("low" spin-down rate) state, see Table 1. Therefore, it can put more constraints on pulsar spin-down models: any candidate model should explain both the braking index during the persistent state and the variable spin-down rate. Previously, the pulsar wind model (Xu & Qiao 2001) was employed to explain the spin-down behavior of intermittent pulsars and the braking index of the Crab pulsar (Kou & Tong 2015). In the following, it is shown that both the persistent-state braking index and the varying spin-down rate of PSR B0540−69 are understandable in the wind braking model: the varying spin-down rate is due to a variable particle wind, and the varying braking index of PSR J1846−0258 is caused by a changing particle density. The pulsar wind model and the calculations are presented in Section 2. Discussions and conclusions are presented in Sections 3 and 4, respectively.
VARIABLE TIMING BEHAVIOR OF PULSARS CAUSED BY A VARYING PARTICLE WIND
Description of the pulsar wind model
Pulsars are oblique rotators in general. The perpendicular and parallel components of the magnetic dipole moment may be related, respectively, to the magnetic dipole radiation and to particle acceleration (Xu & Qiao 2001; Kou & Tong 2015):
$\dot{E}_{d} = \frac{2\mu^{2}\Omega^{4}}{3c^{3}}\sin^{2}\alpha,$  (3)
$\dot{E}_{p} = 2\pi r_{p}^{2}\, c\, \rho_{e}\, \Delta\phi = \frac{2\mu^{2}\Omega^{4}}{3c^{3}}\, 3\kappa\frac{\Delta\phi}{\Delta\Phi}\cos^{2}\alpha,$  (4)
where $\mu = \frac{1}{2}BR^{3}$ is the magnetic dipole moment ($B$ is the polar magnetic field and $R$ is the neutron star radius), $c$ is the speed of light, $\alpha$ is the angle between the rotational axis and the magnetic axis (i.e., the inclination angle), $\Omega = 2\pi\nu$ is the angular velocity of the pulsar, $r_{p} = R(R\Omega/c)^{1/2}$ is the polar cap radius, $\rho_{e} = \kappa\rho_{GJ}$ is the primary particle density, where $\rho_{GJ} = \Omega B/(2\pi c)$ is the Goldreich-Julian charge density (Goldreich & Julian 1969) and $\kappa$ is the dimensionless particle density, $\Delta\phi$ is the corresponding acceleration potential of the acceleration region, and $\Delta\Phi = \mu\Omega^{2}/c^{2}$ is the maximum acceleration potential for a rotating dipole (Ruderman & Sutherland 1975). The pulsar rotational energy is consumed by the combined effect of magnetic dipole radiation and particle acceleration (Xu & Qiao 2001):
$-I\Omega\dot{\Omega} = \frac{2\mu^{2}\Omega^{4}}{3c^{3}}\eta,$  (5)
where $I = 10^{45}\,\mathrm{g\,cm^{2}}$ is the moment of inertia, and
$\eta = \sin^{2}\alpha + 3\kappa\frac{\Delta\phi}{\Delta\Phi}\cos^{2}\alpha.$  (6)
The spin-down behavior can then be expressed as:
$\dot{\Omega} = -\frac{2\mu^{2}\Omega^{3}}{3Ic^{3}}\eta.$  (7)
According to equation (2), the braking index in the pulsar wind model can be written as (Xu & Qiao 2001):
$n = 3 + \frac{\Omega}{\eta}\frac{d\eta}{d\Omega}.$  (8)
The exact expression of $\eta$ (equation (6)) depends on the particle acceleration potential. The vacuum gap model (Ruderman & Sutherland 1975) is taken as an example to show the calculation process, in which
$\eta = \sin^{2}\alpha + 4.96\times10^{2}\,\kappa\, B_{12}^{-8/7}\,\Omega^{-15/7}\cos^{2}\alpha,$  (9)
where $B_{12}$ is the magnetic field in units of $10^{12}$ G (Kou & Tong 2015). For the other acceleration models, the corresponding expressions of $\eta$ are listed in Table 2 of Kou & Tong (2015).
On the variable timing behavior of PSR B0540−69
A generic picture for the variable timing behavior of PSR B0540−69 and PSR J1846−0258 is as follows. A glitch may have occurred during the observations, as in PSR J1846−0258 (Livingstone et al. 2010); this small glitch may have been missed in the case of PSR B0540−69. The glitch may induce some magnetospheric activities, e.g., an outburst (Gavriil et al. 2008). The particle outflow will be stronger during this process, which will cause the pulsar to have a larger spin-down rate (Marshall et al. 2015). After some time, a larger spin-down rate will result in a net spin-down of the pulsar compared with previous timing solutions (Livingstone et al. 2010). The braking index will be smaller since the particle wind is stronger (Wang et al. 2012a). When the pulsar magnetosphere relaxes to its persistent state, if the particle density is still varying with time, $\kappa = \kappa(t)$, the braking index will differ from the persistent-state value while the change in the spin-down rate may be neglected (Livingstone et al. 2011; Archibald et al. 2015b; Kou & Tong 2015). From previous observations of PSR J1846−0258, its pulse profile shows no significant variations before, during, or after the outburst (Livingstone et al. 2010, 2011; Archibald et al. 2015b).
The observations of magnetar 1E 1048.1−5937 showed that the pulsed flux may not be a good indicator of magnetospheric activities (Archibald et al. 2015a); the variation of the total X-ray flux is needed. Therefore, the enhanced spin-down rate in PSR B0540−69 without changes in pulse profile and pulsed flux is not unusual (Marshall et al. 2015). The reason may be that the geometry of the pulsar is unchanged during the magnetospheric activities, which may result in a constant pulse profile and pulsed flux.
For PSR B0540−69, given the persistent-state spin-down parameters $\nu = 19.727$ Hz and $\dot\nu = -1.86\times10^{-10}\,\mathrm{Hz\,s^{-1}}$ (Marshall et al. 2015), and an inclination angle $\alpha = 50°$ (the best-fitted value given by Zhang & Cheng 2000), a magnetic field $B = 10^{13}$ G and particle density $\kappa = 834$ can be calculated (by solving equations (5) and (8)) corresponding to the observed braking index $n = 2.129 \pm 0.012$ (Ferdman et al. 2015). The calculated $\kappa = 834$ means that the particle density is 834 times the Goldreich-Julian charge density, which is consistent with previous conclusions (Kou & Tong 2015 and references therein).
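This inversion can be reproduced numerically. The sketch below is ours: it assumes the fiducial neutron-star parameters $R = 10^6$ cm and $I = 10^{45}\,\mathrm{g\,cm^2}$ in cgs units and solves equations (5) and (8)-(9) for $B$ and $\kappa$ from the persistent-state timing of PSR B0540−69.

```python
import numpy as np
from scipy.optimize import fsolve

c = 2.998e10          # speed of light [cm/s]
I = 1e45              # moment of inertia [g cm^2]
R = 1e6               # neutron-star radius [cm]

nu, nudot = 19.727, -1.86e-10   # persistent state (Marshall et al. 2015)
n_obs, alpha = 2.129, np.deg2rad(50.0)
Omega, Omegadot = 2*np.pi*nu, 2*np.pi*nudot

def eta(kappa, B):
    """Vacuum gap, eq. (9): sin^2(a) + 4.96e2 kappa B12^(-8/7) Omega^(-15/7) cos^2(a)."""
    return np.sin(alpha)**2 + 4.96e2*kappa*(B/1e12)**(-8/7)*Omega**(-15/7)*np.cos(alpha)**2

def equations(x):
    kappa, B = 10**x[0], 10**x[1]
    mu, e = 0.5*B*R**3, eta(kappa, B)
    f1 = -I*Omega*Omegadot - 2*mu**2*Omega**4/(3*c**3)*e        # eq. (5)
    f2 = 3 - (15/7)*(e - np.sin(alpha)**2)/e - n_obs            # eqs. (8)-(9)
    return [f1/(I*Omega*abs(Omegadot)), f2]                     # normalized residuals

logk, logB = fsolve(equations, [3.0, 13.0])
print(f"kappa = {10**logk:.0f}, B = {10**logB:.2e} G")          # ~834, ~1e13 G
```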
The spin-down rate of PSR B0540−69 has increased by 36% in the new spin-down state (Marshall et al. 2015). In the pulsar wind model, the variation of the spin-down rate is caused by a different particle density:

$\frac{\dot\Omega'}{\dot\Omega} = \frac{\eta(\kappa')}{\eta(\kappa)}, \qquad (10)$
where $\dot\Omega'$ and $\eta(\kappa')$ correspond to the new spin-down state. A larger particle density will result in a higher spin-down rate (equations (7) and (9)). Figures 1 and 2 show, respectively, the normalized spin-down rate $\dot\nu'/\dot\nu$ and the braking index as functions of the normalized particle density $\kappa'/\kappa$ for PSR B0540−69 in the vacuum gap model. As shown in Figure 1, the spin-down rate increases as the particle density increases: an increase in the particle density of 88% results in the 36% increase in the spin-down rate. As the particle density increases, the braking index decreases because the effect of the particle wind component grows (Figure 2). When the particle density increases to 1.88 times its previous value, the braking index decreases to 1.79, a relative change of 15.7%, and the corresponding frequency second derivative will be $\ddot\nu = 5.83\times10^{-21}\,\mathrm{Hz\,s^{-2}}$ (equation (2)).
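These numbers can be checked with a short script (ours): scan the density ratio in equation (10) for the value giving a 36% higher spin-down rate, then evaluate the new braking index from equation (8) and $\ddot\nu$ from equation (2).

```python
import numpy as np
from scipy.optimize import brentq

alpha = np.deg2rad(50.0)
Omega = 2*np.pi*19.727
sin2, cos2 = np.sin(alpha)**2, np.cos(alpha)**2
B12, kappa0 = 10.0, 834.0                       # persistent-state values from the text

wind = lambda k: 4.96e2*k*B12**(-8/7)*Omega**(-15/7)*cos2   # wind part of eq. (9)
eta = lambda k: sin2 + wind(k)

# eq. (10): find kappa'/kappa such that the spin-down rate increases by 36%
ratio = brentq(lambda x: eta(x*kappa0)/eta(kappa0) - 1.36, 1.0, 4.0)
n_new = 3 - (15/7)*wind(ratio*kappa0)/eta(ratio*kappa0)     # eq. (8)
nuddot = n_new*(1.36*1.86e-10)**2/19.727                    # eq. (2), new state
print(ratio, n_new, nuddot)   # ~1.88, ~1.79, ~5.8e-21 Hz s^-2
```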
Calculations in all the acceleration models have also been made, and the same conclusion is obtained: an increasing particle density results in an increase in the spin-down rate. For PSR B0540−69, corresponding to the observed 36% relative increase in the spin-down rate, the relative increase in the particle density in these models ranges from 72% to 154%, and the second frequency derivative ranges from $4.5\times10^{-21}\,\mathrm{Hz\,s^{-2}}$ to $6.15\times10^{-21}\,\mathrm{Hz\,s^{-2}}$. Braking indices in the new state in all these acceleration models are listed in Table 2. If the conversion efficiency of particle energy to X-ray luminosity is unchanged (Becker 2009), the total X-ray luminosity may also have increased by the same factor.
On the variable timing behavior of PSR J1846−0258
The spin-down parameters and persistent-state braking index of PSR J1846−0258 are, respectively, $\nu = 3.08$ Hz, $\dot\nu = -6.72\times10^{-11}\,\mathrm{Hz\,s^{-1}}$, and $n = 2.65 \pm 0.01$ (Table 1) (Livingstone et al. 2006). In the pulsar wind model, corresponding to the observed braking index, a magnetic field $B = 1.25\times10^{14}$ G and particle density $\kappa = 28$ are calculated in the vacuum gap model with an inclination angle of $45°$ (an inclination angle of $45°$ is chosen in the following calculations²). Such a magnetic field is comparable with the characteristic magnetic field $9.7\times10^{13}$ G at the poles and much larger than the magnetic fields of normal pulsars. It is then not surprising that magnetar activities can be observed in this source (Gavriil et al. 2008).
Variable timing behavior in the form of a net decrease in the spin frequency ($\Delta\nu \approx -10^{-4}$ Hz) was detected for PSR J1846−0258 after a large glitch (Livingstone et al. 2010). The corresponding relative increase in the spin-down rate is about 7% ($\Delta\dot\nu = -4.82\times10^{-12}\,\mathrm{Hz\,s^{-1}}$ during an epoch of 240 days when phase coherence is lost). Such an increase in the spin-down rate may also be caused by a larger particle density. As in the calculation for PSR B0540−69, in the vacuum gap case of the pulsar wind model a 44% increase in the particle density results in the 7% increase in the spin-down rate. The braking index will also be smaller during this enhanced spin-down epoch. However, a braking index measurement is only available long after the glitch, when the timing noise is greatly reduced. A lower braking index $2.19 \pm 0.03$ is detected after the glitch (Livingstone et al. 2011; Archibald et al. 2015b), which is significantly smaller than the persistent-state value $n = 2.65 \pm 0.01$ (Livingstone et al. 2006). In the pulsar wind model, this can be understood through a time-varying particle density $\kappa = \kappa(t)$ (similar to the Crab pulsar; Kou & Tong 2015):
$n = 3 + \frac{\Omega}{\eta}\frac{d\eta}{d\Omega} - \frac{\kappa}{\eta}\frac{d\eta}{d\kappa}\,\frac{\tau_c}{\tau_\kappa}, \qquad (11)$
where $\tau_c = -\frac{\Omega}{2\dot\Omega}$ is the characteristic age and $\tau_\kappa = \frac{\kappa}{2\dot\kappa}$ is the typical variation timescale of the particle density. An increasing particle density ($\tau_\kappa > 0$, or $\dot\kappa > 0$) leads to a smaller braking index. From Figure 3, the braking index is insensitive to $\tau_\kappa$ when it is much larger than 1000 yr, but decreases sharply when $\tau_\kappa$ is comparable to the characteristic age (about 700 yr). A changing rate of the particle density $\dot\kappa = 1.68\times10^{-9}\,\mathrm{s^{-1}}$ will result in a braking index of 2.19. During the epoch from MJD 55369 to MJD 56651, the particle density increased by 0.66%; the corresponding relative increase in the spin-down rate is only 0.1%, which is very small. Therefore, the changing particle density mainly results in a different braking index while hardly affecting the spin-down rate. This is the difference between the variable timing behaviors of PSR B0540−69 and PSR J1846−0258.
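A back-of-the-envelope check of equation (11) (ours): with $\eta = \sin^2\alpha + \eta_{wind}$ one has $\kappa\,d\eta/d\kappa = \eta_{wind}$, so the last term is $f\,\tau_c/\tau_\kappa$ with $f = \eta_{wind}/\eta = (3-n_0)\cdot 7/15$ from equations (8)-(9). The sketch below infers $\tau_\kappa$ and $\dot\kappa$ from the drop $n: 2.65 \to 2.19$ and reproduces the paper's $\dot\kappa \approx 1.7\times10^{-9}\,\mathrm{s^{-1}}$ up to rounding.

```python
# Infer tau_kappa and kappa_dot for PSR J1846-0258 from the braking-index drop (eq. 11).
nu, nudot = 3.08, 6.72e-11          # persistent state (Livingstone et al. 2006)
n0, n1, kappa = 2.65, 2.19, 28.0
yr = 3.156e7                        # seconds per year

tau_c = nu/(2*nudot)                # characteristic age, ~726 yr
f_wind = (3 - n0)*7/15              # eta_wind/eta implied by n0 in the vacuum gap model
tau_k = tau_c*f_wind/(n0 - n1)      # from Delta n = f_wind * tau_c/tau_kappa
print(tau_c/yr, tau_k/yr, kappa/(2*tau_k))   # ~726 yr, ~258 yr, ~1.7e-9 s^-1
```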
DISCUSSIONS
Observations of intermittent pulsars (Kramer et al. 2006) and measurements of braking indices (Lyne et al. 2015) help to distinguish between different pulsar spin-down mechanisms. The variable spin-down rate of PSR B0540−69, combined with its persistent-state braking index measurement, is even more powerful than intermittent pulsars in constraining different models. In the magneto-dipole radiation model, $\dot\nu \propto \mu^2\sin^2\alpha\,\nu^3/I$; in order to explain the braking indices ($n < 3$) of eight young pulsars, an increasing inclination angle (Lyne et al. 2013), an increasing magnetic field (Espinoza et al. 2011), or a decreasing moment of inertia (Yue et al. 2007) is expected. The variable spin-down rate might then be induced by a change of the inclination angle, the magnetic field, or the moment of inertia. Corresponding to the 36% relative increase in the spin-down rate, the required relative changes would be a 26% increase in inclination angle, a 17% increase in magnetic field, or a 26% decrease in moment of inertia. It seems impossible to achieve such huge changes on a short timescale (about 14 days; Marshall et al. 2015). A change in inclination angle is unlikely since the pulse profile did not change significantly (Marshall et al. 2015). An increase in magnetic field would require an increase of the magnetic energy by 36%, about $10^{42}$ erg; it is unlikely that there is such a huge amount of energy injection. A decrease of the moment of inertia would require a decrease of the neutron-star radius, during which a huge amount of gravitational energy would be released (Zhou et al. 2014), about $10^{52}$ erg, which is again unlikely.
Previous models for the spin-down behavior of intermittent pulsars (Beskin & Nokhrina 2007; Li et al. 2012) may also be applied to the variable spin-down rate of PSR B0540−69. However, the expected braking index is three in Beskin & Nokhrina (2007) and in the magnetohydrodynamical simulations (Li et al. 2012). Considering the effect of pulsar death or the evolution of the inclination angle, the braking index would be larger than three (Contopoulos & Spitkovsky 2006; Philippov et al. 2014). Therefore, these models should be modified before they can explain both the persistent-state braking index and the variable spin-down rate of PSR B0540−69.
There are several models designed for magnetar spin-down which may also be applied to the case of PSR B0540−69. The magnetar spin-down may be dominated by a particle wind (Harding et al. 1999). The calculations in Harding et al. (1999) are equivalent to assuming that each outflowing particle can attain the maximum acceleration potential of a rotating dipole (Tong et al. 2013). This wind braking model of magnetars was employed by Kramer et al. (2006) to explain the spin-down behavior of the first intermittent pulsar, PSR B1931+24: an additional particle outflow in the on state results in a larger spin-down rate. The rotational energy loss rate scales with the particle wind luminosity $L_p$ as $\propto L_p^{1/2}$ (Harding et al. 1999), so a particle wind luminosity 85% larger results in a spin-down rate 36% larger. The particle wind luminosity is related to the polar cap radius $R_{pc}$ and the magnetospheric opening radius $r_{open}$ as $L_p \propto R_{pc}^4 \propto r_{open}^{-2}$ (Harding et al. 1999); therefore, the magnetospheric opening radius would be 26% smaller. However, there are several problems when applying the wind braking model of magnetars to the case of normal pulsars: (1) In the wind braking model of magnetars, a strong particle wind is assumed and the effect of magnetic dipole radiation is neglected. This may be applicable to magnetars, whose emissions are dominated by magnetic energy output (Tong et al. 2013); however, in the case of normal pulsars (including intermittent pulsars) the effect of dipole radiation may not be negligible.
(2) In the case of a strong particle wind, the braking index is $n = 1$ (Tong et al. 2013), which is not consistent with the measured braking indices of pulsars (Lyne et al. 2015). (3) When applied to intermittent pulsars (Kramer et al. 2006; Young et al. 2013), pure magnetic dipole braking is assumed for the off state. This may be valid for intermittent pulsars, whose radio emissions stop in the off state. However, this assumption cannot be applied to the persistent spin-down state of PSR B0540−69, which still shows multiwavelength emission.
The twisted magnetosphere model of magnetars (Thompson et al. 2002) showed that the effective magnetic field will be larger for a larger twist. If the magnetosphere of PSR B0540−69 were twisted by a glitch, this would also result in a larger spin-down rate. However, the twisted magnetosphere relaxes back to the pure magnetic dipole case within several years (Beloborodov 2009), during which the neutron-star X-ray luminosity and spin-down rate should both decrease with time. For PSR B0540−69, the high spin-down state has lasted more than 3 years (Marshall et al. 2015), which is inconsistent with the expectation of the twisted magnetosphere model.
There are also external models for the braking index or for intermittent pulsar spin-down behavior, e.g., the fallback disk model (Liu et al. 2014 and references therein; Li et al. 2006). However, these external models are hard to verify or falsify. Furthermore, accretion would halt the magnetospheric activities; in the presence of accretion, it may be difficult to reconcile the model with the radio emissions of PSR B0540−69 and of other pulsars with measured braking indices.
Observations of the timing behavior and pulse profile of some pulsars indicate that the $\dot\nu$ modulation and the pulse-shape variation are correlated, e.g., in PSR B0910+16 (Perera et al. 2015a) and PSR B1859+07 (Perera et al. 2015b). The connection indicates that both phenomena are of magnetospheric origin (Lyne et al. 2010). The idea of a variable particle density in the magnetosphere has been successfully applied to explain the spin-down behavior and emission properties of intermittent pulsars (Kramer et al. 2006; Li et al. 2014). Mode-changing and nulling pulsars may be understood similarly, given the detection of a variation in the spin-down rate of PSR J1717−4054 (Young et al. 2015) and of a weak emission state, in addition to the bright and nulling states, in PSRs J1853+0505 and J1107−5907 (Young et al. 2014). For PSR B0540−69, an increase of the particle density in the magnetosphere will change the spin-down rate as well as the pulse profile. The giant radio pulses of PSR B0540−69 (Johnston et al. 2004) may be caused by a larger outflowing particle density. Besides, different emission models (core, cone, and patch) have been applied to explain variable mean pulse profiles (Lyne & Manchester 1988), and it is predicted that the corotation of the magnetosphere with the pulsar may also affect the emission properties (Wang et al. 2012b). Hence, a nonuniform distribution of particles between the core and conal components will also change the pulse shape. For PSR B0540−69, the pulse profile has a broad double peak which can be described by two Gaussians with a phase separation of 20% (de Plaa et al. 2003). We emphasize that: (i) if the particles are distributed uniformly, an increase in the outflowing particle density may increase the pulse intensity while the ratio of the two components stays constant; (ii) if the particles are distributed nonuniformly, both the pulse intensity and the ratio will change; (iii) the coherence of the radiating particles may affect the pulse shape as well. The pulse profile and total flux in the high spin-down state are needed for comparison with those in the low spin-down state.
CONCLUSIONS
The pulsar wind model is applied to explain the variable timing behavior of PSR B0540−69 and PSR J1846−0258. Both the persistent-state braking index and the variable spin-down rate of PSR B0540−69 are understandable: a larger particle density results in an increase in the spin-down rate and predicts a smaller braking index, and an increasing particle density leads to a lower braking index. For PSR B0540−69, in the vacuum gap model, the 36% increase in the spin-down rate corresponds to an 88% relative increase in particle density, and the braking index decreases to 1.79. The same conclusion is obtained for the other acceleration models. Since it has both a variable spin-down rate and a measured persistent-state braking index, PSR B0540−69 is very powerful in constraining different pulsar spin-down mechanisms. Future observations of the braking index in the new spin-down state will provide further tests of different spin-down models and of different particle acceleration models in the wind braking scenario. For PSR J1846−0258, the variable timing behavior of a net decrease in spin frequency ($\Delta\nu \approx -10^{-4}$ Hz) can be understood similarly, and a changing rate of the particle density $\dot\kappa = 1.68\times10^{-9}\,\mathrm{s^{-1}}$ results in the lower braking index 2.19.
Fig. 1: The normalized spin-down rate as a function of the normalized particle density for PSR B0540−69 in the vacuum gap model. The dashed line is the spin-down rate in the persistent state. The dotted line is the new spin-down rate, which is 1.36 times the persistent-state spin-down rate (Marshall et al. 2015).

Fig. 2: Braking index as a function of the normalized particle density for PSR B0540−69 in the vacuum gap model. The dashed line is the persistent-state braking index 2.13 (Ferdman et al. 2015). The dotted line is the braking index 1.79 predicted by the increased particle density.

Fig. 3: The braking index of PSR J1846−0258 as a function of $\tau_\kappa$ in the vacuum gap model. The dashed line is the persistent-state braking index 2.65 (Livingstone et al. 2006). The dotted line is the smaller braking index 2.19 measured after the glitch (Archibald et al. 2015b).

Table 1: Comparison of spin-down parameters of PSR B1931+24, PSR B0540−69, and PSR J1846−0258. The intermittent pulsar PSR B1931+24 has different spin-down states without any braking index information at present. PSR B0540−69 has both different spin-down states and a persistent-state braking index measurement. PSR J1846−0258 is reported to have a variation of its braking index.

Pulsar name | ν (Hz) | ν̇ (Hz s⁻¹) | braking index
B1931+24 (off)^a | 1.229 | −10.8 × 10⁻¹⁵ | ?
B1931+24 (on)^a | 1.229 | −16.3 × 10⁻¹⁵ | ?
B0540−69 (low)^b | 19.727 | −1.86 × 10⁻¹⁰ | 2.129^c
B0540−69 (high)^b | 19.701 | −2.53 × 10⁻¹⁰ | ?
J1846−0258 (persistent state)^d | 3.08 | −6.72 × 10⁻¹¹ | 2.65
J1846−0258 (after glitch)^e | 3.06 | −6.65 × 10⁻¹¹ | 2.19

(a): From. (c): Mean value of the braking index (Ferdman et al. 2015). (d): From Livingstone et al. (2006). (e): From Archibald et al. (2015b).
Table 2: Braking indices of PSR B0540−69 in the new state in all the acceleration models.

Acceleration model | VG(CR) | VG(ICS) | SCLF(II,CR) | SCLF(I) | OG | CAP | NTVG(CR) | NTVG(ICS)
Braking index | 1.79 | 1.87 | 1.90 | 1.79 | 1.38 | 1.83 | 1.90 | 1.86

Notes: See Table 2 of Kou & Tong (2015) for the meanings of the acceleration-model abbreviations. The minimum braking index of the SCLF(II, ICS) model is 2.4, which is larger than the persistent braking index of PSR B0540−69, n = 2.129. This means that the SCLF(II, ICS) model can be ruled out, or at least cannot act alone to accelerate particles in the magnetosphere of PSR B0540−69 (Wu et al. 2003; Li et al. 2014).
Assuming all the rotational energy is consumed by magneto-dipole radiation in vacuum, $B(\mathrm{pole}) = 6.4\times10^{19}\sqrt{P\dot P}\,$G.
There is no observational or best fitted inclination angle given.
ACKNOWLEDGMENTS

The authors would like to thank R. X. Xu for discussions. H. Tong is supported by the West Light Foundation of CAS (LHXZ201201), the 973 Program (2015CB857100), and the Qing Cu Hui of CAS.
REFERENCES

Archibald R. F., Kaspi V. M., Ng C. Y., et al., 2015a, ApJ, 800, 33
Archibald R. F., Kaspi V. M., Beardmore A. P., et al., 2015b, arXiv:1506.06104
Becker W., 2009, Neutron Stars and Pulsars, ASSL, 357, 91
Beloborodov A. M., 2009, ApJ, 703, 1044
Beskin V. S., Nokhrina E. E., 2007, Ap&SS, 308, 569
Contopoulos I., Spitkovsky A., 2006, ApJ, 643, 1139
Cusumano G., Massaro E., Mineo T., 2003, A&A, 402, 647
de Plaa J., Kuiper L., Hermsen W., 2003, A&A, 400, 1013
Espinoza C. M., Lyne A. G., Kramer M., et al., 2011, ApJ, 741, L13
Ferdman R. D., Archibald R. F., Kaspi V. M., 2015, arXiv:1506.00182
Gavriil F. P., Gonzalez M. E., Gotthelf E. V., et al., 2008, Science, 319, 1802
Goldreich P., Julian W. H., 1969, ApJ, 157, 869
Harding A. K., Contopoulos I., Kazanas D., 1999, ApJ, 525, L125
Johnston S., Romani R. W., Marshall F. E., et al., 2004, MNRAS, 355, 31
Kou F. F., Tong H., 2015, MNRAS, 450, 1990
Kramer M., Lyne A. G., O'Brien J. T., et al., 2006, Science, 312, 549
Li J., Spitkovsky A., Tchekhovskoy A., 2012, ApJL, 746, L24
Li L., Tong H., Yan W. M., et al., 2014, ApJ, 788, 16
Li X. D., 2006, ApJ, 646, L139
Liu X. W., Xu R. X., Qiao G. J., et al., 2014, RAA, 14, 85
Livingstone M. A., Kaspi V. M., Gavriil F. P., 2005, ApJ, 633, 1095
Livingstone M. A., Kaspi V. M., Gotthelf E. V., et al., 2006, ApJ, 647, 1286
Livingstone M. A., Kaspi V. M., Gotthelf E. V., 2010, ApJ, 710, 1710
Livingstone M. A., Ng C.-Y., Kaspi V. M., et al., 2011, ApJ, 730, 66
Lyne A. G., Manchester R. N., 1988, MNRAS, 234, 477
Lyne A., Hobbs G., Kramer M., Stairs I., Stappers B., 2010, Science, 329, 408
Lyne A. G., Smith F. G., Weltevrede P., et al., 2013, Science, 342, 598
Lyne A. G., Jordan C. A., Smith F. G., et al., 2015, MNRAS, 446, 857
Marshall F. E., Guillemot L., Harding A. K., et al., 2015, ApJ, 807, L27
Perera B. B. P., Stappers B. W., Weltevrede P., et al., 2015a, MNRAS, 446, 1380
Perera B. B. P., Stappers B. W., Weltevrede P., et al., 2015b, arXiv:1510.04484
Philippov A., Tchekhovskoy A., Li J. G., 2014, MNRAS, 441, 1879
Ruderman M. A., Sutherland P. G., 1975, ApJ, 196, 51
Thompson C., Lyutikov M., Kulkarni S. R., 2002, ApJ, 574, 332
Tong H., 2015, arXiv:1506.04605
Tong H., Xu R. X., Song L. M., et al., 2013, ApJ, 768, 144
Wang J., Wang N., Tong H., et al., 2012a, Ap&SS, 340, 307
Wang P. F., Wang C., Han J. L., 2012b, MNRAS, 423, 2464
Wu F., Xu R. X., Gil J., 2003, A&A, 409, 641
Xu R. X., Qiao G. J., 2001, ApJ, 561, L85
Young N. J., Stappers B. W., Lyne A. G., et al., 2013, MNRAS, 429, 2569
Young N. J., Weltevrede P., Stappers B. W., et al., 2014, MNRAS, 442, 2519
Young N. J., Weltevrede P., Stappers B. W., et al., 2015, MNRAS, 449, 1495
Yue Y. L., Xu R. X., Zhu W. W., 2007, Advances in Space Research, 40, 1491
Zhang L., Cheng K. S., 2000, A&A, 363, 575
Zhang W., Marshall F. E., Gotthelf E. V., et al., 2001, ApJ, 554, L177
Zhou E. P., Lu J. G., Tong H., et al., 2014, MNRAS, 443, 2705
| []
|
[
"Perturbative Analysis of the Seiberg-Witten Map",
"Perturbative Analysis of the Seiberg-Witten Map"
]
| [
"A A Bichl ",
"J M Grimstrup ",
"L Popp ",
"M Schweda [email protected]@doppler.thp.univie.ac.at ",
"R Wulkenhaar \nInstitut für Theoretische Physik\nUniversität Wien\nBoltzmanngasse 5A-1090WienAustria\n",
"\nInstitut für Theoretische Physik\nTechnische Universität Wien\nWiedner Hauptstraße 8-10A-1040WienAustria\n"
]
| [
"Institut für Theoretische Physik\nUniversität Wien\nBoltzmanngasse 5A-1090WienAustria",
"Institut für Theoretische Physik\nTechnische Universität Wien\nWiedner Hauptstraße 8-10A-1040WienAustria"
]
| []
| We investigate the quantization of the θ-expanded noncommutative U(1) Yang-Mills action, obtained via the Seiberg-Witten map. As expected we find non-renormalizable terms. The one-loop propagator corrections are gauge independent, and lead us to a unique extention of the noncommutative classical action. We interpret our results as a requirement that also the trace in noncommutative field theory should be deformed. | 10.1142/s0217751x02010649 | [
"https://arxiv.org/pdf/hep-th/0102044v1.pdf"
]
| 16,789,480 | hep-th/0102044 | bf4be0fcc71569ca48a4c5508e9fe41ba0bfb353 |
Perturbative Analysis of the Seiberg-Witten Map
arXiv:hep-th/0102044v1 8 Feb 2001
A A Bichl
J M Grimstrup
L Popp
M Schweda [email protected]@doppler.thp.univie.ac.at
R Wulkenhaar
Institut für Theoretische Physik
Universität Wien
Boltzmanngasse 5A-1090WienAustria
Institut für Theoretische Physik
Technische Universität Wien
Wiedner Hauptstraße 8-10A-1040WienAustria
We investigate the quantization of the θ-expanded noncommutative U(1) Yang-Mills action, obtained via the Seiberg-Witten map. As expected we find non-renormalizable terms. The one-loop propagator corrections are gauge independent, and lead us to a unique extention of the noncommutative classical action. We interpret our results as a requirement that also the trace in noncommutative field theory should be deformed.
Noncommutative Yang-Mills Theory and the Seiberg-Witten Map
The Seiberg-Witten map was first discovered in the context of string theory, where it emerged from a 2D-σ-model regularized in different ways [1]. It was argued by Seiberg and Witten that the ordinary gauge theory should be gauge-equivalent to a noncommutative Yang-Mills (NCYM) field theory, which, in a certain limit, acts as an effective theory of open strings. Furthermore, they showed that the Seiberg-Witten map could be interpreted as an infinitesimal shift in the noncommutative parameter θ, and thus as an expansion of the noncommutative gauge field in θ.
Whereas in open string theory the (noncommutative) gauge fields are taken to transform in a certain matrix representation of a U(N) gauge group, the aim of a second approach to the subject [2,3] was to realize a general, non-Abelian gauge group, preferably SU(N). Using covariant coordinates the NCYM theory emerges as the gauge theory of a certain noncommutative algebra [2]. However, in this scenario, due to the choice of a general, non-Abelian gauge group, one is forced to consider enveloping algebra-valued fields, which leads to infinitely many degrees of freedom [3]. The solution to this problem was shown to be the Seiberg-Witten map, which in this context appears as an expansion of the noncommutative gauge field in both θ and the generators of the gauge group. Application of the Seiberg-Witten map yields a theory with finitely many degrees of freedom. However, since the Seiberg-Witten map is infinitely nonlinear, the resulting theory has infinitely many interactions at arbitrary high orders in the gauge field. Furthermore, since the noncommutative parameter θ, which has dimension −2, appears as a coupling constant, the model is non-renormalizable in the traditional sense. In the following we will refer to this model as the θ-expanded NCYM.
The aim of this paper is to study the quantization of the θ-expanded NCYM. We choose to consider the case of an Abelian, i.e. U(1), gauge group: noncommutative Maxwell theory.
The question of quantization of apparently non-renormalizable theories has been addressed in the literature, see e.g. [4] and citations therein. As a starting point, one could speculate whether a power-counting non-renormalizable theory involving infinitely many interactions at arbitrary order in the field, as is the case in the θ-expanded NCYM theory, could indeed be renormalizable in the sense that all divergent graphs may be absorbed in the classical action. However, we find that this is not the case for the θ-expanded NCYM. The self-energy produces terms which cannot be renormalized, thus forcing us to add extra, gauge invariant, terms quadratic in θ to the classical action of NCYM theory, yielding an extended NCYM theory. We regard this extension as the lowest order of an infinite deformation series of the scalar product. Furthermore, a consequence of the extended classical action is that the propagation of light is altered. One may speculate whether this could lead to observable effects in e.g. cosmology.
One may object that an expansion in θ is not adequate, for the following two reasons. First, taking all orders of θ into account, it was shown in the context of string theory that θ serves as a regulator for non-planar graphs [5], rendering otherwise UV-divergent graphs finite. The resulting radiative correction, however, is divergent for θ → 0, suggesting that the effective action is not analytic in θ [6]. Second, one could argue that renormalizability dictates that all orders of θ be taken into account: whereas e.g. the noncommutative φ⁴-theory expanded to nth order in θ is obviously (perturbatively in the coupling constant) non-renormalizable, the theory is two-loop renormalizable [7] when all orders of θ are taken into account. However, if one insists on treating a general gauge group, the expansion in θ is the only known method of obtaining a quantizable action. In fact, one may ask the question of how noncommutative (gauge) theories should be correctly quantized.
The paper is organized as follows. In section 2 we give the classical action expanded to first order in θ. The gauge fixing is performed in section 3, where we argue that two fundamentally different ways of introducing ghosts to the theory, via a linear and a non-linear gauge, may be applied. In section 4 we give the relevant Feynman rules and calculate the self-energy to second order in θ. The extended NCYM theory is given in section 5, and in section 6 we present our summary and discussion.
θ-expanded NCYM
We consider the coordinates of a (flat) Minkowski space as self-adjoint operators on a Hilbert space with the following algebra
$[x^\mu, x^\nu] = i\theta^{\mu\nu}, \qquad (1)$
where θ µν is real and antisymmetric. A field theory in this context is equivalent to a field theory on a usual (commutative) flat manifold with the product substituted by the non-local ⋆-product 1
$(f \star g)(x) = \int \frac{d^4k}{(2\pi)^4}\,\frac{d^4p}{(2\pi)^4}\; e^{-i(k_\mu+p_\mu)x^\mu}\, e^{-\frac{i}{2}\theta^{\mu\nu}k_\mu p_\nu}\,\tilde f(k)\,\tilde g(p), \qquad (2)$
where $f$ and $g$ are functions on the manifold. A U(1) gauge field $\hat A_\mu = \hat A^*_\mu$ (Hermitian) gives rise to the noncommutative Yang-Mills action²
$\hat\Sigma_{cl} = -\frac14\int d^4x\,\hat F_{\mu\nu}\star\hat F^{\mu\nu} = -\frac14\int d^4x\,\hat F_{\mu\nu}\hat F^{\mu\nu}, \qquad (3)$

with

$\hat F_{\mu\nu} = \partial_\mu\hat A_\nu - \partial_\nu\hat A_\mu - i\hat A_\mu\star\hat A_\nu + i\hat A_\nu\star\hat A_\mu. \qquad (4)$
The action (3) is invariant under the noncommutative gauge transformation
$\hat\delta_{\hat\lambda}\hat A_\mu = \partial_\mu\hat\lambda - i\hat A_\mu\star\hat\lambda + i\hat\lambda\star\hat A_\mu \equiv \hat D_\mu\hat\lambda, \qquad (5)$
with infinitesimal $\hat\lambda = \hat\lambda^*$. It was shown by Seiberg and Witten [1] that an expansion in θ leads to a map between the noncommutative gauge field $\hat A_\mu$ and the commutative gauge field $A_\mu$, as well as between their respective gauge parameters $\hat\lambda$ and $\lambda$, known as the Seiberg-Witten map:
$\hat A_\mu(A) = A_\mu - \frac12\theta^{\rho\sigma}A_\rho\big(\partial_\sigma A_\mu + F_{\sigma\mu}\big) + O(\theta^2), \qquad (6)$

$\hat\lambda(\lambda, A) = \lambda - \frac12\theta^{\rho\sigma}A_\rho\,\partial_\sigma\lambda + O(\theta^2), \qquad (7)$
¹ We use the following Fourier conventions:
$f(x) = \int\frac{d^4p}{(2\pi)^4}\,e^{-ip_\mu x^\mu}\tilde f(p), \qquad \tilde f(p) = \int d^4x\,e^{ip_\mu x^\mu}f(x).$
² A coupling constant could be added; however, in the absence of θ-independent interactions this coupling constant is not renormalized and may be absorbed in a reparametrization.
where the Abelian field strength is given by
$F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu. \qquad (8)$
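As an illustration (ours, not from the paper), the sympy sketch below verifies to first order in θ that the maps (6)-(7) intertwine the commutative gauge transformation (10) with the noncommutative one (5); for a U(1) field, $-i[\hat A_\mu, \hat\lambda]_\star = \theta^{\rho\sigma}\partial_\rho\hat A_\mu\,\partial_\sigma\hat\lambda + O(\theta^2)$. The two-dimensional restriction and all symbol names are our own choices.

```python
import sympy as sp

x, y, th, t = sp.symbols('x y theta t', real=True)
X = (x, y)
A = [sp.Function('A0')(x, y), sp.Function('A1')(x, y)]   # commutative gauge field
lam = sp.Function('lam')(x, y)                           # infinitesimal gauge parameter
theta = [[0, th], [-th, 0]]                              # theta^{rho sigma} (2d toy model)
d = lambda f, i: sp.diff(f, X[i])

def A_hat(Af, mu):
    """Seiberg-Witten map (6); note d_s A_mu + F_{s mu} = 2 d_s A_mu - d_mu A_s."""
    return Af[mu] - sp.Rational(1, 2)*sum(
        theta[r][s]*Af[r]*(2*d(Af[mu], s) - d(Af[s], mu))
        for r in range(2) for s in range(2))

lam_hat = lam - sp.Rational(1, 2)*sum(                   # map (7)
    theta[r][s]*A[r]*d(lam, s) for r in range(2) for s in range(2))

A_shifted = [A[i] + t*d(lam, i) for i in range(2)]       # ordinary transformation (10)
for mu in range(2):
    lhs = sp.diff(A_hat(A_shifted, mu), t).subs(t, 0)    # linearized in lam
    rhs = d(lam_hat, mu) + sum(theta[r][s]*d(A[mu], r)*d(lam, s)   # hat-delta (5), O(theta)
                               for r in range(2) for s in range(2))
    print(mu, sp.simplify(sp.expand(lhs - rhs)))         # -> 0 for both components
```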
Insertion of (6) into (3) leads to the action
$\Sigma_{cl} = \int d^4x\left(-\frac14 F_{\mu\nu}F^{\mu\nu} + \frac18\theta^{\alpha\beta}F_{\alpha\beta}F_{\mu\nu}F^{\mu\nu} - \frac12\theta^{\alpha\beta}F_{\mu\alpha}F_{\nu\beta}F^{\mu\nu}\right) + O(\theta^2), \qquad (9)$
which is invariant under the usual Abelian gauge transformations
$\delta_\lambda A_\mu = \partial_\mu\lambda. \qquad (10)$
The action (9) in its full form, involving all orders of θ, has infinitely many interactions at arbitrarily high order in the gauge field. Furthermore, since θ has dimension −2, the theory is power-counting non-renormalizable in the traditional sense.
Gauge Fixing
In order to quantize a gauge theory within the BRST scheme, the gauge symmetry is replaced by the nilpotent BRST symmetry [8, 9]. However, above we have two gauge symmetries, $\hat\delta_{\hat\lambda}$ and $\delta_\lambda$, corresponding to the actions (3) and (9), respectively. Thus, there appear to be at least two fundamentally different ways of introducing ghosts into the theory: before and after performing the Seiberg-Witten map. Let us first consider the gauge transformation (10) as the "fundamental" one and introduce ghosts into the action (9). We write
$sA_\mu = \partial_\mu c, \qquad sc = 0, \qquad (11)$
where s is the BRST-operator and c the anti-commuting Faddeev-Popov ghost field. Within the quantization procedure a BRST-invariant gauge-fixing may be introduced in the following manner
$\Sigma^{(i)}_{gf} = \int d^4x\left(s(\bar c\,\partial^\mu A_\mu) + \frac{\alpha}{2}B^2\right), \qquad (12)$
with
$s\bar c = B, \qquad sB = 0. \qquad (13)$
Here $\bar c$ is the anti-ghost field and $B$ the Nakanishi-Lautrup (multiplier) field. The total action is now
$\Sigma^{(i)}_{tot} = \Sigma_{cl} + \Sigma^{(i)}_{gf}. \qquad (14)$
In the following we will refer to this choice of gauge-fixing as the linear gauge.
Let us now consider the second option of introducing ghosts into the theory. We treat the gauge transformation (5) as the source of ghosts, thereby adding a gauge-fixing term to the action (3). We write

$\hat s\hat A_\mu = \hat D_\mu\hat c, \qquad \hat s\hat c = i\hat c\star\hat c, \qquad (15)$
where $\hat s$ is the BRST operator emerging from the gauge symmetry (5) and $\hat c$ the corresponding ghost field. The gauge-fixing term reads
$\hat\Sigma_{gf} = \int d^4x\left(\hat s\big(\hat{\bar c}\star\partial^\mu\hat A_\mu\big) + \frac{\alpha}{2}\hat B\star\hat B\right), \qquad (16)$
with $\hat s\hat{\bar c} = \hat B$, $\hat s\hat B = 0$. (17)
Here $\hat{\bar c}$ and $\hat B$ are the anti-ghost and multiplier fields. The total action is now
$\hat\Sigma_{tot} = \hat\Sigma_{cl} + \hat\Sigma_{gf}. \qquad (18)$
In order to apply the Seiberg-Witten map to (18) we need the Seiberg-Witten maps of the ghost and multiplier fields. These are easily found by substituting $\lambda$ with $c$ and $\hat\lambda$ with $\hat c$ in (7). Notice that only the gauge field and the ghost have an expansion in θ:
$\hat c(c) = c - \frac12\theta^{\nu\mu}A_\nu\partial_\mu c + O(\theta^2), \qquad (19)$
$\hat{\bar c} = \bar c, \qquad (20)$
$\hat B = B, \qquad (21)$
where $c$, $\bar c$ and $B$ are the ordinary ghost, anti-ghost and multiplier fields, respectively. Inserting (6) and (19)-(21) into (18), one finds, to first order in θ, the action
$\Sigma^{(ii)} = \Sigma_{cl} + \Sigma^{(ii)}_{gf}, \qquad (22)$

with

$\Sigma^{(ii)}_{gf} = \int d^4x\left(B\partial^\mu A_\mu - \bar c\,\partial^\mu\partial_\mu c - \theta^{\alpha\beta}\Big(\partial^\mu\bar c\,\partial_\alpha c\,\partial_\beta A_\mu - \frac12\partial^\mu\partial_\mu\bar c\,A_\alpha\partial_\beta c - \frac12\partial^\mu B\,A_\alpha\big(\partial_\beta A_\mu + F_{\beta\mu}\big)\Big)\right), \qquad (23)$
which is invariant under the BRST-transformations (11) and (13). Notice that (22) represents a nonlinear gauge. In the following we will refer to this choice of gauge-fixing as the nonlinear gauge.
Both gauge-fixed actions (14) and (22) are invariant under Abelian BRST-transformations and satisfy the Slavnov-Taylor identity
$\mathcal{S}\big(\Sigma^{(i,ii)}\big) = 0, \qquad (24)$
where the Slavnov-Taylor operator is given, for any functional F , by
$\mathcal{S}(F) = \int d^4x\left(\partial_\mu c\,\frac{\delta F}{\delta A_\mu} + B\,\frac{\delta F}{\delta\bar c}\right). \qquad (25)$
Photon Self-Energy
In order to check the one-loop UV and IR behaviour of the actions (14) and (22), one needs the corresponding Feynman rules. For the various propagators of the models only the bilinear part of the full actions is relevant. However, this is independent of θ and thus the propagators are identical in both cases
$\tilde G^{AA}_{\mu\nu}(p) = \frac{1}{p^2+i\epsilon}\left(g_{\mu\nu} - (1-\alpha)\frac{p_\mu p_\nu}{p^2+i\epsilon}\right), \qquad (26)$
$\tilde G^{AB}_\mu(p) = \frac{-ip_\mu}{p^2+i\epsilon}, \qquad (27)$
$\tilde G^{\bar c c}(p) = \frac{-1}{p^2+i\epsilon}, \qquad (28)$
with $p + q = 0$. The action (9) represents free Maxwell theory in the limit θ → 0. To first order in θ, the photon vertex reads:
$\tilde V^{\mu\nu\rho}_{AAA}(p,q,r) = -i\theta^{\alpha\beta}\,\Omega_{\alpha\beta}{}^{\mu\nu\rho}(p,q,r), \qquad (29)$
with

$\Omega_{\alpha\beta}{}^{\mu\nu\rho}(p,q,r) = g_\alpha{}^\mu g_\beta{}^\nu\big[(pr)q^\rho - (qr)p^\rho\big] + g_\alpha{}^\nu g_\beta{}^\rho\big[(qp)r^\mu - (rp)q^\mu\big] + g_\alpha{}^\rho g_\beta{}^\mu\big[(rq)p^\nu - (pq)r^\nu\big]$
$\quad + g_\alpha{}^\mu\big[(g^{\nu\rho}(rq) - r^\nu q^\rho)p_\beta - (g^{\nu\rho}(pq) - p^\nu q^\rho)r_\beta - (g^{\nu\rho}(rp) - r^\nu p^\rho)q_\beta\big]$
$\quad + g_\alpha{}^\nu\big[(g^{\rho\mu}(pr) - p^\rho r^\mu)q_\beta - (g^{\rho\mu}(qr) - q^\rho r^\mu)p_\beta - (g^{\rho\mu}(pq) - p^\rho q^\mu)r_\beta\big]$
$\quad + g_\alpha{}^\rho\big[(g^{\mu\nu}(qp) - q^\mu p^\nu)r_\beta - (g^{\mu\nu}(rp) - r^\mu p^\nu)q_\beta - (g^{\mu\nu}(qr) - q^\mu r^\nu)p_\beta\big]$
$\quad - g^{\mu\nu}\big(p^\rho q_\alpha r_\beta + q^\rho p_\alpha r_\beta\big) - g^{\nu\rho}\big(q^\mu r_\alpha p_\beta + r^\mu q_\alpha p_\beta\big) - g^{\rho\mu}\big(r^\nu p_\alpha q_\beta + p^\nu r_\alpha q_\beta\big), \qquad (30)$
and p+q+r = 0. In the linear gauge the ghost is Abelian and does not couple to the gauge-field. In the nonlinear gauge the action (22) leads to the following interactions:
$\tilde V^\mu_{A\bar c c}(p,q,r) = -i\theta^{\alpha\beta}\left(\frac12 q^2 r_\beta\,g^\mu{}_\alpha + p_\alpha r_\beta\,q^\mu\right),$
$\tilde V^{\mu\nu}_{AAB}(p,q,r) = \theta^{\alpha\beta}\left(-\frac12 g^\mu{}_\alpha g^\nu{}_\beta\,(pr) + \frac12 g^\mu{}_\alpha g^\nu{}_\beta\,(qr) - g^\mu{}_\alpha\,q_\beta r^\nu - g^\nu{}_\alpha\,p_\beta r^\mu\right), \qquad (31)$
with $p + q + r = 0$. As usual, for each independent loop momentum $k_i$ we have the integration operator $i\int\frac{d^4k_i}{(2\pi)^4}$, and momentum conservation for the external momenta $p_i$ leads to a factor $(2\pi)^4\delta(\Sigma p_i)$. Each closed ghost line contributes a factor $-1$.
Before doing the explicit one-loop analysis, we want to stress that the Ward identity (24) implies that the radiative corrections to the photon propagator must be transversal:
$p_\mu\Pi^{\mu\nu}(p) = 0. \qquad (32)$
Furthermore, (32) implies that the radiative corrections up to first order in θ must vanish (there are no θ-independent interactions):

$\Pi^{\mu\nu}(p) = 0 \quad (\text{order } \theta). \qquad (33)$
The radiative corrections up to second order in θ are restricted in form by
$\Pi^{\mu\nu}(p) = \big(g^{\mu\nu}p^2 - p^\mu p^\nu\big)\,\Pi^{(i)}(p) + \tilde p^\mu\tilde p^\nu\,\Pi^{(ii)}(p) + \big(p^\mu\tilde{\tilde p}^\nu + \tilde{\tilde p}^\mu p^\nu + g^{\mu\nu}\tilde p^2 + p^2\theta^\mu{}_\sigma\theta^{\nu\sigma}\big)\,\Pi^{(iii)}(p) \quad (\text{order } \theta^2). \qquad (34)$
where $\tilde p^\mu = \theta^{\mu\nu}p_\nu$ and $\tilde{\tilde p}^\mu = \theta^{\mu\nu}\theta_{\nu\rho}p^\rho$. In (34) we used that $\tilde p$ is orthogonal to $p$ and $\tilde{\tilde p}$, and that $p$ and $\tilde{\tilde p}$ are independent. Notice that due to the negative dimension of θ, (34) indicates the presence of (divergent) Feynman graphs with 6 powers of $p$ in $\Pi^{\mu\nu}(p)$. Since the bilinear part of the action (3) is the ordinary one of Maxwell theory, such a term will be non-renormalizable.
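A quick numerical sanity check (ours) that the third tensor structure in (34) is indeed transversal for an arbitrary antisymmetric θ; the mostly-minus metric and random test data are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
g = np.diag([1.0, -1.0, -1.0, -1.0])       # Minkowski metric, same matrix for both index positions
Th = rng.normal(size=(4, 4)); Th -= Th.T   # random antisymmetric theta^{mu nu}
p = rng.normal(size=4)                     # p^mu
p_lo = g @ p                               # p_mu

pt = Th @ p_lo                             # ptilde^mu = theta^{mu nu} p_nu
ptt = Th @ g @ pt                          # double tilde: theta^{mu nu} theta_{nu rho} p^rho
pt2 = pt @ g @ pt                          # ptilde . ptilde
M = Th @ g @ Th.T                          # theta^{mu}_{sigma} theta^{nu sigma}

T = np.outer(p, ptt) + np.outer(ptt, p) + g*pt2 + (p @ g @ p)*M
print(np.abs(p_lo @ T).max())              # ~1e-15: p_mu T^{mu nu} = 0
```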
In the following we explicitly perform the one-loop analysis of the photon self-energy. Since all vertices are linear in θ, the first contribution is proportional to θ². In the linear gauge the only contributing graph is shown in Fig. 1a. In the nonlinear gauge we have interacting ghost and multiplier fields and thus find contributions from all three graphs shown in Fig. 1. In fact, we should also consider the tadpole graph emerging from the Seiberg-Witten map to second order in θ via a 4-legged photon interaction; however, the tadpole graph is identically zero because there is no mass in the theory. Using the above Feynman rules, one calculates the following expression for the photon self-energy with an internal photon line:
$\Pi^{(a),\mu\nu}(p) = 2i\int\frac{d^4k}{(2\pi)^4}\,\tilde V^{\mu\rho\sigma}_{AAA}(p,-k_+,-k_-)\,\tilde V^{\nu\kappa\lambda}_{AAA}(-p,k_-,k_+)\,\tilde G^{AA}_{\rho\lambda}(-k_+)\,\tilde G^{AA}_{\kappa\sigma}(k_-), \qquad (35)$
where $k_\pm = \frac{p}{2} \pm k$. The relevant integrals are evaluated in the Appendix. We find
$\Pi^{(a),\mu\nu}(p) = \frac{1}{(4\pi)^2\,\varepsilon}\Big(-\frac18(p^2)^2\theta^2\big(g^{\mu\nu}p^2 - p^\mu p^\nu\big) + \frac1{10}\tilde p^2 p^2\big(g^{\mu\nu}p^2 - p^\mu p^\nu\big) + \frac1{30}(p^2)^2\,\tilde p^\mu\tilde p^\nu + \frac14(p^2)^2\big(p^\mu\tilde{\tilde p}^\nu + \tilde{\tilde p}^\mu p^\nu + g^{\mu\nu}\tilde p^2 + p^2\theta^\mu{}_\sigma\theta^{\nu\sigma}\big)\Big) + O(1). \qquad (36)$
Notice that (36) satisfies the transversality condition (34). For the graph (b) the integral reads
$\Pi^{(b),\mu\nu}(p) = -i\int\frac{d^4k}{(2\pi)^4}\,\tilde V^\mu_{A\bar cc}(p,-k_+,-k_-)\,\tilde V^\nu_{A\bar cc}(-p,k_-,k_+)\,\tilde G^{\bar cc}(-k_+)\,\tilde G^{\bar cc}(k_-). \qquad (37)$
We find
$\Pi^{(b),\mu\nu}(p) = -\frac{1}{60(4\pi)^2\,\varepsilon}\Big(\frac14(p^2)^2\tilde p^2 g^{\mu\nu} + p^2\tilde p^2\,p^\mu p^\nu + \frac12(p^2)^2\,\tilde p^\mu\tilde p^\nu\Big) + O(1). \qquad (38)$
For the graph (c) we write
$\Pi^{(c),\mu\nu}(p) = i\int\frac{d^4k}{(2\pi)^4}\,\tilde V^{\mu\rho}_{AAB}(p,-k_+,-k_-)\,\tilde V^{\nu\sigma}_{AAB}(-p,k_-,k_+)\,\tilde G^{AB}_\rho(-k_+)\,\tilde G^{AB}_\sigma(k_-), \qquad (39)$
and find
$\Pi^{(c),\mu\nu}(p) = \frac{1}{60(4\pi)^2\,\varepsilon}\Big(\frac14(p^2)^2\tilde p^2 g^{\mu\nu} + p^2\tilde p^2\,p^\mu p^\nu + \frac12(p^2)^2\,\tilde p^\mu\tilde p^\nu\Big) + O(1). \qquad (40)$
One sees that the above divergent contributions from the ghost graph (b) and the multiplier-photon graph (c) cancel identically. This means that the choice of linear or nonlinear gauge leaves the renormalization invariant. Furthermore, we stress that the radiative correction (36) is independent of α, which shows that our result is gauge-independent. The reason for this is that the vertex (29) is transversal: $p_\mu\tilde V^{\mu\nu\rho}_{AAA}(p,q,r) = 0$.
Higher Derivative Action
In the previous section we have shown that the radiative corrections to the photon self-energy produce divergent terms involving two orders of θ and six orders of p. These terms cannot be absorbed into counterterms to the initial action (9), which is thus perturbatively non-renormalizable. We interpret this problem as a hint to extend the classical action. The extension of (9) must be invariant under Lorentz transformations and the Abelian gauge transformations (10). There are many possibilities to write down the same terms. A generalization to non-Abelian models suggests, however, to use the field strengths $F_{\mu\nu}$ and $\tilde F_{\mu\nu} := \theta^\alpha{}_\mu F_{\alpha\nu}$ as well as their derivatives with respect to the operators $\partial_\mu$ and $\tilde\partial_\mu := \theta^\alpha{}_\mu\partial_\alpha$ as building blocks. Thus we have the following tensors of dimension 2 at our disposal:
$F_{\mu\nu}, \qquad \tilde F'_{\mu\nu} := \partial_\mu\tilde\partial^\alpha F_{\alpha\nu}, \qquad \tilde F''_{\mu\nu} := \tilde\partial^\alpha\partial_\mu F_{\alpha\nu},$
$\tilde F_{\mu\nu\rho\sigma} := \partial_\mu\partial_\nu\tilde F_{\rho\sigma}, \qquad \tilde F'_{\mu\nu\rho\sigma} := \partial_\mu\tilde\partial_\nu F_{\rho\sigma}, \qquad \tilde F''_{\mu\nu\rho\sigma} := \tilde\partial_\mu\partial_\nu F_{\rho\sigma},$
$\tilde F_{\kappa\lambda\mu\nu\rho\sigma} := \theta_{\kappa\lambda}\partial_\mu\partial_\nu F_{\rho\sigma}. \qquad (41)$
The Abelian case is degenerate; we have $\tilde F'_{\mu\nu} = \tilde F''_{\mu\nu}$ and $\tilde F'_{\mu\nu\rho\sigma} = \tilde F''_{\nu\mu\rho\sigma}$. The most general gauge and Lorentz invariant extension of (3) of dimension 4 with two θ's is³,⁴
$\Sigma_{ext} = \int d^4x\left(\frac{1}{4g_1^2}\tilde F'_{\mu\nu}\tilde F'^{\mu\nu} + \frac{1}{4g_2^2}\tilde F_{\mu\nu\rho\sigma}\tilde F^{\mu\nu\rho\sigma} + \frac{1}{4g_3^2}\tilde F'_{\mu\nu\rho\sigma}\tilde F'^{\mu\nu\rho\sigma} - \frac{\mathrm{sign}(\theta^{\alpha\beta}\theta_{\alpha\beta})}{4g_4^2}\tilde F_{\kappa\lambda\mu\nu\rho\sigma}\tilde F^{\kappa\lambda\mu\nu\rho\sigma}\right). \qquad (42)$
The signs are chosen such that the highest time derivatives are positive, i.e. that the action is bounded from below. For the second term this requires

$H_2^{ij} = \theta^{i0}\theta^{j0} + \sum_{k\neq0}\big(\theta^{k0}\theta^{k0}\delta^{ij} - \theta^{ki}\theta^{kj}\big) \geq 0, \qquad i, j \neq 0.$
For example, the case where the only non-vanishing commutators are $[x^0, x^3] = i\Theta_1$ and $[x^1, x^2] = i\Theta_2$ requires $|\Theta_1| \geq |\Theta_2|$.
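A short numerical illustration of this positivity condition (ours): for the stated θ, the matrix $H_2^{ij}$ has eigenvalues $\{\Theta_1^2-\Theta_2^2,\ \Theta_1^2-\Theta_2^2,\ 2\Theta_1^2\}$, so it is positive semi-definite exactly when $|\Theta_1| \ge |\Theta_2|$.

```python
import numpy as np

def H2(Theta1, Theta2):
    """H2^{ij} = th^{i0} th^{j0} + sum_k (th^{k0} th^{k0} d^{ij} - th^{ki} th^{kj}), i,j = 1..3."""
    th = np.zeros((4, 4))
    th[0, 3], th[3, 0] = Theta1, -Theta1   # [x0, x3] = i Theta1
    th[1, 2], th[2, 1] = Theta2, -Theta2   # [x1, x2] = i Theta2
    H = np.zeros((3, 3))
    for i in range(1, 4):
        for j in range(1, 4):
            H[i-1, j-1] = th[i, 0]*th[j, 0] + sum(
                th[k, 0]**2*(i == j) - th[k, i]*th[k, j] for k in range(1, 4))
    return H

print(np.linalg.eigvalsh(H2(2.0, 1.0)))   # all >= 0: action bounded from below
print(np.linalg.eigvalsh(H2(1.0, 2.0)))   # negative eigenvalues: unbounded
```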
We remark that the action (42) is bilinear in the gauge field. Therefore the photon propagator is changed, thus changing the whole scheme of quantization. The treatment of higher derivative actions has been investigated in the literature, see e.g. [10] and references therein.
Here we choose to view θ as a constant external field, thus consider the photon propagator as unchanged and the action (42) as new vertices of type AAθθ. In this sense (36) represents the proper one-loop radiative correction to the coupling constants in (42).
The result of our one-loop calculation was the independence from the gauge parameter. This implies that we can have the special solution of a single coupling constant. From (36) we conclude the reduction to the following extended action:
$\Sigma^{red}_{ext} = \frac{1}{4g^2(\varepsilon)}\int d^4x\left(\frac{2}{15}\tilde F'_{\mu\nu}\tilde F'^{\mu\nu} + \tilde F_{\mu\nu\rho\sigma}\tilde F^{\mu\nu\rho\sigma} + \frac15\tilde F'_{\mu\nu\rho\sigma}\tilde F'^{\mu\nu\rho\sigma} - \frac14\tilde F_{\kappa\lambda\mu\nu\rho\sigma}\tilde F^{\kappa\lambda\mu\nu\rho\sigma}\right), \qquad (43)$
with

$g^2(\varepsilon) = g_0^2\left(1 + \frac{g_0^2}{4(4\pi)^2\,\varepsilon} + O(g_0^4)\right). \qquad (44)$
The highest time derivatives in (43) are $H^{ij}(\partial_0^3 A^i)(\partial_0^3 A^j)$ with

$H^{ij} = \frac{17}{60}\theta^{i0}\theta^{j0} + \frac1{10}\sum_{k\neq0}\theta^{k0}\theta^{k0}\delta^{ij} + \frac14\sum_{l>k\neq0}\theta^{kl}\theta^{kl}\delta^{ij} - \sum_{k\neq0}\theta^{ki}\theta^{kj} > 0, \qquad (45)$
i.e. for any θ the reduced extended action is bounded from below. The result (44) tells us that the extended action is not asymptotically free. Applying the Seiberg-Witten map in the opposite direction, the action (43) should arise from some noncommutative action $\hat\Sigma_{ext}$. Gauge invariance leads immediately to the solution
$\hat\Sigma_{ext} = \frac{1}{4g^2}\int d^4x\Big(\frac{2}{15}\big[\beta_1\,\hat{\tilde F}'_{\mu\nu}\hat{\tilde F}'^{\mu\nu} + (1{-}\beta_1)\beta_2\,\hat{\tilde F}''_{\mu\nu}\hat{\tilde F}''^{\mu\nu} + (1{-}\beta_1)(1{-}\beta_2)\,\hat{\tilde F}'_{\mu\nu}\hat{\tilde F}''^{\mu\nu}\big]$
$\qquad + \frac15\big[\beta_3\,\hat{\tilde F}'_{\mu\nu\rho\sigma}\hat{\tilde F}'^{\mu\nu\rho\sigma} + (1{-}\beta_3)\,\hat{\tilde F}''_{\mu\nu\rho\sigma}\hat{\tilde F}''^{\mu\nu\rho\sigma}\big] + \hat{\tilde F}_{\mu\nu\rho\sigma}\hat{\tilde F}^{\mu\nu\rho\sigma} - \frac14\hat{\tilde F}_{\kappa\lambda\mu\nu\rho\sigma}\hat{\tilde F}^{\kappa\lambda\mu\nu\rho\sigma}$
$\qquad + \gamma_1\,\hat{\tilde F}_{\mu\nu\rho\sigma}\hat{\tilde F}'^{\mu\nu\rho\sigma} + \gamma_2\,\hat{\tilde F}_{\mu\nu\rho\sigma}\hat{\tilde F}''^{\mu\nu\rho\sigma} + \gamma_3\,\hat{\tilde F}'_{\mu\nu\rho\sigma}\hat{\tilde F}''^{\mu\nu\rho\sigma}\Big), \qquad (46)$

³ Observe that $\int d^4x\,\tilde F_{\mu\nu\rho\sigma}\tilde F'^{\mu\nu\rho\sigma} = \int d^4x\,\tilde F_{\mu\nu\rho\sigma}\tilde F''^{\mu\nu\rho\sigma} = \int d^4x\,\tilde F'_{\mu\nu\rho\sigma}\tilde F''^{\mu\nu\rho\sigma} = 0$. ⁴ We may add that all terms involving tensorial combinations linear in θ are either identically zero or zero after integration (topological terms).
for $0 \le \beta_i \le 1$, where

$\hat{\tilde F}_{\mu\nu} := \theta^\alpha{}_\mu\hat F_{\alpha\nu}, \qquad \hat{\tilde D}_\mu := \theta^\alpha{}_\mu\hat D_\alpha,$
$\hat{\tilde F}'_{\mu\nu} := \hat D_\mu\hat{\tilde D}^\alpha\hat F_{\alpha\nu}, \qquad \hat{\tilde F}''_{\mu\nu} := \hat{\tilde D}^\alpha\hat D_\mu\hat F_{\alpha\nu},$
$\hat{\tilde F}_{\mu\nu\rho\sigma} := \hat D_\mu\hat D_\nu\hat{\tilde F}_{\rho\sigma}, \qquad \hat{\tilde F}'_{\mu\nu\rho\sigma} := \hat D_\mu\hat{\tilde D}_\nu\hat F_{\rho\sigma}, \qquad \hat{\tilde F}''_{\mu\nu\rho\sigma} := \hat{\tilde D}_\mu\hat D_\nu\hat F_{\rho\sigma},$
$\hat{\tilde F}_{\kappa\lambda\mu\nu\rho\sigma} := \theta_{\kappa\lambda}\hat D_\mu\hat D_\nu\hat F_{\rho\sigma}.$
Note that the action (46) leads, after applying the Seiberg-Witten map, to an action containing infinitely many additional terms with finitely many free coefficients. The fact that the renormalization of the self-energy radiative correction puts restrictions on the relative weights of possible counterterms for the Green's function with three external legs provides us with a strong test of the model. We will address this question in a forthcoming paper [11].
Conclusion
We have analyzed the θ-expanded noncommutative U(1) Yang-Mills theory as a perturbative quantum field theory. As expected from the power-counting behaviour the Yang-Mills action F µνF µν is not renormalizable in this setting. We singled out the unique extended action for which the one-loop photon propagator is renormalizable.
Lorentz and gauge invariance allow for four different extension terms with arbitrary coefficients (coupling constants). Our one-loop calculations reduce this freedom to a single coupling constant, due to two not anticipated facts: the independence from the gauge parameter and from linear versus non-linear gauge.
We are thus led to ask whether there is a meaning in the relative weights of the extension terms. We recall in this respect the remarkable agreement of all three relative signs, which ensures that the action is bounded from below also for large momenta $|p_0| \gg |\theta|^{-1/2}$. It would be interesting to investigate whether θ-expanded noncommutative QED leads to the same weights.
It is obvious that the extension we derived is only valid to lowest order in θ. The new vertices lead to non-renormalizable divergences which give rise to more and more extension terms. Hence the action makes sense only as the lowest-order parts of an effective theory.
There are two ways a factor θ can arise in the θ-expansion of the noncommutative Yang-Mills action: in the form $\theta pA$ via the Seiberg-Witten map, and in the form $\theta p^2$ via the deformation product and possibly higher-order Seiberg-Witten terms. This leads to a field strength of the structure

$\sum_{\sigma,\delta}\Big(x_{\sigma\delta}\,(pA)(\theta pA)^\sigma(\theta p^2)^\delta + y_{\sigma\delta}\,(\theta p^2)A^2(\theta pA)^\sigma(\theta p^2)^\delta\Big), \qquad (47)$

with the very important restriction $x_{\delta0} = 0$ for all δ. A Feynman graph with $E$ external $A$-lines and $L$ loops then has the structure $p^E\theta^{E-2}(\theta p^2)^{2L+\Delta}$, where Δ is the total number of deformations δ in the vertices of the graph. It follows that, in principle, divergences in coefficients of factors $\theta p^2$ from integrated higher-loop graphs can be absorbed by terms with a higher Δ in the tree action. But this mechanism does not work for $E = 2$ and $L = 0$; in this case the tree action has $\Delta \equiv 0$. In other words, there is no chance that the photon propagator corrections are renormalizable. This is why we are forced to add to the tree action something with $\Delta = 2$ in order to compensate the $L = 1$ divergences. It is also clear that for compensating higher and higher loop graphs we need additional terms with arbitrarily large Δ in the tree action. In some sense this makes the tree action more symmetric with respect to the power of $\theta p^2$.
We would like to suggest the following interpretation of the extra terms to the Yang-Mills tree action. There is a remarkable structural asymmetry between the product of fields in NCYM (which contains arbitrarily many factors θp 2 in the ⋆-product) and the trace where the ⋆-product is reduced to the ordinary product. The extra terms we found restore the symmetry in deforming the trace as well. Differentiations in the scalar product are not unfamiliar, for instance, the Sobolev norm of f ∈ H s is given by
$\|f\|^2_{H_s} \equiv \langle f, f\rangle_{H_s} = \int dx\,\Big(|f(x)|^2 + \sum_{\alpha,\ 1\le|\alpha|\le s} a_\alpha\,|\partial^\alpha_x f(x)|^2\Big), \qquad (48)$

where α is a multi-index.
In this context, we have derived in this paper the necessity to replace the $L_2$ scalar product $\langle F, F\rangle_{L_2}$ for the field strength by the $H_\infty$ scalar product $\langle F, F\rangle_{H_\infty}$. Since the coordinate x has a dimension in physics, the derivatives must be accompanied by a dimensionful parameter θ. Of course, this scalar product must be gauge invariant; therefore we must take covariant derivatives in the Sobolev norm instead of partial derivatives. The dependence of the scalar product on the gauge field is very natural in the framework of noncommutative geometry, where actions are built out of the covariant Dirac operator [12]. Moreover, the boundedness of the action from below gives certain restrictions on the prefactors $a_\alpha$ of the different combinations of $\theta^{\rho\sigma}$ and $\hat D_\mu$. We would like to stress that in the commutative limit θ → 0 the $H_\infty$ scalar product reduces to the standard $L_2$ scalar product.
Hence, the big quest is to find the true $H_\infty$ scalar product (the prefactors $a_\alpha$ in (48)) which makes the θ-expanded Yang-Mills action renormalizable. In this paper we have succeeded in deriving the first correction to the $L_2$ scalar product: our result (46). We may speculate whether the relative weights we computed can serve as a hint in which direction to search for a closed form of the renormalizable $H_\infty$ scalar product.
We may also speculate whether this renormalizable $H_\infty$ scalar product also solves the UV/IR-mixing problem of the θ-unexpanded Yang-Mills action on noncommutative $\mathbb{R}^4$. We recall that the θ-expansion is free of infrared divergences but UV non-renormalizable, whereas the unexpanded version is IR non-renormalizable [13]⁵. This can be interpreted as a hint to extend the Yang-Mills action also in the θ-unexpanded setting, and one could speculate whether the solution is to substitute the ordinary scalar product with the $H_\infty$ scalar product, which is renormalizable via θ-expansion. Thus our result could be valuable also for the θ-undeformed framework. We would like to remark that the $H_\infty$ scalar product leads to a θ-dependent photon propagator and could make contact with a different approach [15] to the noncommutative $\mathbb{R}^4$.
Finally let us mention that the extended action leads to a modified wave equation for the photon already on tree-level. Since the modification is of the order |θ| 2 |p| 4 , and if we assume |θ| 1/2 to be of the order of the Planck length, there can be observable consequences only for extremely high-energetic (cosmological) phenomena.
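To make the last remark quantitative (our estimate, not the paper's): with $|\theta|^{1/2} \sim \ell_{Planck} = \hbar c/E_{Planck}$, the relative size of the tree-level correction is $(\theta p^2)^2 \sim (E/E_{Planck})^4$ for a photon of energy $E$, which is tiny even at the highest observed energies.

```python
# Relative correction ~ (theta p^2)^2 = (E/E_Planck)^4 for |theta|^(1/2) ~ Planck length
E_PLANCK_EV = 1.22e28
for E_eV in (1e12, 1e20):        # a TeV photon; the highest-energy cosmic-ray scale
    print(f"E = {E_eV:.0e} eV: correction ~ {(E_eV/E_PLANCK_EV)**4:.1e}")
```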
Acknowledgement
The authors would first of all like to thank Julius Wess for giving us the initial idea as well as for enlightening discussions. We also thank Martin Ertl for his help: the very involved calculations in this paper were performed using his Mathematica™ package "Index". Furthermore, we would like to thank Harald Grosse, Karl Landsteiner, Stefan Schraml and Raymond Stora for fruitful discussions.
A Integrals
We use Zimmermann's ε-trick [16] and replace $\frac{1}{k^2+i\epsilon} = \frac{1}{k_0^2-\vec k^2+i\epsilon}$ by $\frac{1}{k_0^2-\vec k^2+i\epsilon\vec k^2}$. Then,

$P(k,p) = \lim_{\epsilon\to0}\frac{1}{\big((\frac{p_0}{2}-k_0)^2 - (\frac{\vec p}{2}-\vec k)^2 + i\epsilon(\frac{\vec p}{2}-\vec k)^2\big)\big((\frac{p_0}{2}+k_0)^2 - (\frac{\vec p}{2}+\vec k)^2 + i\epsilon(\frac{\vec p}{2}+\vec k)^2\big)}$
$\phantom{P(k,p)} = \lim_{\epsilon\to0}\int_0^1 dx\,\frac{(\epsilon'-i)^2}{\big\{(\epsilon'-i)\big(k_0^2+(1{-}2x)k_0p_0+\frac14p_0^2\big) - (\epsilon'-i)(1{-}i\epsilon)\big(\vec k^2+(1{-}2x)\vec k\cdot\vec p+\frac14\vec p^2\big)\big\}^2}. \qquad (49)$

For $\epsilon' < \epsilon$ we have $\mathrm{Re}(\{\dots\}) > 0$ in the denominator of (49). We use analytic regularization [17] to write $\frac{(\epsilon'-i)^2}{\{\dots\}^2} \to \mu^{2\varepsilon}\frac{(\epsilon'-i)^{2+\varepsilon}}{\{\dots\}^{2+\varepsilon}}$ and rewrite $P(k,p)$ in terms of the Schwinger parameter α. Momenta in the numerator can then be obtained by differentiation with respect to $q$. For $\varepsilon > 0$ the various integrations can be performed, yielding

$\lim_{\epsilon\to0}\int\frac{d^4k}{(2\pi)^4}\,\frac{k^\mu k^\nu}{\big((\frac p2-k)^2+i\epsilon\big)\big((\frac p2+k)^2+i\epsilon\big)} = \frac{i}{12(4\pi)^2}\Big(\frac1\varepsilon + \ln\frac{\mu^2}{p^2}\Big)\big(p^\mu p^\nu - g^{\mu\nu}p^2\big) + \frac{i}{(4\pi)^2}\Big[g^{\mu\nu}p^2\Big(\frac1{12} - \frac1{12}\big(\gamma+\ln4+\psi(\tfrac52)\big)\Big) + p^\mu p^\nu\Big(\frac{23}{36} - \frac14\big(\gamma+\ln4+\psi(\tfrac32)\big)\Big)\Big] + O(\varepsilon),$

and, for the rank-six integral, finite parts of the form

$\dots + p^2\,T^4_{\kappa\lambda\mu\nu\rho\sigma}(p)\Big(-\frac{3349}{78400} + \frac1{64}\big(\gamma+\ln4+\psi(\tfrac72)\big)\Big) + T^6_{\kappa\lambda\mu\nu\rho\sigma}(p)\Big(\frac{7597}{47040} - \frac5{64}\big(\gamma+\ln4+\psi(\tfrac72)\big)\Big) + O(\varepsilon).$

Here we have introduced the totally symmetric momentum tensors

$T^0_{\kappa\lambda\mu\nu}(p) := \frac{1}{2!\,2!\,2!}\sum_{\pi\in S(\kappa\lambda\mu\nu)} g_{\pi(\kappa)\pi(\lambda)}\,g_{\pi(\mu)\pi(\nu)},$

together with the analogous totally symmetric combinations built from $g_{\pi(\kappa)\pi(\lambda)}p_{\pi(\mu)}p_{\pi(\nu)}$, $g_{\pi(\kappa)\pi(\lambda)}g_{\pi(\mu)\pi(\nu)}g_{\pi(\rho)\pi(\sigma)}$, $g_{\pi(\kappa)\pi(\lambda)}g_{\pi(\mu)\pi(\nu)}p_{\pi(\rho)}p_{\pi(\sigma)}$ and $g_{\pi(\kappa)\pi(\lambda)}p_{\pi(\mu)}p_{\pi(\nu)}p_{\pi(\rho)}p_{\pi(\sigma)}$, where $S(\mu_1\dots\mu_n)$ is the set of permutations of the indices $\mu_1\dots\mu_n$. Let us finally mention that the divergent parts of the above integrals (52)-(54) are transversal.

Fig. 1: Self-energy graphs: (a) with an internal photon line, (b) with a ghost loop, (c) with a multiplier-photon loop.

⁵ We refer to [14] for the power-counting behaviour of field theories on noncommutative $\mathbb{R}^D$.
N. Seiberg and E. Witten, "String theory and noncommutative geometry," JHEP 9909 (1999) 032 [hep-th/9908142].
J. Madore, S. Schraml, P. Schupp and J. Wess, "Gauge theory on noncommutative spaces," Eur. Phys. J. C16 (2000) 161 [hep-th/0001203].
B. Jurco, S. Schraml, P. Schupp and J. Wess, "Enveloping algebra valued gauge transformations for non-Abelian gauge groups on non-commutative spaces," Eur. Phys. J. C17 (2000) 521 [hep-th/0006246].
J. Gomis and S. Weinberg, "Are Nonrenormalizable Gauge Theories Renormalizable?," Nucl. Phys. B469 (1996) 473 [hep-th/9510087].
T. Filk, "Divergencies in a field theory on quantum space," Phys. Lett. B376 (1996) 53.
S. Minwalla, M. Van Raamsdonk and N. Seiberg, "Noncommutative perturbative dynamics," hep-th/9912072.
I. Y. Aref'eva, D. M. Belov and A. S. Koshelev, "Two-loop diagrams in noncommutative φ⁴₄ theory," Phys. Lett. B476 (2000) 431 [hep-th/9912075].
C. Becchi, A. Rouet and R. Stora, "The Abelian Higgs-Kibble Model. Unitarity Of The S Operator," Phys. Lett. B52 (1974) 344.
C. Becchi, A. Rouet and R. Stora, "Renormalization Of The Abelian Higgs-Kibble Model," Commun. Math. Phys. 42 (1975) 127.
C. Becchi, A. Rouet and R. Stora, "Renormalization Of Gauge Theories," Annals Phys. 98 (1976) 287.
O. Piguet and S. P. Sorella, "Algebraic renormalization: Perturbative renormalization, symmetries and anomalies," Berlin, Germany: Springer (1995) 134 p.
I. L. Buchbinder, S. D. Odintsov and I. L. Shapiro, "Effective action in quantum gravity," Bristol, UK: IOP (1992) 413 p.
A. A. Bichl, J. M. Grimstrup, L. Popp, M. Schweda and R. Wulkenhaar, "The θ-expanded noncommutative Yang-Mills theory: The three-point function," in preparation.
A. Connes, "Gravity coupled with matter and the foundation of non-commutative geometry," Commun. Math. Phys. 182 (1996) 155 [hep-th/9603053].
A. Matusis, L. Susskind and N. Toumbas, "The IR/UV connection in the non-commutative gauge theories," hep-th/0002075.
I. Chepelev and R. Roiban, "Convergence theorem for non-commutative Feynman graphs and renormalization," hep-th/0008090.
S. Cho, R. Hinterding, J. Madore and H. Steinacker, "Finite field theory on noncommutative geometries," Int. J. Mod. Phys. D9 (2000) 161 [hep-th/9903239].
W. Zimmermann, "Convergence Of Bogoliubov's Method Of Renormalization In Momentum Space," Commun. Math. Phys. 15 (1969) 208.
E. R. Speer, "Dimensional And Analytic Renormalization," in *Erice 1975, Proceedings, Renormalization Theory*, Dordrecht 1976, 25-93.
| []
|
[
"Hindered Settling of Well-Separated Particle Suspensions",
"Hindered Settling of Well-Separated Particle Suspensions",
"Hindered Settling of Well-Separated Particle Suspensions",
"Hindered Settling of Well-Separated Particle Suspensions"
]
| [
"Matthieu Hillairet \nInstitut Montpelliérain Alexander Grothendieck\nUniversité de Montpellier\nFrance\n",
"Richard M Höfer \nFakultät für Mathematik\nUniversität Regensburg\nGermany\n",
"Matthieu Hillairet \nInstitut Montpelliérain Alexander Grothendieck\nUniversité de Montpellier\nFrance\n",
"Richard M Höfer \nFakultät für Mathematik\nUniversität Regensburg\nGermany\n"
]
| [
"Institut Montpelliérain Alexander Grothendieck\nUniversité de Montpellier\nFrance",
"Fakultät für Mathematik\nUniversität Regensburg\nGermany",
"Institut Montpelliérain Alexander Grothendieck\nUniversité de Montpellier\nFrance",
"Fakultät für Mathematik\nUniversität Regensburg\nGermany"
]
| []
| We consider N identical inertialess rigid spherical particles in a Stokes flow in a domain Ω ⊆ R 3 . We study the average sedimentation velocity of the particles when an identical force acts on each particle. If the particles are homogeneously distributed in directions orthogonal to this force, then they hinder each other leading to a mean sedimentation velocity which is smaller than the sedimentation velocity of a single particle in an infinite fluid. Under suitable convergence assumptions of the particle density and a strong separation assumption, we identify the order of this hindering as well as effects of small scale inhomogeneities and boundary effects. For certain configurations we explicitly compute the leading order corrections. | null | [
"https://export.arxiv.org/pdf/2301.09547v1.pdf"
]
| 256,105,318 | 2301.09547 | 052d2193a8528aa607eb5bff2f894748c25e466c |
Hindered Settling of Well-Separated Particle Suspensions
January 24, 2023
Matthieu Hillairet
Institut Montpelliérain Alexander Grothendieck
Université de Montpellier
France
Richard M Höfer
Fakultät für Mathematik
Universität Regensburg
Germany
Hindered Settling of Well-Separated Particle Suspensions
January 24, 2023
We consider N identical inertialess rigid spherical particles in a Stokes flow in a domain Ω ⊆ R 3 . We study the average sedimentation velocity of the particles when an identical force acts on each particle. If the particles are homogeneously distributed in directions orthogonal to this force, then they hinder each other leading to a mean sedimentation velocity which is smaller than the sedimentation velocity of a single particle in an infinite fluid. Under suitable convergence assumptions of the particle density and a strong separation assumption, we identify the order of this hindering as well as effects of small scale inhomogeneities and boundary effects. For certain configurations we explicitly compute the leading order corrections.
Introduction
The sedimentation velocity of a single inertialess rigid sphere in an infinite fluid follows immediately from Stokes' law for the drag force. This law entails that the sphere falls parallel to the direction of the force acting on the particle (say gravity) with amplitude:
V St := |F | 6πµR , (1.1)
where F is the force acting on the particle, R its radius and µ the fluid viscosity. When several particles fall in the flow, the possible interactions between the particles through the fluid make however the situation much more complicated as soon as there are more than 3 particles, see [GM12, Section 6.1]. When F is gravity, computing the mean sedimentation velocity of a cloud of particles in a Stokes flow is then a classical problem that has been studied in many previous references [Bat72; Bur38; Feu84; GM88; Has59; Saf73], to mention a few. We refer to the review [DA85] and to the introduction of [DG22] for a historical perspective. In these works it has been observed (mostly on a formal level) that the mean sedimentation velocity of a cloud of N particles in the whole space remains parallel to F and that its magnitudeV sed N behaves in fundamentally different ways dependent on the particle distribution.
(Dil) There is a characterization of diluteness of suspensions for which the settling particles behave as if they were alone in the fluid [JO04].
(MF) If the particles are less dilute and not homogeneously distributed in directions orthogonal to gravity, a macroscopic fluid flow is created which enhances sedimentation: for sufficiently regular particle distributions, where not too much clustering occurs, the mean sedimentation velocity is of order
V sed N ∼ max V St , N F µL (1.2)
where N is the number of particles and L is the typical length scale of the particle cloud [Höf18;Mec19]. The additional term N F µL is precisely the parameter that characterizes diluteness in the above sense for such regular distributions and can be much larger than V St .
(HS) If the particles are closer and homogeneously distributed in directions orthogonal to gravity, the incompressibility of the fluid prevents the onset of a macroscopic fluid flow that enhances sedimentation. Instead, a small fluid backflow is created that hinders the particle sedimentation. The order of this hindering is again sensitive to the particle distribution:
a) If the particles are periodically distributed, then
V sed N = V St (1 − a per φ 1 3 + o(φ 1 3 )) (1.3)
for some a per > 0, where φ is the particle volume fraction inside the fluid [Has59].
b) If the particles are distributed according to hardcore Poisson process with hardcore distance 2R, thenV
sed N = V St (1 − a uni φ + o(φ)) (1.4)
for some a uni > 0 [Bat72].
The expansion (1.3) has been rigorously shown in [Has59] on the torus. In this contribution we show that it persists to hold asymptotically for large N if the particles are placed in a container Ω ⊆ R 3 such that • The particles respect a separation distance of order N −1/3 .
• The container Ω is bounded in directions orthogonal to the direction of the acting force and the particles are sufficiently close to a macroscopic density n which is constant in directions orthogonal to the acting force.
Although we are mainly interested in the (HS) situation, we complement the analysis in the case (MF) when the orthogonality assumption is not satisfied. The influence of the container on the sedimentation has been studied on a formal level in several works, see e.g. [BM85;GM88;Bru+96]. In these works, the particles are distributed according to a hardcore Poisson process as in [Bat72]. However, in contrast to [Bat72] where the whole space is considered, a nonoverlapping condition with the boundary ∂Ω restricts the particle centers to lie in Ω R = {x ∈ Ω : dist(x, ∂Ω) > R}. Since the particles are spherical, this leads to a lower mean volume concentration of particles in Ω \ Ω R than in Ω R (where this concentration is constant). This discrepancy leads to a macroscopic fluid flow v f just like in (MF). However, since the inhomogeneity only occurs in the small region Ω \ Ω R , this macroscopic fluid flow, called intrinsic convection, is much smaller than in (MF). The authors in [BM85; GM88;Bru+96] obtain v f = O(φV St ). Moreover v f decreases the sedimentation speed of particles close to the boundary of the container while it increases the sedimentation speed of particles in the bulk. In the present paper, we rigorously identify a related but quantitatively different effect. Namely, for particle configurations satisfying both items above, we analyze perturbations of the particle distributions on the N −1/3 -scale that occur in the bulk rather than at the boundary of the container. This leads to macroscopic fluid velocities v f = O(N 1/3 φ 1/3 V St ). The contribution of this macroscopic fluid velocity to the average sedimentation velocity is much lower though, namely of order φ 1/3 V St .
All these approaches to the computation of sedimentation velocity (including the present contribution) are based on a similar construction of the many-particle Stokes solution. Acting a force on each particles entails a microscopic disturbance in the flow around the particle that decays very slowly to zero at infinity. Summing the microscopic disturbances of all the particles cloud on one particle then creates a macroscopic disturbance that modifies its sedimentation velocity. A key-difficulty is then to prove that, despite the slow decay of the microscopic distubances, the macroscopic disturbance remains bounded, motivating many of the previous references on the topic. If the particles are sufficiently far one from the other then the macroscopic disturbance can be shown to be neglectible and we recover [JO04]. While, if the particles are closer, it turns out that the macroscopic disturbance can be proved to be bounded only because of a backflow due to the fluid incompressibility. For instance, in the case of particles on cubic lattices, Hasimoto mimicks the backflow on the torus by imposing the constraint that the total fluid flow (after extending the fluid flow inside of the particles) vanishes. By Fourier analysis, he then explicitly computed the expansion (1.3) [Has59].
In this contribution, we show that the boundaries make the macroscopic disturbance converge: they induce naturally a normalization of the pressure that makes the backflow explicit and the microscopic disturbances due to each particle decay faster. This improves the simplicity of the analysis.
Setting
Let Ω ⊆ R 3 be of class C 2 and contained in an infinite cylinder with an orientation ξ, i.e.,
∃ C 1 > 0, s.t. Ω ⊆ {x ∈ R 3 : dist(x, span{ξ}) < C 1 }.(H0)
We point out that Ω might be bounded as well as unbounded. For N ∈ N and r > 0, let
R N := N −1/3 r and X N i ∈ Ω such that B N i := B R N (X N i ) Ω and B N i ∩ B N j = ∅ for all 1 i = j N .
We will write R, X i and B i instead of R N , X N i and B N i in the following. We assume throughout the paper that the distribution of particles is regular in the following sense. Firstly, we have the following separation assumptions:
∃ c > 0 min i =j |X i − X j | cN −1/3 , min i=1,...,N dist(X i , ∂Ω) cN −1/3 .(H1)
The key information here is that the constant c does not depend on N. Secondly, we assume that the empirical measure
ρ N = 1 N N i=1 δ X i (1.5)
is close to a density n ∈ P(Ω) ∩ L ∞ (Ω) where P(Ω) denotes the space of probability measures on Ω. For this, we impose the following control on the infinite Wasserstein distance:
W ∞ (ρ N , n) C 0 N −1/3 .(H2)
Again, the key information here is that the constant C 0 is independent of the number of particles. For simplicity, we assume that the cloud of particles is uniformly bounded, i.e.,
∃ K Ω, ∀ i ∈ {1, . . . , N }, X i ∈ K.
(1.6) Our goal in this paper is to derive information on the mean sedimentation velocity of the particles when they are submitted to a given force F ∈ R 3 . Since we restrict to a linear Stokes problem, we assume without restriction that F is directed along the third vector e 3 of the canonical basis and we normalize its amplitude to N − 1 3 . In this way, the Stokes velocity (cf. (1.1)) is independent of N , namely,
V St = V St r = 1 6πr
.
(1.7)
We consider then the problem
−∆u N + ∇p N = 0 in Ω \ N i=1 B i , div u N = 0 in Ω \ N i=1 B i , u N = 0 on ∂Ω, u N (x) = V i + Ω i × (x − X i ) in B i for all 1 i N, −ˆ∂ B i σ[u N , p N ]ν = N − 1 3 e 3 for all 1 i N, −ˆ∂ B i (x − X i ) × σ[u N , p N ]ν = 0 for all 1 i N, lim |x|→∞ u N (x)= 0
(1.8)
In this system, we recall that ν is the normal to ∂B i (directed inwards B i ). The symbol σ stands for the fluid stress tensor given by Newton law:
σ[u, p] = 2D(u) − pI 3 = (∇u + ∇ u) − pI 3 .
Note that the first equation in (1.8) reads also: div(σ(u N , p N )) = 0 where the operator div acts rowwise on the matrix σ(u N , p N ). The symbols V i and Ω i stand respectively for the linear and angular velocities of particle B i . We emphasize that these velocities together with (u N , p N ) are the unknowns in (1.8). The system is then algebraically well-posed, the velocities (V i , Ω i ) being the Lagrange multipliers of the two last equations in (1.8). In particular, these velocities depend on N but we skip the dependencies for legibility. The last condition in (1.8) is needed in the case when Ω is unbounded in order to rule out Poiseuille type flows. We will in the following not write this condition explicitly. We will only consider velocity fields inḢ 1 (Ω) though, and Poiseuille type flows are not contained in this space.
We are interested in the average particle velocitȳ
V N := 1 N N i=1 V i . (1.9)
for large N under the assumption:
curl(ne 3 ) = ∇n × e 3 = 0. (Hom)
This assumption is reminiscent of (HS). We recall that, as mentioned in introduction, if the limit density n is not constant in the directions perpendicular to e 3 (namely, in case (MF)), the particles create a collective fluid velocity proportional to the number of particles N and the magnitude ofV N scales differently in N. The importance of this assumption can be observed as follows. If the particles are small and their distribution dilute, the force acting on the particles is seen reciprocally by the fluid as a forcing term f concentrated in the particles:
f ∼ N i=1 6πN −1/3 e 3 δ X i ∼ 6πN 2/3 ne 3 .
For large N we expect then that the leading term in the velocity-unknowns behaves like N 2/3 (u, p) with (u, p) solution to
−∆u + ∇p = 6πne 3 in Ω, div u = 0 in Ω u = 0 on ∂Ω.
(1.10)
One may then expect that the mean velocityV N has magnitude N 2/3 unless u = 0. In this latter case, we must have that ne 3 is a gradient or equivalently that (Hom) holds true. Even when (Hom) holds true, it will appear that the components ofV N have different magnitudes. Below, we call sedimentation velocity the projection ofV N along e 3 :
V sed N =V N · e 3 .
To end this subsection, we point out that (Hom) together with (H0) entail that, if the axis ξ and the force e 3 are orthogonal, then n is necessarily constant in the direction ξ which contradicts that n is a probability measure. This is not the situation that we are interested in here.
Main results
For N fixed, the system (1.8) is well posed via the following construction. A classical framework is the space of extended velocity-fields: We remind that, since the B i are connected, for arbitrary w ∈ H 0 [N ] there exists vectors (W 1 , . . . , W N ) and vectors (R 1 , . . . R N ) so that:
w(x) = W i + R i × (x − X i ) , ∀ x ∈ B i .
In particular, an extended velocity-field w ∈ H 0 [N ] encodes u N but also (
V i , Ω i ) i=1,...,N .
Classically, we only need to compute these unknowns to solve our system since the pressure p N is then recovered as the Lagrange multiplier of the divergence-free constraint. Eventually, we have the weak formulation of (1.8):
Find u N ∈ H 0 [N ] such that, Ω ∇u N : ∇w = N i=1 e 3 · W i N 1/3 , ∀ w ∈ H 0 [N ].
Such a weak formulation is obtained by mutliplying formally the Stokes equation with w and performing integration by parts to apply (pointwise and integral) boundary conditions on u N . From this weak formulation, we immediately deduce
∇u N 2 L 2 (Ω) = N i=1 e 3 · V i N 1/3 = N 2/3V sed N .
(1.12)
We see on this energy identity that there is a non-trivial relationship between u N andV N . One could have expected that the sedimentation velocityV sed N is of the same order (with respect to N ) as the fluid velocity u N itself. The energy identity, however, relates the sedimentation velocityV sed N to the gradient of the fluid velocity u N and reveals a factor N 2/3 between ∇u N 2 L 2 (Ω) andV sed N . Our first main result is then the identification of the magnitude ofV N in both cases when (Hom) holds true and does not hold true:
N − 2 3 |V N | C, (1.13) lim inf N →∞ N − 2 3V sed N 1 C .
(1.14)
(ii) If (Hom) is satisfied then there exists C depending on Ω and on C 0 , c such that
lim sup N →∞ N − 1 3 |V N | Cr −1/2 , (1.15) lim sup N →∞ |V sed N − V St r | C.
(1.16)
We remark that the factor r is related to the volume fraction φ through φ ∼ r 3 . For instance, if n is the indicator of some connected open set K Ω with |K| = 1 (say a unit cube for instance), we can compute a local volume fraction φ = 4πr 3 /3. We recall also that V St r ∼ r −1 . In particular, since r is independent of N we have N 2/3 V St for N 1, and therefore, in case (Hom) is not satisfied, (1.14) is coherent with (1.2). In case (Hom) holds true, (1.16) is a prerequisite in order that an expansion (1.3) can be valid.
If (Hom) holds true, the solution to (1.10) is a pure pressure. With similar arguments as previously, a more relevant approximation to (u N , p N ) for large N is then N 2/3 (ũ,p) where (ũ,p) is the solution to:
−∆ũ + ∇p = 6π(ρ N − n)e 3 in Ω, divũ = 0 in Ω u = 0 on ∂Ω. (1.17)
According to the rate of convergence (H2), one may then expect that the mean velocityV N is of size N 1/3 . The even smaller size of the sedimentation velocity (in powers of N ) comes from the remark that:
V sed N =V N · e 3 ∼ N 2/3ũ , ρ N e 3 = N 2/3 ũ, (ρ N − n)e 3
We used here again that, under assumption (Hom), the term ne 3 is a pressure gradient. The further gain of N 1/3 then yields from (H2) again. This gain can be generalized to the component ofV N along any vector e ∈ S 2 such that ∇n × e = 0.
In order to derive and characterize an expansion of the form (1.3), we introduce the two following additional structural assumptions. The first assumption regards a refined convergence of ρ N to n. To this end, we first smooth out the density ρ N as follows
σ N := 1 N N i=1 1 |Q i | 1 Q i ,ρ N = 1 N N i=1 1 |∂B i | H 2 ∂B i . (1.18)
Here H 2 ∂B i is the Hausdorff measure on ∂B i while the Q i are disjoint cubes centered at X i of volume
1 C 1 N |Q i | C 1 N . (1.19)
with C 1 independent of N. We emphasize that it is always possible to find such cubes thanks to assumption (H1) with C 1 = c −3 . However, the Q i are not unique and we might change construction depending on the computations. To characterize defects of ρ N to n, we impose that for a suitable choice of the cubes Q i , the following strong convergence holds:
N 1 3 (σ N − n) → g in H −1 (Ω) for some g ∈ H −1 (Ω).(Str)
We remark that by Proposition 2.1 below N
1 3 (σ N − n) is already bounded inḢ −1 (Ω) under assumption (H2)-(H1).
The second assumption is an almost periodicity assumption on the particles.:
∃ d > 0, t N ∈ R 3 , E N ⊆ Ω, I N ⊆ {1, . . . , N } s.t. |t N | ∞ N −1/3 , E N ⊆ E N +1 , |I N | N → 1, {X i : i ∈ I N } = E N ∩ (t N + dN − 1 3 Z) 3 , E N = i∈I N X i + [−N −1/3 d, −N −1/3 d] 3(Per)
Here E N should be understood as the set on which the configuration is periodic, I N the set of particles which are periodically distributed and t N allows those particles to be uniformly translated with respect to a lattice centered at the origin. We will give an example for a particle configuration that satisfies both (Per) and (Str) with a nontrivial g in Section 2.2.
To give a characterization of the mean velocity, we introduce the following velocity fields. We define v N,1 ∈Ḣ 1 (R 3 ) as the solution of the following Stokes equations in the whole space R 3 :
−∆v N,1 + ∇p N,1 = N i=1 1 |∂B R (X i )| H 2 | ∂B R (X i ) − 1 |Q i | 1 Q i e 3 , div v N,1 = 0. (1.20) Moreover, we consider the solution v ∞,3 ∈ H 1 0 (Ω) to the Stokes equations in Ω −∆v ∞,3 + ∇p ∞,3 = g, div v ∞,3 = 0 in Ω, v ∞,3 = 0 on ∂Ω.
(1.21)
We keep the index 2 for a further velocity-fields that we require for technical convenience below. With these definitions, our expansion is the content of the following result:
Theorem 1.2. Assume that assumption (H0)-(H2) and (Hom) are satisfied.
(i) If in addition (Str) is satisfied, then, for all δ > 0, there exists C > 0, depending only on C 0 , C 1 , c, δ from (H2), (H1) and (H1) respectively such that
lim sup N →∞ V sed N − N −2/3 ∇v N,1 2 L 2 (R 3 ) + ∇v ∞,3 2 L 2 (Ω) Cr 1−δ . (1.22) Moreover, ∇v ∞,3 2 L 2 (Ω) = lim N →∞ N 1/3 v ∞,3 ,ρ N e 3 .
(1.23) and there exists a sequence w N ∈ H 1 0 (Ω) with ∇w N L 2 (Ω) CN 1/3 r 3/2−δ such that
N −1/3 (u N − w N ) v ∞,3 weakly in H 1 0 (Ω). (1.24) (ii) If in addition (Per) is satisfied and Q i = X i + [−N −1/3 d, N −1/3 d] 3 for all i ∈ I N , then lim N →∞ N −2/3 ∇v N,1 2 L 2 (R 3 ) = ∇v per 2 L 2 (T 3 d ) = V St r (1 − a per r d + o(r)),(1.−∆v per + ∇p per = 1 |∂B r | H 2 | ∂Br(0) − 1 T 3 d F in T 3 d , div v per = 0 in T 3 d , T 3 d v per dx = 0, (1.26) where T 3 d = R 3 /(dZ) 3 .
A few remarks are in order. We first recall, in order to compare with the expansions forV sed N discussed at the beginning of the introduction, that r ∼ φ 1/3 . The estimate (1.22) characterizes the sedimentation velocityV sed N up to an O(r 1−δ ) error as the sum of two contributions. The first contribution, encoded in v N,1 , only depends on the particle configuration. It is completely independent of the container Ω. Under the periodicity assumption (Per), we characterize this contribution in (1.25) as the sum of the Stokes velocity V St r and a correction of order rV St r that can be computed from the problem on the torus (1.26). Recall from (1.7) that r|V St
r | = O(1). The second contribution toV sed N in (1.22) is encoded in v ∞,3 .
Note that v ∞,3 is independent of r and therefore the contribution ∇v ∞,3 L 2 (Ω) is of order 1 if curl g = 0. Moreover, the characterization (1.24) means that v ∞,3 is the leading order normalized macroscopic fluid flow and by (1.23) the contribution ∇v ∞,3 L 2 (Ω) equals the average of this leading order macroscopic fluid flow in the particles. Note that even though the macroscopic fluid flow is of order N 1/3 , (1.23) implies that its average at the particles is of order 1.
We also remark that the constant a per corresponds to the one from (1.3) analyzed in [Has59]. We do not investigate further the computation of ∇v N,1 2 L 2 (R 3 ) for particle configurations other than those satisfying (Per). One could expect though that the energy ∇v N,1
2 L 2 (R 3 )
can be generally expressed in terms of the 2-point correlation, similar as for the second order correction of the effective viscosity of a suspension obtained in [GH20; DG20].
Organization of the remainder of the paper and notations
The remainder of the paper is devoted to the proofs of Theorem 1.1 and 1.2.
Section 2, contains preliminary investigations on the probability densities involved in the analysis, namely the empirical measure of the particles smeared out to ∂B i , the measure σ N and the limit density n. Section 2.1 contains estimates between these densities which will be crucial for the subsequent analysis. In Section 2.2, we provide an example for assumption (Str) with a nontrivial function g.
In Section 3 we prove Theorem 1.1 (ii) as well as Theorem 1.2 (i). The proof is based on the 3 and w N that account for a whole space solution, boundary corrections, the defect between the measures σ N and n as well as higher order hydrodynamical interactions between the particles.
splitting of u N into v N,1 , v N,2 , v N,
In Section 4 we show Theorem 1.2 (ii) by analyzing periodic particle configurations. Finally, in Section 5, we give the proof of Theorem 1.1 (i) that concerns the mean particle velocity in the ill-prepared case, when (Hom) is not satisfied. We complement the proof by additional structural information, namely the strong convergence v N → v * in H 1 0 (Ω) of the leading part v N of u N in terms of the particle volume fraction r 3 as well as a characterization of the leading order of the limiting behavior of the mean velocityV N in terms of v * .
In what follows, we use classical notations for function spaces. We do not specify whether we handle vector or scalar functions. This shall be clear in the context.
If U ⊆ R 3 is bounded, we denote U f (x)dx = 1 |U |ˆU f (x)dx ∀ f ∈ L p (U ),
and L p 0 (U ) the subset of L p (U ) containing mean-free functions. Such definitions may be generalized to functions defined on hypersurface of R 3 . Finally, for arbitrary U ⊆ R 3 , we denoteḢ
1 (U ) = {u ∈ L 6 (U ) s.t. ∇u ∈ L 2 (U )}.
If U is bounded we haveḢ 1 (U ) = H 1 (U ) that we endow with the classical norm. If U is unbounded we endowḢ 1 (U ) with the norm
u Ḣ1 (U ) = ∇u L 2 (U )
for which it is also a Hilbert space. Below we use also constantly the symbol for an inequality involving a harmless (multiplying) constant.
Properties of (smoothened-)empirical measures
In our problem, particle distributions are encoded:
• via the associated empirical measures ρ N at the discrete level,
• via the density n in the continuous model.
For technical convenience, we need in the sequel smoothened versions of ρ N . Namely, we will use:
σ N = 1 N N i=1 1 |Q i | 1 Q iρ N = 1 N N i=1 1 |∂B i | H 2 ∂B i (2.1)
where we recall that Q i are cubes centered in the X i of volume scaling like 1/N (see assumption (1.19)) while H 2 ∂B i is the Hausdorff measure on ∂B i . In this section we prove at first some preliminary Poincaré type estimates that are crucial for the later analysis. These inequalities enable to control distances between smoothened empirical measures and between empirical measures and their continuous conterparts. We provide then examples of particle distributions for which assumption (Str) holds true with an explicit g.
Poincaré type inequalities
The first purpose of this section is the following estimates regarding particle distributions:
ρ N − σ N (W 1,p (Ω)) * Cr −(3/p−1) + N − 1 3 ,(2.
2)
and, if p > 3/2,
ρ N − σ N (W 2,p (Ω)) * CN −2/3 , (2.3)
where (·) + stands for the positive part of real numbers.
(ii) If p = 3 there exists a constant C that depends only on p and the constants C 0 , c and C 1 from (H2), (H1) and (1.19) such that
σ N − n (W 1,p (Ω)) * CN −1/3 ,(2.ρ N − n (W 1,p (Ω)) * Cr −(3/p−1) + N −1/3 , (2.5)
We note that item (i) entails in particular that for all p ∈ (1, ∞)
N 1/3 (ρ N − σ N ) 0 in (W 1,p (Ω)) * . (2.6)
It might be surprising that the scale in N changes between (2.2) and (2.3) making (2.2) seem far from optimal. It must be noted, though, that by symmetry, all affine functions tested on σ N −ρ N vanish. We will then obtain our result by comparing expansions of test-functions around each center X i . The discrepancy between both estimates is due to the fact that only zero-order expansions are available in W 1,p while first-order expansions are available in W 2,p . Finally, inequality (2.3) in case p > 3/2 could be complemented with a similar inequality in case p < 3/2. This will be however useless to our purpose.
For the proof, we furthermore introducẽ
ρ N := 1 N N i=1 1 |B i | 1 B i ,(2.7)
and we first show the following estimates involvingρ N :
Lemma 2.2. Let p ∈ [1, ∞] \ {3} and assume that B i ⊆ Q i for all i. ρ N − σ N (W 1,p (Ω)) * Cr −(3/p−1) + N −1/3 , (2.8)
where C depends only on p and the constant C 1 from (1.19).
Proof. We start with p < 3. Then, we may use the continuous embedding
W 1,p (Q i ) ⊆ L p * (Q i ),
where 1/p * = 1/p − 1/3, which implies here that for any ϕ ∈ W 1,p (Q i ) ∩ L p 0 (Q i ) we have:
ϕ L p * (Q i ) C(C 1 , q) ∇ϕ L q (Q i )
with a constant C(C 1 , p) independent of ϕ by a straghtforward homogeneity argument. Consequently, we have for all v ∈ W 1,p (Ω), via a sequence of discrete and continuous Hölder inequalities:
| ρ N − σ N , v | = 3 N 4πR 3 N i=1ˆB i v(x) − Q i v(z)dz dx 1 r 3 N i=1 |B i | 1− 1 p * ∇v L p (Q i ) 1 r 3 r 3(1− 1 p * ) N 1− 1 p −(1− 1 p * ) ∇v L p (Ω) 1 r 3 p * N 1 p * − 1 p ∇v L p (Ω) .
We conclude by recalling that 1/p * = 1/p − 1/3.
In the case p > 3 we have the embedding W 1,p (Q i ) ⊆ C 0,θ (Q i ) with θ = 1/3 − 1/p. This implies here that, for arbitrary ϕ ∈ W 1,p (Q i ) ∩ L p 0 (Q i ) we have, by standard homogeneity arguments:
ϕ L ∞ (Q i ) C(p, C 1 ) N 1 3 − 1 p ∇ϕ L p (Q i )
with a constant C(p, C 1 ) depending only on p and C 1 . By standard arguments, we have then that:
| ρ N − σ N , v | 1 N N −( 1 3 − 1 p ) N i=1 ∇v L p (Q i ) 1 N N −( 1 3 − 1 p ) N 1− 1 p v W 1,p (Ω) .
This finishes the proof of (2.2).
We are then in position to prove our main result. Step 1: Proof of (2.2): Our result follows also immediately from Lemma 2.2 and the standard Poincaré-like inequality
v − ∂B i v L p (B i ) C p R ∇v L p (B i ) ,
(2.9)
where C p depends only on p ∈ (1, ∞). Indeed, we splitρ N − σ N =ρ N −ρ N +ρ N − σ N . The second part is estimated via the previous lemma while for the first part, we have:
| ρ N −ρ N , v | 1 N N i=1 B i v − ∂B i v (2.10) 1 N N i=1 B i v − ∂B i v p 1 p C p N − 1 p R 1− 3 p ∇v L p (Ω) (2.11) = C p N − 1 3 r 1− 3 p ∇v L p (Ω) .
(2.12)
Step 2: Proof of (2.3): The argument is analogous as the proof of Lemma 2.2 in the case p > 3. Indeed, we observe that due to the assumption that Q i is centered in X i , we have
ρ N − σ N , v = 1 N 4πR 2 N i=1ˆ∂ B i v(x) − Q i v(z)dz − Q i ∇v(z)dz · (x − X i ) dx. (2.13)
Moreover, for all p > 3/2 and all ϕ ∈ W 2,p (Q i ) satisfyinĝ
Q i ϕ = 0ˆQ i ∇ϕ = 0
we have, by a standard homogeneity argument
ϕ C 0 (Q i ) C(p, C 1 ) N 2 3 − 1 p ∇ 2 ϕ L p (Q i )
where C(p, C 1 ) depends only on p and C 1 from (1.19). This entails that:
| ρ N − σ N , v | C(p, C 1 ) N 1−1/p N 2/3 N i=1 ∇ 2 v L p (Q i ) .
The assertion then follows again from application of the discrete Hölder inequality.
Step 3: Proof of (2.4): We observe that by the triangle inequality, assumption (H2) and the definition of σ N , we have
W ∞ (σ N , n) W ∞ (σ N , ρ N ) + W ∞ (ρ N , n) CN −1/3 .
(2.14)
where W ∞ is the Wasserstein distance built on the sup-norm. By definition, the first term on the right-hand side is bounded by 1/N 1/3 . Now the desired estimate follows from the result
µ − ν (W 1,p ) * C( µ ∞ + ν ∞ ) 1/p W ∞ (µ, ν
Explicit construction of distributions satisfying (Str)
We focus now on the construction of an example of particle distributions so that (Str) holds true: N 1 3 (σ N − n) converges in H −1 (Ω).
To this end, we consider the case Ω = (−1, 1) × (0, 1) × R. Fix M ∈ N * and N = 2M 3 . Firstly, we distribute N/2 particles covering (0, 1) 2 . For this, we construct the cubesQ k (k ∈ {0, . . . , M − 1} 3 ) with centers inX k = 1/M (k 1 + 1/2, k 2 + 1/2, k 3 + 1/2), radius 1/M and thus volume 2/N. We choose then λ ∈ (0, 1/2) and set X k =X k −λ/M e 1 . Q k = cube with center X k and radius 1/(2M ) if k 1 > 1 cube with center X k and radius 1/
(2M ) − λ/M if k 1 = 0
The remaining particles and cubes are obtained by transforming the X k with the symmetry σ 1 with respect to the plane {x 1 = 0}. One easily checks that (H2) is satisfied for n = 1 2 1 (−1,1)×(0,1) 2 by considering the transport map T (x) = X k for x ∈Q k . Note that n = 1 2 1 (−1,1)×(0,1] 2 satisfies (Hom). Explicit computations then show that, denotingk = (0, k 2 , k 3 ) for arbitrary (k 2 , k 3 ) ∈ {0, . . . , M − 1} 2 :
N 1/3 (σ N −n)| (0,1) 2 ×R = 2 − 2 3 M 2 N M −1 k 2 ,k 3 =0 1 |Qk| 1 Qk − 1 {x 1 ∈(0,(1−λ)/M )} − 1 {x 1 ∈(1−λ/M,1)} .
Classical computations then entail that:
M M −1 k 2 ,k 3 =0 1 M 3 |Qk| 1 Qk − 1 {x 1 ∈(0,(1−λ)/M )} → λδ {x 1 =0} in (W 1,2 ((0, 1) 2 × R) * , M 1 {x 1 ∈(1−λ/M,1} → −λδ {x 1 =1} in (W 1,2 ((0, 1) 2 × R) * .
Using symmetry at x 1 = 0 and that δ {x 1 =1} = 0 in H −1 ((−1, 1) × (0, 1) × R), we deduce that (Str) holds true with g = 2 1/3 λδ x 1 =0 in H −1 ((0, 1) 2 × R). We see on this example that the term g encodes a finer description of the particle distribution. Indeed, we created artificially a distribution in which particles around x 1 = 0 are closer and thus have larger interactions.
Particles near x 1 = 0 will therefore be slowed down in comparison to the particles near x 1 = 1. Such a difference will induce a variation of the velocity distribution in the cloud that is captured by the term v ∞,3 solution to (1.21).
Computation ofV sed N when (Hom) holds true
Throughout this section, we assume that (H0)-(H2) and (Hom) are satisfied. Let (u N , p N ) be the solution to (1.8). We remind the definition ofρ N from (2.1) introduce v N as the solution to
−∆v N + ∇q N = N 2 3ρ N e 3 in Ω, div v N = 0 in Ω, v N = 0 on ∂Ω.
(3.1) and the remainder
w N := u N − v N .
(3.2)
We will estimate the contribution of w N through the variational characterization of Stokes solution that entails ∇w N L 2 (Ω) C D(v N ) L 2 (∪ i B i ) . We therefore first turn to the analysis of v N itself.
We furthermore remind the definition of σ N from (2.1) and split
v N further into v N = v N,1 + v N,2 + v N,3 (resp. q N = q N,1 + q N,2 + q N,3 ) where −∆v N,1 + ∇q N,1 = N 2 3 (ρ N − σ N )e 3 in R 3 , div v N,1 = 0 in R 3 , (3.3) −∆v N,2 + ∇q N,2 = 0 in Ω div v N,2 = 0 in Ω, v N,2 = −v N,1 on ∂Ω, (3.4) and −∆v N,3 + ∇q N,3 = N 2 3 (σ N − n)e 3 in Ω, div v N,3 = 0 in Ω, v N,3 = 0 on ∂Ω. (3.5)
The identity v N = v N,1 + v N,2 + v N,3 holds because, due to assumption (Hom), the term involving n on the right-hand side of (3.5) can be absorbed into the pressure: there exists a function p n ∈ L 2 loc (Ω) such that ∇p n = ne 3 . We will show the following properties of these functions. C 1 from (1.19) such that the following holds.
(i) N −1/3 v N,1 0 weakly inḢ 1 (R 3 ) and N −1/3 v N,1 → 0 strongly in W 1,p (R 3 \ Ω) for all p ∈ (1, ∞). Moreover, V St r − N −2/3 ∇v N,1 2 L 2 (R 3 ) C. (3.6) (ii) For p = 2, N −1/3 ∇v N,2 L 2 (Ω) → 0. (3.7) (iii) For p = 2, N −1/3 ∇v N,3 L 2 (Ω) C. (3.8)
For all p ∈ (1, ∞) and all bounded sets Ω ⊆ Ω there holds:
N −1/3 ∇v N,3 L p (Ω ) C . (3.9) with C depending furthermore on Ω . If in addition (Str) is satisfied, then N −1/3 v N,3 → v ∞,3 , strongly in H 1 (Ω), where v ∞,3 is the solution to (1.21).
(iv) For all δ > 0 lim sup
N →∞ N −1/3 D(v N ) L 2 (∪ i B i ) Cr 3/2−δ (3.10)
where the constant C depends in addition on δ.
The proof of this proposition is postponed to Subsection 3.2.
Proof of Theorem 1.1 when (Hom) holds true
To treat the error w N , we note that w N can be associated to a pressureq N to yield a solution to
−∆ψ + ∇q = 0 in Ω \ N i=1 B i , div ψ = 0 in Ω \ N i=1 B i , ψ = 0 on ∂Ω, D(ψ) = D(ϕ) in B i for all 1 i N, ∂B i σ[ψ, q]n = 0 =ˆ∂ B i σ[ψ, q]n × (x − X i ) for all 1 i N. (3.11) with ϕ = −v N .ψ H 1 (Ω) C D(ϕ) L 2 (∪ i B i ) (3.12)
for a universal constant C.
We show how Proposition 2.1 and Proposition 3.1 imply Theorem 1.1 (ii) and Theorem 1.2 (i). 1.1 (ii). We fix the choice of the cubes Q i by |Q i | = c 3 N −1 where c is the constant from (H1). In this way, dependencies on C 1 from (1.19) become dependencies on c.
Proof of Theorem
Using that V i = ffl ∂B i u N , we first note that, for arbitrary direction e ∈ S 2 , there holds:
V N · e = ρ N e, u N
Writing that u N = v N,1 +v N,2 +v N,3 +w N and thatρ N =ρ N −n+n, we combine (3.6)-(3.7)-(3.9) together with (2.5) in case p = 2 to yield (1.15).
Using assumption (Hom) and the fact that u N is divergence free and that
V i = ffl ∂B i u N , we rewriteV sed N = (ρ N − n)e 3 , u N (3.13) We recall the decomposition u N = v N + w N = v N,1 + v N,2 + v N,3 + w N .| (ρ N − n)e 3 , w N | C δ r 1−δ .
(3.14)
Moreover, using the Stokes equations that v N solves,
(ρ N − n)e 3 , v N = −∆v N + ∇q N , v N = N −2/3 ∇v N 2 L 2 (Ω)
From (3.7) we infer that we have a remainder rem N going to 0 as N → ∞ such that:
(ρ N − n)e 3 , v N = N −2/3 ∇(v N,1 + v N,3 ) 2 L 2 (Ω) + rem N .
and thus:
lim sup N →∞ |V sed N − V St r | lim sup N →∞ |N −2/3 ∇(v N 1 + v N,3 ) 2 L 2 (Ω) − V St r | + C δ r 1−δ (3.15)
At this point, we realize that, with (2.2) and (3.9) with p > 3
N − 2 3ˆΩ ∇v N,1 : ∇v N,3 = (ρ N − σ N )e 3 , v N,3 C.
Therefore, expanding the square in (3.15), and using also (3.8) yields
lim sup N →∞ |V sed N − V St r | lim sup N →∞ |N −2/3 ∇v N 1 2 L 2 (Ω) − V St r | + C
and we obtain the expected result thanks to (3.6).
Proof of Theorem 1.2 (i) .
We now turn to the proof of Theorem 1.2 (i). This time, we choose the cubes Q i such that assumption (Str) is satisfied. We revisit the latter computations, using that Proposition 3.1 provides the weak convergence N −1/3 v N,1 0 inḢ 1 (Ω) and that, thanks to (Str), we have the strong convergence N −1/3 v N,3 → v ∞,3 . We infer:
lim sup N →∞ (ρ N − n)e 3 , v N = lim sup N →∞ N −2/3 ∇v N 2 L 2 (Ω) = lim sup N →∞ N −2/3 ∇v N,1 2 L 2 (R 3 ) + ∇v ∞,3 2 L 2 (Ω) , (3.16)
where we also used that N −1/3 v N,1 → 0 strongly inḢ 1 (R 3 \ Ω) in order to replace Ω by R 3 . Combining (3.16) with (3.13) and (3.14) yields (1.22). Moreover, by definition of v ∞,3 there holds:
∇v ∞,3 2 L 2 (Ω) = lim N →∞ N 1/3 v ∞,3 , (σ N − n)e 3 = lim N →∞ N 1/3 v ∞,3 ,ρ N e 3 + N 1/3 v ∞,3 , (σ N −ρ N )e 3 ,
(3.17)
where we used again that v N,3 , ne 3 = 0. We conclude (1.23) by observing that N 1/3 (σ N − ρ N ) 0 weakly inḢ −1 (Ω) due to (2.2)-(2.3). Finally, (1.24) is a consequence of the convergence N −1/3 (v N,1 + v N,2 ) 0 inḢ 1 (Ω) from Proposition 3.1 as well as the bound on w N that follows from Proposition 3.2 and (3.10).
Proof of Proposition 3.1
Item (iii) is independent and proven in a first step. Item (ii) is a consequence to the properties of v N,1 outside Ω and is proven in a last step after tackling item (i). Item (iv) will follow from estimates that we show along the proof of items (i)-(iii). All the constants C involved in the following computations are harmless constants. They may depend on the involved exponent p and the constants c, C 0 and C 1 appearing in (H2)-(H1)-(H0).
Step 1: Proof of (iii):
To obtain (3.9) we proceed in two steps showing in passing the other statements in item (iii). Firstly, since Ω is bounded in one direction (orthogonal to ξ) it is standard to adapt the classical construction of solutions to (3.5) (see for instance [Gal11, Section IV.1]) to yield that v N,3 ∈ H 1 0 (Ω) with
N −1/3 v N,3 H 1 0 (Ω)
C and claimed convergence when N → ∞ thanks to (Str) and the linearity of the Stokes equations. Then, we introduce a truncation function χ such that Ω χ = supp(χ) ∩ Ω is C 2 and satisfies Ω :
= {χ = 1} ⊆ Ω χ ⊆ Ω. We set v χ N,3 = χv N,3 −ṽ χ , q χ N,3 = χq N,3 (3.18)
whereṽ χ lifts the divergence of χv N,3 in H 1 0 (Ω χ \ Ω ). We have then that div(v χ N,3 ) = 0 and for any divergence-free w ∈ C ∞ c (Ω χ ). Firstly, we use the embedding H 1 0 (Ω) ⊆ L 6 (Ω χ ) to yield that up to a trivial extensionṽ χ ∈ W 1,6 0 (Ω χ ) (see [Gal11,Theorem III.3.1]). Thus, since p ∈ L 2 (Ω χ ) (see [Gal11, Lemma IV.1.1]) we deduce v χ N,3 ∈ W 1,6 0 (Ω χ ) (see [Gal11, Theorem IV.6.1]) with bounds that entail
N −1/3 v N,3 W 1,6 ({χ=1}) C χ
with C χ depending furthermore on Ω χ . This entails that v N,3 ∈ L ∞ ({χ = 1}) We can then reproduce the same argument with a second χ with support a little smaller to yield:
N −1/3 ∇v N,3 L p (Ω ) C
whatever p ∈ (1, ∞) with the expected dependencies for C .
Step 2: Proof of (i): The assertion that N −1/3 v N,1 0 inḢ 1 (R 3 ) is a consequence of (2.6). We mention here only that the estimate we derived in (W 1,2 (Ω)) * extends straightforwadly into an estimate in the dual ofḢ 1 (R 3 ). We write then:
v N,1 = N i=1 U i , (3.19) where −∆U i + ∇P i = N −1/3 (δ R i − 1 |Q i | 1 Q i )e 3 div U i = 0 in R 3 (3.20)
and δ R i = H 2 | ∂B i /|∂B i | is the normalized uniform measure on ∂B i . Then,
∇v N,1 2 L 2 (R 3 ) = N i,j=1ˆ∇ U i : ∇U j dx. (3.21)
For the diagonal terms, we split U i = U i,1 − U i,2 , P i = P i,1 − P i,2 corresponding respectively to the solutions of Stokes equations on R 3 with source terms N −1/3 δ R i e 3 and N −1/3 |Q i | −1 1 Q i e 3 . Thanks, to the theory on Stokes problem on R 3 (see [Gal11, Section IV.2]), we know that such solutions can be computed by convolution with a fundamental solution. We denote Φ the fundamental solution for the velocity-field. We shall use below extensively that Φ is (−1)-homogeneous (see [Gal11, Eq. IV.2.3] for the exact formula). In case of U 1,i the existence theory for Stokes problem in exterior domains yields that we have also an exact solution (see [Gal11, Section V, Eq. (V.0.4)]). This formula entails in particular that U i,1 = V St r in B i . With these remarks at-hand now, we obtain by multiplying the Stokes equations for U i,1 with U i,1 that:
∇U i,1 2 L 2 (R 3 ) = N −1/3 V St r . (3.22)
Moreover, we have, using first the weak formulation of the Stokes equations and then standard estimates for the convolution with Φ :
∇U i,2 2 L 2 + |(∇U i,1 , ∇U i,2 ) L 2 (R 3 ) | N −1/3 U i,2 C 0 (Q i ) CN −1/3 N 2/3 1 Q i 1/3 ∞ 1 Q i 2/3 1 CN −1/3 .
(3.23) Thus, expanding the sum for U i when computing the L 2 -norm, we obtain:
∇U i 2 L 2 − N −1/3 V St r CN −1/3 . (3.24)
Finally, since U i,1 is constant in B i , we may reproduce the convolution arguments with ∇U i,2 to yield:
∇U i L ∞ (B i ) N 2/3 1 Q i 2/3 ∞ 1 Q i 1/3 1 CN 1/3 . (3.25)
We are now in position to estimate the off-diagonal terms. For fixed i ∈ {1, . . . , N }, we consider two cases for index j = i. Firstly, we say that Q j is a neighbor of Q i if i = j and dist(Q i , Q j ) C 1 with C 1 being the constant from (1.19). We note that for each i there are at most M neighbors Q j of Q i where M ∈ N depends only on C 1 . We observe now from the explicit formula for U j,1
|U j,1 (x)| C N −1/3 |x − X j | (3.26)
for all x ∈ R 3 \ B j . Thus, combining this with the bound of U j,2 derived in (3.23), we have for
all i = j U j (x) L ∞ (Q i ) C. (3.27)
This yields after integration by parts:
N i=1 Q j neighb. Q iˆR 3 ∇U i : ∇U j N −1/3 N i=1 j | Q j neighb. Q i U j L ∞ (Q i ) CN 2/3 (3.28)
When Q j is not a neighbor of Q i we may use the following estimate for smooth test-functions which is reminiscent of (2.13):
δ R i − |Q i | −1 1 Q i , ϕ CN −2/3 ∇ 2 ϕ L ∞ (Q i ) .
(3.29)
Applying this twice, with Φ:
ˆ∇ U i : ∇U j dx = N −1/3 δ R i − |Q i | −1 1 Q i , U j · e 3 N −1 ∇ 2 U j L ∞ (Q i ) = N −4/3 δ R j − |Q j | −1 1 Q j , ∇ 2 Φ(x − ·)e 3 L ∞ (Q i ) N −2 |X i − X j | 5 (3.30) since |X i − X j | is comparable to |x − X j | uniformly in x ∈ Q j when Q i and Q j are not neighbors. Thus, N i=1 i =j Q i not neighb.Q jˆR 3 ∇U i : ∇U j dx C N i=1 i =j N −2 |X i − X j | 5 CN 2/3 .
(3.31)
Combining (3.24) and (3.28)-(3.31) yields (3.6).
Moreover, X j is in the center of Q j so that B i is far from the support of the convolution defining U j (the distance scales like 1/N 1/3 with a constant depending on the parameters c, C 1 involved in (H1)-(1.19)). Arguing as for (3.30), we find then a constant C such that, for i = j
∇U j L ∞ (B i ) C N −1 |X i − X j | 4 j =i ∇U j L ∞ (B i ) CN 1/3 .
(3.32)
Combining with (3.25) yields
D(v N,1 ) L 2 (∪ i B i ) Cr 3/2 sup i D(v N,1 ) L ∞ (B i ) Cr 3/2 N 1/3 . (3.33)
It remains to analyse the convergence of N −1/3 v N,1 → 0 outside Ω. For this, we first provide an L ∞ -bound that we formulate in the following lemma for future reference: Lemma 3.3. Assume that for all i = 1, . . . , N we have Q i ⊆ Ω for some compact Ω ⊆ R 3 . Then, for any x ∈ R 3 \ Ω , there holds:
N i=1 U i C 1 dist(x,Ω )<2 + 1 dist(x,Ω )>1 dist(x, Ω ) 3 N i=1 ∇U i C min(N 1/3 , dist(x, Ω ) −1 )1 dist(x,Ω )<2 + 1 dist(x,Ω )>1 dist(x, Ω ) 4 .
Proof. We provide a computation of the second bound since the first one is obtained similarly. Fix x ∈ R 3 \ Ω . We have then:
N i=1 ∇U i = Q i neighb.x (∇U i,1 + ∇U i,2 ) + Q i not neighb.x ∇U i . (3.34)
where we define "Q i neighboring x" as dist(Q i , x) < 2(C 1 /N ) 1/3 (with C 1 given in (1.19)). For the second sum, we proceed similarly to (3.32) to obtain that:
i|Q i not neighb. of x ∇U i i|Q i not neighb. of x CN −1 |x − X i | 4 C dist(x, ∪{Q i not neighb. of x})] −1 C min(N 1/3 , dist(x, Ω ) −1 )
We note then that we may only have a finite number of indices in the first sum in (3.34) and that, for each i neighbor of x there holds:
|∇U i,1 (x)| CN −1/3 |x − X i | 2 CN 1/3 .
since B i is C/N 1/3 far from ∂Q i . We treat the second term with convolution arguments as in (3.25) and we obtain |∇U i,2 (x)| CN 1/3 . Eventually, we conclude that:
N i=1 ∇U i (x) C min(N 1/3 , dist(x, Ω ) −1 ) + N 1/3 1 {dist(x,Ω )<(2C 1 /N ) 1/3 } .
We obtain the first bound when dist(x, Ω ) 2. When dist(x, Ω ) > 1 we remark that there are no neighboring Q i to x and the above computations yield:
N i=1 ∇U i (x) C N N i=1 1 |x − X i | 4
we conclude by noting that |x − X i | dist(x, Ω ) for each i in the sum.
We continue with the proof of Proposition 3.1 (i). Let Ω ⊆ Ω be chosen independent of N containing all the cubes Q i which is possible due to assumption (1.6). Then, since v N,1 = i U i , the above lemma implies with dominated convergence that for arbitrary p ∈ (1, ∞):
lim N →∞ N −1/3 ∇v N,1 L p (R 3 \Ω ) = 0.
(3.35)
In particular, we have the same convergence in W 1,p (R 3 \ Ω).
Proof of (ii): Using that v N,2 is solution to the (homogeneous) Stokes solution inside Ω (with boundary condition −v N,1 on ∂Ω), we have the variational characterization
∇v N,2 L 2 (Ω) = min{ ∇v L 2 (Ω) : v ∈Ḣ 1 (Ω), divv = 0, v | ∂Ω = −v N,1 }.
To construct a suitable competitor, we consider again a bounded (and connected) set Ω as above and set v = v N,1 in R 3 \ Ω . Inside of Ω we then take a divergencefree extension of v. It is classical that such an extension can be constructed (e.g. by use of a Bogovkǐi operator) since the condition´∂Ω v · n = 0 is satisfied because div v N,1 = 0, and that the extension satisfies
∇v L 2 (R 3 ) C ∇v L 2 (R 3 \Ω ) = C ∇v N,1 L 2 (R 3 \Ω )
In view of (3.35), this concludes the proof of (ii).
Proof of (iv): The statement is an immediate consequence of (3.33), item (ii) and item (iii) applied with Ω that contains K from assumption (1.6) and with p sufficiently large.
Explicit computation of the first order correction for periodic configurations
In this section, we complete the proof of Theorem 1.2 by justifying item (ii). We will thus assume (Per) throughout this section. We will assume without loss of generality that t d = 0 in (Per). Indeed, since we consider the norm of v N,1 in the whole space R 3 , the shift t d does not have any influence.
We first note that, by classical arguments, there is a unique v per ∈Ḣ 1 (T 3 d ) (homogeneous means here that we consider mean-free functions) to which we can associate a pressure p per ∈ L 2 (T 3 d ) such that (1.26) holds true. We consider then in analogy to the cubes Q i , 1
i N the covering of R 3 by cubes (Q α ) α∈Z 3 where Q α = N −1/3 d(α + (−1/2, 1/2) 3 . Similarly, we adapt the notations introduced in Section 3.2: for α ∈ Z 3 , (U α , P α ) is the solution to
−∆U α + ∇P α = N −1/3 (δ R α − 1 |Q α | 1 Qα )e 3 div U α = 0 in R 3 , lim |x|→∞ |U α (x)| = 0.
(4.1) and δ R α is the normalized uniform measure on ∂B α = ∂B rN −1/3 (dN −1/3 α). We keep for technical convenience the labels α ∈ Z 3 We then note by a scaling argument that:
v per (x) = α∈Z 3 U α (N −1/3 x) − [0,d] 3 U α (N −1/3 y) dy v N,1 = i∈I N U i + i ∈I N U i ,
where we recall the set I N from (Per) and use the convention that the sums over the index i runs over the set {1, . . . , N }. The fact that the first sum converges inḢ 1 (T 3 d ) follows from the decay of ∇U α (cf. (3.32)). Let Z N ⊆ Z 3 be such that ∪ α∈Z N Q α = ∪ i∈I N Q i = E N with E N as in (Per). We then obtain then:
∇v N,1 2 L 2 (R 3 ) = N 2/3 ∇v per (N 1/3 x) 2 L 2 (E N ) + rem 1,N + rem 2,N where rem 1,N = − α / ∈Z N ∇U α 2 L 2 (E N ) − 2ˆE N N 1/3 ∇v per (N 1/3 x) : α / ∈Z N ∇U α dx rem 2,N = i∈I N ∇U i 2 L 2 (R 3 \E N ) + i / ∈I N ∇U i 2 L 2 (R 3 ) + 2ˆR 3 ∇v N,1 : i / ∈I N ∇U i dx
By standard arguments, we have:
lim N →∞ ∇v per (N 1/3 x) 2 L 2 (E N ) = v per 2Ḣ 1 (T 3 d )
and the the second identity in (1.25) yields from the analysis of the periodic problem in [Has59]. Our proof thus reduces to obtaining that: lim sup N →∞ N −2/3 (rem 1,N + rem 2,N ) = 0.
Concerning rem 1,N we note that we have first the bound:
|rem 1,N | C ∇ α / ∈Z N U α L 2 (E N ) 1 + ∇ α / ∈Z N U α L 2 (E N )
Then, we apply Lemma 3.3 to yield that, for arbitrary x ∈ E N there holds:
α / ∈Z N |∇U α (x)| C min(N 1/3 , dist(x, ∂E N ) −1 ).
We note here that, to apply properly Lemma 3.3 we must invoke an "invading domain" argument and firstly approximate the infinite sum by finite sums. The above bound yields from the remark that the right-hand side does not depend on the finite subset of Z 3 \ Z N that we would choose. Recalling E N ⊆ E N +1 from (Per) and that on the other side the sets E N are contained in a compact set independently of N due to (1.6), we deduce with the dominated convergence theorem lim sup
N →∞ N −1/3 ∇ α / ∈Z N U α L 2 (E N ) = 0, lim sup N →∞ N −2/3 rem 1,N = 0.
Concerning rem 2,N , we can get similarly as above lim sup
N →∞ N −1/3 i∈I N ∇U i 2 L 2 (R 3 \E N ) = 0.
Moreover, since by assumption (Per) #{i / ∈ I N } N , we have from the bound on U i in (3.25) that
N −1/3 i / ∈I N ∇U i L 2 (R 3 ) = 0.
(4.2)
Combining these estimates yields lim N →∞ N −2/3 rem 2,N = 0 which concludes the proof.
Computations in the ill-prepared case
We provide here the computations in the ill-prepared case when (Hom) is not satisfied. Let (u N , p N ) be the solution to (1.8). We introduce again (v N , q N ) the solution to
−∆v N + ∇q N = N 2/3ρ N e 3 in Ω div v N = 0 in Ω v N = 0 on ∂Ω (5.1) and w N = u N − v N .
We point out that, without assumption (Hom) we may not normalize the pressure to add the −ne 3 term to the right-hand side without modifying v N . The main goal of this section is a proof of item (i) in Theorem 1.1. We complement the proof with a more refined description ofV N at the end of this section. To achieve our main goal we first provide the following proposition: Proof. Item i) is a direct consequence to (2.5) in Proposition 2.1 by standard arguments on generalized solutions to Stokes system (see [Gal11, Theorem IV.1.1]). We point out that the result holds actually whether data are well-prepared or ill-prepared. The main difference between the ill-prepared and well-prepared setting is that v * = 0 in the latter one.
Since v N /N 2/3 converges to v * in H 1 0 (Ω), we can bound w N as follows thanks to Proposition 3.2:
∇w N L 2 (Ω) C D(v N ) L 2 (∪B i ) CN 2/3 D(v * ) L 2 (∪B i ) + C N −2/3 v N − v * H 1 (Ω)
where the second term in the parenthesis can be made arbitrary small for N large. We remark then that n ∈ L ∞ (Ω) so that standard elliptic regularity results entail in particular that v * ∈ W 2,4 (Ω ) for arbitrary bounded Ω ⊆ Ω and thus v * ∈ C 1 (Ω). We infer then that the first term in the parenthesis is bounded by r 3/2 . This ends the proof.
Proof of Theorem 1.1(i). Assume (Hom) is not satisfied. Then, by combining Proposition 5.1 and (2.5) in Proposition 2.1 (which implies the strong convergence ofρ N to n in H −1 (Ω)), we infer: To complement the analysis of the ill-prepared case, we provide a sharper description ofV N for large values of N. For this, we introduce further notations for solutions to (5.2). Indeed, we remark that this solution is fixed by the vector e 3 so that changing this value to another vector e ∈ R 3 would yield a different velocity-field. Below, we highlight this possible dependency by writing v * [ẽ] the solution associated with the vectorẽ ∈ R 3 . We can now state our main proposition:
N −2/3V N = N −2/3 v N ,ρ N H 1 ,H −1 + N −2/3 w N ,ρ N H 1 ,H −1 → v * ,
Proposition 5.2. Assume that (Hom) does not hold. Then, there exists V * ∈ R 3 such that:
(i) there exists a constant C independent of N ∈ N sufficiently large for which
V N = N 2 3 (V * + rem N )
with |rem N | Cr.
(i) there holds:
V * · e =ˆΩ ∇v * [e 3 ] : ∇v * [e] ∀ e ∈ S 2 Remark 5.3. We first point out that v * does not depend on r. So, when ∇n × e = 0 we have indeed captured the first order ofV N with a remainder smaller than O(r). We note that we can use the system satisfied by v * to rewrite:
V * · e =ˆΩ v * [e 3 ] · ne =ˆΩ v * [e] · ne 3
We also recall that, in the degenerate case ∇n × e = 0, there holds v * [e] = 0. In this case, the computations of the previous section hold and show thatV N · e CN 1/3 similarly as we obtained (1.15). In particular, the results obtained in the present section are not optimal in this direction.
Proof. We prove that, for arbitrary e ∈ S 2 , there holds:
V N · e = N
H 0 [
0N ] := {w ∈ H 1 0 (Ω) s.t. div w = 0 on Ω and D(w) = 0 on B N i for all i} (1.11)
Proposition 3. 1 .
1There exists a constant C > 0 depending only on Ω, and on C 0 and c from (H2)-(H1) as well as on
N,3 : ∇w = N 2 3 (σ N −n)e 3 , χw +ˆΩ χ v·(2∇w∇χ+∆χw)−ˆΩ χ ∇ṽ χ : ∇w+ˆΩ χ p∇χ·w.
Proposition 5. 1 .
1The vector-fields v N and w N introduced above satisfy the following statements:(i) there exists (v * , q * ) ∈ H 1 0 (Ω) × H −1 (Ω) for which N −2/3 v N → v * inH 1 0 (Ω) and: −∆v * + ∇q * = ne 3 in Ω, div v * = 0 in Ω. (5.2) (i) there exists a constant C which depends only on c from (H1) and Ω such that, for N sufficiently large: w N H 1 (Ω) CN 2/3 r 3/2 . (5.3)
n H 1 ,H −1 + O(r 3/2 ) =ˆΩ v * n + O(r 3/2 ), which yields (1.13) since v * H 1 (Ω) C n L ∞ (Ω) . Moreover, we have via a standard energy estimate:ˆΩ v * · ne 3 = ∇v * 2 L 2 (Ω) , (5.4)which yields (1.14) since v * = 0 if (Hom) is not satisfied. This ends our proof.
w
* [e 3 ] : ∇v * [e] + O(r) This shall complete the two items of the proposition simultaneously.Given e ∈ S 2 , let us denote by u N [e], v N [e], w N [e] the velocity-fields associated to the problem (1.8) replacing e 3 by e (analogously as the notation v * [e] introduced above). By Proposition 5.1, we have:∇v N [e 3 ] L 2 + ∇v N [e] L 2 CN 2/3 , ∇w N [e 3 ] L 2 CN N [e 3 ] → v * [e 3 ] N − 2 3 v N [e] → v * [e]in H 1 0 (Ω). Furthermore, like in the previous proof, there holds:N − 2 3V N · e = N − 4 3 v N [e 3 ], −∆v N [e] + ∇q N [e] H 1 0 (R 3 ),H −1 (R 3 ) + N − 2 3 w N [e 3 ],ρ N e H 1 0 (Ω),H −1 (Ω) = N − 4 3ˆΩ ∇v N [e 3 ] : ∇v N [e] + N − 2 3 w N [e 3 ],ρ N e H 1 0 (Ω),H −1 (Ω)We conclude by remarking that byProposition 5.1 lim N →∞ˆΩ ∇v N [e 3 ] : ∇v N [e] =ˆΩ ∇v * [e 3 ] : ∇v * [e] and, by (2.2) and (2.4) in Proposition 2.1 and the above bound on w N , that: N [e 3 ],ρ N e H 1 0 (Ω),H −1 (Ω) N − 2 3 ρ N H −1 (Ω) w N [e 3 ] H 1 (Ω) Cr.
Theorem 1.1. Assume that (H0)-(H2) are satisfied.(i) Assume that (Hom) is not satisfied. Then, there exists C depending only on Ω, on n
and on C 0 , c, from (H2) and (H1) such that
lim sup
N →∞
25 )
25for some constant a per > 0 and where v per is the unique solution to
Proposition 2.1. Let p ∈ (1, ∞) and assume that B i ⊆ Q i for all i. (i) If p = 3, there exists a constant C that depends only on p and the constant C 1 from (1.19) such that
Let ϕ ∈ H 1 (∪ i B i ) and let ψ be the solution to (3.11). ThenThe estimate for w N then follows from the following standard estimate (see
e.g. [GH21, Equation (27)])
Proposition 3.2.
AcknowledgementsThe two authors warmly thank David Gérard-Varet and Amina Mecherbert for fruitful discussions during the preparation of this paper. M.H. acknowledges support of the Institut Universitaire de France and project "SingFlows" ANR-grant number: ANR-18-CE40-0027. This paper was partly written while M.H. was benefiting a "subside à savant" from Université Libre de Bruxelles. He would like to thank the mathematics department at ULB for its hospitality.
Sedimentation in a dilute dispersion of spheres. G Batchelor, J. Fluid Mech. 52G. Batchelor. "Sedimentation in a dilute dispersion of spheres". In: J. Fluid Mech. 52.2 (1972), pp. 245-268.
| []
|
[
"Improving TSP tours using dynamic programming over tree decompositions *",
"Improving TSP tours using dynamic programming over tree decompositions *"
]
| [
"Marek Cygan [email protected] \nInstitute of Informatics\nUniversity of Warsaw\nPoland\n",
"Łukasz Kowalik [email protected] \nInstitute of Informatics\nUniversity of Warsaw\nPoland\n",
"Arkadiusz Socała [email protected] \nInstitute of Informatics\nUniversity of Warsaw\nPoland\n"
]
| [
"Institute of Informatics\nUniversity of Warsaw\nPoland",
"Institute of Informatics\nUniversity of Warsaw\nPoland",
"Institute of Informatics\nUniversity of Warsaw\nPoland"
]
| []
| Given a traveling salesman problem (TSP) tour H in graph G, a k-move is an operation which removes k edges from H and adds k edges of G so that a new tour H' is formed. The popular k-OPT heuristic for TSP finds a local optimum by starting from an arbitrary tour H and then improving it by a sequence of k-moves. Until 2016, the only known algorithm to find an improving k-move for a given tour was the naive solution in time $O(n^k)$. At ICALP'16 de Berg, Buchin, Jansen and Woeginger showed an $O(n^{\lfloor 2k/3\rfloor+1})$-time algorithm. We show an algorithm which runs in $O(n^{(1/4+\epsilon_k)k})$ time, where $\lim_{k\to\infty}\epsilon_k = 0$. It improves over the state of the art for every k ≥ 5. For the most practically relevant case k = 5 we provide a slightly refined algorithm running in $O(n^{3.4})$ time. We also show that for the k = 4 case, improving over the $O(n^3)$-time algorithm of de Berg et al. would be a major breakthrough: an $O(n^{3-\epsilon})$-time algorithm for any ε > 0 would imply an $O(n^{3-\delta})$-time algorithm for the All Pairs Shortest Paths problem, for some δ > 0. | 10.1145/3341730 | [
"https://arxiv.org/pdf/1703.05559v2.pdf"
]
| 7,742,574 | 1703.05559 | e6318ed94df25925592c5cf5834f37f645a042c1 |
Improving TSP tours using dynamic programming over tree decompositions *
Aug 2017
Marek Cygan [email protected]
Institute of Informatics
University of Warsaw
Poland
Łukasz Kowalik [email protected]
Institute of Informatics
University of Warsaw
Poland
Arkadiusz Socała [email protected]
Institute of Informatics
University of Warsaw
Poland
Introduction
In the Traveling Salesman Problem (TSP) one is given a complete graph G = (V, E) and a weight function $w : E \to \mathbb{N}$. The goal is to find a Hamiltonian cycle in G (also called a tour) of minimum weight. This is one of the central problems in computer science and operations research. It is well known to be NP-hard and has been studied from different perspectives, most notably using approximation [1,4,24], exponential-time algorithms [12,15] and heuristics [23,20,5].
In practice, TSP is often solved by means of local search heuristics: we begin from an arbitrary Hamiltonian cycle in G, and then the cycle is modified by means of some local changes in a series of steps. After each step the weight of the cycle should improve; when the algorithm cannot find any improvement it stops. One of the most successful examples of this approach is the k-opt heuristic, where in each step an improving k-move is performed. Given a Hamiltonian cycle H in a graph G = (V, E), a k-move is an operation that removes k edges from H and adds k edges of G so that the resulting set of edges H' is a new Hamiltonian cycle. The k-move is improving if the weight of H' is smaller than the weight of H. The k-opt heuristic was introduced in 1958 by Croes [5] for k = 2, and then applied for k = 3 by Lin [19] in 1965. Then in 1972 Lin and Kernighan designed a complicated heuristic which uses k-moves for unbounded values of k, though restricting the space of k-moves to so-called sequential k-moves. A variant of this heuristic called LKH, implemented by Helsgaun [13], optimally solves instances with up to 85,900 cities. Among other modifications, the variant searches for non-sequential 4- and 5-moves. From the theory perspective, the quality of the solutions returned by k-opt, as well as the length of the sequence of k-moves needed to find a local optimum, was studied, among others, by Johnson, Papadimitriou and Yannakakis [14], Krentel [17] and Chandra, Karloff and Tovey [3]. More recently, smoothed analysis of the running time and approximation ratio was investigated by Manthey and Veenstra [18] and Künnemann and Manthey [21].
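As a concrete illustration of a single step of this heuristic, the following C++ sketch (our own, not code from the paper) searches naively for an improving 2-move; for k = 2 this is exactly the $O(n^2)$ exhaustive search discussed below.

```cpp
#include <optional>
#include <utility>
#include <vector>

using Weight = long long;
// dist[u][v] is the edge weight w({u, v}) of the complete graph G.
using Dist = std::vector<std::vector<Weight>>;

// Returns indices (i, j) such that replacing tour edges {w_i, w_{i+1}} and
// {w_j, w_{j+1}} by {w_i, w_j} and {w_{i+1}, w_{j+1}} (and reversing the
// segment in between) strictly decreases the tour weight, if such a pair exists.
std::optional<std::pair<int, int>> improving2Move(const std::vector<int>& tour,
                                                  const Dist& dist) {
    int n = tour.size();
    for (int i = 0; i < n; ++i)
        for (int j = i + 2; j < n; ++j) {
            if (i == 0 && j == n - 1) continue;   // adjacent edges: degenerate move
            int a = tour[i], b = tour[(i + 1) % n];
            int c = tour[j], d = tour[(j + 1) % n];
            Weight gain = dist[a][b] + dist[c][d] - dist[a][c] - dist[b][d];
            if (gain > 0) return std::make_pair(i, j);
        }
    return std::nullopt;                          // local optimum for 2-opt
}
```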
In this paper we study the k-opt heuristic but we focus on its basic ingredient, namely on finding a single improving k-move. The decision problem k-opt Detection is to decide, given a tour H in an edge-weighted complete graph G, if there is an improving k-move. In its optimization version, called k-opt Optimization, the goal is to find a k-move that gives the largest weight improvement, if any. Unfortunately, this is a computationally hard problem. Namely, Marx [22] has shown that k-opt Detection is W[1]-hard, which means that it is unlikely to be solvable in $f(k)\,n^{O(1)}$ time, for any function f. Later Guo, Hartung, Niedermeier and Suchý [11] proved that there is no algorithm running in time $n^{o(k/\log k)}$, unless the Exponential Time Hypothesis (ETH) fails. This explains why in practice people use exhaustive search running in $O(n^k)$ time for every fixed k, or faster algorithms which explore only a very restricted subset of all possible k-moves.
Recently, de Berg, Buchin, Jansen and Woeginger [7] have shown that it is possible to improve over the naive exhaustive search. For every fixed k ≥ 3 their algorithm runs in time $O(n^{\lfloor 2k/3\rfloor+1})$ and uses O(n) space. In particular, it gives $O(n^3)$ time for k = 4. Thus, the algorithm of de Berg et al. is of high practical interest: the complexity of the k = 4 case now matches that of the k = 3 case, and hence it seems that one can use 4-opt in all the applications where 3-opt was fast enough. De Berg et al. also show that progress for k = 3 is unlikely: namely, k-opt Detection has an $O(n^{3-\epsilon})$-time algorithm for some ε > 0 iff the All Pairs Shortest Paths problem has an $O(n^{3-\delta})$-time algorithm for some δ > 0.
Our Results. In this paper we extend the line of research started in [7]: we show an algorithm running in time $O(n^{(1/4+\epsilon_k)k})$ and using space $O(n^{(1/8+\epsilon_k)k})$ for every fixed k, where $\lim_{k\to\infty}\epsilon_k = 0$. We are able to compute the values of $\epsilon_k$ for k ≤ 10. These values show that our algorithm improves the state of the art for every k = 5, . . . , 10 (see Table 1). A different adjustment of the parameters of our algorithm results in time $O(n^{k/2+3/2})$ and additional space of $O(\sqrt{n})$, which improves the state of the art for every k ≥ 8.
We also show a good reason why we could not improve over the $O(n^3)$-time algorithm of de Berg et al. for 4-opt Optimization: an $O(n^{3-\epsilon})$-time algorithm for some ε > 0 would imply that All Pairs Shortest Paths can be solved in time $O(n^{3-\delta})$ for some δ > 0. Note that although the family of 4-moves contains all 3-moves, it is still possible that there is no improving 3-move, but there is an improving 4-move. Thus the previous lower bound of de Berg et al. does not imply our lower bound, though our reduction is essentially an extension of the one by de Berg et al. [7] with a few additional technical tricks.
We also devote special attention to the k = 5 case of the k-opt Optimization problem, hoping that it can still be of practical interest. Our generic algorithm works in $O(n^{3.67})$ time in this case. However, we show that in this case the algorithm can be further refined, obtaining the $O(n^{3.4})$ running time. We suppose that similar improvements of order $n^{\Omega(1)}$ should also be possible for larger values of k. In Table 1 we present the running times for k = 5, . . . , 10.
Our Approach. Our algorithm applies dynamic programming on a tree decomposition. This is a standard method for dealing with some sparse graphs, like series-parallel graphs or outerplanar graphs. However, in our case we work with complete graphs. The trick is to work on an implicit structure, called the dependence graph D. Graph D has k vertices which correspond to the k edges of H that are chosen to be removed. A subset of edges of D corresponds to the pattern of edges to be added (as we will see, the number of such patterns is bounded for every fixed k, and one can iterate over all patterns). The dependence graph can be thought of as a sketch of the solution, which needs to be embedded in the input graph G. Graph D is designed so that if it has a separator S, such that D − S falls apart into two parts A and B, then once we find an optimal embedding of A ∪ S for some fixed embedding of S, one can forget about the embedding of A. This intuition can be formalized as dynamic programming on a tree decomposition of D, which is basically a tree of separators in D. The idea sketched above leads to an algorithm running in time $O(n^{(1/3+\epsilon_k)k})$ for every fixed k, where $\lim_{k\to\infty}\epsilon_k = 0$.
The reason for the exponent in the running time is that D has maximum degree 4 and hence it has treewidth at most $(1/3 + \epsilon_k)k$, as shown by Fomin et al. [8].
The further improvement to $O(n^{(1/4+\epsilon_k)k})$ is obtained by yet another idea. We partition the n edges of H into $n^{1/4}$ buckets of size $n^{3/4}$ and we consider all possible distributions of the k edges to remove into buckets. If there are many nonempty buckets, then graph D has fewer edges, because some dependencies are forced by putting the corresponding edges into different buckets. As a result, the treewidth of D decreases and the dynamic programming runs faster. The case when there are few nonempty buckets does not give a large speed-up in the dynamic programming, but the number of such distributions is small.
Preliminaries
Throughout the paper let $w_1, w_2, \ldots, w_n$ and $e_1, \ldots, e_n$ be the sequences of, respectively, subsequent vertices and edges visited by H, so that $e_i = \{w_i, w_{i+1}\}$ for $i = 1, \ldots, n-1$ and $e_n = \{w_n, w_1\}$. For $i = 1, \ldots, n-1$ we call $w_i$ the left endpoint of $e_i$ and $w_{i+1}$ the right endpoint of $e_i$. Also, $w_n$ is the left endpoint of $e_n$ and $w_1$ is its right endpoint.
We work with undirected graphs in this paper. An edge between vertices u and v is denoted either as {u, v} or shortly as uv.
For a positive integer i we denote [i] = {1, . . . , i}.
Connection patterns and embeddings
Formally, a k-move is a pair of sets $(E^-, E^+)$, both of cardinality k, where $E^- \subseteq \{e_1, \ldots, e_n\}$, $E^+ \subseteq E(G)$, and $(E(H) \setminus E^-) \cup E^+$ is a Hamiltonian cycle. This is the most intuitive definition of a k-move; however, it has a drawback, namely that it is impossible to specify $E^+$ without specifying $E^-$ first. For this reason, instead of listing the edges of $E^+$ explicitly, we will define a connection pattern, which together with $E^-$ expressed as an embedding fully specifies a k-move.

A k-embedding (or shortly: embedding) is any function $f : [k] \to [n]$. A connection k-pattern (or shortly: connection pattern) is any perfect matching in the complete graph on the vertex set $[2k]$. We call a connection pattern M valid when one obtains a single k-cycle from M by identifying vertex 2i with vertex $(2i+1) \bmod 2k$ for every $i = 1, \ldots, k$.
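The validity condition can be tested mechanically. The following sketch (our illustration; the names and 0-based indexing are ours) contracts each identified pair and checks that a single cycle remains; since every contracted vertex has degree exactly 2, this reduces to a connectivity check.

```cpp
#include <functional>
#include <numeric>
#include <vector>

// matching[v] = vertex matched to v in M, for v = 0, ..., 2k-1 (0-based).
// Valid iff identifying vertex 2i with (2i+1) mod 2k (in the paper's 1-based
// labels) turns M into a single k-cycle. In 0-based labels the identified
// pairs are (1,2), (3,4), ..., (2k-1, 0).
bool isValidConnectionPattern(const std::vector<int>& matching) {
    int n2 = matching.size(), k = n2 / 2;
    auto group = [&](int u) { return ((u + n2 - 1) % n2) / 2; };  // super-vertex
    std::vector<int> parent(k);
    std::iota(parent.begin(), parent.end(), 0);
    std::function<int(int)> find = [&](int x) {
        return parent[x] == x ? x : parent[x] = find(parent[x]);
    };
    int components = k;
    for (int v = 0; v < n2; ++v) {
        int u = matching[v];
        if (u < v) continue;                    // take each matching edge once
        int a = find(group(v)), b = find(group(u));
        if (a != b) { parent[a] = b; --components; }
    }
    // Every super-vertex has degree exactly 2, so one connected component
    // is equivalent to a single k-cycle.
    return components == 1;
}
```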
Let us show that every pair $(E^-, E^+)$ that defines a k-move has a corresponding pair of an embedding and a connection pattern, consequently giving an intuitive explanation of the above definition of embeddings and connection patterns. Consider a move $Q = (E^-, E^+)$. Let $E^- = \{e_{i_1}, \ldots, e_{i_k}\}$, where $i_1 < i_2 < \cdots < i_k$. For every $j = 1, \ldots, k$, let $v_{2j-1}$ and $v_{2j}$ be the left and right endpoint of $e_{i_j}$, respectively. An embedding of the k-move Q is the function $f_Q : [k] \to [n]$ defined as $f_Q(j) = i_j$ for every $j = 1, \ldots, k$. Note that $f_Q$ is increasing. A connection pattern of Q is every perfect matching M in the complete graph on the vertex set $[2k]$ such that $E^+ = \{\{v_i, v_j\} \mid \{i, j\} \in M\}$. Note that at least one such matching always exists, and if $E^-$ contains two incident edges then there is more than one such matching. Note also that M is valid, because otherwise after applying the k-move Q we would not get a Hamiltonian cycle.
Conversely, consider a pair (f, M), where f is an increasing embedding and M is a valid connection pattern. We define $E^-_f = \{e_{f(j)} \mid j = 1, \ldots, k\}$. For every $j = 1, \ldots, k$, let $v_{2j-1}$ and $v_{2j}$ be the left and right endpoint of $e_{f(j)}$, respectively. Then we also define $E^+_{(f,M)} = \{v_i v_j \mid \{i, j\} \in M\}$. It is easy to see that $(E^-_f, E^+_{(f,M)})$ is a k-move. Because of the equivalence shown above, in what follows we abuse notation slightly: a k-move Q can be described both by a pair of edge sets to remove and add, $(E^-_Q, E^+_Q)$, and by an embedding-connection pattern pair $(f_Q, M_Q)$. The gain of Q is defined as $\mathrm{gain}(Q) = w(E^-_Q) - w(E^+_Q)$. Given a connection pattern M and an embedding f, we can also define an M-gain of f, denoted by $\mathrm{gain}_M(f) = \mathrm{gain}(Q)$, where Q is the k-move defined by (f, M). Note that k-opt Optimization asks for a k-move with maximum gain.
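The following sketch (our illustration, with hypothetical helper names) computes $\mathrm{gain}_M(f)$ directly from the definitions above, given the tour, the distance matrix, an embedding and a connection pattern.

```cpp
#include <vector>

using Weight = long long;
using Dist = std::vector<std::vector<Weight>>;

// gain_M(f) for an increasing embedding f (0-based positions of the removed
// tour edges) and a connection pattern given as a matching on {0,...,2k-1}.
// tour[p] is the p-th vertex on H, so tour edge p is {tour[p], tour[(p+1)%n]}.
Weight gainM(const std::vector<int>& tour, const Dist& dist,
             const std::vector<int>& f, const std::vector<int>& matching) {
    int n = tour.size(), k = f.size();
    // v[2j] / v[2j+1]: left / right endpoint of the j-th removed edge
    // (0-based analogue of the paper's v_{2j-1}, v_{2j}).
    std::vector<int> v(2 * k);
    Weight gain = 0;
    for (int j = 0; j < k; ++j) {
        v[2 * j]     = tour[f[j]];
        v[2 * j + 1] = tour[(f[j] + 1) % n];
        gain += dist[v[2 * j]][v[2 * j + 1]];   // weight of removed edges
    }
    for (int i = 0; i < 2 * k; ++i)
        if (matching[i] > i)                     // each added edge once
            gain -= dist[v[i]][v[matching[i]]];
    return gain;
}
```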
We note that the notion of a connection pattern of a k-move was essentially introduced by de Berg et al. [7] under the name of 'signature', though they used a permutation instead of a matching, which we find more natural. They also show that one can reduce the k-opt Optimization problem so that it suffices to consider only k-moves where $E^-$ contains pairwise non-incident edges, but we do not find this assumption helpful in the description of our algorithm (it makes the connection pattern of a k-move unique).
Tree decomposition and nice tree decomposition
To make the paper self-contained, in this section we recall the definitions of tree and path decompositions and state their basic properties which will be used later in the paper. The content of this section comes from the textbook of Cygan et al. [6].
A tree decomposition of a graph G is a pair $\mathcal{T} = (T, \{X_t\}_{t \in V(T)})$, where T is a tree whose every node t is assigned a vertex subset $X_t \subseteq V(G)$, called a bag, such that the following three conditions hold:

(T1) $\bigcup_{t \in V(T)} X_t = V(G)$.

(T2) For every $uv \in E(G)$, there exists a node t of T such that $u, v \in X_t$.

(T3) For every $u \in V(G)$, the set $\{t \in V(T) \mid u \in X_t\}$ induces a connected subtree of T.

The width of a tree decomposition $\mathcal{T} = (T, \{X_t\}_{t \in V(T)})$, denoted by $w(\mathcal{T})$, equals $\max_{t \in V(T)} |X_t| - 1$. The treewidth of a graph G, denoted by tw(G), is the minimum possible width of a tree decomposition of G. When E is a set of edges and V(E) is the set of endpoints of all edges in E, by tw(E) we denote the treewidth of the graph (V(E), E).

A path decomposition is a tree decomposition $\mathcal{T} = (T, \{X_t\}_{t \in V(T)})$ where T is a path. Then $\mathcal{T}$ is more conveniently represented by a sequence of bags $(X_1, \ldots, X_{|V(T)|})$, corresponding to successive vertices of the path. The pathwidth of a graph G, denoted by pw(G), is the minimum possible width of a path decomposition of G.
In what follows we frequently use the notion of a nice tree decomposition, introduced by Kloks [16]. These tree decompositions are more structured, making it easier to describe dynamic programming over the decomposition. A tree decomposition $(T, \{X_t\}_{t \in V(T)})$ can be rooted by choosing a node $r \in V(T)$, called the root of T, which introduces natural parent-child and ancestor-descendant relations in the tree T. A rooted tree decomposition $(T, \{X_t\}_{t \in V(T)})$ is nice if $X_r = \emptyset$, $X_\ell = \emptyset$ for every leaf $\ell$ of T, and every non-leaf node of T is of one of the following three types:

• Introduce node: a node t with exactly one child t' such that $X_t = X_{t'} \cup \{v\}$ for some vertex $v \notin X_{t'}$.

• Forget node: a node t with exactly one child t' such that $X_t = X_{t'} \setminus \{w\}$ for some vertex $w \in X_{t'}$.

• Join node: a node t with two children $t_1, t_2$ such that $X_t = X_{t_1} = X_{t_2}$.

A path decomposition is nice when it is nice as a tree decomposition after rooting the path in one of its endpoints. (Note that it then contains no join nodes.)

Proposition 1 (see Lemma 7.4 in [6]). Given a tree (resp. path) decomposition $\mathcal{T} = (T, \{X_t\}_{t \in V(T)})$ of G of width at most k, one can in time $O(k^2 \cdot \max(|V(T)|, |V(G)|))$ compute a nice tree (resp. path) decomposition of G of width at most k that has at most $O(k|V(G)|)$ nodes.
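Restricted to path decompositions, the construction behind Proposition 1 can be sketched as follows (our illustration, not the book's code): between consecutive bags we emit one forget node per disappearing vertex and one introduce node per appearing vertex.

```cpp
#include <set>
#include <vector>

using Bag = std::set<int>;   // one bag of the path decomposition

// Turn a path decomposition (sequence of bags) into a nice one: consecutive
// bags differ by exactly one vertex, and the sequence starts and ends with
// the empty bag. The width does not increase.
std::vector<Bag> makeNicePathDecomposition(const std::vector<Bag>& bags) {
    std::vector<Bag> nice;
    Bag cur;                                   // start from the empty bag
    nice.push_back(cur);
    for (const Bag& target : bags) {
        // Forget nodes: drop vertices absent from the next original bag.
        for (int v : std::vector<int>(cur.begin(), cur.end()))
            if (!target.count(v)) { cur.erase(v); nice.push_back(cur); }
        // Introduce nodes: add the vertices that newly appear.
        for (int v : target)
            if (!cur.count(v)) { cur.insert(v); nice.push_back(cur); }
    }
    // Forget the remaining vertices so that the last bag is empty.
    for (int v : std::vector<int>(cur.begin(), cur.end())) {
        cur.erase(v);
        nice.push_back(cur);
    }
    return nice;
}
```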
We say that (A, B) is a separation of a graph G if $A \cup B = V(G)$ and there is no edge between $A \setminus B$ and $B \setminus A$. Then $A \cap B$ is a separator of this separation.

Lemma 2 (see Lemma 7.3 in [6]). Let $(T, \{X_t\}_{t \in V(T)})$ be a tree decomposition of a graph G and let ab be an edge of T. The forest T − ab obtained from T by deleting the edge ab consists of two connected components $T_a$ (containing a) and $T_b$ (containing b). Let $A = \bigcup_{t \in V(T_a)} X_t$ and $B = \bigcup_{t \in V(T_b)} X_t$. Then (A, B) is a separation of G with separator $X_a \cap X_b$.
The algorithm
In this section we present our algorithms for k-opt Optimization. The brute-force algorithm verifies all possible k-moves. In other words, it iterates over all possible valid connection patterns and increasing embeddings. The brilliant observation of de Berg et al. [7] is that we can iterate only over all possible connection patterns, whose number is bounded by (2k)!. In other words, we fix a valid connection pattern M, and from now on our goal is to find an increasing embedding $f : [k] \to [n]$ which, together with M, defines a k-move giving the largest weight improvement over all k-moves with connection pattern M. Instead of doing this by enumerating all $\Theta(n^k)$ embeddings, de Berg et al. [7] fix carefully selected $\lfloor 2k/3 \rfloor$ values of f in all $n^{\lfloor 2k/3 \rfloor}$ possible ways, and then show that the optimal choice of the remaining values can be found by a simple dynamic programming running in O(nk) time. Our idea is to find the optimal embedding for a given connection pattern using a different, more efficient approach.
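A simple way to iterate over all connection patterns, in the spirit of the (2k)! bound above, is to enumerate perfect matchings on [2k] by backtracking and filter out the invalid ones. The sketch below (our illustration) reuses isValidConnectionPattern from the earlier sketch.

```cpp
#include <functional>
#include <vector>

// Calls report(matching) once for every valid connection k-pattern,
// where matching[v] is the partner of v on {0, ..., 2k-1}.
void forEachValidPattern(int k,
        const std::function<void(const std::vector<int>&)>& report) {
    std::vector<int> matching(2 * k, -1);
    std::function<void()> rec = [&]() {
        int v = 0;
        while (v < 2 * k && matching[v] != -1) ++v;   // first unmatched vertex
        if (v == 2 * k) {                             // matching is complete
            if (isValidConnectionPattern(matching)) report(matching);
            return;
        }
        for (int u = v + 1; u < 2 * k; ++u) {
            if (matching[u] != -1) continue;
            matching[v] = u; matching[u] = v;         // match v with u
            rec();
            matching[v] = -1; matching[u] = -1;       // backtrack
        }
    };
    rec();
}
```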
Basic setup
Informally speaking, instead of guessing some values of f, we guess an approximation of f defined by appropriate bucketing. For each approximation b, finding an optimal embedding consistent with b is done by a dynamic programming over a tree decomposition. We would like to note that even without bucketing (i.e., by using a single trivial bucket of size n) our algorithm works in $n^{(1/3+\epsilon_k)k}$ time. Therefore the notion of bucketing is used to further improve the running time, but it is not essential to perform the dynamic programming on a tree decomposition.
More precisely, we partition the set [n], corresponding to the edges of H, into buckets. Each bucket is an interval $\{i, i+1, \ldots, j\} \subseteq [n]$, for some $1 \le i \le j \le n$. Let $n_b$ be the number of buckets and let $B_j$ denote the j-th bucket, for $j = 1, \ldots, n_b$. A bucket assignment is any nondecreasing function $b : [k] \to [n_b]$. Unless explicitly modified, we use all buckets of the same size $\lceil n^\alpha \rceil$, for a constant α which we set later. Then, for $j = 1, \ldots, n_b$ the j-th bucket is the set $B_j = \{(j-1)\lceil n^\alpha \rceil + 1, \ldots, j\lceil n^\alpha \rceil\} \cap [n]$.

Given a bucket assignment b we define the set $O_b = \{\{i, i+1\} \subset [k] \mid b(i) = b(i+1)\}$.

Definition 1 (b-monotone partial embedding). Let $f : S \to [n]$ be a partial embedding for some $S \subseteq [k]$. We say that f is b-monotone when (M1) for every $i \in S$ we have $f(i) \in B_{b(i)}$, and (M2) for every $\{i, i+1\} \in O_b$, if $\{i, i+1\} \subseteq S$, then $f(i) < f(i+1)$.
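Conditions (M1) and (M2) are easy to test; the following sketch (our illustration, assuming equal-size buckets and 0-based indices, with -1 marking positions outside the domain S) does so.

```cpp
#include <vector>

// Checks whether a partial embedding is b-monotone (Definition 1).
// f[i] == -1 encodes "i is not in the domain S"; with equal-size buckets,
// f[i] / bucketSize recovers the bucket index of the value f[i].
bool isBMonotone(const std::vector<int>& f, const std::vector<int>& b,
                 int bucketSize) {
    int k = f.size();
    for (int i = 0; i < k; ++i) {
        if (f[i] == -1) continue;
        if (f[i] / bucketSize != b[i]) return false;          // (M1)
    }
    for (int i = 0; i + 1 < k; ++i) {
        if (b[i] != b[i + 1]) continue;                        // {i,i+1} not in O_b
        if (f[i] != -1 && f[i + 1] != -1 && f[i] >= f[i + 1])
            return false;                                      // (M2)
    }
    return true;
}
```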
Note that a b-monotone embedding $f : [k] \to [n]$ is always increasing, but a b-monotone partial embedding does not even need to be non-decreasing (this seemingly artificial design simplifies some of our proofs). In what follows, we present an efficient dynamic programming (DP) algorithm which, given a valid connection pattern M and a bucket assignment b, finds a b-monotone embedding of maximum M-gain. To this end, we need to introduce the gain of a partial embedding. Let $f : S \to [n]$ be a b-monotone partial embedding, for some $S \subseteq [k]$. For every $j \in S$, let $v_{2j-1}$ and $v_{2j}$ be the left and right endpoint of $e_{f(j)}$, respectively. We define

$E^-_f = \{e_{f(i)} \mid i \in S\}$,
$E^+_f = \{\{v_{i'}, v_{j'}\} \mid i, j \in S,\ i' \in \{2i-1, 2i\},\ j' \in \{2j-1, 2j\},\ \{i', j'\} \in M\}$.

Then, $\mathrm{gain}_M(f) = w(E^-_f) - w(E^+_f)$.

Note that $\mathrm{gain}_M(f)$ does not necessarily represent the actual cost gain of the choice of the edges to remove represented by f. Indeed, assume that for some pair $i, j \in [k]$ there are $i' \in \{2i-1, 2i\}$ and $j' \in \{2j-1, 2j\}$ such that $\{i', j'\} \in M$. Then we say that i interferes with j, which means that we plan to add an edge between an endpoint of the i-th deleted edge and an endpoint of the j-th deleted edge. Note that if $i \in S$ (the i-th edge is chosen) and $j \notin S$ (the j-th edge is not chosen yet), this edge to be added is not known yet, and its cost is not represented in $\mathrm{gain}_M(f)$. However, the value of f(i) influences this cost. Consider the following set of interfering pairs:

$I_M = \{\{i, j\} \mid i \text{ interferes with } j\}$.

Note that $I_M$ is obtained from M by identifying vertex $2i-1$ with vertex 2i for every $i = 1, \ldots, k$ (and calling the new vertex simply i). In particular, this implies the following simple property of $I_M$.

Proposition 3. Every connected component of the graph $([k], I_M)$ is a cycle or a single edge.
Dynamic programming over tree decomposition
Now we define the graph $D_{M,b}$, called the dependence graph, where $V(D_{M,b}) = [k]$ and $E(D_{M,b}) = O_b \cup I_M$. The vertices of the graph correspond to the k edges to be removed from H (i.e., j corresponds to the j-th deleted edge in the sequence $e_1, \ldots, e_n$). The edges of $D_{M,b}$ correspond to dependencies between the edges to remove (equivalently, elements of the domain of an embedding). The edges from $O_b$ are order dependencies: edge $\{i, i+1\}$ means that the (i+1)-th deleted edge should appear further along H than the i-th deleted edge. (Note that $O_b$ contains no edge between the last element of a bucket and the first element of the next bucket; this is because the corresponding constraint is already forced by the assignment to buckets.) The edges from $I_M$ are cost dependencies (resulting from the interference explained in Section 3.1).
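The following sketch (our illustration) assembles the edge set of $D_{M,b}$ from a connection pattern and a bucket assignment, following the two definitions above.

```cpp
#include <utility>
#include <vector>

// Builds the edge list of D_{M,b} on vertices {0, ..., k-1}. matching is the
// connection pattern on {0, ..., 2k-1}; b is the bucket assignment.
std::vector<std::pair<int, int>> dependenceGraphEdges(
        const std::vector<int>& matching, const std::vector<int>& b) {
    int k = b.size();
    std::vector<std::pair<int, int>> edges;
    // I_M: identify pattern vertices 2i and 2i+1 (0-based) into vertex i.
    for (int v = 0; v < 2 * k; ++v) {
        int u = matching[v];
        if (u <= v) continue;                 // take each matching edge once
        int i = v / 2, j = u / 2;
        if (i != j) edges.push_back({i, j});  // cost dependency (skip loops)
    }
    // O_b: consecutive positions assigned to the same bucket.
    for (int i = 0; i + 1 < k; ++i)
        if (b[i] == b[i + 1]) edges.push_back({i, i + 1});  // order dependency
    return edges;
}
```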
The goal of this section is a proof of the following theorem.

Theorem 4. Let M be a valid connection k-pattern and let $b : [k] \to [n_b]$ be a bucket assignment, where every bucket is of size $\lceil n^\alpha \rceil$. Then, a b-monotone embedding of maximum M-gain can be found in $O(n^{\alpha(tw(D_{M,b})+1)} k^2 + 2^k)$ time.

Let $\mathcal{T} = (T, \{X_t\}_{t \in V(T)})$ be a nice tree decomposition of $D_{M,b}$ of minimum width. Such a decomposition can be found in $O^*(1.7347^k)$ time by an algorithm of Fomin and Villanger [10], though for practical purposes a simpler $O^*(2^k)$-time algorithm is advised by Bodlaender et al. [2]. For every $t \in V(T)$ we denote by $V_t$ the union of all the bags in the subtree of T rooted at t.
For every node $t \in V(T)$, and for every b-monotone function $f : X_t \to [n]$, we will compute the following value:

$T_t[f] = \max\big\{\, \mathrm{gain}_M(g) \;:\; g : V_t \to [n],\; g|_{X_t} = f,\; g \text{ is } b\text{-monotone} \,\big\}.$

Then, if r is the root of T, and ∅ denotes the unique partial embedding with empty domain, $T_r[\emptyset]$ is the required maximum M-gain of a b-monotone embedding. The embedding itself (and hence the corresponding k-move) can also be found by standard DP techniques. The values of $T_t[f]$ are computed in a bottom-up fashion. Let us now present the formulas for computing these values, depending on the kind of node in the tree T.
Leaf node. When t is a leaf of T, we know that $X_t = V_t = \emptyset$, and we just put $T_t[\emptyset] = 0$.

Introduce node. Assume $X_t = X_{t'} \cup \{i\}$, for some $i \notin X_{t'}$, where node t' is the only child of t. Denote $\Delta E^+_f = E^+_f \setminus E^+_{f|_{X_{t'}}}$. Then, we claim that for every b-monotone function $f : X_t \to [n]$,

$T_t[f] = T_{t'}[f|_{X_{t'}}] + w(e_{f(i)}) - \sum_{\{u,v\} \in \Delta E^+_f} w(\{u, v\}).$   (1)

We show that (1) holds by showing the two relevant inequalities. Let g be a function for which the maximum from the definition of $T_t[f]$ is attained. Let $g' = g|_{V_{t'}}$. Note that g' is b-monotone because g is b-monotone. Hence, $\mathrm{gain}_M(g') \le T_{t'}[f|_{X_{t'}}]$. It follows that

$T_t[f] = \mathrm{gain}_M(g) = \mathrm{gain}_M(g') + w(e_{f(i)}) - \sum_{\{u,v\} \in \Delta E^+_f} w(\{u,v\}) \le T_{t'}[f|_{X_{t'}}] + w(e_{f(i)}) - \sum_{\{u,v\} \in \Delta E^+_f} w(\{u,v\}).$

Now we proceed to the other inequality. Assume g' is a function for which the maximum from the definition of $T_{t'}[f|_{X_{t'}}]$ is attained. Let $g : V_t \to [n]$ be the function such that $g|_{V_{t'}} = g'$ and $g(i) = f(i)$. Let us show that g is b-monotone. The condition (M1) is immediate, since g' and f are b-monotone. For (M2), consider any $\{j, j+1\} \in O_b$ such that $\{j, j+1\} \subseteq V_t$. If $i \notin \{j, j+1\}$ then $g(j) < g(j+1)$ by b-monotonicity of g', so assume $i \in \{j, j+1\}$. Then $\{j, j+1\} \subseteq X_t$, for otherwise $X_t \cap X_{t'}$ would not separate j from j+1, a contradiction with Lemma 2. For $\{j, j+1\} \subseteq X_t$, we have $g(j) < g(j+1)$ since $f(j) < f(j+1)$. Hence g is b-monotone, which implies $T_t[f] \ge \mathrm{gain}_M(g)$. Then it suffices to observe that

$\mathrm{gain}_M(g) = \mathrm{gain}_M(g') + w(e_{f(i)}) - \sum_{\{u,v\} \in \Delta E^+_f} w(\{u,v\}) = T_{t'}[f|_{X_{t'}}] + w(e_{f(i)}) - \sum_{\{u,v\} \in \Delta E^+_f} w(\{u,v\}).$

This finishes the proof that (1) holds.
Forget node. Assume $X_t = X_{t'} \setminus \{i\}$, for some $i \in X_{t'}$, where node t' is the only child of t. Then the definition of $T_t[f]$ implies that

$T_t[f] = \max\big\{\, T_{t'}[f'] \;:\; f' : X_{t'} \to [n],\; f'|_{X_t} = f,\; f' \text{ is } b\text{-monotone} \,\big\}.$   (2)
Join node. Assume $X_t = X_{t_1} = X_{t_2}$, for some nodes t, $t_1$ and $t_2$, where $t_1$ and $t_2$ are the only children of t. Then, we claim that for every b-monotone function $f : X_t \to [n]$,

$T_t[f] = T_{t_1}[f] + T_{t_2}[f] - \big(w(E^-_f) - w(E^+_f)\big).$   (3)

Let us first show the ≤ inequality. Let g be a function for which the maximum from the definition of $T_t[f]$ is attained. Let $g_1 = g|_{V_{t_1}}$ and $g_2 = g|_{V_{t_2}}$. Note that $g_1$ and $g_2$ are b-monotone because g is b-monotone. This, together with the fact that $g_i|_{X_{t_i}} = f$ for $i = 1, 2$, implies $\mathrm{gain}_M(g_i) \le T_{t_i}[f]$ for $i = 1, 2$. It follows that

$T_t[f] = \mathrm{gain}_M(g) = \mathrm{gain}_M(g_1) + \mathrm{gain}_M(g_2) - \big(w(E^-_f) - w(E^+_f)\big) \le T_{t_1}[f] + T_{t_2}[f] - \big(w(E^-_f) - w(E^+_f)\big).$

Now we proceed to the ≥ inequality. Assume $g_1$ (resp. $g_2$) is a function for which the maximum from the definition of $T_{t_1}[f]$ (resp. $T_{t_2}[f]$) is attained. Let $g : V_t \to [n]$ be the function such that $g|_{V_{t_1}} = g_1$ and $g|_{V_{t_2}} = g_2$. Note that $g|_{X_t} = f$. Then $\mathrm{gain}_M(g) = \mathrm{gain}_M(g_1) + \mathrm{gain}_M(g_2) - \big(w(E^-_f) - w(E^+_f)\big) = T_{t_1}[f] + T_{t_2}[f] - \big(w(E^-_f) - w(E^+_f)\big)$. It suffices to show that g is b-monotone, because then $T_t[f] \ge \mathrm{gain}_M(g)$. The condition (M1) is immediate, since $g_1$ and $g_2$ are b-monotone. For (M2), consider any $\{j, j+1\} \in O_b$ such that $\{j, j+1\} \subseteq V_t$. If $\{j, j+1\} \subseteq V_{t_1}$ or $\{j, j+1\} \subseteq V_{t_2}$ then $g(j) < g(j+1)$ by b-monotonicity of $g_1$ or $g_2$, respectively. Hence, by symmetry, we can assume $j \in V_{t_1} \setminus V_{t_2}$ and $j+1 \in V_{t_2} \setminus V_{t_1}$. However, this cannot happen, because then $X_t$ would not separate j from j+1, a contradiction with Lemma 2.
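Putting the three recurrences together, the DP has the following shape. The sketch below (our illustration; join nodes and the bookkeeping around formula (1) are abstracted into hypothetical hooks deltaGain and stillMonotone) processes one node of a nice path decomposition.

```cpp
#include <functional>
#include <map>
#include <vector>

using Weight = long long;
using Assign = std::vector<int>;   // f-values for the current bag, in bag order

struct NiceNode { char type; int vertex; };   // 'L'eaf, 'I'ntroduce, 'F'orget

// Each table maps a b-monotone assignment of the current bag to the best gain.
// deltaGain plays the role of w(e_{f(i)}) minus the weights of ΔE^+_f in (1);
// stillMonotone checks conditions (M1) and (M2) for the extended assignment.
std::map<Assign, Weight> processNode(
        const NiceNode& node,
        const std::map<Assign, Weight>& childTable,
        const std::vector<int>& bucketValues,
        const std::function<Weight(int, const Assign&)>& deltaGain,
        const std::function<bool(int, const Assign&)>& stillMonotone) {
    std::map<Assign, Weight> table;
    if (node.type == 'L') {                       // leaf: empty bag, T_t[∅] = 0
        table[Assign()] = 0;
    } else if (node.type == 'I') {                // introduce node.vertex
        for (const auto& [f, val] : childTable)
            for (int x : bucketValues) {          // candidate values, by (M1)
                Assign g = f;
                g.push_back(x);
                if (!stillMonotone(node.vertex, g)) continue;   // (M2)
                Weight cand = val + deltaGain(node.vertex, g);  // formula (1)
                auto it = table.find(g);
                if (it == table.end() || it->second < cand) table[g] = cand;
            }
    } else {                                      // forget node.vertex
        for (const auto& [f, val] : childTable) { // formula (2): maximize out
            Assign g = f;
            g.pop_back();  // assumes the forgotten vertex is last in bag order
            auto it = table.find(g);
            if (it == table.end() || it->second < val) table[g] = val;
        }
    }
    return table;
}
```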
Running time. Since |V(T)| = O(k), in order to complete the proof of Theorem 4 it suffices to prove the following lemma.

Lemma 5. Let $\mathcal{T} = (T, \{X_t\}_{t \in V(T)})$ be a nice tree decomposition of D. Let t be a node of T. For every $i \in X_t$ let $s_i$ be the size of the bucket assigned to i. Then, all the values of $T_t$ can be found in time $O(k \prod_{i \in X_t} s_i)$. In particular, if all buckets are of size $\lceil n^\alpha \rceil$, then t can be processed in time $O(k n^{\alpha|X_t|})$.

Proof. Obviously, in every leaf node the algorithm uses only O(1) time. For an introduce node, observe that evaluating formula (1) takes O(k) time for every f, since $|\Delta E^+_f| \le 2$ (the factor O(k) is needed to read off a single value from the table). By condition (M1), each value f(i) of a b-monotone function f can be fixed in $s_i$ ways, so the number of b-monotone functions $f : X_t \to [n]$ is bounded by $\prod_{i \in X_t} s_i$. Hence all the values of $T_t$ are computed in time $O(k \prod_{i \in X_t} s_i)$, which is $O(k n^{\alpha|X_t|})$ when all buckets are of size $\lceil n^\alpha \rceil$. For a forget node, a direct evaluation of (2) for all b-monotone functions $f : X_t \to [n]$ takes $O(k \prod_{i \in X_{t'}} s_i)$ time, where t' is the only child of t. Finally, for a join node a direct evaluation of (3) takes O(k) time per function, since $|E^-_f| \le k$ and $|E^+_f| \le k$; hence all the values of $T_t$ are computed in time $O(k \prod_{i \in X_t} s_i)$.

3.3 An algorithm running in time $O(n^{(1/3+\epsilon)k})$ for k large enough
We will make use of the following theorem due to Fomin, Gaspers, Saurabh, and Stepanov [8].

Theorem 6 (Fomin et al. [8]). For any ε > 0, there exists an integer $n_\epsilon$ such that for every graph G with $n > n_\epsilon$ vertices,

$pw(G) \le \tfrac{1}{6} n_3 + \tfrac{1}{3} n_4 + \tfrac{13}{30} n_5 + \tfrac{23}{45} n_6 + n_{\ge 7} + \epsilon n,$

where $n_i$ is the number of vertices of degree i in G for any $i \in \{3, \ldots, 6\}$ and $n_{\ge 7}$ is the number of vertices of degree at least 7.
We actually use the following corollary, which is rather immediate.

Corollary 7. For any ε > 0, there exists an integer $n_\epsilon$ such that for every multigraph G with $n > n_\epsilon$ vertices and m edges, where every vertex $v \in V(G)$ satisfies $2 \le \deg_G(v) \le 4$, the pathwidth of G is at most $(m-n)/3 + \epsilon n$.

Proof. The corollary follows from Theorem 6 by the following chain of equalities: $m - n = \tfrac{1}{2}(2n_2 + 3n_3 + 4n_4) - (n_2 + n_3 + n_4) = \tfrac{1}{2} n_3 + n_4$, and hence $(m-n)/3 = \tfrac{1}{6} n_3 + \tfrac{1}{3} n_4$, which matches the bound of Theorem 6 for graphs with $n_5 = n_6 = n_{\ge 7} = 0$.

Let $P_k = \{\{i, i+1\} \mid i \in [k-1]\}$.   (4)

Lemma 8. For any $A \subseteq P_k$ we have $pw(I_M \cup A) \le |A|/3 + \epsilon_k k$, where $\lim_{k\to\infty}\epsilon_k = 0$.

Proof. Although $([k], I_M \cup A)$ may not have minimum degree 2, we may consider the edge multiset $I'_M$ of the multigraph obtained from $([k], I_M)$ by replacing every single-edge component $\{u, v\}$ by a 2-cycle uvu. Then $I'_M$ is a cycle cover, so every vertex of the multigraph $([k], I'_M \cup A)$ has degree between 2 and 4. Hence, by Corollary 7, for some sequence $\epsilon_k$ with $\lim_{k\to\infty}\epsilon_k = 0$ we have $pw(I_M \cup A) = pw(I'_M \cup A) \le (|I'_M| + |A| - k)/3 + \epsilon_k k \le |A|/3 + \epsilon_k k$, since $|I'_M| = k$ for a cycle cover on k vertices.
By Lemma 8 it follows that the running time in Theorem 4 is bounded by $O(n^{(\alpha/3+\epsilon)k})$. If we do not use the buckets at all, i.e., α = 1 and we have one big bucket of size n, we get the $O(n^{(1/3+\epsilon)k})$ bound. By iterating over all at most (2k)! connection patterns we get the following result, which already improves over the state of the art for large enough k.

Theorem 9. For every fixed integer k, k-opt Optimization can be solved in time $O(n^{(1/3+\epsilon_k)k})$, where $\lim_{k\to\infty}\epsilon_k = 0$.
An algorithm running in time $O(n^{(1/4+\epsilon)k})$ for k large enough

Let $\mathcal{M}_k$ be the set of all valid connection k-patterns.

Lemma 10. k-opt Optimization can be solved in time $2^{O(k \log k)} n^{c(k)}$, where

$c(k) = \max_{M \in \mathcal{M}_k} \min_{\alpha \in [0,1]} \max_{A \subseteq P_k} \big((1-\alpha)(k-|A|) + \alpha(tw(I_M \cup A)+1)\big).$   (5)

Proof. We perform the algorithm from Theorem 4 for each possible valid connection pattern M and every bucket assignment b, with all the buckets of size $\lceil n^{\alpha_M} \rceil$, for some $\alpha_M \in [0,1]$. Let us bound the total running time. Let $A \subseteq P_k$ and consider a bucket assignment b such that $O_b = A$. There are $n^{(1-\alpha_M)(k-|A|)}$ such bucket assignments, and by Theorem 4 for each of them the algorithm uses time $O(n^{\alpha_M(tw(I_M \cup A)+1)} k^2 + 2^k)$. Hence the total running time is bounded by

$\sum_{M \in \mathcal{M}_k} \sum_{A \subseteq P_k} \sum_{\substack{b : [k] \to [\lceil n/\lceil n^{\alpha_M}\rceil\rceil] \\ b \text{ nondecreasing},\ O_b = A}} O\big(n^{\alpha_M(tw(I_M \cup A)+1)} k^2 + 2^k\big) = O(2^k) \sum_{M \in \mathcal{M}_k} \sum_{A \subseteq P_k} n^{(1-\alpha_M)(k-|A|)} \cdot n^{\alpha_M(tw(I_M \cup A)+1)}.$   (6)

For every $M \in \mathcal{M}_k$, the optimal value of $\alpha_M$ can be found by a simple LP (see Section 3.6). The claim follows.

Theorem 11. For every fixed integer k, k-opt Optimization can be solved in time $O(n^{(1/4+\epsilon_k)k})$, where $\lim_{k\to\infty}\epsilon_k = 0$.

Proof. Fix the same value α = 3/4 for every connection pattern M. By Lemma 8 we have $(1-\alpha)(k-|A|) + \alpha(tw(I_M \cup A)+1) \le (\tfrac{1}{4} + \tfrac{3}{4k} + \tfrac{3}{4}\epsilon'_k)k$. The claim follows by Lemma 10, after putting $\epsilon_k = \tfrac{3}{4k} + \tfrac{3}{4}\epsilon'_k$.
Saving space
The algorithm from Theorem 11, as described above, uses $O(n^{(1/4+\epsilon_k)k})$ space. However, a closer look reveals that the space can be decreased to $O(n^{(1/8+\epsilon_k)k})$. This is done by exploiting some properties of the specific tree decomposition of graphs of maximum degree 4, described by Fomin et al. [8], which we used in Theorem 6. This decomposition is obtained as follows. Let D be a k-vertex graph of maximum degree 4. As long as D contains a vertex v of degree 4, we remove v. As a result we get a set of removed vertices S and a subgraph D' = D − S of maximum degree 3. Then we construct a tree decomposition $\mathcal{T}'$ of D', of width at most $(1/6 + \epsilon_k)k$, given in the paper of Fomin and Høie [9]. The tree decomposition $\mathcal{T}$ of D is then obtained by adding S to every bag of $\mathcal{T}'$. An inductive argument (see [8]) shows that the width of $\mathcal{T}$ is at most $\tfrac{1}{3} k_4 + \tfrac{1}{6} k_3 + \epsilon_k k$, where $k_i$ denotes the number of vertices of degree i.

Assume we are given a partial b-monotone embedding $f_0 : S \to [n]$, where S is the set of removed vertices mentioned in the previous paragraph. Consider the dynamic programming algorithm from Theorem 4, which finds a b-monotone embedding of maximum M-gain, for a given bucket assignment b and connection pattern M. It is straightforward to modify this algorithm so that it computes a b-monotone embedding of maximum M-gain that extends $f_0$. The resulting algorithm runs in time $O(n^{\alpha(tw(D-S)+1)} k^2)$ and uses space $O(n^{\alpha(tw(D-S)+1)})$. Recalling that α = 3/4 and $tw(D-S) \le (1/6+\epsilon_k)k$, we get the space bound of $O(n^{(1/8+\epsilon_k)k})$. Repeating this for each of the $n^{\alpha|S|}$ embeddings of S takes time $O(n^{\alpha(|S|+tw(D-S)+1)})$ instead of $O(n^{\alpha(tw(D)+1)})$ from Theorem 4. However, as explained above, the bound on tw(D) from Theorem 6 used in the proof of Theorem 11 is also a bound on $|S| + tw(D-S)$, so the time of the whole algorithm is still bounded by $O(n^{(1/4+\epsilon_k)k})$.

Theorem 12. For every fixed integer k, k-opt Optimization can be solved in time $O(n^{(1/4+\epsilon_k)k})$ and space $O(n^{(1/8+\epsilon_k)k})$, where $\lim_{k\to\infty}\epsilon_k = 0$.

Another interesting observation is that if we build the set S by picking an arbitrary vertex of every edge in $O_b$, then $D' := D - S$ contains no edges of $O_b$, so it has maximum degree at most 2. It follows that $tw(D') \le 2$. Thus, in Lemma 10 we can bound $tw(I_M \cup A) \le |A| + 2$ and for α = 1/2 we get the running time of $O(n^{k/2+3/2})$. By using the approach of fixing all embeddings of S described above, we get space $O(n^{\alpha(tw(D')+1)}) = O(n^{3/2})$, which is less than the $\Theta(n^2)$ space needed to store all the distances of the TSP instance. However, the additional space can be further improved. After fixing an embedding of S we find the embedding of every connected component of D − S separately. Consider such a component. If it is a cycle, we consider all $O(n^\alpha) = O(n^{1/2})$ ways of fixing one of its vertices and we are left with a path, say $v_1, \ldots, v_\ell$. The dynamic programming described in Section 3.2 operates on a nice path decomposition of the form $\{v_1\}, \{v_1, v_2\}, \{v_2\}, \{v_2, v_3\}, \ldots, \{v_\ell\}$. It uses space $O(n^{2\alpha}) = O(n)$ in the bags of size 2. However, by combining formulas (1) and (2) one can compute the DP tables for size-1 bags only, using space $O(n^\alpha) = O(n^{1/2})$.

Theorem 13. For every fixed integer k, k-opt Optimization can be solved in time $O(n^{k/2+3/2})$ and additional space $O(\sqrt{n})$.

We suppose that more space/time trade-offs are possible by finding small sets whose removal makes the tree decomposition somewhat small.
Small values of k
The value of c(k) in Lemma 10 can be computed by a computer program for small values of k, by enumerating all connection patterns and using formula (5) to find the optimum α. We used a C++ implementation (see http://www.mimuw.edu.pl/~kowalik/localtsp/localtsp.cpp for the source code) including a simple $O(2^k)$ dynamic programming for computing treewidth described in the work of Bodlaender et al. [2]. For every valid connection pattern M our program finds the value of $\min_{\alpha \in [0,1]} \max_{A \subseteq P_k} \big((1-\alpha)(k-|A|) + \alpha(tw(I_M \cup A)+1)\big)$ by solving a simple linear program, as follows:

minimize v
subject to $v \ge (1-\alpha)(k-s) + \alpha \max_{A \subseteq P_k,\, |A|=s} (tw(I_M \cup A) + 1)$ for $s = 0, \ldots, k-1$,
$\alpha \in [0, 1]$.
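Since each constraint is linear in α, the objective is a convex piecewise-linear function of α, and the LP can be solved by inspecting $O(k^2)$ candidate points, as in the following sketch (our illustration; the paper's actual implementation may differ).

```cpp
#include <algorithm>
#include <vector>

// Given m[s] = max_{A ⊆ P_k, |A| = s} (tw(I_M ∪ A) + 1) for s = 0,...,k-1,
// minimize over α ∈ [0,1] the value max_s ((1-α)(k-s) + α·m[s]). Each
// constraint is a line in α, so the optimum is attained at α = 0, α = 1,
// or a pairwise intersection point of two lines.
double optimalExponent(int k, const std::vector<double>& m, double& bestAlpha) {
    auto value = [&](double a) {
        double worst = 0.0;
        for (int s = 0; s < (int)m.size(); ++s)
            worst = std::max(worst, (1.0 - a) * (k - s) + a * m[s]);
        return worst;
    };
    std::vector<double> candidates = {0.0, 1.0};
    for (int s = 0; s < (int)m.size(); ++s)
        for (int t = s + 1; t < (int)m.size(); ++t) {
            double denom = (m[t] - m[s]) + (t - s);   // slope difference
            if (denom == 0) continue;                  // parallel lines
            double a = (t - s) / denom;                // intersection point
            if (a >= 0.0 && a <= 1.0) candidates.push_back(a);
        }
    double best = 1e18;
    for (double a : candidates)
        if (value(a) < best) { best = value(a); bestAlpha = a; }
    return best;   // the exponent contributed by pattern M
}
```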
We get running times for k = 5, . . . , 10, described in Table 2. It turns out that for k = 5, . . . , 10 the running time does not grow when we fix the same bucket size $n^\alpha$ for all connection patterns; hence in Table 2 we also present the values of α.
A refined analysis of 5-opt Optimization
In this section we focus on the 5-opt Optimization problem. This is the first case where our findings may have practical relevance, which motivates us towards a deepened analysis. It turns out that to get the entry for k = 5 in Table 2 we do not need a computer, and the proof is rather short, as one can see below.

Theorem 14. 5-opt Optimization can be solved in time $O(n^{3\frac{2}{3}})$.

Proof. Let $D = ([5], I_M \cup A)$ be the dependence multigraph. Since $K_5$ is the only 5-vertex graph with treewidth larger than 3, and D has at most 9 different edges, we note that $tw(D) \le 3$.

CASE 1: |A| ≤ 1. Then either D has maximum degree 2, or D is a 5-cycle with a single chord. In both cases it is easy to see that $tw(D) \le 2$. By Lemma 10 this case contributes $O(n^{5(1-\alpha)+3\alpha}) = O(n^{5-2\alpha})$ to the running time.

CASE 2: |A| ≥ 2. By Lemma 10, this case contributes $O(n^{(5-|A|)(1-\alpha)+4\alpha}) = O(n^{3+\alpha})$ to the running time.

Putting α = 2/3 finishes the proof.
First consider the assignments where a and b are in the same small bucket. There are at most n 3(1−α) n α/2 = n 3−2.5α such assignments. Consider a path decomposition of D consisting of two adjacent nodes p and q with bags X p = {a, b, c, d} and X q = {a, b, e}. Note that each of the bags contains two vertices from a bucket of size n α/2 and at most two vertices from a bucket of size n α . By Lemma 15 nodes p and q can be processed in time O(n 2·α/2 · n 2α ) = O(n 3α ). Hence the computation for the assignments where a and b are in the same small bucket take O(n 3+α/2 ) time in total. Now consider the assignments where a and b are in different small buckets. There are at most n 3(1−α) n 2α/2 = n 3−2α such assignments. However, the corresponding dependence graph To sum up, by Case 1 of the proof of Theorem 14, Case 1 of the proof of Theorem 16 and Cases 1 and 2 above, the algorithm works in time O(n 5−2α + n 3+α/2 + n 2+ 5 3 α + n 1+3α ). Putting α = 4/5 finishes the proof.
Lower bound for k = 4
In this section we show a hardness result for 4-opt Optimization. More precisely, we work with the decision version, called 4-opt Detection, where the input is the same as in 4-opt Optimization and the goal is to determine if there is a 4-move which improves the weight of the given Hamiltonian cycle. To this end, we reduce from the Negative Edge-Weighted Triangle problem, where the input is an undirected complete graph G and a weight function $w : E(G) \to \mathbb{Z}$. The goal is to determine whether G contains a triangle whose total edge weight is negative.

Lemma 18. Every instance I = (G, w) of Negative Edge-Weighted Triangle can be reduced in $O(|V(G)|^2)$ time into an instance I' = (G', w', C) of 4-opt Detection such that G contains a triangle of negative weight iff I' admits an improving 4-move. Moreover, $|V(G')| = O(|V(G)|)$, and the maximum absolute weight in w' is larger only by a constant factor than the maximum absolute weight in w.
Proof. Let $V(G) = \{v_1, \ldots, v_n\}$. Then let $V_{up} = \{a_1, b_1, \ldots, a_n, b_n\}$, $V_{down} = \{a'_1, b'_1, \ldots, a'_n, b'_n\}$ and $V(G') = V_{up} \cup V_{down}$. Let W be the maximum absolute value of a weight in w. Then let $M_1 = 5W + 1$ and $M_2 = 21M_1 + 1$, and let

$w'(u, v) = \begin{cases} 0 & \text{if } (u,v) \text{ is of the form } (a_i, b'_i), \\ w(v_i, v_j) & \text{if } (u,v) \text{ is of the form } (a_i, b_j) \text{ for } i < j, \text{ or } (a'_i, b_j) \text{ for } j < i, \\ M_1 & \text{if } (u,v) \text{ is of the form } (a_i, b_i), \\ -3M_1 & \text{if } (u,v) \text{ is of the form } (a'_i, b'_i), \\ -M_2 & \text{if } (u,v) \text{ is of the form } (b_i, a_{i+1}), (b'_i, a'_{i+1}), (a_1, a'_1) \text{ or } (b_n, b'_n), \\ M_2 & \text{otherwise.} \end{cases}$

Note that the cases are not overlapping. (Note also that although some weights are negative, we can get an equivalent instance with nonnegative weights by adding $M_2$ to all the weights.) Let $C = a_1, b_1, \ldots, a_n, b_n, b'_n, a'_n, \ldots, b'_1, a'_1$. If there is a negative triangle $v_i, v_j, v_k$ for some i < j < k in G, then we can improve C by removing edges $(a_i, b_i)$, $(a_j, b_j)$, $(a_k, b_k)$ and $(a'_k, b'_k)$ and inserting edges $(a_i, b_j)$, $(a_j, b_k)$, $(a_k, b'_k)$ and $(a'_k, b_i)$. We obtain the cycle
$a_1, b_1, \ldots, a_i, b_j, a_{j+1}, \ldots, a_k, b'_k, a'_{k+1}, \ldots, b'_n, b_n, a_n, \ldots, b_k, a_j, b_{j-1}, \ldots, b_i, a'_k, b'_{k-1}, \ldots, a'_1.$

The total weight of the removed edges is $M_1 + M_1 + M_1 + (-3M_1) = 0$ and the total weight of the inserted edges is $w(v_i, v_j) + w(v_j, v_k) + 0 + w(v_k, v_i) < 0$, hence indeed the cycle is improved.
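The construction of (G', w', C) is mechanical; the following sketch (our illustration, with our own vertex numbering) builds the weight matrix w' and the tour C from the weight matrix w of the Negative Edge-Weighted Triangle instance.

```cpp
#include <algorithm>
#include <utility>
#include <vector>

using Weight = long long;
using Dist = std::vector<std::vector<Weight>>;

struct Reduction {
    Dist wPrime;              // weight matrix of G'
    std::vector<int> tour;    // the Hamiltonian cycle C
};

// Vertex numbering (ours): a_i = 2i, b_i = 2i+1, a'_i = 2n+2i, b'_i = 2n+2i+1.
Reduction buildInstance(const Dist& w) {
    int n = w.size();
    Weight W = 0;
    for (const auto& row : w)
        for (Weight x : row) W = std::max(W, x < 0 ? -x : x);
    Weight M1 = 5 * W + 1, M2 = 21 * M1 + 1;
    auto A  = [&](int i) { return 2 * i; };
    auto B  = [&](int i) { return 2 * i + 1; };
    auto Ap = [&](int i) { return 2 * n + 2 * i; };
    auto Bp = [&](int i) { return 2 * n + 2 * i + 1; };
    Dist wp(4 * n, std::vector<Weight>(4 * n, M2));     // default case: M2
    auto set = [&](int u, int v, Weight x) { wp[u][v] = wp[v][u] = x; };
    for (int i = 0; i < n; ++i) {
        set(A(i), Bp(i), 0);
        set(A(i), B(i), M1);
        set(Ap(i), Bp(i), -3 * M1);
        if (i + 1 < n) { set(B(i), A(i + 1), -M2); set(Bp(i), Ap(i + 1), -M2); }
        for (int j = 0; j < n; ++j) {
            if (i < j) set(A(i), B(j), w[i][j]);        // (a_i, b_j), i < j
            if (j < i) set(Ap(i), B(j), w[i][j]);       // (a'_i, b_j), j < i
        }
    }
    set(A(0), Ap(0), -M2);
    set(B(n - 1), Bp(n - 1), -M2);
    Reduction r;
    r.wPrime = std::move(wp);
    // C = a_1, b_1, ..., a_n, b_n, b'_n, a'_n, ..., b'_1, a'_1.
    for (int i = 0; i < n; ++i) { r.tour.push_back(A(i)); r.tour.push_back(B(i)); }
    for (int i = n - 1; i >= 0; --i) { r.tour.push_back(Bp(i)); r.tour.push_back(Ap(i)); }
    return r;
}
```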
Let us assume that C can be improved by removing 4 edges and inserting 4 edges. Note that all the edges of weight $-M_2$ belong to C and all the edges of weight $M_2$ do not belong to C. All the other edges have absolute weights bounded by $3M_1$. Therefore even a single edge of weight $-M_2$ cannot be removed and even a single edge of weight $M_2$ cannot be inserted, because a loss of $M_2$ cannot be compensated by any other 7 edges (inserted or removed), as they can result in a gain of at most $7 \cdot 3M_1 < M_2$. Hence in the following we treat edges of weights $\pm M_2$ as fixed, i.e., they cannot be inserted into or removed from the cycle. Note that the edges of C that can be removed are only the edges of the form $(a_i, b_i)$ (of weight $M_1$) and $(a'_i, b'_i)$ (of weight $-3M_1$). All the edges of weight $-3M_1$ already belong to C, and all the remaining edges of the graph that can be inserted into or removed from the cycle are the edges of weight $M_1$ belonging to C and the edges with absolute weights bounded by W. Therefore we cannot remove more than one edge of weight $-3M_1$ from C, because a loss of $6M_1$ cannot be compensated by any 2 removed and 4 inserted edges (we could potentially gain only $2M_1 + 4W < 3M_1$).

Figure 1: A simplified view of the instance (G', w', C) together with an example of a 4-move. The added edges are marked as blue (dashed) and the removed edges are marked as red (dotted).
Hence we can remove at most one edge of weight $-3M_1$ from C. For the same reason, if we do remove one edge of weight $-3M_1$ (i.e., of the form $(a'_i, b'_i)$) from C, we need to also remove three edges of weight $M_1$ (i.e., of the form $(a_j, b_j)$) in order to compensate the loss of $3M_1$ (otherwise we could compensate up to $2M_1 + 5W < 3M_1$).

Note that the only edges that can be added (i.e., the edges with weights less than $M_2$ that do not belong to C) are the edges of the form $(a_i, b_j)$ for i < j, $(a'_i, b_j)$ for j < i, and $(a_i, b'_i)$. Therefore if the removed edges from $G'[V_{up}]$ are $(a_{i_1}, b_{i_1}), \ldots, (a_{i_\ell}, b_{i_\ell})$ for some $i_1 < \ldots < i_\ell$ (and no other edges belonging to $G'[V_{up}]$), then in order to close the cycle we need to insert some edge incident to $b_{i_1}$; but since for any $i_0 < i_1$ there is no removed edge $(a_{i_0}, b_{i_0})$, it cannot be an edge of the form $(a_{i_0}, b_{i_1})$. Hence it has to be an edge of the form $(a'_j, b_{i_1})$ for some $j > i_1$. But then also the edge $(a'_j, b'_j)$ has to be removed. Therefore if we remove at least one edge of the form $(a_i, b_i)$, then we need to also remove an edge of the form $(a'_j, b'_j)$ (and, as we know, this implies that at least three edges of the form $(a_i, b_i)$ have to be removed). So if any edge is removed, then exactly three edges of the form $(a_i, b_i)$ and exactly one edge of the form $(a'_j, b'_j)$ have to be removed. Note that this implies also that the total weight of the removed edges has to be equal to zero.

Clearly the move has to remove at least one edge in order to improve the weight of the cycle. Let us assume that the removed edges are $(a_i, b_i)$, $(a_j, b_j)$ and $(a_k, b_k)$ for some i < j < k, and $(a'_\ell, b'_\ell)$ for some $\ell$. For the reason mentioned in the previous paragraph, in order to obtain a Hamiltonian cycle one of the inserted edges has to be the edge $(a'_\ell, b_i)$. Also the vertex $b_j$ has to be connected with something, but the vertex $a'_\ell$ is already taken, and hence it has to be connected with the vertex $a_i$. Similarly the vertex $b_k$ has to be connected with $a_j$ because $a'_\ell$ and $a_i$ are already taken. Thus $a_k$ has to be connected with $b'_\ell$, and this means that $k = \ell$. The total weight change of the move is negative, and therefore the total weight of the added edges has to be negative (since the total weight of the removed edges is equal to zero). Thus we have $w(v_i, v_j) + w(v_j, v_k) + w(v_k, v_i) = w'(a_i, b_j) + w'(a_j, b_k) + w'(a'_k, b_i) + w'(a_k, b'_k) < 0$. So $v_i, v_j, v_k$ is a negative triangle in (G, w).
Theorem 19. If there is ε > 0 such that 4-opt Detection admits an algorithm running in time $O(n^{3-\epsilon} \cdot \mathrm{polylog}(M))$, then there is δ > 0 such that both Negative Edge-Weighted Triangle and All Pairs Shortest Paths admit algorithms running in time $O(n^{3-\delta} \cdot \mathrm{polylog}(M))$, where in all cases we refer to n-vertex input graphs with integer weights from $\{-M, \ldots, M\}$.

Proof. The first part of the claim follows from Lemma 18, while the second part follows from the reduction of All Pairs Shortest Paths to Negative Edge-Weighted Triangle by Vassilevska-Williams and Williams (Theorem 1.1 in [25]).
Table 1: New running times for k = 5, . . . , 10. (Table body not recovered from the source.)

... time in this case. However, we show that in this case the algorithm can be further refined, obtaining the O(n^{3.4}) running time. We suppose that similar improvements of order n^{Ω(1)} ...
Table 2: Running times of the algorithm from Theorem 11 for k = 5, . . . , 10. (Table body not recovered from the source.)
| []
|
[
"Prospects for future studies using deep imaging: Analysis of individual Galactic cirrus filaments",
"Prospects for future studies using deep imaging: Analysis of individual Galactic cirrus filaments"
]
| [
"Anton A Smirnov \nCentral (Pulkovo) Astronomical Observatory\nRussian Academy of Sciences\nPulkovskoye chaussee 65/1196140St. PetersburgRussia\n\nSaint Petersburg State University\nUniversitetskij pr. 28198504St. PetersburgRussia\n",
"Sergey S Savchenko \nCentral (Pulkovo) Astronomical Observatory\nRussian Academy of Sciences\nPulkovskoye chaussee 65/1196140St. PetersburgRussia\n\nSaint Petersburg State University\nUniversitetskij pr. 28198504St. PetersburgRussia\n\nSpecial Astrophysical Observatory\nRussian Academy of Sciences\n369167 Nizhnij ArkhyzRussia\n",
"Denis M Poliakov \nCentral (Pulkovo) Astronomical Observatory\nRussian Academy of Sciences\nPulkovskoye chaussee 65/1196140St. PetersburgRussia\n\nSaint Petersburg State University\nUniversitetskij pr. 28198504St. PetersburgRussia\n",
"Alexander A Marchuk \nCentral (Pulkovo) Astronomical Observatory\nRussian Academy of Sciences\nPulkovskoye chaussee 65/1196140St. PetersburgRussia\n\nSaint Petersburg State University\nUniversitetskij pr. 28198504St. PetersburgRussia\n",
"Aleksandr V Mosenkov \nCentral (Pulkovo) Astronomical Observatory\nRussian Academy of Sciences\nPulkovskoye chaussee 65/1196140St. PetersburgRussia\n\nDepartment of Physics and Astronomy\nBrigham Young University\nN283 ESC, 84602ProvoUTUSA\n",
"Vladimir B Il'in \nCentral (Pulkovo) Astronomical Observatory\nRussian Academy of Sciences\nPulkovskoye chaussee 65/1196140St. PetersburgRussia\n\nSaint Petersburg State University\nUniversitetskij pr. 28198504St. PetersburgRussia\n\nSaint Petersburg University of Aerospace Instrumentation\nBol. Morskaya ul. 67A, St. Petersburg 190000Russia\n",
"George A Gontcharov \nCentral (Pulkovo) Astronomical Observatory\nRussian Academy of Sciences\nPulkovskoye chaussee 65/1196140St. PetersburgRussia\n",
"Javier Román \nKapteyn Astronomical Institute\nUniversity of Groningen\nPO Box 8009700 AVGroningenThe Netherlands\n\nInstituto de Astrofísica de Canarias\nc/ Vía Láctea s/n, La LagunaE-38205TenerifeSpain\n\nDepartamento de Astrofísica\nUniversidad de La Laguna\nLa LagunaE-38206TenerifeSpain\n",
"Jonah Seguine \nDepartment of Physics and Astronomy\nBrigham Young University\nN283 ESC, 84602ProvoUTUSA\n"
]
| [
"Central (Pulkovo) Astronomical Observatory\nRussian Academy of Sciences\nPulkovskoye chaussee 65/1196140St. PetersburgRussia",
"Saint Petersburg State University\nUniversitetskij pr. 28198504St. PetersburgRussia",
"Central (Pulkovo) Astronomical Observatory\nRussian Academy of Sciences\nPulkovskoye chaussee 65/1196140St. PetersburgRussia",
"Saint Petersburg State University\nUniversitetskij pr. 28198504St. PetersburgRussia",
"Special Astrophysical Observatory\nRussian Academy of Sciences\n369167 Nizhnij ArkhyzRussia",
"Central (Pulkovo) Astronomical Observatory\nRussian Academy of Sciences\nPulkovskoye chaussee 65/1196140St. PetersburgRussia",
"Saint Petersburg State University\nUniversitetskij pr. 28198504St. PetersburgRussia",
"Central (Pulkovo) Astronomical Observatory\nRussian Academy of Sciences\nPulkovskoye chaussee 65/1196140St. PetersburgRussia",
"Saint Petersburg State University\nUniversitetskij pr. 28198504St. PetersburgRussia",
"Central (Pulkovo) Astronomical Observatory\nRussian Academy of Sciences\nPulkovskoye chaussee 65/1196140St. PetersburgRussia",
"Department of Physics and Astronomy\nBrigham Young University\nN283 ESC, 84602ProvoUTUSA",
"Central (Pulkovo) Astronomical Observatory\nRussian Academy of Sciences\nPulkovskoye chaussee 65/1196140St. PetersburgRussia",
"Saint Petersburg State University\nUniversitetskij pr. 28198504St. PetersburgRussia",
"Saint Petersburg University of Aerospace Instrumentation\nBol. Morskaya ul. 67A, St. Petersburg 190000Russia",
"Central (Pulkovo) Astronomical Observatory\nRussian Academy of Sciences\nPulkovskoye chaussee 65/1196140St. PetersburgRussia",
"Kapteyn Astronomical Institute\nUniversity of Groningen\nPO Box 8009700 AVGroningenThe Netherlands",
"Instituto de Astrofísica de Canarias\nc/ Vía Láctea s/n, La LagunaE-38205TenerifeSpain",
"Departamento de Astrofísica\nUniversidad de La Laguna\nLa LagunaE-38206TenerifeSpain",
"Department of Physics and Astronomy\nBrigham Young University\nN283 ESC, 84602ProvoUTUSA"
]
| [
"MNRAS"
]
| The presence of Galactic cirrus is an obstacle for studying both faint objects in our Galaxy and low surface brightness extragalactic structures. With the aim of studying individual cirrus filaments in SDSS Stripe 82 data, we develop techniques based on machine learning and neural networks that allow one to isolate filaments from foreground and background sources in the entirety of Stripe 82 with a precision similar to that of the human expert. Our photometric study of individual filaments indicates that only those brighter than 26 mag arcsec −2 in the SDSS r band are likely to be identified in SDSS Stripe 82 data by their distinctive colours in the optical bands. We also show a significant impact of data processing (e.g. flat-fielding, masking of bright stars, and sky subtraction) on colour estimation. Analysing the distribution of filaments' colours with the help of mock simulations, we conclude that most filaments have colours in the following ranges: 0.55 ≤ g − r ≤0.73 and 0.01 ≤ r − i ≤ 0.33. Our work provides a useful framework for an analysis of all types of low surface brightness features (cirri, tidal tails, stellar streams, etc.) in existing and future deep optical surveys. For practical purposes, we provide the catalogue of dust filaments. | 10.1093/mnras/stac3765 | [
"https://export.arxiv.org/pdf/2301.12410v1.pdf"
]
| 255,046,137 | 2301.12410 | 416d73029fa356f13feaa139e38a315b9833b0f2 |
Prospects for future studies using deep imaging: Analysis of individual Galactic cirrus filaments
MNRAS, 1-?? (2022)
Anton A Smirnov
Central (Pulkovo) Astronomical Observatory
Russian Academy of Sciences
Pulkovskoye chaussee 65/1196140St. PetersburgRussia
Saint Petersburg State University
Universitetskij pr. 28198504St. PetersburgRussia
Sergey S Savchenko
Central (Pulkovo) Astronomical Observatory
Russian Academy of Sciences
Pulkovskoye chaussee 65/1196140St. PetersburgRussia
Saint Petersburg State University
Universitetskij pr. 28198504St. PetersburgRussia
Special Astrophysical Observatory
Russian Academy of Sciences
369167 Nizhnij ArkhyzRussia
Denis M Poliakov
Central (Pulkovo) Astronomical Observatory
Russian Academy of Sciences
Pulkovskoye chaussee 65/1196140St. PetersburgRussia
Saint Petersburg State University
Universitetskij pr. 28198504St. PetersburgRussia
Alexander A Marchuk
Central (Pulkovo) Astronomical Observatory
Russian Academy of Sciences
Pulkovskoye chaussee 65/1196140St. PetersburgRussia
Saint Petersburg State University
Universitetskij pr. 28198504St. PetersburgRussia
Aleksandr V Mosenkov
Central (Pulkovo) Astronomical Observatory
Russian Academy of Sciences
Pulkovskoye chaussee 65/1196140St. PetersburgRussia
Department of Physics and Astronomy
Brigham Young University
N283 ESC, 84602ProvoUTUSA
Vladimir B Il'in
Central (Pulkovo) Astronomical Observatory
Russian Academy of Sciences
Pulkovskoye chaussee 65/1196140St. PetersburgRussia
Saint Petersburg State University
Universitetskij pr. 28198504St. PetersburgRussia
Saint Petersburg University of Aerospace Instrumentation
Bol. Morskaya ul. 67A, St. Petersburg 190000Russia
George A Gontcharov
Central (Pulkovo) Astronomical Observatory
Russian Academy of Sciences
Pulkovskoye chaussee 65/1196140St. PetersburgRussia
Javier Román
Kapteyn Astronomical Institute
University of Groningen
PO Box 8009700 AVGroningenThe Netherlands
Instituto de Astrofísica de Canarias
c/ Vía Láctea s/n, La LagunaE-38205TenerifeSpain
Departamento de Astrofísica
Universidad de La Laguna
La LagunaE-38206TenerifeSpain
Jonah Seguine
Department of Physics and Astronomy
Brigham Young University
N283 ESC, 84602ProvoUTUSA
Prospects for future studies using deep imaging: Analysis of individual Galactic cirrus filaments
MNRAS
000, 1-?? (2022). Accepted XXX. Received YYY; in original form ZZZ. Preprint 31 January 2023. Compiled using the MNRAS LaTeX style file v3.0. Key words: ISM: clouds - ISM: dust, extinction
The presence of Galactic cirrus is an obstacle for studying both faint objects in our Galaxy and low surface brightness extragalactic structures. With the aim of studying individual cirrus filaments in SDSS Stripe 82 data, we develop techniques based on machine learning and neural networks that allow one to isolate filaments from foreground and background sources in the entirety of Stripe 82 with a precision similar to that of the human expert. Our photometric study of individual filaments indicates that only those brighter than 26 mag arcsec −2 in the SDSS r band are likely to be identified in SDSS Stripe 82 data by their distinctive colours in the optical bands. We also show a significant impact of data processing (e.g. flat-fielding, masking of bright stars, and sky subtraction) on colour estimation. Analysing the distribution of filaments' colours with the help of mock simulations, we conclude that most filaments have colours in the following ranges: 0.55 ≤ g − r ≤0.73 and 0.01 ≤ r − i ≤ 0.33. Our work provides a useful framework for an analysis of all types of low surface brightness features (cirri, tidal tails, stellar streams, etc.) in existing and future deep optical surveys. For practical purposes, we provide the catalogue of dust filaments.
INTRODUCTION
Cirrus clouds are dust clouds usually observed at high galactic latitudes (b ≳ 20°). They have a filamentary, wispy appearance and visually resemble the cirrus clouds observed in the Earth's atmosphere. Cirri were identified and studied over a wide range of wavelengths: in the infrared (Low et al. 1984; Kiss et al. 2001, 2003; Martin et al. 2010; Planck Collaboration et al. 2011; Pénin et al. 2012; Schisano et al. 2020), optical (de Vaucouleurs 1955, 1960; de Vaucouleurs & Freeman 1972; Sandage 1976; Mattila 1979; de Vries & Le Poole 1985; Ienaka et al. 2013; Miville-Deschênes et al. 2016; Román et al. 2020), and ultraviolet (Haikala et al. 1995; Gillmon & Shull 2006; Boissier et al. 2015; Akshaya et al. 2019). The cirri manifested in the visual and infrared, as well as in emission in the molecular CO and H2 lines, were found to spatially correlate (Weiland et al. 1986; de Vries et al. 1987; Gillmon & Shull 2006; Ienaka et al. 2013; Román et al. 2020).

E-mail: [email protected]

Cirrus clouds are unique objects both from theoretical and practical standpoints. They usually appear as numerous filaments rather than a cloud of a particular shape. Various studies (Bazell & Desert 1988; Falgarone et al. 1991; Hetem & Lepine 1993; Vogelaar & Wakker 1994; Elmegreen & Falgarone 1996; Sánchez et al. 2005; Juvela et al. 2018; Marchuk et al. 2021) of cirrus geometric properties proved that these clouds have a fractal nature. The fractal appearance of molecular clouds is thought to be due to the various physical processes that structure them: turbulence (Padoan et al. 2001; Kowal & Lazarian 2007; Federrath et al. 2009; Konstandin et al. 2016; Beattie et al. 2019a,b), shock waves (Koyama & Inutsuka 2000), colliding flows (Vazquez-Semadeni et al. 2007), and other factors, like the instability of a self-gravitating sheet (Nagai et al. 1998) or various instabilities in non-self-gravitating clumps, which arise because of the presence of magnetic fields (Hennebelle 2013).
Considering the internal parts of cirrus, the optical spectrum of the diffuse galactic light (DGL), measured over 92,000 sky spectra from the Sloan Digital Sky Survey (SDSS, York et al. 2000), is found to be consistent with the spectrum of the scattered light (Brandt & Draine 2012; Chellew et al. 2022) produced by a dust model of Zubko et al. (2004). Ienaka et al. (2013) showed that this model can underestimate the correlation between the diffuse galactic light and the emission at 100 µm by up to a factor of two if one measures the spectral properties of individual clouds.
From a practical standpoint, studies of cirrus are important for the following reason. With the progress in observational power and processing methods, it was shown that translucent cirrus clouds and other filamentary dusty structures are rather common inhabitants of sky regions at both high and low Galactic latitudes (Barrena et al. 2018; Schisano et al. 2020; Román et al. 2020). Thus, they can interfere with studies of various extragalactic sources (Cortese et al. 2010; Sollima et al. 2010; Rudick et al. 2010; Davies et al. 2010; Duc et al. 2018; Barrena et al. 2018). This problem was thoroughly discussed in Román et al. (2020) in their study of optical cirrus based on SDSS Stripe 82 deep images (Abazajian et al. 2009). Román et al. (2020) identified and analysed sixteen clouds in the optical g, r, i, and z bands. One of the most important results of their work was that the cirrus clouds differ from typical extragalactic sources in terms of the optical colours g − r and r − i. The authors suggested the following criterion, which allows one to distinguish cirrus filaments from any extragalactic objects based on the corresponding colours of specific image pixels:
(r − i) < 0.43 × (g − r) − 0.06.    (1)
Since criterion (1) includes only the optical colours, it provides an opportunity to distinguish the cirrus by means of optical data alone. Because various data sets have different resolutions, this criterion can become a valuable tool to identify the cirrus presence in deep optical images. It is even more important when there is no complementary infrared data available, which is most frequently used to identify the presence of cirrus. Considering the nature of the suggested criterion, we should emphasise two important facts. First, the cirrus colours that appear in the inequality are not the colours of each and every pixel of a cloud. Rather, they are the colours obtained from the linear fitting of the distribution of fluxes in the (g, r) and (r, i) planes (or by Gaussian plus Lorentzian fitting of the actual colour distributions) of a large sample of pixels. Such an approach implicitly assumes that a whole cloud, spanning several degrees of the sky, can be characterised by its unique colour, neglecting the possible variance of the colour over the different parts of the cloud. At the same time, we should note that almost every cirrus cloud consists of numerous filaments of different densities, surface brightnesses, etc. If the colour properties of the filaments vary too, it is important to verify the degree of their variance and the reliability of criterion (1) as introduced by Román et al. (2020) in this case.
The second important fact is that the spatial locations of cirrus clouds were identified by Román et al. (2020) by a visual inspection. In this work, we opt to take it a step further by using a more novel approach. Since cirrus clouds typically have similar wispy and filamentary structures, they are potentially good targets for automatic selection. For example, in a recent work by Schisano et al. (2020), such structures were identified in Hi-Gal photometric survey data (Molinari et al. 2010) based on their cylindrical-like shape, which is estimated using a Hessian matrix. A similar approach was used in Planck Collaboration et al. (2016) and Soler et al. (2022) to study the relative orientation between the magnetic field and dust structures and between the HI filamentary structures and the Galactic disc, respectively. In earlier works, Men'shchikov (2013) proposed to distinguish filaments (specifically those found in Galactic star-forming regions) using the decomposition of the images over a wide range of spatial scales. In Salji et al. (2015), the authors applied a ridge detection technique and successfully extracted the filaments constituting a large "integral shaped filament" in Orion A North. Koch & Rosolowsky (2015) suggested a complex approach, consisting of an arctan transformation of the image, Gaussian smoothing, and adaptive thresholding.
In the present work, our goal is to test whether or not machine learning methods are suitable for an automatic search of cirrus clouds. By identifying more cirrus clouds, we hope to acquire more reliable statistics of the cirrus photometric properties over different spatial scales.
The structure of the work is as follows. In Section 2, we describe the data and processing steps required for a measurement of cirrus colours: masking, the removal of the instrumental scattered light, and the cirrus filaments identification based on their visual appearance and the correlation with infrared data. In Section 3, we further improve the cirrus filaments identification with the aid of machine learning methods. Here we give the details about the setup of the method and the training of our neural network, and compare the results of the neural network and human identification. In Section 4, we analyse the general properties of the sample of identified filaments. In Section 5, we discuss various pitfalls of the photometric analysis of the individual filaments and compare different approaches to the colour measurement using mock simulations. Here we also study how reliable colours are measured depending on the area and average surface brightness of the filament. In Section 6, we present the results of our colour measurement for a subsample of identified filaments and briefly discuss the spatial dependence of the colours on the galactic coordinates. We summarise our results in Section 7.
DATA
We use the same Stripe 82 deep images as Román et al. (2020), where a large number of cirrus filaments/clouds can be distinguished simply by eye. The Stripe 82 data (Abazajian et al. 2009) consists of 1100 fields covering a thin strip of the sky, 110 degrees wide (−50° < α < 60°) and only 2.5 degrees in height (−1.25° < δ < 1.25°). The original raw fields were obtained using the 2.5-meter Apache Point Observatory telescope with an exposure time of one hour and a pixel scale of 0.396 arcsec. The fields were further stacked and carefully processed in Román & Trujillo (2018), where the residues of the co-adding process were removed and improved sky-rectified images were obtained. The resulting fields are two magnitudes deeper than the regular SDSS data. The data from the mentioned works is publicly available at http://research.iac.es/proyecto/stripe82/. Below we describe how we further processed the data from Román & Trujillo (2018) to identify the cirrus filaments.
Masking
Images in the Stripe 82 survey contain numerous objects such as bright stars or galaxies, which have to be masked out before one can proceed with an analysis of Galactic cirri. In Román et al. (2020), segmentation maps were created by running the SEXTRACTOR package (Bertin & Arnouts 1996) with various parameters to make initial mask images, which were further edited manually to include image artefacts.
To reduce our workload, we decided to use the mask images created by Román et al. (2020) for a set of Stripe 82 fields to train a neural network to generate masks for all Stripe 82 fields. We use an image-to-image algorithm based on the conditional adversarial network described in Isola et al. (2016) as the neural network architecture. In this approach, two networks, a generator and a discriminator, are trained simultaneously. In the setting of our problem, the purpose of the generator is to create a synthetic mask image based on a science image, and the goal of the discriminator is to determine if a particular mask image was created by a generator or by Román et al. (2020) (the discriminator also has access to the optical images). During the training process, the generator learns to make more realistic masks to fool the discriminator. The discriminator in turn learns to more effectively distinguish between real and synthetic maps to overcome the generator.
To create a training sample, we use optical images (in the g, r, i, and z bands) and masks for these images provided by Román et al. (2020), which were randomly cut into 256 × 256 pixel segments (the input size of our networks). During the training process, we feed such cutout images to the generator and the discriminator and update their weights until the process converges. After that, the generative part of the network can be used to create masks for new (i.e. not covered by previous work) fields.
The results of the network training applied to a Stripe 82 field are shown in Fig. 1: we show an r-band image, an original (the so-called ground truth) mask, and the prediction of the network for two random cutouts. It can be seen that, while the fine details of the generated masks differ, they generally cover all objects that present in the image. To measure the similarity between the predicted and true mask, we use the intersection over union (IoU) metric:
\mathrm{IoU} = \frac{TP}{TP + FP + FN},    (2)
where TP is the number of true positive pixel outcomes, where the model correctly predicts the positive class; FP is the number of false positive pixel outcomes, where the model incorrectly predicts the positive class; and FN is the number of false negative pixel outcomes, where the model incorrectly predicts the negative class. For the trained network, the IoU median value for all the fields of the test sample is 0.69. It should be noted that the network only deals with targets which are visible in the image. It is not aware of objects that may be outside of the image (but whose scattered light is present in the image), so the fine structure of the mask at the borders can be affected by this lack of data. For example, the faint wings of a bright star can be barely distinguishable in the image, but they would be covered by the mask if the centre of the star was visible. If the star is outside of the image provided to the network, the network is not aware of it and can miss the faint wings of the star. To deal with this problem, we only use central regions of the generated mask, and consider the data outside of this region as the context. To cover the whole field, we slide such a window across it until the full mask for the field is created.

Figure 1. Examples of the masks created by a neural network for two randomly selected patches of the Stripe 82 (rows, top to bottom: r-band image, original mask, predicted mask, overlap). Top panels: original images in the r band; second row: masks made by Román et al. (2020); third row: masks generated by our neural networks. The bottom row shows the comparison of original and predicted masks: blue colour marks the original masks, green the predicted ones, and yellow the overlapping area of the two masks.
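For concreteness, eq. (2) in code; a minimal sketch (ours, with assumed function and variable names):

```python
import numpy as np

def iou(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Intersection over union (eq. 2) for two boolean mask arrays."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    tp = np.count_nonzero(pred & true)    # true positives
    fp = np.count_nonzero(pred & ~true)   # false positives
    fn = np.count_nonzero(~pred & true)   # false negatives
    total = tp + fp + fn
    return tp / total if total > 0 else 1.0  # two empty masks agree perfectly
```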
Cirrus segmentation
A crucial moment in the cirrus analysis is detecting and selecting their locations in these images, i.e. selecting image pixels that are dominated by the cirrus scattered light and which do not contain other objects. To do this, we applied the masks from Sect. 2.1 to the images to cover all non-cirrus objects and used a threshold of 29 mag arcsec −2 in the r band (determined as the average 3σ limit for all Stripe 82 fields) to create a segmentation map of faint extended objects. Such segments constitute joint areas with the surface brightness above the given limit. Hereinafter, we define filaments as such joint areas. Thus, the filaments we identify here can be considered as separate segments of large cirrus clouds commonly studied in the literature.
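A minimal sketch of this segmentation step (ours; it assumes fluxes in nanomaggies with the SDSS zero-point of 22.5 mag, and uses scipy's connected-component labelling in place of whatever bookkeeping the actual pipeline uses):

```python
import numpy as np
from scipy import ndimage

def segment_filaments(image_r, object_mask, mu_limit=29.0,
                      zeropoint=22.5, pix_scale=0.396):
    """Label joint areas brighter than mu_limit (mag/arcsec^2) in the r band.

    image_r     : 2D array of fluxes (assumed nanomaggies)
    object_mask : boolean array, True where stars/galaxies are masked out
    """
    # Flux per pixel corresponding to the surface-brightness limit
    flux_limit = 10 ** (-0.4 * (mu_limit - zeropoint)) * pix_scale ** 2
    bright = (image_r > flux_limit) & ~object_mask
    labels, n_segments = ndimage.label(bright)  # 4-connected joint areas
    return labels, n_segments
```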
It turned out that even after applying the masks to the background objects, some other extended objects (not only cirrus) appeared in the image above the specified flux level. Among them are the faint extended wings of bright oversaturated stars and the reflections of such stars in the telescope optics, which manifest themselves as faint extended regions and cannot be easily distinguished from cirrus by some easy-to-estimate parameter.
To solve this problem, we decided to manually check every field by eye and individually select all of the regions that contained cirrus. We separated this list from the one that contained only the instrumental scattered stellar light. Fig. 2 shows the stages of the manual cirrus segmentation for one field. The field itself is shown in panel a), while the field regions that are brighter than the 29 mag arcsec⁻² isophote in the r band are shown in panel d) (these regions were computed using the masked version of the image, so they do not cover all visible stars). Panel e) of the figure shows the same segments separated into the ones that cover cirrus regions (black) and the ones that cover image artefacts (grey). We also removed the regions that are close to the brightest stars (one in the middle of panel e) to exclude them from consideration. To help with the selection, we also compared the regions with their infrared IRIS counterparts (Miville-Deschênes & Lagache 2005), available at lower resolution. In total we marked about 6.4 square degrees of the whole Stripe 82 area as cirrus (which is about 2% of the survey area).
Removing the scattered stellar light
As noted by Román et al. (2020), the images of Stripe 82 are contaminated by the light of the extended wings of the brightest stars. The point spread functions (PSFs) have different widths in different passbands (redder passbands have wider faint wings in their PSF), and also the colours of stars are different. The result of these two factors is that different regions of Stripe 82 fields have different background colours depending on the distance to the bright stars, which significantly affect the measured cirrus properties.
To eliminate this problem, we follow the approach of Román et al. (2020) and fit the extended PSF models into the locations of the brightest stars to subtract them from the images and therefore remove the background colour variations. In this work, we use the TRACTOR software (Lang et al. 2016) to fit multiple extended PSF images prepared by Infante-Sainz et al. (2020) to the Stripe 82 fields. In each field, we select all stars brighter than 15th magnitude in the g band, similar to Román et al. (2020), and fit them iteratively, starting with the single brightest star and adding the next brightest star to the model at each step (computationally, this approach proved more stable than fitting all the stars in one step). During the fitting, we mask out the regions that were marked as cirrus to exclude the influence of the cirrus on the fitting of the stars (otherwise the cirrus contamination would be included in the model of the extended PSF wings and removed after the model subtraction).
We note that this crucial step in the cirrus analysis pipeline requires a good knowledge of the extended PSF wings. This problem is a typical obstacle for works in which low surface brightness structures are analysed (Sandin 2014;Karabal et al. 2017), and the proper PSF image should be created before proceeding to the actual analysis of the data (for example, Rich et al. 2019;Poliakov et al. 2021).
AUTOMATIC CIRRUS SEGMENTATION
Manual annotation of cirrus is very time-consuming for human experts. Careful annotation of a single 0.5 • × 0.5 • field in a semi-automatic approach may take up to 10 minutes. To investigate if the process of cirrus annotation can be fully automated and if the results of manual annotation can be further improved, we trained several U-Net (Ronneberger et al. 2015) based networks. In general, the U-Net architecture consists of two symmetrical paths: an encoder to capture context and a decoder to get precise localisation. The encoder follows the typical architecture of a convolutional network with repeating convolution and max-pooling operations. Every step in the decoder consists of an upsampling of the feature map followed by a convolution. Thus, the decoder increases the resolution of the output. To get localisation, the features from the encoder are combined with the upsampled features from the decoder via skip connections.
Originally, U-Net was proposed for biomedical image segmentation. This type of network architecture is successfully applied to various scientific and applied tasks such as medical image analysis (Iglovikov et al. 2017b;Ching et al. 2017;Ing et al. 2018a;Ing et al. 2018b;Andersson et al. 2019;Nazem et al. 2021), cell biology (Kandel et al. 2020), and satellite image analysis (Iglovikov et al. 2017a). It is also used in astronomical applications such as denoising, enhancing astronomical images (Vojtekova et al. 2021), and stellar spectrum normalization (Różański et al. 2022). In this section, we consistently describe these neural network models, through datasets (Sect. 3.1), network architecture (Sect. 3.2), and training methods (Sect. 3.3). In Section 3.4, we conduct our model analysis and discuss the results.
Dataset for neural network training
In Section 2.2, we carried out a manual identification procedure for cirrus filaments. Here we further translate the segmentation data to train an appropriate neural network. It is done in the following manner. All pixels in Stripe 82 fields were annotated into 3 categories: 90.4% of all pixels were background, 2.0% were cirrus, and the remaining 7.6% were other extended sources. The annotation for each field is stored in the corresponding annotation mask file. A value of 0 for a mask's pixel denotes background, 1 denotes cirrus, and 2 denotes other extended sources. As the field image has a large size (4553 × 4553 pixels), we employed square windows with smaller sizes for our models. It allowed us to decrease the time, memory capacity, and volume of manually annotated data required for the training of network models.
To obtain the training, validation, and testing samples, we randomly chose three separate groups of fields consisting of 200, 50, and 100 fields, respectively. Here we briefly provide the main training and validation data pre-processing steps; a schematic code sketch of these steps is given after the list.
(i) We calculated the common 99.9th percentile values for each optical band (g, r, i) separately for all training and validation fields (250 fields). Then we performed the corresponding clipping. This moderates the problem of the brightest pixels, which reduce the image contrast, and therefore it increases the training efficiency.
(ii) Next, we applied a natural logarithm transformation followed by min-max normalization to [0 : 255] range.
(iii) Then, we randomly chose ntr square windows (w × w pixels) for each field in the training group and n_val for each field in the validation group. If the size of the obtained windows was too large for a current model, we resized each window to the spatial shape of the model input tensor (win, win), using the cv2.resize method with cv2.INTER_LINEAR interpolation. As all considered architectures take a 3-channel image input, the input tensor shape is (3, win, win).
(iv) The corresponding annotation mask's windows were obtained from the annotation mask files and resized, using cv2.INTER_AREA interpolation.
(v) Lastly, during the formation of the input tensor, we applied min-max normalization to the [−1 : 1] range and augmented the data by the symmetry group of the square. This group consists of π/2 rotations, reflections, and their compositions (8 elements). Therefore, this procedure increased the number of windows by a factor of 8.
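The sketch below schematically combines steps (i)-(iii) and (v) (ours; `p999` stands for the per-band 99.9th percentiles computed in step (i), channels-last layout is used for simplicity, and the shift before the logarithm is our guard against non-positive fluxes):

```python
import cv2
import numpy as np

def preprocess_window(window, p999, w_in=224):
    """Pre-process one (H, W, 3) g/r/i window following steps (i)-(iii), (v)."""
    window = np.minimum(window, p999)              # (i) clip at the 99.9th percentile per band
    window = np.log(window - window.min() + 1.0)   # (ii) natural log; shift keeps values positive
    window = 255.0 * (window - window.min()) / np.ptp(window)  # (ii) min-max to [0, 255]
    window = cv2.resize(window, (w_in, w_in),
                        interpolation=cv2.INTER_LINEAR)        # (iii) resize to the input shape
    return window / 127.5 - 1.0                    # (v) min-max normalisation to [-1, 1]

def dihedral_augment(window):
    """(v) The 8 symmetries of the square: 4 rotations, each with an optional flip."""
    for k in range(4):
        rot = np.rot90(window, k)
        yield rot
        yield np.flip(rot, axis=1)
```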
Network architecture
To resolve the task of cirrus annotation, we created several models based on the encoder-decoder U-Net-like architecture (Ronneberger et al. 2015). All our experiments are conducted in the TensorFlow2.x framework (Abadi et al. 2015). The code for each of the models described in this section, in the precise form used to solve the cirrus annotation task, is publicly available. Fig. 3 shows a representation of the general architecture used. The key difference between the considered architectures is the encoder. As the encoder, we used ResNet50V2 (He et al. 2016), MobileNetV2 (Sandler et al. 2018), and the classical U-Net encoder.
The decoder architecture is identical for all models under consideration and consists of 4 steps (see Fig. 3). Each of these steps has an upsampling of the feature map carried out with a 2×2 transposed convolution, a concatenation with the corresponding feature map from the encoder (skip connection), and two 3 × 3 convolutions with zero padding, each followed by a ReLU. At the final layer, a 3 × 3 transposed convolution with stride = 1 and zero padding is used to map the resulting feature maps to the per-pixel class logits.
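A schematic TensorFlow/Keras version of one decoder step and the final layer (ours; filter counts are placeholders, and only the structure follows the description above):

```python
import tensorflow as tf
from tensorflow.keras import layers

def decoder_step(x, skip, n_filters):
    """One decoder step: 2x2 transposed-conv upsampling, skip concatenation,
    and two 3x3 convolutions with zero padding, each followed by a ReLU."""
    x = layers.Conv2DTranspose(n_filters, kernel_size=2, strides=2, padding="same")(x)
    x = layers.Concatenate()([x, skip])  # skip connection from the encoder
    x = layers.Conv2D(n_filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(n_filters, 3, padding="same", activation="relu")(x)
    return x

def head(x, n_classes):
    """Final layer: 3x3 transposed convolution, stride 1, per-pixel class logits."""
    return layers.Conv2DTranspose(n_classes, kernel_size=3, strides=1, padding="same")(x)
```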
Training methods
For each models under consideration, we used a sparse categorical cross-entropy loss function derived from the logits output tensor. To optimise the loss function, we employed the Adam optimization method with various learning rates r.
During the training experiments, we varied some parameters that influence the fitting process and the final model performance: the spatial shape of the input tensor (win×win), the scale factor s between the window size w and the input tensor spatial size win (w = swin), the training strategy («training from scratch», «transfer learning», «fine-tuning»), number of classes nc, class weights ωc, etc. We considered two cases for the number of classes, 3 classes which had been annotated in fields, and 2 classes when the «background» class was extended by the «other extended sources» class. In «transfer learning» strategy, we took a pre-trained ImageNet dataset (Deng et al. 2009) encoder and froze it before the training process. In «fine-tuning» strategy, we also used a pre-trained encoder but did not freeze it.
To train our network models, we used a single NVIDIA GeForce RTX 3060 GPU. Batches consisted of 32 windows or 16 windows for models with the largest input tensor spatial size (win = 448). We employed 30 epochs in all training experiments, since the lowest validation loss is reached in 10-20 epochs.
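A minimal, self-contained sketch of this training configuration (ours; the toy model and random tensors only stand in for the real U-Net-like models and the Stripe 82 windows):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Toy stand-in for the real segmentation model (2-class per-pixel logits)
inputs = tf.keras.Input(shape=(64, 64, 3))
x = layers.Conv2D(8, 3, padding="same", activation="relu")(inputs)
logits = layers.Conv2D(2, 1)(x)
model = tf.keras.Model(inputs, logits)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # learning rates r were varied
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)

# Random stand-ins for the pre-processed windows and annotation masks
x_train = tf.random.uniform((32, 64, 64, 3))
y_train = tf.random.uniform((32, 64, 64), maxval=2, dtype=tf.int32)
model.fit(x_train, y_train, batch_size=32, epochs=1)  # the paper trains for 30 epochs
```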
Experimental results and model analysis
As demonstrated in Fig. 2 (panels e and f), the cirrus map generated by our best model is quite similar to the map obtained by human experts, and the model can successfully reproduce small cirrus filaments. To find this model, we put the models through various comparative experiments. To compare models with each other, we use the IoU metric for the cirrus class (see eq. 2), which measures the similarity between the predicted and true cirrus. Human annotation performance yields a 0.67 IoU for cirrus. This number was achieved by one expert on 100 random fields annotated by other experts of our team. Each of these fields was first annotated by one of the members of the group of experts. This annotation is considered as the ground truth annotation. The annotation of the single expert was then compared against this annotation. The annotation procedure itself, carried out by a single expert, was done in a similar manner as it was done in Section 2.2, with the help of IRIS data (Miville-Deschênes & Lagache 2005).
Quantitative results for different models and training methods are shown in Table A1. We summarise the results of our experiments as follows.
(i) To find a more appropriate encoder, we conducted several experiments with various encoders. As one can see in Table A1, models with the MobileNetV2 encoder demonstrate the highest performance (IoU = 0.576). Furthermore, these models are less resource-intensive and are more lightweight when compared to the others.
(ii) The «fine-tuning» strategy demonstrates the highest performance, but according to Table A1, the advantage over models trained from scratch is insignificant.
(iii) As one can see in Table A1, models with moderately large windows (w = 224, 448, 896) are better than models with small windows. We assume that this might be related to the deficiency of semantic context in small windows relative to large ones.
(iv) We also analyzed the models with 3 classes, but, as one can see in Table A1, these models do not demonstrate an increase in performance when compared with the models with 2 classes.
(v) The best of our models yields a 0.576 IoU. Since the advantage of human annotation is not great (0.67 IoU), it is possible to use this approach either as the primary or as a supporting tool for annotating low surface brightness structures in deep astronomical images. It is remarkable that such an effective model was trained on only 250 fields out of 1100. The model makes a cirrus segmentation for one field in about 25 seconds when running prediction on an AMD Ryzen 9 3900X 12-Core CPU and about 7 seconds when running on an NVIDIA GeForce RTX 3060 GPU.
The fuzzy nature of cirri makes it difficult to translate the achieved IoU value into some transparent quality of the cirri detection. Even if some algorithm detects all the clouds in the image, the possible difference in the boundary threshold level will lead to an IoU value below unity. To give some perspective on the performance of our algorithm, we note that 89% of the regions larger than 36 square arcseconds marked as cirri by humans have positive detections from the neural network inside their boundaries. Therefore, the vast majority of the cirrus clouds can be detected by our network in an "alert" regime.
Correlation with IR and UV data
The IRIS data which we use to support our identification of cirrus filaments in the optical have a low resolution of 90 arcsec. Therefore, it is instructive to verify how the fluxes are correlated between commonly used dust indicators, such as UV and IR, and the optical for the distinguished filaments if we consider more accurate data. For this purpose, we analyse only a single cirrus cloud, though one which is quite unique. It is located at α ≈ 2.5°, δ ≈ −0.25° and appears in both the Herschel (Viero et al. 2014) and GALEX (Martin et al. 2005) datasets. The cloud is one of the richest cirrus clouds in Stripe 82 that was also studied by Román et al. (2020) (their Field#5).
In Fig. 4, we present a map of individual filaments for this cloud. For each of the depicted filaments, we fill its area with the colour corresponding to the value of the correlation coefficient between Herschel 250 µm and Stripe 82 r-band data (top panel) and between GALEX far-ultraviolet (FUV) and the same r-band (bottom panel) data. For each individual filament, a correlation coefficient ρ is calculated by taking into account the fluxes in pixels within the area of the filament:
\rho = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}},    (3)
where x_i and y_i are the fluxes in different bands, and x̄ and ȳ are their mean values, respectively. The summation is carried out over all pixels within the filament area. Thus, each filament is characterised by its individual correlation coefficient. All analysed data are rebinned to a spatial scale of 12 arcsec to reduce the effect of the differences in their PSF, as well as possible pixel-scale spatial shifts of the datasets relative to each other. We also apply a very extensive mask, combining our mask produced by the neural network from Section 2.1, the mask for this cloud from Román et al. (2020), and the mask obtained by cutting the bright sources in the UV and negative fluxes in the optical. Note that the corresponding correlation coefficients for each of the filaments are obtained using only the pixels within the area of the corresponding filaments. Fig. 4 clearly shows that the dust emission in the IR and the scattered light in the optical are well-correlated (ρ > 0.5) for most of the individual filaments, suggesting that we do indeed identify dust features. We also measured overall correlation coefficients using three separate sets of pixels: ρ_all = 0.74 (measured over all pixels in the depicted area), ρ_filaments = 0.69 (measured over the pixels within the filaments), and ρ_rest = 0.52 (measured over the pixels that are outside of the filament boundaries). The fact that ρ_rest ≈ 0.5 and ρ_all > ρ_filaments may indicate that we miss some part of the cirrus in the optical. This also clearly follows from the comparison with the GALEX data (bottom panel of Fig. 4). There, ρ_filaments = 0.43 is smaller than both ρ_all and ρ_rest, although, as can be seen, for many filaments ρ is still close to 0.5. At the same time, for some filaments there is no correlation with GALEX, although such a correlation is present when using the Herschel data for the same filament (see a big filament in the lower left corner of both maps marked by a green cross). We should note that a qualitatively similar discrepancy regarding infrared and UV data was noted by Boissier et al. (2015) in their study of cirrus in the Virgo cluster. The authors found that some cirrus regions that appear in the FUV maps are not visible in the infrared or Planck maps and vice versa.
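For reference, a minimal sketch of the per-filament correlation measurement of eq. (3) (ours; scipy's Pearson coefficient is equivalent to eq. 3 and also returns the p-value used later in Section 4):

```python
import numpy as np
from scipy import stats

def filament_correlation(flux_a, flux_b, filament_pixels):
    """Pearson correlation (eq. 3) between two bands over one filament's pixels.

    flux_a, flux_b  : 2D flux maps rebinned to a common grid (here, 12 arcsec)
    filament_pixels : boolean array, True inside the filament
    """
    x = flux_a[filament_pixels]
    y = flux_b[filament_pixels]
    rho, p_value = stats.pearsonr(x, y)
    return rho, p_value
```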
The presented comparison with other data sets shows that the areas which were distinguished as cirrus filaments by our neural network do indeed host dust features. The comparison also indicates that we do not identify a portion of the cirrus in the optical. It is hard to estimate exactly how much of the filaments we miss, but this is expected because we are limited by the depth of the data and, therefore, we cannot identify dim filaments which can appear more prominently in the IR and UV.
RESULTING SAMPLE OF FILAMENTS
The resulting sample of filaments identified by our neural network consists of about 5 × 10^5 spatially separated regions. The total area covered is about 6.6 square degrees, which is greater than that obtained via manual picking by 0.2 square degrees. For illustrative purposes, we present Fig. 5, which shows a cirrus-rich area at α ≈ 55°−60° (one of the ends of Stripe 82). The whole presented area contains about one hundred original Stripe 82 fields (∼5 square degrees). The top panel of the figure shows an intensity map with the masking and source subtraction carried out, while the bottom panel shows the areas identified by the neural network as cirrus filaments. In this section, we describe some general properties of the filaments' sample, as well as some preliminary steps that must be taken before analysing the colours of the filaments.
First of all, at the original pixel scale, the data is dominated by the noise that is ever-present in astronomical images. To facilitate the analysis of dust colours, we reduced the noise contribution by rebinning each field's images to a spatial resolution of 6 arcsec, similarly to Román et al. (2020). They decided on that resolution in that work as a compromise between optimal spatial resolution and image depth. To make the comparison between the results in Román et al. (2020) and this work clearer, we decided to use the same spatial resolution in the present work. At this step, we assume that if half of the small pixels with a scale of 0.396 arcsec (which constitute the large 6 arcsec pixel) are initially marked as being dominated by scattered cirrus light, the large pixel should also be marked as dominated by cirrus. In the other case, the large pixel is simply removed from the analysis. As a result of this procedure, the filaments' number is significantly reduced to 23290, while the total marked area does not change (the same 6.6 square degrees). The decrease in the number of filaments is explained by the fact that the original sample contains a significant amount of small features with a spatial scale of only a few pixels. When we rebin the images, such features are either removed from the analysis or merge into a single filament with a larger size. Fig. 6 shows the spatial distribution of the filaments over Stripe 82 after the rebinning has been carried out, and Fig. 7 presents a variety of statistics, such as the distribution of filaments over the average surface brightness and the area. In the right panel of the figure, we display the distribution of filaments by the correlation between the (g, r) and (r, i) pairs of the optical bands. We do not consider data in the z band because it is less deep and our observations showed no correlation at all between the z band and others in many small filaments. We also depict the distributions for a subsample of 4575 filaments where the correlation is reliably measured, that is, the subsample includes only those filaments with a p-value smaller than 1% (which means that random data has less than a 1% chance of producing such a strong correlation). Below, we present the results of the colour measurements for filaments only from this subsample. Thus, the total number of analysed filaments is 4575 and the total area is about 4.5 square degrees (70% of the original area).
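A minimal sketch of this rebinning step with the half-pixel rule described above (ours; the 6 arcsec pixel is approximated by a 15 × 15 block of native 0.396 arcsec pixels, and image dimensions are assumed to be multiples of the block size):

```python
import numpy as np

def rebin_with_majority(image, cirrus_mask, block=15):
    """Rebin to ~6 arcsec pixels (15 x 0.396 arcsec) and keep a large pixel
    only if at least half of its small pixels are marked as cirrus."""
    h, w = image.shape
    h, w = (h // block) * block, (w // block) * block  # crop to a multiple of the block
    img = image[:h, :w].reshape(h // block, block, w // block, block)
    msk = cirrus_mask[:h, :w].reshape(h // block, block, w // block, block)
    rebinned = img.mean(axis=(1, 3))             # average flux in the large pixel
    keep = msk.mean(axis=(1, 3)) >= 0.5          # majority rule for the cirrus flag
    return np.where(keep, rebinned, np.nan), keep
```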
The left panel of Fig. 7 emphasises the difference between the current analysis and those executed previously, such as in Guhathakurta & Tyson (1989) and Román et al. (2020). In these works, the authors considered distributions of the fluxes for all pixels within an area of several or more square degrees. The typical area of the filaments considered in this work is smaller by an order of magnitude. As for the surface brightness, the majority of our filaments are dim features with µg > 27 mag arcsec⁻². These differences imply that special care must be taken if one tries to measure the colours of such features. We thoroughly discuss this problem in the next section.
COLOUR MEASUREMENT
There is a list of factors that can strongly affect the results of the colour measurements for individual filaments. First of all, at the considered level of surface brightness, the noise can strongly affect the distribution of fluxes. Moreover, the noise also has its own colour properties (due to the differences in band depth), and there is a possibility that the measured colours simply reflect the colours of the noise. Secondly, some other factors are likely to contribute to the measured colours, such as an inaccurate subtraction of the scattered stellar light in the case of very bright stars or the existence Table 1. Details of the mock simulations used over course of the present work. The first column shows a simulation ID, columns to thorough five present the limits of the physical parameters of the squares representing the cirrus filaments in simulations, namely area, surface brightness, and colours. "U" (uniform) and "RL" (real-like) abbreviations, given in brackets, indicate whether the adopted distribution for each particular parameter is a uniform one ("U") or specifically prepared to resemble the corresponding distribution for the real filaments ("RL", see Fig. 7, left). The sixth column gives the description of the background into which the squares are injected, while the seventh column gives a short description of the problem, which is solved using each particular simulation. Name Area, arcsec 2 µ, mag/arcsec 2 g − r r − i Background Purpose of the so-called "hot" pixels, which contain emission of some bright, yet poorly resolved sources. Another crucial factor is the sky subtraction, which creates background fluctuations affecting the photometry of extremely low surface brightness sources. How all these factors cumulatively affect the dust colours is hard to estimate analytically. Therefore, to estimate the impact of all these factors, we carried out a series of mock simulations. The general idea of the simulations is to inject an artificial source with a priori known colours into Stripe 82 data and re-measure its colours in a realistic environment where the source is affected by noise, residues of stellar light subtraction, etc. Throughout the present work, we used several types of simulations that differ in the setup of physical parameters. To facilitate the reader, we listed the details of all simulations in Table 1. In this particular section, we discuss the results only for two of them, which are dedicated to study how colour measurement procedures work for individual filaments in general. The respective simulations are labelled as S1 and S2 in the table. The rest will be discussed below in Section 6. The general setup all simulations follow includes the following steps:
(i) First, we prepare a sample of mock filaments with random sizes, surface brightnesses, and optical colour values. Below we discuss the results for two types of samples, one with a real-like distribution of filament sizes and brightnesses (S3 and S4, Section 6), and the other with a uniform distribution of these properties (S1 and S2, this Section). The distribution of the colours for real filaments is actually unknown and, therefore, we choose the colours uniformly in some predefined range. We consider a rather wide range of colours, for example, from 0.1 to 0.8 for g − r (see Table 1), because, as we show below, real filaments' colours also tend to have a wide spread. To simplify the analysis, each filament has a squarelike shape. As for the areas, we originally selected them in the following range: from 144 arcsec 2 (four pixels) up to 10 5 arcsec 2 (≈ 27 arcmin 2 ). But due to the mask, some pixels are cut, and, therefore, the actual area of each filament slightly varies from the predetermined set of values.
(ii) Secondly, we inject a square with the selected size, surface brightness, and colours from the prepared sample into Stripe 82 data (a minimal sketch of this step is given after the list). The centre of the square is chosen randomly, that is, the square is located at a random point of Stripe 82. Next, we add some flux values in each band to all the pixels within the square area. The values are selected so as to have some average value corresponding to the initially selected surface brightness value, with a small variance. The variance is the same for all filaments and is equal to 20 counts (in the g band). The value is close to the typical spread of values for real filaments. For the r and i bands, the fluxes are determined from the fluxes in the g band, assuming a constant value of the g − r and r − i optical colours over the square. For each band, we also modify the distribution of fluxes to take into account the Poisson noise from the source. To calculate the number of events for the Poisson statistics, we assume the following gain values: 3.85, 4.735, and 5.15 for the g, r, and i bands, respectively. These values are obtained by averaging the gain values for different camcol parameters of the SDSS imaging camera.
(iii) Thirdly, we measure the colours in exactly the same way as we do for real filaments (real cirrus filaments are also masked for the purpose of simulations). For measurements themselves, we adopt two different approaches (see Fig. 8): a classical linear correlation method (Guhathakurta & Tyson 1989;Sujatha et al. 2010;Murthy 2014;Román et al. 2020) and the method suggested by Román et al. (2020), which is based on the analysis of colour distribution of individual pixels. We discuss the applicability of both methods to the measurement of individual filaments below.
(iv) Finally, we compare the measured colours with their true values, and check what factors are important for reliable colour measurements.
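To make step (ii) concrete, the following is a minimal sketch of the injection step in Python, assuming a counts image and the averaged gain values quoted above; the function name and its interface are illustrative rather than the exact code used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_square(image, x0, y0, side, mean_counts, spread=20.0, gain=3.85):
    """Inject a mock square filament into `image` (in counts).

    mean_counts sets the surface brightness level, spread is the Gaussian
    scatter of the per-pixel fluxes (20 counts in the g band), and gain
    (e-/ADU) is used to draw the Poisson noise of the injected source.
    """
    patch = rng.normal(mean_counts, spread, size=(side, side))
    # Re-sample the injected counts in electrons to add source Poisson noise
    patch = rng.poisson(np.clip(patch, 0.0, None) * gain) / gain
    image[y0:y0 + side, x0:x0 + side] += patch
    return image
```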
Here we briefly discuss the details of the two adopted methods of colour measurement.
The essence of the first method is the linear correlation between the fluxes in different bands. While fitting a linear dependence to the distribution of fluxes, for example, in the g and r bands, one finds a linear coefficient, which can be translated into the corresponding g − r colour value (see Fig. 8, two left panels). Although this method is commonly used for colour measurement, the resulting colours can be significantly affected by the noise in the low surface brightness regime, as we show below.
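A minimal sketch of this linear correlation method, assuming that the pixel fluxes of a filament in the two bands are already extracted and calibrated to a common photometric zero point:

```python
import numpy as np

def colour_from_correlation(flux_g, flux_r):
    """Fit F_r = k * F_g + b over the filament pixels; the slope k is the
    flux ratio F_r/F_g, which translates into the g - r colour."""
    k, _ = np.polyfit(flux_g, flux_r, 1)
    # g - r = -2.5 log10(F_g / F_r) = 2.5 log10(k) for a common zero point
    return 2.5 * np.log10(k)
```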
The second method, introduced in Román et al. (2020), assumes that, for a particular cloud, the real distribution of dust colours should be close to Gaussian, and the position of the Gaussian maximum should correspond to the actual colour of the cloud. The noise contributions are accounted for in this approach through the simultaneous fitting of a Lorentzian function, which describes the noise, and a Gaussian function, which describes the distribution of real dust colours. Testing how this approach works for different filaments, for which the number of pixels is considerably smaller than in Román et al. (2020), we found that a simultaneous fitting of the Gaussian and Lorentz functions with a full set of free parameters can lead to degenerate results, or there can be a set of close solutions that have different colour values. The problem can be solved by a manual analysis of the fitting results and rejecting non-physical results, but identifying fitting failures for a large number of filaments is a complex problem. Thus, we use a more constrained approach, omitting the Lorentz part and fitting only the Gaussian part. We justify such a simplification based on our results from the simulations presented below. We should also note that, originally, we tried to estimate the Lorentz function parameters from the layer of pixels that are close to the filament but do not include it. Then we tried to fit the Gaussian function along with the Lorentz function, fixing some parameters for both functions (like the Lorentz peak location and its scale, and the Gaussian amplitude). We found that, for such a setup, the resulting colours are very close to the case when we fit only the Gaussian part.

Figure 11. Dependence of the colour measurement error on the surface brightness of the simulated filaments in the g band (top panel) and their area (bottom panel). In both plots, green lines mark the location of the most probable value, while the error bars correspond to 1σ limits.

Fig. 9 presents the distribution of real versus measured g − r and r − i colours for both approaches discussed above. The values were obtained by measuring the colours of 2·10³ squares with uniformly distributed colours from 0.1 to 0.8 for both g − r and r − i and surface brightnesses ranging from 25 mag arcsec⁻² to 29 mag arcsec⁻² in the g band (see the S1 simulation in Table 1).
As can be clearly seen from Fig. 9, for both g − r and r − i, there is no consistency between the real and measured colours if the colours are obtained using the flux correlation method (the r − i colours are systematically greater on average). At the same time, there is the much-desired one-to-one correspondence for most of the filaments if we measure the colours by fitting a Gaussian function to the colour distribution. Our results show that the mode of the colour distribution is a more stable parameter than the coefficient of the linear correlation in the case of a significant noise contribution to the fluxes. We also note that we apply the linear correlation method without introducing some limiting surface brightness value as in Román et al. (2020), since the vast majority of our filaments have a surface brightness that is too low to make such cuts. Based on these results, we conclude that the linear correlation method is unreliable for colour measurement of individual filaments, while our second method allows one to retrieve the actual colours for most of the clouds.
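A sketch of the simplified Gaussian-only fit used here (the Lorentzian noise term of Román et al. 2020 is omitted, as described above); the binning and initial guesses are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def colour_from_gaussian_fit(pixel_colours, bins=50):
    """Fit a Gaussian to the histogram of per-pixel colours of a filament;
    the peak position `mu` is adopted as the colour of the filament."""
    hist, edges = np.histogram(pixel_colours, bins=bins)
    centres = 0.5 * (edges[1:] + edges[:-1])
    p0 = (hist.max(), centres[np.argmax(hist)], np.std(pixel_colours))
    (_, mu, _), _ = curve_fit(gaussian, centres, hist, p0=p0)
    return mu
```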
The probability density function of the true minus measured colours obtained via Gaussian fitting is presented in Fig. 10. We also mark three limits: 0.08, 0.31, 0.90 for g − r and 0.10, 0.40, 1.20 for r − i. Within these limits lie 68.27%, 95.45%, and 99.73% of all filaments, respectively. The values give a qualitative understanding of what errors one should expect from the measurement of real filaments. It also shows that, unfortunately, for individual filaments the errors can be quite large, up to 1.0. With such an error, any physical comparison with other sources is essentially meaningless. At the same time, if one considers a large sample, there should be many filaments for which the colours are measured with an acceptable error of 0.1 − 0.2 (in absolute units, that is, mag). We exploit this fact below when interpreting the observed distribution of the colours of real filaments.
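One way the quoted limits can be computed from the simulation output (a sketch; `errors` stands for the array of true minus measured colours):

```python
import numpy as np

def sigma_limits(errors):
    """Half-widths of the symmetric intervals of |true - measured| that
    contain 68.27%, 95.45%, and 99.73% of all simulated filaments."""
    abs_err = np.abs(np.asarray(errors))
    return [np.percentile(abs_err, p) for p in (68.27, 95.45, 99.73)]
```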
To facilitate future studies of dust colour over small spatial scales, we verify how the difference between true and measured colours depends on filament area and surface brightness. Fig. 11 presents the mentioned dependencies for g − r (for r − i all presented dependencies are qualitatively the same). As one naturally expects, the surface brightness is important and, as the surface brightness increases, the colour measurement error decreases. As can be seen from the figure, for most filaments of 26 mag/arcsec 2 and brighter, the error of colour measurement is smaller than 0.05. Such bright filaments are most likely to be identified by their true colours in Stripe 82. For dim filaments, the error increases rapidly after 26.5 mag/arcsec 2 , reaching about 0.10 at 27 mag/arcsec 2 and about 0.20 at 28 mag/arcsec 2 . As mentioned above, such a large error makes it hard to distinguish the filaments from other sources by their colours in practice. As shown in the lower panel of Fig. 11, increasing the area of the filaments certainly helps too, although the effect is not that prominent when compared to the case of surface brightness.
As an additional test, we performed similar simulations, inserting mock filaments into an artificial field with only noise present (no other sources, no mask, etc.). This simulation is labelled as S2 in Table 1. The only difference between this simulation and the previously considered S1 is the background into which the squares are injected. In the case of S1, the background is Stripe 82 fields, while for S2, the background consists only of artificially created noise. The noise characteristics were selected to reflect the g and r Stripe 82 depth limits, which are µg,lim = 29.2 mag arcsec⁻² and µr,lim = 28.7 mag arcsec⁻², respectively, measured over boxes of 10 arcsec. As can be seen from Fig. 12, in the "ideal" situation with only the noise present, one should retrieve the colours of the filaments with a much higher degree of accuracy than the filaments from the actual Stripe 82 data show.
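A sketch of how the per-pixel noise level can be derived from such depth limits, assuming the common definition of the limiting surface brightness as an nsigma fluctuation of the mean over box-sized regions; `zp` is the photometric zero point of the counts image and is an assumption here:

```python
import numpy as np

def sb_limit_to_pixel_sigma(mu_lim, zp, pix_scale=0.396, box=10.0, nsigma=3.0):
    """Per-pixel Gaussian noise sigma (in counts) that reproduces a depth
    limit mu_lim defined over box x box arcsec (SDSS scale: 0.396''/pix)."""
    f_lim = 10.0 ** (-0.4 * (mu_lim - zp))  # flux per arcsec^2 at mu_lim
    sigma_sb = f_lim / nsigma               # sigma of the mean SB in the box
    n_pix = (box / pix_scale) ** 2          # number of pixels in the box
    # sigma_SB = sigma_pix / (pix_scale^2 * sqrt(n_pix)); invert for sigma_pix
    return sigma_sb * pix_scale ** 2 * np.sqrt(n_pix)
```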
The top panel of Fig. 12 shows that the error in the colours of very faint filaments with µ ≈ 31 mag arcsec⁻² is small (≲ 0.1 mag), despite the fact that such surface brightnesses are clearly below the surface brightness limits introduced earlier. There is actually no contradiction, because the limiting surface brightness values are those typically defined over 10×10 arcsec boxes. However, the simulated filaments have areas that are orders of magnitude larger than the area in which the surface brightness limits are defined. This is clearly seen in the bottom panel of Fig. 12, where the main limiting factor is in fact the area of the filaments. Since the filaments have such a large size, the limiting surface brightnesses in this extremely large area range are very high. For example, the limiting surface brightness of SDSS Stripe 82 at 10×10 arcsec² is 29.2 mag arcsec⁻² in the g band, which translates to 31.2 mag arcsec⁻² at 1×1 arcmin², a typical explored area of the filaments. This surface brightness value is at the upper limit of the magnitudes considered in our tests.
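The quoted rescaling of the limiting surface brightness with the averaging area follows from the fact that the noise of the mean drops as the square root of the area; a worked example:

```python
import numpy as np

def scale_sb_limit(mu_lim, area_ref=100.0, area_new=3600.0):
    """Rescale a limiting surface brightness (defined over area_ref, in
    arcsec^2) to a larger averaging area area_new (also in arcsec^2)."""
    return mu_lim + 2.5 * np.log10(np.sqrt(area_new / area_ref))

# 29.2 mag/arcsec^2 over 10x10 arcsec -> ~31.1 over 1x1 arcmin,
# close to the 31.2 mag/arcsec^2 quoted in the text
print(scale_sb_limit(29.2))
```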
Overall, the comparison of Fig. 11 and Fig. 12 indicates that a significant portion of the error in colour measurement comes from the background into which the squares are injected. Ideally, if the background is processed accurately, this should not be the case. This indicates that the data processing itself is a very important factor for colour measurements. There are many steps to it, including those not carried out in the present work, such as flat-fielding and sky subtraction. It would be an interesting problem to consider how much each of these steps contributes to the overall error, but we do not go further in this direction in the present work.
RESULTS
The 2D distribution of the g − r and r − i colours for the filaments identified in Stripe 82 is presented in Fig. 13. The colours are measured using the Gaussian fitting method described in the previous section. In the same figure, we also depict the results of Román et al. (2020) for their sixteen fields, and the line (r − i) = 0.43 × (g − r) − 0.06, which should separate the colours of cirrus filaments and other extragalactic sources, as Román et al. (2020) suggested. In the subpanels of the figure, we plot individual distributions of the colours, their respective Gaussian approximations (magenta lines), and 3σ limits (thick blue rectangles). From the figure, one can identify two important properties of the colour distribution. First, there is a peak of the density contours at about g − r ≈ 0.6 and r − i ≈ 0.2. Secondly, there is a large spread of values in both g − r and r − i colours: σ is about 0.3 for g − r and 0.4 for r − i.
We note that the peak locations of the 1D distributions, displayed in the side panels of Fig. 13, can be somewhat misleading. For example, the Gaussian of the r − i colours has its peak located at r − i = 0.59. This is significantly greater than the corresponding r − i ≈ 0.2 of the 2D peak. The reason for this is that, for different r − i values, the g − r values are also distributed differently. In the upper part of the plot (r − i ≳ 0.3–0.4), the g − r colours are distributed sparsely for a fixed value of r − i and, thus, no density peak is observed. For r − i ≲ 0.3–0.4, the g − r colours are clustered very closely and there is a density peak. For a fixed value of r − i (for example, for r − i ≈ 0.6 and r − i ≈ 0.4), the total number of filaments is nearly the same in both cases.
In the previous section, we concluded that the colours of dim filaments are significantly affected by various contaminating factors (noise, masking residues, etc.). Therefore, it is only natural to ask to what degree the observed spread of the values corresponds to the real dispersion of the cirrus colours. To answer this question, we consider a sample of mock filaments (squares) with the sizes and surface brightnesses distributed according to the distribution of these properties for real filaments, presented in the left panel of Fig. 7 (in contrast to the sample considered in Section 5, where these properties were uniformly distributed). This simulation is labelled as S3 in Table 1. We carry out the simulation for this sample in the same manner as in Section 5, comparing real and measured colours. For the original colours, we select a uniform distribution within the following limits: 0.5 ≤ g − r ≤ 0.7 and 0.0 ≤ r − i ≤ 0.2. Fig. 14 shows the resulting colours for the sample. The light blue square marks the limits of the original colours. The obtained distribution is qualitatively similar to that for the real filaments. Again, there is a clear density peak at g − r = 0.60, r − i = 0.10 (the average of the originally selected values) and rather extended wings (see the blue rectangles). These wings lie outside of the square of the original colours. This means that the wings arise due to contamination factors and, therefore, do not reflect the real dispersion of the originally selected colours. For the real filaments, we assume that the situation should be qualitatively the same. The large spread of colours displayed in Fig. 13 should be due to the contamination factors discussed in Section 5, and does not reflect a real difference in the cirrus colours. The real variation of the cirrus colours should manifest itself in the structure of the densest part of the distribution.
Since the distributions for real and simulated filaments are still qualitatively similar (the dense part plus the wings), one can try to identify the real dispersion of the cirrus colours by applying some kind of "deconvolution" procedure. We use the following approach. First, we expand our simulations and consider a sample of mock filaments with the colours initially distributed uniformly in a wide range, 0.0 ≲ g − r ≲ 1.5 and −0.5 ≲ r − i ≲ 1.0 (simulation S4 in Table 1). Then we construct a specific function, the purpose of which is to produce 2D density maps of the filament colours on the (g − r, r − i) plane based on the true colours of the filaments. The details are as follows:
(i) First, the function accepts some colour ranges as arguments and finds the filaments in the simulated sample with the original colours within the originally selected limits (Fig. 15, leftmost panel). For simplicity, the selected area has a rectangular shape.
(ii) Secondly, the function assesses the measured colours, which differ from their true colours, and which are distributed in a manner similar to that shown in Fig. 13 and Fig. 14. From the distribution, a smooth density profile is created via the kernel density estimation procedure from the python package sklearn (Fig. 15, second left panel). The resolution of the prepared map is 0.05 along both axes.
(iii) Thirdly, we prepare a similar smooth density map for real filaments (Fig. 15, third left panel).
(iv) At the last step, we find the optimal range of the colours which minimises the sum of square differences between the density map for simulated filaments and the similar map for real filaments. A typical residual map is presented in the rightmost panel of Fig. 15. A minimal sketch of this matching procedure is given after this list.
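The sketch below assumes the mock true/measured colours and the real filament colours are stored as (N, 2) arrays of (g − r, r − i) pairs; the helper names and the brute-force search over candidate rectangles are illustrative:

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def density_map(colours, grid, bandwidth=0.05):
    """Smooth 2D probability density of (g-r, r-i) colours on a fixed grid."""
    kde = KernelDensity(bandwidth=bandwidth).fit(colours)
    return np.exp(kde.score_samples(grid))

def best_colour_range(mock_true, mock_meas, real_colours, boxes, grid):
    """Pick the rectangle of true colours whose mock filaments produce the
    measured-colour density map closest (least squares) to the real one."""
    real_map = density_map(real_colours, grid)
    best, best_cost = None, np.inf
    for gr_lo, gr_hi, ri_lo, ri_hi in boxes:
        sel = ((mock_true[:, 0] >= gr_lo) & (mock_true[:, 0] <= gr_hi) &
               (mock_true[:, 1] >= ri_lo) & (mock_true[:, 1] <= ri_hi))
        cost = np.sum((density_map(mock_meas[sel], grid) - real_map) ** 2)
        if cost < best_cost:
            best, best_cost = (gr_lo, gr_hi, ri_lo, ri_hi), cost
    return best
```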
As a result of the analysis, we find that the closest match to the real observable distribution is produced by filaments with colours in the following ranges: 0.55 ≲ g − r ≲ 0.73 and 0.01 ≲ r − i ≲ 0.33. We marked this area by a light blue rectangle in Fig. 13. As can be seen, almost all clouds from Román et al. (2020) have colours within these limits, except for two of them, which are outside of the region. Thus, for most filaments from our sample, the colours are consistent with those measured by Román et al. (2020), that is, when the colour is averaged over large spatial areas in Stripe 82. As for criterion (1), suggested by Román et al. (2020), most of the rectangle is located below the separating line, although a small part lies above it. Does this mean that the condition is violated? The correct answer is that, given the accuracy of colour estimation for individual filaments (which should be about 0.1 for most filaments, see Fig. 10), it is impossible to say whether this is really the case. Moreover, our approach to identifying the real colours assumes that the colours of the filaments are distributed uniformly, which is, of course, a massive simplification. Thus, we conclude that more precise data is required to verify whether condition (1), suggested by Román et al. (2020), holds on the spatial scale of individual filaments.
An additional argument to support the consistency between our results and those of Román et al. (2020) comes from the analysis of the filament colours depending on the average surface brightness and the area of the filaments. Fig. 16 shows the corresponding distributions. As can be seen, the larger and brighter the filament, the likelier its colours fall within the limits determined by Román et al. (2020).

Figure 15. Finding the dispersion of the colours for the real filaments. Leftmost panel: an initial uniform distribution of colours for mock filaments and a selection of a smaller area (the magenta rectangle) to find an optimal colour range for the real filaments. Second left panel: a smoothed map of the colour distribution produced by the filaments from the magenta square marked in the leftmost panel. Third left panel: a similar smoothed map for the real filaments. Rightmost panel: the residue between the map for real filaments and the best-fit map for the mock filaments. The residual values correspond to the difference between the probability density functions, obtained by properly normalizing both maps. Thus, the units of the colour bar are the units of the probability density function, 1/(mag × mag).
It is also worthwhile to consider the distribution of the cirrus colours over galactic coordinates. In Fig. 17 we present the distribution of the cirrus colours for all filaments over galactic longitude and latitude (left and right columns, respectively). Each odd row shows the distribution as is, while each even row shows the corresponding 2D histogram of the number of filaments, with a bin size of 1 degree along the x-axis and a colour bin of 0.05. As can be seen, there is almost no dependence on the coordinates, which is consistent with the results of Román et al. (2020); thus, we confirm their result for individual filaments. One exception is a clear trend at l ≈ 180 deg, where the cirrus clouds become redder. This is connected with an increase of the dust column density in this region, as shown in Fig. 18, where we present the distribution of the colours depending on the average far-IR emission in the 100 µm IRIS band. A similar tendency was found by Román et al. (2020) for their clouds, and we confirm their result for individual filaments. The density maps presented in Fig. 17 also show that the peaks of the filaments' distribution appear near the values measured by Román et al. (2020).
CONCLUSIONS
In the present work, we studied the colour properties of the optical cirrus in Stripe 82 data. The work is inspired by the study of Román et al. (2020), where the cirrus colour properties were investigated using the same Stripe 82 data, but only for the largest cirrus clouds. Román et al. (2020) manually selected some areas of Stripe 82 that contain cirrus clouds and analysed the distribution of the fluxes and colours of all the pixels in those selected areas, filtering them from all non-cirrus sources of light. Here, we adopted a different approach and tried to identify individual cirrus filaments under the assumption that they can be described as extended objects, the surface brightness of which is greater in each pixel than some specific value determined from the value of the background noise (29 mag arcsec⁻² in the r band for our data). Such a definition of filaments allows one to track the structure of the clouds more accurately and, in particular, to measure the colour variance over filaments that constitute the same cloud.
To identify filaments in Stripe 82, we carried out a masking procedure, then selected all sources with µr < 29 mag arcsec −2 and visually inspected each of these sources to verify whether or not they appear due to the cirrus scattered light. The latter step is required since the data is contaminated by various sources and extended wings of the PSF. As a result, we marked about 6.4 square degrees of the whole Stripe 82 area as the area dominated by the cirrus scattered light.
Since the annotation process of the cirrus is so time-consuming, we tested the possibility of optimising it using machine learning methods. We trained a suitable neural network using the results of manual cirrus annotation as a training sample and analysed how the training setup (encoder model, training strategy, window size, etc.) affects the results of annotation. We found that models with the MobileNetV2 encoder demonstrate the highest performance and an intersection-over-union metric value of IoU = 0.576, which is comparable to the IoU achieved by a human expert (one of the authors). This proves that machine learning methods can be used to solve the problem of cirrus identification, and, in particular, to create catalogues of cirrus filaments such as those presented by Schisano et al. (2020).
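For reference, the IoU metric quoted above is the standard intersection-over-union of binary masks; a minimal sketch:

```python
import numpy as np

def iou(pred_mask, true_mask):
    """Intersection-over-union of two binary cirrus masks."""
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    union = np.logical_or(pred, true).sum()
    return np.logical_and(pred, true).sum() / union if union else 1.0
```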
The resulting sample of identified filaments consists of mostly dim and small features with a typical surface brightness of about µg ≈ 27 mag/arcsec² and an area of about 1 arcmin². Since these values differ by an order of magnitude from those typically considered in previous works, we pay special attention to measuring the optical colours of such features. To this end, we carried out a series of mock simulations, injecting artificial extended sources with a priori known colours into Stripe 82 data. We compared true versus measured colours for such sources and studied the dependence of the measurement error on the surface brightness and area of the filament. As a result, we identified several pitfalls in the analysis of individual filaments, which should be accounted for in future studies of very faint extended objects (including low surface brightness features around galaxies):
(i) The linear fitting method for colour estimation does not allow one to retrieve the actual colours of the filaments. Instead, one should use the Gaussian fitting suggested by Román et al. (2020).
(ii) There is a clear dependence of the colour measurement error on the surface brightness, which is rather expected. However, it is important that the dependence holds only on average: even among bright filaments, some may still have large colour errors, greater than 0.1−0.2. At the same time, for most filaments of 26 mag/arcsec² and brighter, the error of the colour measurement is smaller than 0.05. Such bright filaments are most likely to be identified by their true colours in Stripe 82. For dim filaments, the error increases monotonically up to about 0.2 at 28 mag/arcsec².
(iii) Comparing the colours measured for fields where only noise is present with those from the actual Stripe 82 data, we found that the colour measurement error arises mostly from other factors (flat-fielding, background subtraction, etc.), not from the noise itself.
As for the optical colours of the filaments distinguished in Stripe 82 data, we found the following. Observed as is, the distribution of the g − r and r − i colours shows a large spread of values arising from large errors due to the contaminating factors, not from the real dispersion of the filaments' colours. At the same time, for most filaments, the colours cluster at some specific values. The comparison of the results of the mock simulations and the data for real filaments indicates that the real colours of the identified filaments should occupy the following ranges: 0.55 ≲ g − r ≲ 0.73 and 0.01 ≲ r − i ≲ 0.33. These ranges are mostly consistent with those previously found by Román et al. (2020). The colours of the filaments also show a tendency to approach the values measured by Román et al. (2020) as the surface brightness or filament area increases.
Overall, the present work provides a useful framework for a future analysis of the upcoming deep optical surveys like Euclid (Laureijs et al. 2011) or the Vera C. Rubin Observatory (former LSST, LSST Science Collaboration et al. 2009). We expect that Galactic cirrus filaments can be identified and studied in these surveys using similar techniques to those developed in this work. This paper has used archival data from the Herschel mission. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
DATA AVAILABILITY
The data underlying this article will be shared on reasonable request to the corresponding author. The catalogue of distinguished filaments is available online at https://physics.byu.edu/faculty/mosenkov/data.

Table A1. Results gathered on the conducted training experiments. It lists the encoder model (MobileNetV2, ResNet50V2, U-Net), training strategy («training from scratch», «transfer learning», «fine-tuning»), window size, input tensor spatial size, number of annotated classes, class weights for background, cirrus, and other extended sources if 3 classes are considered, learning rate, and IoU, precision, and recall over all test fields for the cirrus class. To train all models, we selected 200 square windows from each of the 200 training fields and 100 windows from each of the 50 validation fields.
Figure 2. Different stages of the cirrus segmentation applied to one Stripe 82 field. Panels are: a) the original field image in the r-band; b) mask of background and foreground objects generated using our neural network; c) masked original image with enhanced low surface brightness structures; d) image segments brighter than the 29th isophote in the r-band; e) cirrus/artefacts segmentation results; f) neural network generated cirrus mask; g) model of the scattered stellar light for bright stars; h) original masked image after the stars' model subtraction. See text for detailed information on the whole pipeline.

Figure 3. The encoder-decoder architecture used in this work. […] each 64-component feature vector to the required number of classes.

Figure 4. Colour-coded correlation coefficients between Herschel and Stripe 82 r band (top panel) and GALEX and Stripe 82 r band (bottom panel) for filaments of the cloud observed at α ≈ 2.5°, δ ≈ −0.25°. White areas correspond to the masked pixels. The green cross marks the filament that appears prominently in the IR data while not showing in the UV data.

Figure 5. Cirrus-rich area at α = 55°−60° containing about one hundred Stripe 82 fields (∼5 square degrees): intensity map (top panel) and cirrus map created by the neural network (bottom panel).

Figure 6. Distribution of identified filaments across the sky plane. Each small rectangle in this map corresponds to one of the original Stripe 82 fields, each with an area of about 900 arcmin², while the colour of the rectangle corresponds to the total area within the field marked as dominated by cirrus.

Figure 7. Left panel: distribution of filaments over the average surface brightness in the g band and the cloud area. Right panel: distributions of the correlation coefficients between the fluxes for all filaments (red and brown lines) and a subsample of filaments with p-value smaller than 0.01 for the measured correlation (blue and green lines).

Figure 8. Approaches used for measuring the colours of the filaments in the present work applied to the largest filament from Fig. 4. Two left panels: the linear fitting method. Two right panels: Gaussian fitting of the colour distributions of the filament pixels.

Figure 12. Same as Fig. 11, but for simulated filaments with wider ranges of area and surface brightness, inserted into a clean field where only Gaussian noise is present, without other sources, mask, etc.

Figure 13. Distribution of the cirrus colours (red points) from the present work and Román et al. (2020) (green points). The blue line (r − i) = 0.43 × (g − r) − 0.06 should separate the cirrus colours and the colours of extragalactic sources, as suggested by Román et al. (2020). The light blue rectangle marks the estimated dispersion of true cirrus colours (see the main text). Blue bars mark 3σ limits of the corresponding distributions.

Figure 14. Same as Fig. 13, but for a sample of simulated filaments, the true colours of which are uniformly distributed within the area marked by a light blue square.

Figure 16. 2D distributions of surface brightness (left) and area (right) depending on the cirrus colours.

Figure 17. Distributions of the cirrus g − r (the first two rows) and r − i (the third and fourth rows) colours depending on the galactic longitude (left column) and latitude (right column). First and third rows show usual scatter plots. Second and fourth rows show the corresponding 2D histograms of the number of filaments.

Figure 18. Distributions of the cirrus colours depending on the average far-IR emission in the 100 µm IRIS band. Green points mark the filaments observed in the region with l < 180 deg, while red points mark those with l ≥ 180 deg.
National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatário Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
Encoder model | Training strategy     | w (pixel) | w_in (pixel) | n_c | ω_c    | lr    | IoU   | precision | recall
MobileNetV2   | training from scratch | 128       | 128          | 2   | (1, 1) | 0.001 | 0.261 | 0.846     | 0.274
MobileNetV2   | training from scratch | 128       | 128          | 2   | (1, 2) | 0.001 | 0.437 | 0.702     | 0.536
MobileNetV2   | training from scratch | 128       | 128          | 2   | (1, 4) | 0.001 | 0.416 | 0.64      | …
http://research.iac.es/proyecto/stripe82/pages/advanced-data-products/the-sdss-extended-psfs.php
2 http://thetractor.org/
https://bitbucket.org/PolyakovD/cirrus_segmentation/src/master/
ACKNOWLEDGEMENTS

We acknowledge financial support from the Russian Science Foundation (grant no. 20-72-10052). Within this grant we performed the following parts of the work: the data analysis (including manual and automatic cirrus segmentation), software development, the creation of the neural network, colour measurements, and the numerical tests. JR acknowledges support from the State Research Agency (AEI-MCINN) of the Spanish Ministry of Science and Innovation under the grant "The structure and evolution of galaxies and their central regions" with reference PID2019-105602GB-I00/10.13039/501100011033. JR also acknowledges funding from the University of La Laguna through the Margarita Salas Program from the Spanish Ministry of Universities ref. UNI/551/2021-May 26, and under the EU Next Generation. The funding of these grants was used for the data preparation.

We also thank the anonymous referee for the review and appreciate the comments, which allowed us to improve the quality of the publication.

Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS website is www.sdss.org. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics
REFERENCES

Abadi M., et al., 2015, TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems, https://www.tensorflow.org/
Abazajian K. N., et al., 2009, ApJS, 182, 543 (doi:10.1088/0067-0049/182/2/543)
Akshaya M. S., Murthy J., Ravichandran S., Henry R. C., Overduin J., 2019, MNRAS, 489, 1120 (doi:10.1093/mnras/stz2186)
Andersson J., Ahlström H., Kullberg J., 2019, Magnetic Resonance in Medicine, 82, 1177
Barrena R., et al., 2018, A&A, 616, A42 (doi:10.1051/0004-6361/201732315)
Bazell D., Desert F. X., 1988, ApJ, 333, 353 (doi:10.1086/166751)
Beattie J. R., Federrath C., Klessen R. S., 2019a, MNRAS, 487, 2070 (doi:10.1093/mnras/stz1416)
Beattie J. R., Federrath C., Klessen R. S., Schneider N., 2019b, MNRAS, 488, 2493 (doi:10.1093/mnras/stz1853)
Bertin E., Arnouts S., 1996, A&AS, 117, 393 (doi:10.1051/aas:1996164)
Boissier S., et al., 2015, A&A, 579, A29 (doi:10.1051/0004-6361/201526089)
Brandt T. D., Draine B. T., 2012, ApJ, 744, 129 (doi:10.1088/0004-637X/744/2/129)
Chellew B., Brandt T. D., Hensley B. S., Draine B. T., Matthaey E., 2022, arXiv e-prints, p. arXiv:2201.01378
Ching T., et al., 2017, bioRxiv (doi:10.1101/142760)
Cortese L., Bendo G. J., Isaak K. G., Davies J. I., Kent B. R., 2010, MNRAS, 403, L26 (doi:10.1111/j.1745-3933.2009.00808.x)
Davies J. I., et al., 2010, MNRAS, 409, 102 (doi:10.1111/j.1365-2966.2010.17774.x)
Deng J., Dong W., Socher R., Li L.-J., Li K., Fei-Fei L., 2009, in 2009 IEEE Conference on Computer Vision and Pattern Recognition. pp 248-255
Duc P.-A., Cuillandre J.-C., Renaud F., 2018, MNRAS, 475, L40 (doi:10.1093/mnrasl/sly004)
Elmegreen B. G., Falgarone E., 1996, ApJ, 471, 816 (doi:10.1086/178009)
Falgarone E., Phillips T. G., Walker C. K., 1991, ApJ, 378, 186 (doi:10.1086/170419)
Federrath C., Klessen R. S., Schmidt W., 2009, ApJ, 692, 364 (doi:10.1088/0004-637X/692/1/364)
Fliri J., Trujillo I., 2016, MNRAS, 456, 1359 (doi:10.1093/mnras/stv2686)
Gillmon K., Shull J. M., 2006, ApJ, 636, 908 (doi:10.1086/498055)
Guhathakurta P., Tyson J. A., 1989, ApJ, 346, 773 (doi:10.1086/168058)
Haikala L. K., Mattila K., Bowyer S., Sasseen T. P., Lampton M., Knude J., 1995, ApJ, 443, L33 (doi:10.1086/187829)
He K., Zhang X., Ren S., Sun J., 2016, arXiv e-prints, p. arXiv:1603.05027
Hennebelle P., 2013, A&A, 556, A153 (doi:10.1051/0004-6361/201321292)
Hetem A. J., Lepine J. R. D., 1993, A&A, 270, 451
Ienaka N., Kawara K., Matsuoka Y., Sameshima H., Oyabu S., Tsujimoto T., Peterson B. A., 2013, ApJ, 767, 80 (doi:10.1088/0004-637X/767/1/80)
Iglovikov V., Mushinskiy S., Osin V., 2017a, arXiv e-prints, p. arXiv:1706.06169
Iglovikov V., Rakhlin A., Kalinin A., Shvets A., 2017b, arXiv e-prints, p. arXiv:1712.05053
Infante-Sainz R., Trujillo I., Román J., 2020, MNRAS, 491, 5317 (doi:10.1093/mnras/stz3111)
Ing N., Ma Z., Li J., Salemi H., Arnold C., Knudsen B. S., Gertych A., 2018a, in Tomaszewski J. E., Gurcan M. N., eds, Vol. 10581, Medical Imaging 2018: Digital Pathology. SPIE, pp 343-355 (doi:10.1117/12.2293000)
Ing N., Ma Z., Li J., Salemi H., Arnold C., Knudsen B. S., Gertych A., 2018b, in Medical Imaging 2018: Digital Pathology. p. 105811B (doi:10.1117/12.2293000)
Isola P., Zhu J.-Y., Zhou T., Efros A. A., 2016, arXiv e-prints, p. arXiv:1611.07004
Juvela M., Malinen J., Montillaud J., Pelkonen V. M., Ristorcelli I., Tóth L. V., 2018, A&A, 614, A83 (doi:10.1051/0004-6361/201630304)
Kandel M. E., et al., 2020, Nature Communications, 11
Karabal E., Duc P. A., Kuntschner H., Chanial P., Cuillandre J. C., Gwyn S., 2017, A&A, 601, A86 (doi:10.1051/0004-6361/201629974)
Kiss C., Ábrahám P., Klaas U., Juvela M., Lemke D., 2001, A&A, 379, 1161 (doi:10.1051/0004-6361:20011394)
Kiss C., Ábrahám P., Klaas U., Lemke D., Héraudeau P., del Burgo C., Herbstmeier U., 2003, A&A, 399, 177 (doi:10.1051/0004-6361:20021787)
Koch E. W., Rosolowsky E. W., 2015, MNRAS, 452, 3435 (doi:10.1093/mnras/stv1521)
Konstandin L., Schmidt W., Girichidis P., Peters T., Shetty R., Klessen R. S., 2016, MNRAS, 460, 4483 (doi:10.1093/mnras/stw1313)
Kowal G., Lazarian A., 2007, ApJ, 666, L69 (doi:10.1086/521788)
Koyama H., Inutsuka S.-I., 2000, The Astrophysical Journal, 532, 980 (doi:10.1086/308594)
Lang D., Hogg D. W., Mykytyn D., 2016, The Tractor: Probabilistic astronomical source detection and measurement, Astrophysics Source Code Library, record ascl:1604.008
Laureijs R., et al., 2011, arXiv e-prints, p. arXiv:1110.3193
Low F. J., et al., 1984, ApJ, 278, L19 (doi:10.1086/184213)
Marchuk A. A., Smirnov A. A., Mosenkov A. V., Il'in V. B., Gontcharov G. A., Savchenko S. S., Román J., 2021, MNRAS, 508, 5825 (doi:10.1093/mnras/stab2846)
Martin D. C., et al., 2005, ApJ, 619, L1 (doi:10.1086/426387)
Martin P. G., et al., 2010, A&A, 518, L105 (doi:10.1051/0004-6361/201014684)
Mattila K., 1979, A&A, 78, 253
Men'shchikov A., 2013, A&A, 560, A63 (doi:10.1051/0004-6361/201321885)
Miville-Deschênes M.-A., Lagache G., 2005, ApJS, 157, 302 (doi:10.1086/427938)
Miville-Deschênes M. A., Duc P. A., Marleau F., Cuillandre J. C., Didelon P., Gwyn S., Karabal E., 2016, A&A, 593, A4 (doi:10.1051/0004-6361/201628503)
Molinari S., et al., 2010, Publications of the Astronomical Society of the Pacific, 122, 314 (doi:10.1086/651314)
Murthy J., 2014, ApJS, 213, 32 (doi:10.1088/0067-0049/213/2/32)
Nagai T., Inutsuka S.-i., Miyama S. M., 1998, The Astrophysical Journal, 506, 306 (doi:10.1086/306249)
Nazem F., Ghasemi F., Fassihi A., Dehnavi A. M., 2021, Journal of Bioinformatics and Computational Biology, p. 2150006
Padoan P., Juvela M., Goodman A. A., Nordlund Å., 2001, ApJ, 553, 227 (doi:10.1086/320636)
Pénin A., et al., 2012, A&A, 543, A123 (doi:10.1051/0004-6361/201015929)
Planck Collaboration et al., 2011, A&A, 536, A22 (doi:10.1051/0004-6361/201116481)
Planck Collaboration et al., 2016, A&A, 586, A135 (doi:10.1051/0004-6361/201425044)
Poliakov D., Mosenkov A. V., Brosch N., Koriski S., Rich R. M., 2021, MNRAS, 503, 6059 (doi:10.1093/mnras/stab853)
Rich R. M., et al., 2019, MNRAS, 490, 1539 (doi:10.1093/mnras/stz2106)
Román J., Trujillo I., 2018, Research Notes of the American Astronomical Society, 2, 144 (doi:10.3847/2515-5172/aad8b8)
Román J., Trujillo I., Montes M., 2020, A&A, 644, A42 (doi:10.1051/0004-6361/201936111)
Ronneberger O., Fischer P., Brox T., 2015, arXiv e-prints, p. arXiv:1505.04597
Różański T., Niemczura E., Lemiesz J., Posiłek N., Różański P., 2022, A&A, 659, A199 (doi:10.1051/0004-6361/202141480)
Rudick C. S., Mihos J. C., Harding P., Feldmeier J. J., Janowiecki S., Morrison H. L., 2010, ApJ, 720, 569 (doi:10.1088/0004-637X/720/1/569)
Salji C. J., et al., 2015, MNRAS, 449, 1782 (doi:10.1093/mnras/stv369)
Sánchez N., Alfaro E. J., Pérez E., 2005, ApJ, 625, 849 (doi:10.1086/429553)
Sandage A., 1976, AJ, 81, 954 (doi:10.1086/111975)
Sandin C., 2014, A&A, 567, A97 (doi:10.1051/0004-6361/201423429)
Sandler M., Howard A., Zhu M., Zhmoginov A., Chen L.-C., 2018, arXiv e-prints, p. arXiv:1801.04381
Schisano E., et al., 2020, MNRAS, 492, 5420 (doi:10.1093/mnras/stz3466)
Soler J. D., et al., 2022, A&A, 662, A96 (doi:10.1051/0004-6361/202243334)
Sollima A., Gil de Paz A., Martinez-Delgado D., Gabany R. J., Gallego-Laborda J. J., Hallas T., 2010, A&A, 516, A83 (doi:10.1051/0004-6361/201014085)
Sujatha N. V., Murthy J., Suresh R., Conn Henry R., Bianchi L., 2010, ApJ, 723, 1549 (doi:10.1088/0004-637X/723/2/1549)
Trujillo I., Fliri J., 2016, ApJ, 823, 123 (doi:10.3847/0004-637X/823/2/123)
de Vaucouleurs G., 1955, The Observatory, 75, 129
de Vaucouleurs G., 1960, The Observatory, 80, 106
de Vaucouleurs G., Freeman K. C., 1972, Vistas in Astronomy, 14, 163 (doi:10.1016/0083-6656(72)90026-8)
Vazquez-Semadeni E., Gomez G. C., Jappsen A. K., Ballesteros-Paredes J., Gonzalez R. F., Klessen R. S., 2007, The Astrophysical Journal, 657, 870 (doi:10.1086/510771)
Viero M. P., et al., 2014, ApJS, 210, 22 (doi:10.1088/0067-0049/210/2/22)
Vogelaar M. G. R., Wakker B. P., 1994, A&A, 291, 557
Vojtekova A., Lieu M., Valtchanov I., Altieri B., Old L., Chen Q., Hroch F., 2021, MNRAS, 503, 3204 (doi:10.1093/mnras/staa3567)
de Vries C. P., Le Poole R. S., 1985, A&A, 145, L7
de Vries H. W., Heithausen A., Thaddeus P., 1987, ApJ, 319, 723 (doi:10.1086/165492)
Weiland J. L., Blitz L., Dwek E., Hauser M. G., Magnani L., Rickard L. J., 1986, ApJ, 306, L101 (doi:10.1086/184714)
York D. G., et al., 2000, The Astronomical Journal, 120, 1579 (doi:10.1086/301513)
Zubko V., Dwek E., Arendt R. G., 2004, The Astrophysical Journal Supplement Series, 152, 211 (doi:10.1086/382351)
| []
|
[
"CONFIDENCE-BASED FEATURE IMPUTATION FOR GRAPHS WITH PARTIALLY KNOWN FEATURES",
"CONFIDENCE-BASED FEATURE IMPUTATION FOR GRAPHS WITH PARTIALLY KNOWN FEATURES",
"CONFIDENCE-BASED FEATURE IMPUTATION FOR GRAPHS WITH PARTIALLY KNOWN FEATURES",
"CONFIDENCE-BASED FEATURE IMPUTATION FOR GRAPHS WITH PARTIALLY KNOWN FEATURES"
]
| [
"Daeho Um [email protected] \nDepartment of Electrical and Computer Engineering\nASRI Seoul National University\n\n",
"Jiwoong Park \nDepartment of Electrical and Computer Engineering\nASRI Seoul National University\n\n",
"Seulki Park [email protected] \nDepartment of Electrical and Computer Engineering\nASRI Seoul National University\n\n",
"Jin Young Choi [email protected] \nDepartment of Electrical and Computer Engineering\nASRI Seoul National University\n\n",
"Daeho Um [email protected] \nDepartment of Electrical and Computer Engineering\nASRI Seoul National University\n\n",
"Jiwoong Park \nDepartment of Electrical and Computer Engineering\nASRI Seoul National University\n\n",
"Seulki Park [email protected] \nDepartment of Electrical and Computer Engineering\nASRI Seoul National University\n\n",
"Jin Young Choi [email protected] \nDepartment of Electrical and Computer Engineering\nASRI Seoul National University\n\n"
]
| [
"Department of Electrical and Computer Engineering\nASRI Seoul National University\n",
"Department of Electrical and Computer Engineering\nASRI Seoul National University\n",
"Department of Electrical and Computer Engineering\nASRI Seoul National University\n",
"Department of Electrical and Computer Engineering\nASRI Seoul National University\n",
"Department of Electrical and Computer Engineering\nASRI Seoul National University\n",
"Department of Electrical and Computer Engineering\nASRI Seoul National University\n",
"Department of Electrical and Computer Engineering\nASRI Seoul National University\n",
"Department of Electrical and Computer Engineering\nASRI Seoul National University\n"
]
| []
| This paper investigates a missing feature imputation problem for graph learning tasks. Several methods have previously addressed learning tasks on graphs with missing features. However, in cases of high rates of missing features, they were unable to avoid significant performance degradation. To overcome this limitation, we introduce a novel concept of channel-wise confidence in a node feature, which is assigned to each imputed channel feature of a node for reflecting certainty of the imputation. We then design pseudo-confidence using the channel-wise shortest path distance between a missing-feature node and its nearest known-feature node to replace unavailable true confidence in an actual learning process. Based on the pseudo-confidence, we propose a novel feature imputation scheme that performs channel-wise inter-node diffusion and node-wise inter-channel propagation. The scheme can endure even at an exceedingly high missing rate (e.g., 99.5%) and it achieves state-of-the-art accuracy for both semi-supervised node classification and link prediction on various datasets containing a high rate of missing features. Codes are available at https://github.com/daehoum1/pcfi. Michael Bronstein. On the unreasonable effectiveness of feature propagation in learning on graphs with missing node features. arXiv preprint arXiv: | 10.48550/arxiv.2305.16618 | [
"https://export.arxiv.org/pdf/2305.16618v2.pdf"
]
| 258,947,197 | 2305.16618 | 132009e5f1d256b8806442bbf3924946b32a4b5a |
CONFIDENCE-BASED FEATURE IMPUTATION FOR GRAPHS WITH PARTIALLY KNOWN FEATURES
Daeho Um [email protected]
Department of Electrical and Computer Engineering
ASRI Seoul National University
Jiwoong Park
Department of Electrical and Computer Engineering
ASRI Seoul National University
Seulki Park [email protected]
Department of Electrical and Computer Engineering
ASRI Seoul National University
Jin Young Choi [email protected]
Department of Electrical and Computer Engineering
ASRI Seoul National University
CONFIDENCE-BASED FEATURE IMPUTATION FOR GRAPHS WITH PARTIALLY KNOWN FEATURES
Published as a conference paper at ICLR 2023
This paper investigates a missing feature imputation problem for graph learning tasks. Several methods have previously addressed learning tasks on graphs with missing features. However, in cases of high rates of missing features, they were unable to avoid significant performance degradation. To overcome this limitation, we introduce a novel concept of channel-wise confidence in a node feature, which is assigned to each imputed channel feature of a node for reflecting certainty of the imputation. We then design pseudo-confidence using the channel-wise shortest path distance between a missing-feature node and its nearest known-feature node to replace unavailable true confidence in an actual learning process. Based on the pseudo-confidence, we propose a novel feature imputation scheme that performs channel-wise inter-node diffusion and node-wise inter-channel propagation. The scheme can endure even at an exceedingly high missing rate (e.g., 99.5%) and it achieves state-of-the-art accuracy for both semi-supervised node classification and link prediction on various datasets containing a high rate of missing features. Codes are available at https://github.com/daehoum1/pcfi.
INTRODUCTION
In recent years, graph neural networks (GNNs) have received considerable attention and have performed outstandingly on numerous problems across multiple fields (Zhou et al., 2020;Wu et al., 2020). While various GNNs handling attributed graphs are designed for node representation (Defferrard et al., 2016;Kipf & Welling, 2016a;Veličković et al., 2017;Xu et al., 2018) and graph representation learning (Kipf & Welling, 2016b;Sun et al., 2019;Velickovic et al., 2019), GNN models typically assume that features of all nodes are fully observed. In real-world situations, however, features in graph-structured data are often partially observed, as illustrated in the following cases. First, collecting complete data for a large graph is prohibitively expensive or even impossible. Second, measurement failure is common. Third, in social networks, most users desire to protect their personal information selectively. As data security regulation continues to tighten around the world (GDPR), access to full data is expected to become increasingly difficult. Under these circumstances, most GNNs cannot be applied directly due to incomplete features.
Several methods have been proposed to solve learning tasks on graphs containing missing features (Jiang & Zhang, 2020; Chen et al., 2020; Taguchi et al., 2021), but they suffer from significant performance degradation at high rates of missing features. A recent work by Rossi et al. (2021) demonstrated improved performance by introducing feature propagation (FP), which iteratively propagates known features among the nodes along edges. However, even FP cannot avoid a considerable accuracy drop at an extremely high missing rate (e.g., 99.5%). We assume that this is because FP performs graph diffusion over undirected edges. Consequently, in FP, message passing between two nodes occurs with the same strength regardless of the direction. Moreover, FP only diffuses observed features channel-wise, which means that it does not consider any relationship between channels. Therefore, to better impute missing features in a graph, we propose to consider both inter-channel and inter-node relationships so that we can effectively exploit the sparsely known features. To this end, we design an elaborate feature imputation scheme that includes two processes. The first process is feature recovery via channel-wise inter-node diffusion, and the second is feature refinement via node-wise inter-channel propagation. The first process diffuses features by assigning different importance to each recovered channel feature, in contrast to standard diffusion. To this end, we introduce a novel concept of channel-wise confidence, which reflects the quality of channel feature recovery. This confidence is also used in the second process for channel feature refinement based on highly confident features by utilizing the inter-channel correlation.
The true confidence in a missing channel feature is inaccessible without every actual feature. Thus, we define pseudo-confidence for use in our scheme instead of true confidence. Using channel-wise confidence further refines the less confident channel feature by aggregating the highly confident channel features in each node or through the highly confident channel features diffused from neighboring nodes.
The key contribution of our work is summarized as follows: (1) we propose a new concept of channel-wise confidence that represents the quality of a recovered channel feature. (2) We design a method to provide pseudo-confidence that can be used in place of unavailable true confidence in a missing channel feature. (3) Based on the pseudo-confidence, we propose a novel feature imputation scheme that achieves the state-of-the-art performance for node classification and link prediction even in an extremely high rate (e.g., 99.5%) of missing features.
RELATED WORK
LEARNING ON GRAPHS WITH MISSING NODE FEATURES
The problem of missing data has been widely investigated in the literature (Allison, 2001; Loh & Wainwright, 2011; Little & Rubin, 2019; You et al., 2020). Recently, focusing on graph-structured data with pre-defined connectivity, there have been several attempts to learn on graphs with missing node features. Monti et al. (2017) proposed recurrent multi-graph convolutional neural networks (RMGCNN) and separable RMGCNN (sRMGCNN), a scalable version of RMGCNN. The structure-attribute transformer (SAT) (Chen et al., 2020) models the joint distribution of graph structures and node attributes through distribution techniques, then completes missing node attributes. GCN for missing features (GCNMF) (Taguchi et al., 2021) adapts graph convolutional networks (GCN) (Kipf & Welling, 2016a) to graphs that contain missing node features by representing the missing features with a Gaussian mixture model. Meanwhile, the partial graph neural network (PaGNN) (Jiang & Zhang, 2020) leverages a partial message-propagation scheme that considers only known features during propagation. However, these methods suffer large performance degradation at high feature missing rates. Feature propagation (FP) (Rossi et al., 2021) reconstructs missing features by diffusing known features. However, in the diffusion of FP, a missing feature is formed by aggregating features from neighboring nodes regardless of whether a feature is known or inferred. Moreover, FP does not consider any interdependency among feature channels. To utilize relationships among channels, we construct a correlation matrix of recovered features and additionally refine the features.
DISTANCE ENCODING
Distance encoding (DE) on graphs defines extra features using the distance from a node to the node set where the prediction is made. Zhang & Chen (2018) extract a local enclosing subgraph around each target node pair, and use a GNN to learn graph structure features for link prediction. Li et al. (2020) exploit structure-related features called DE that encode the distance between a node and its neighboring node set with graph-distance measures (e.g., shortest path distance or generalized PageRank scores (Li et al., 2019)). Zhang et al. (2021) unify the aforementioned techniques into a labeling trick. The heterogeneous graph neural network (HGNN) (Ji et al., 2021) proposes a heterogeneous distance encoding in consideration of multiple types of paths in enclosing subgraphs of heterogeneous graphs. Distance encoding in existing methods improves the representation power of GNNs. We use distance encoding to distinguish missing features based on the shortest path distance from a missing feature to known features in the same channel.
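For illustration, the shortest path distance from every node to its nearest "source" (known-feature) node can be computed with a multi-source BFS; a sketch, where, per channel, the sources would be the nodes whose value in that channel is observed (the function name is ours, not from the paper):

```python
from collections import deque

def spd_to_sources(adj, sources):
    """Shortest path distance from each node to its nearest source node,
    via multi-source BFS. adj: dict mapping a node to its neighbours."""
    dist = {s: 0 for s in sources}
    queue = deque(sources)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist
```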
GRAPH DIFFUSION
Diffusion on graphs spreads the feature of each node to its neighboring nodes along the edges (Coifman & Lafon, 2006; Shuman et al., 2013; Guille et al., 2013). There are two types of transition matrices commonly used for diffusion on graphs: the symmetric transition matrix (Kipf & Welling, 2016a; Klicpera et al., 2019; Rossi et al., 2021) and the random walk matrix (Page et al., 1999; Chung, 2007; Perozzi et al., 2014; Grover & Leskovec, 2016; Atwood & Towsley, 2016; Klicpera et al., 2018; Lim et al., 2021). While these matrices work well for their target tasks, from a node's perspective, the sum of the edge weights for aggregating features is in general not one. Therefore, since features are not updated at the same scale as the original features, these matrices are not suitable for missing feature recovery.

Figure 1: Overall scheme of the proposed Pseudo-Confidence-based Feature Imputation (PCFI) method. Based on the graph structure and partially known features, we calculate the channel-wise shortest path distance between a node with a missing feature and its nearest source node (SPD-S). Based on SPD-S, we determine the pseudo-confidence in the recovered feature, using a predetermined hyper-parameter α (0 < α < 1). Pseudo-confidence plays an important role in the two stages: channel-wise inter-node diffusion and node-wise inter-channel propagation.
PROPOSED METHOD
OVERVIEW
We address graph learning tasks containing missing node features. To demonstrate the effectiveness of our feature imputation, we target two main graph learning tasks. The first, semi-supervised node classification, is to infer the labels of the unlabeled nodes from the partially known features/labels and the fully known graph structure. The second, link prediction, is to predict whether two nodes are likely to share a link. Figure 1 depicts the overall scheme of the proposed feature imputation. Our key idea is to assign a different pseudo-confidence to each imputed channel feature. To this end, the proposed imputation scheme includes two processes: the first is feature recovery via channel-wise inter-node diffusion, and the second is feature refinement via node-wise inter-channel propagation. The imputed features obtained from the two processes are used for downstream tasks via off-the-shelf GNNs.
In Sec. 3.2, we begin by introducing the notations used in this paper. In Sec. 3.3, we outline the proposed PC (pseudo-confidence)-based feature imputation (PCFI) scheme that imputes missing node features. We then propose a method to determine the pseudo-confidence in Sec. 3.4. In Sec. 3.5, we present channel-wise inter-node diffusion that iteratively propagates known features with consideration of PC. In Sec. 3.6, we present node-wise inter-channel propagation that adjusts features based on correlation coefficients between channels.
NOTATIONS
Basic notation on graphs. An undirected connected graph is represented as $G = (\mathcal{V}, \mathcal{E}, A)$, where $\mathcal{V} = \{v_i\}_{i=1}^{N}$ is the set of $N$ nodes, $\mathcal{E}$ is the edge set with $(v_i, v_j) \in \mathcal{E}$, and $A \in \{0,1\}^{N \times N}$ denotes an adjacency matrix. $X = [x_{i,d}] \in \mathbb{R}^{N \times F}$ is a node feature matrix with $N$ nodes and $F$ channels, i.e., $x_{i,d}$ is the $d$-th channel feature value of the node $v_i$. $\mathcal{N}(v_i)$ denotes the set of neighbors of $v_i$. Given an arbitrary matrix $M \in \mathbb{R}^{n \times m}$, we let $M_{i,:}$ denote the $i$-th row vector of $M$, and $M_{:,j}$ the $j$-th column vector of $M$.
Notation for graphs with missing node features. As we assume that partial or even very few node features are known, we define $\mathcal{V}_k^{(d)}$ as the set of nodes whose $d$-th channel feature values are known (k in $\mathcal{V}_k^{(d)}$ means 'known'). The set of nodes with unknown $d$-th channel feature values is denoted by $\mathcal{V}_u^{(d)} = \mathcal{V} \setminus \mathcal{V}_k^{(d)}$. Then $\mathcal{V}_k^{(d)}$ and $\mathcal{V}_u^{(d)}$ are referred to as source nodes and missing nodes, respectively. By reordering the nodes according to whether a feature value is known or not for the $d$-th channel, we can write the graph signal for the $d$-th channel features and the adjacency matrix as

$$x^{(d)} = \begin{bmatrix} x_k^{(d)} \\ x_u^{(d)} \end{bmatrix}, \qquad A^{(d)} = \begin{bmatrix} A_{kk}^{(d)} & A_{ku}^{(d)} \\ A_{uk}^{(d)} & A_{uu}^{(d)} \end{bmatrix}.$$

Here, $x^{(d)}$, $x_k^{(d)}$, and $x_u^{(d)}$ are column vectors that represent the corresponding graph signal. Since the graph is undirected, $A^{(d)}$ is symmetric and thus $(A_{ku}^{(d)})^\top = A_{uk}^{(d)}$. Note that $A^{(d)}$ differs from $A$ due to the reordering, while they represent the same graph structure. $\tilde{X} = [\tilde{x}_{i,d}]$ denotes the features recovered for $X$ from $\{x_k^{(d)}\}_{d=1}^F$ and $\{A^{(d)}\}_{d=1}^F$.
PC-BASED FEATURE IMPUTATION
The proposed PC-based feature imputation (PCFI) scheme leverages the shortest path distance between nodes to compute pseudo-confidence. PCFI consists of two stages: channel-wise inter-node diffusion and node-wise inter-channel propagation. The first stage, channel-wise inter-node diffusion, finds $\tilde{X}$ (the recovered features for $X$) through PC-based feature diffusion on a given graph $G$. The second stage, node-wise inter-channel propagation, refines $\tilde{X}$ into the final imputed features $\hat{X}$ by considering PC and the correlation between channels.

To perform node classification and link prediction, a GNN is trained with the imputed node features $\hat{X}$. In this work, PCFI is designed to perform the downstream tasks well. However, since PCFI is independent of the type of learning task, it is not limited to these two tasks; therefore, it can be applied to various graph learning tasks with missing node features.
Formally, the proposed framework can be expressed as

$$\tilde{X} = f_1\big(\{x_k^{(d)}\}_{d=1}^F, \{A^{(d)}\}_{d=1}^F\big), \tag{1a}$$
$$\hat{X} = f_2(\tilde{X}), \tag{1b}$$
$$\hat{Y} = g_\theta(\hat{X}, A), \tag{1c}$$
where $f_1$ is channel-wise inter-node diffusion, $f_2$ is node-wise inter-channel propagation, and $\hat{Y}$ is a prediction for the desired output of a given task. Here, PCFI is expressed as $f_2 \circ f_1$, and any GNN architecture can be adopted as $g_\theta$ according to the type of task.
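To make the composition in (1a)-(1c) concrete, the following minimal sketch (not the authors' released code; `f1`, `f2`, and `g_theta` are placeholder callables) shows how the two imputation stages feed a downstream GNN:

```python
def pcfi_pipeline(x, adj, mask, f1, f2, g_theta):
    """Compose PCFI with a downstream GNN, following Eqs. (1a)-(1c).

    x:       [N, F] feature matrix with missing entries zeroed
    adj:     graph structure (e.g., adjacency or edge_index)
    mask:    [N, F] boolean tensor, True where a feature is known
    f1:      channel-wise inter-node diffusion (Sec. 3.5)
    f2:      node-wise inter-channel propagation (Sec. 3.6)
    g_theta: any GNN taking (features, graph) for the downstream task
    """
    x_tilde = f1(x, adj, mask)   # Eq. (1a): recover missing features per channel
    x_hat = f2(x_tilde)          # Eq. (1b): refine via channel correlations
    return g_theta(x_hat, adj)   # Eq. (1c): task prediction
```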
PSEUDO-CONFIDENCE
We begin by defining the concept of confidence in the recovered feature $\tilde{x}_{i,d}$ of a node $v_i$ for channel $d$ in the first process.

Definition 1. Confidence in the recovered channel feature $\tilde{x}_{i,d}$ is defined by the similarity between $\tilde{x}_{i,d}$ and the true $x_{i,d}$, which is a value between 0 and 1.

Note that the feature $x_{i,d}$ of a source node is observed and thus its confidence is 1. When the recovered $\tilde{x}_{i,d}$ is far from the true $x_{i,d}$, the confidence in $\tilde{x}_{i,d}$ decreases towards 0. However, determining $\tilde{x}_{i,d}$ and its confidence is a chicken-and-egg problem: the confidence in $\tilde{x}_{i,d}$ is unavailable before attaining $\tilde{x}_{i,d}$ according to Definition 1, whereas the proposed scheme cannot yield $\tilde{x}_{i,d}$ without the confidence.
To navigate this issue, instead of the true confidence, we design a pseudo-confidence using the shortest path distance between a node and its nearest source node for a specific channel (SPD-S). For instance, the SPD-S of the $i$-th node for the $d$-th channel feature is denoted by $S_{i,d}$, which is calculated via

$$S_{i,d} = s\big(v_i \,\big|\, \mathcal{V}_k^{(d)}, A^{(d)}\big), \tag{2}$$

where $s(\cdot)$ yields the shortest path distance between the $i$-th node and its nearest source node in $\mathcal{V}_k^{(d)}$ on $A^{(d)}$.
It is notable that, if the $i$-th node is a source node, its nearest source node is itself, meaning $S_{i,d}$ becomes zero. We construct the SPD-S matrix $S \in \mathbb{R}^{N \times F}$ whose elements are $S_{i,d}$.

Consider $\tilde{X} = [\tilde{x}_{i,d}]$ that represents the recovered features of $X$ with consideration of feature homophily (McPherson et al., 2001), which represents a local property on a graph (Bisgin et al., 2010; Lauw et al., 2010; Bisgin et al., 2012). Due to feature homophily, the feature similarity between any two nodes tends to increase as the shortest path distance between the two nodes decreases. Based on feature homophily, we assume that the recovered feature $\tilde{x}_{i,d}$ of a node $v_i$ more confidently becomes similar to the given feature of its nearest source node as the SPD-S of $v_i$ ($S_{i,d}$) decreases. According to this assumption, we define pseudo-confidence using SPD-S in Definition 2.

Definition 2. Pseudo-confidence (PC) in $\tilde{x}_{i,d}$ is defined by the function $\xi_{i,d} = \alpha^{S_{i,d}}$, where $\alpha \in (0, 1)$ is a hyper-parameter.

By Definition 2, PC becomes 1 for $\tilde{x}_{i,d} = x_{i,d}$ on source nodes. Moreover, PC decreases exponentially for a missing node feature as $S_{i,d}$ increases. Likewise, PC reflects the tendency of confidence in Definition 1. We verified that this tendency exists regardless of imputation methods via experiments on real datasets (see Figure 7 in the Appendix). Therefore, pseudo-confidence using SPD-S is properly designed to replace confidence. To the best of our knowledge, ours is the first model that leverages such a distance for graph feature imputation.
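As an illustration (a minimal sketch, not the released implementation), SPD-S for one channel can be obtained with a multi-source BFS from the source nodes, and PC then follows directly from Definition 2:

```python
import torch

def compute_spds(adj_list, source_nodes, num_nodes):
    """Multi-source BFS giving the shortest path distance from every node
    to its nearest source node for one channel (Eq. 2)."""
    INF = float('inf')
    dist = [INF] * num_nodes
    frontier = list(source_nodes)
    for v in frontier:
        dist[v] = 0                      # a source node's nearest source is itself
    while frontier:
        nxt = []
        for v in frontier:
            for u in adj_list[v]:
                if dist[u] == INF:       # first visit gives the shortest distance
                    dist[u] = dist[v] + 1
                    nxt.append(u)
        frontier = nxt
    return torch.tensor(dist, dtype=torch.float)

# Pseudo-confidence (Definition 2) on a 4-node path graph with one source node:
alpha = 0.5
adj_list = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
spds = compute_spds(adj_list, source_nodes=[0], num_nodes=4)  # [0., 1., 2., 3.]
pc = alpha ** spds                       # [1.0, 0.5, 0.25, 0.125]
```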
CHANNEL-WISE INTER-NODE DIFFUSION
To recover missing node features in a channel-wise manner via graph diffusion, source nodes independently propagate their features to their neighbors for each channel. Instead of simply aggregating all neighborhood features with the same weights, our scheme aggregates features with different importance according to their confidence. As a result, the recovered features of missing nodes are aggregated with low confidence and the given features of source nodes are aggregated with high confidence, which is our design objective. To this end, we design a novel diffusion matrix based on the pseudo-confidence.
For the design, Definition 3 first defines 'relative PC', which represents the amount of PC in a particular node feature relative to another node feature.

Definition 3. The relative PC of $\tilde{x}_{j,d}$ relative to $\tilde{x}_{i,d}$ is defined by $\xi_{j/i,d} = \xi_{j,d}/\xi_{i,d} = \alpha^{S_{j,d} - S_{i,d}}$.
Then, suppose that a missing node feature $x_{i,d}$ of $v_i$ aggregates features from $v_j \in \mathcal{N}(v_i)$. If $v_i$ and $v_j$ are neighbors of each other, the difference between the SPD-S of $v_i$ and the SPD-S of $v_j$ cannot exceed 1. Hence, the relative PC of a node to its neighbor can be determined using Proposition 1.

Proposition 1. If $S_{i,d} = m \ge 1$, i.e., $v_i$ is a missing node, then $\xi_{j/i,d}$ for $v_j \in \mathcal{N}(v_i)$ is given by
$$\xi_{j/i,d} = \begin{cases} \alpha^{-1} & \text{if } S_{i,d} > S_{j,d}, \\ 1 & \text{if } S_{i,d} = S_{j,d}, \\ \alpha & \text{if } S_{i,d} < S_{j,d}. \end{cases}$$
Otherwise, if $v_i$ is a source node ($S_{i,d} = 0$), then $\xi_{j/i,d}$ for $v_j \in \mathcal{N}(v_i)$ is given by
$$\xi_{j/i,d} = \begin{cases} 1 & \text{if } v_j \text{ is a source node } (S_{j,d} = 0), \\ \alpha & \text{if } v_j \text{ is a missing node } (S_{j,d} = 1). \end{cases}$$
The proof of Proposition 1 is given in Appendix A.1.
Before defining a transition matrix, we temporarily reorder nodes according to whether a feature value is known for the d-th channel, i.e., x (d) and A (d) are reordered for each channel as Section 3.2 describes. After the feature diffusion stage, we order the nodes according to the original numbering.
Built on Proposition 1, we construct a weighted adjacency matrix $W^{(d)} \in \mathbb{R}^{N \times N}$ for the $d$-th channel, defined as follows:

$$W^{(d)}_{i,j} = \begin{cases} \xi_{j/i,d} & \text{if } i \ne j,\ A^{(d)}_{i,j} = 1, \\ 0 & \text{if } i \ne j,\ A^{(d)}_{i,j} = 0, \\ 1 & \text{if } i = j. \end{cases} \tag{3}$$
Note that self-loops are added to $W^{(d)}$ with a weight of 1 so that each node can keep some of its own feature. $W^{(d)}_{i,j}$ is the edge weight corresponding to message passing from $v_j$ to $v_i$. Proposition 1 implies that $\alpha^{-1}$ is assigned to high-PC neighbors, 1 to same-PC neighbors, and $\alpha$ to low-PC neighbors. That is, $W^{(d)}$ lets a node aggregate high-PC channel features from its neighbors more than low-PC ones. Furthermore, consider message passing between two connected nodes $v_i$ and $v_j$ such that $W^{(d)}_{i,j} = \xi_{j/i,d} = \alpha$. By Definition 3, $\xi_{i/j,d} = \xi_{j/i,d}^{-1}$, so that $W^{(d)}_{j,i} = (W^{(d)}_{i,j})^{-1} = \alpha^{-1}$. This means that message passing from a highly confident node to a less confident node occurs in a large amount, while message passing in the opposite direction occurs in a small amount. The hyper-parameter $\alpha$ tunes the strength of message passing depending on the confidence.
To ensure convergence of the diffusion process, we normalize $W^{(d)}$ to $\widetilde{W}^{(d)} = (D^{(d)})^{-1} W^{(d)}$ through row-stochastic normalization with $D^{(d)}_{ii} = \sum_j W^{(d)}_{i,j}$. Since $x_k^{(d)}$ with true feature values should be preserved, we replace the first $|\mathcal{V}_k^{(d)}|$ rows of $\widetilde{W}^{(d)}$ with one-hot vectors indicating $\mathcal{V}_k^{(d)}$. Finally, the channel-wise inter-node diffusion matrix $\overline{W}^{(d)}$ for the $d$-th channel is expressed as

$$\overline{W}^{(d)} = \begin{bmatrix} I & 0_{ku} \\ \widetilde{W}^{(d)}_{uk} & \widetilde{W}^{(d)}_{uu} \end{bmatrix}, \tag{4}$$

where $I \in \mathbb{R}^{|\mathcal{V}_k^{(d)}| \times |\mathcal{V}_k^{(d)}|}$ is an identity matrix and $0_{ku} \in \{0\}^{|\mathcal{V}_k^{(d)}| \times |\mathcal{V}_u^{(d)}|}$ is a zero matrix. Note that $\overline{W}^{(d)}$ remains row-stochastic despite the replacement. An aggregation at a specific node can be regarded as a weighted sum of the features on its neighboring nodes; a row-stochastic transition matrix means that when a node aggregates features from its neighbors, the sum of the weights is 1. Therefore, unlike a symmetric transition matrix (Kipf & Welling, 2016a; Klicpera et al., 2019; Rossi et al., 2021) or a column-stochastic (random walk) transition matrix (Page et al., 1999; Chung, 2007; Perozzi et al., 2014; Grover & Leskovec, 2016; Atwood & Towsley, 2016; Klicpera et al., 2018; Lim et al., 2021), features of missing nodes form at the same scale as the known features. Preserving the original scale allows features to be recovered close to the actual features. Now, we define channel-wise inter-node diffusion for the $d$-th channel as
$$\tilde{x}^{(d)}(0) = \begin{bmatrix} x_k^{(d)} \\ 0_u \end{bmatrix}, \qquad \tilde{x}^{(d)}(t) = \overline{W}^{(d)} \tilde{x}^{(d)}(t-1), \tag{5}$$

where $\tilde{x}^{(d)}(t)$ is a recovered feature vector for $x^{(d)}$ after $t$ propagation steps, $0_u$ is a zero column vector of size $|\mathcal{V}_u^{(d)}|$, and $t \in [1, K]$. Here we initialize the missing feature values $x_u^{(d)}$ to zero. As $K \to \infty$, this recursion converges (the proof is provided in Appendix A.2). We approximate the steady state by $\tilde{x}^{(d)}(K)$, which is calculated by $(\overline{W}^{(d)})^K \tilde{x}^{(d)}(0)$ with large enough $K$. The diffusion is performed for each channel and outputs $\{\tilde{x}^{(d)}(K)\}_{d=1}^F$.
Due to the reordering of nodes for each channel before the diffusion, the node indices in $\tilde{x}^{(d)}(K)$ differ across $d \in \{1, \dots, F\}$. Therefore, after restoring the original ordering of $X$ in each $\tilde{x}^{(d)}(K)$, we concatenate all $\tilde{x}^{(d)}(K)$ along the channels into $\tilde{X}$, which is the final output of this stage.
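For illustration, a dense-matrix sketch of one channel's diffusion (assuming the SPD-S vector from the earlier BFS sketch; the pseudo-code in Appendix A.6 works on sparse matrices instead) could look as follows:

```python
import torch

def channel_diffusion(x_ch, adj, known, spds, alpha=0.5, K=100):
    """PC-based diffusion for one channel, following Eqs. (3)-(5).

    x_ch:  [N] float feature values for channel d (missing entries arbitrary)
    adj:   [N, N] binary adjacency matrix (dense, for clarity)
    known: [N] boolean mask of source nodes for this channel
    spds:  [N] float SPD-S of each node for this channel
    """
    # Edge weight for message passing from v_j to v_i is the relative PC
    # alpha^(S_{j,d} - S_{i,d}) (Eq. 3), with self-loops of weight 1.
    W = adj * alpha ** (spds.unsqueeze(0) - spds.unsqueeze(1))
    W = W + torch.eye(adj.shape[0])
    W = W / W.sum(dim=1, keepdim=True)       # row-stochastic normalization
    W[known] = 0.0
    W[known, known] = 1.0                    # keep source rows as one-hot (Eq. 4)
    out = torch.where(known, x_ch, torch.zeros_like(x_ch))  # init missing to zero
    for _ in range(K):                       # approximate the steady state (Eq. 5)
        out = W @ out
    return out
```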
NODE-WISE INTER-CHANNEL PROPAGATION
In the previous stage, we obtained $\tilde{X} = [\tilde{x}_{i,d}]$ (the recovered features for $X$) via channel-wise inter-node diffusion performed separately for each channel. The proposed feature diffusion is enacted based on the graph structure and pseudo-confidence, but it does not consider the dependency between channels. Since the dependency between channels can be another important factor for imputing missing node features, we develop an additional scheme to refine $\tilde{X}$ and improve the performance of downstream tasks by considering both channel correlation and pseudo-confidence. At this stage, within a node, a low-PC channel feature is refined by reflecting high-PC channel features according to the degree of correlation between the two channels.
We first prepare a correlation coefficient matrix $R = [R_{a,b}] \in \mathbb{R}^{F \times F}$ giving the correlation coefficient between each pair of channels. $R_{a,b}$, the correlation coefficient between $\tilde{X}_{:,a}$ and $\tilde{X}_{:,b}$, is calculated by

$$R_{a,b} = \frac{\frac{1}{N-1}\sum_{i=1}^{N} (\tilde{x}_{i,a} - m_a)(\tilde{x}_{i,b} - m_b)}{\sigma_a \sigma_b}, \tag{6}$$

where $m_d = \frac{1}{N}\sum_{i=1}^{N} \tilde{x}_{i,d}$ and $\sigma_d = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N} (\tilde{x}_{i,d} - m_d)^2}$.
In this stage, unlike looking across the nodes for each channel as in the previous stage, we look across the channels for each node. As the right-hand graph of Figure 1 illustrates, we define fully connected directed graphs $\{H^{(i)}\}_{i=1}^{N}$, called node-wise inter-channel propagation graphs, from the given graph $G$. $H^{(i)}$ for the $i$-th node in $G$ is defined by

$$H^{(i)} = (\mathcal{V}^{(i)}, \mathcal{E}^{(i)}, B^{(i)}), \tag{7}$$

where $\mathcal{V}^{(i)} = \{v_d^{(i)}\}_{d=1}^{F}$ is the set of nodes in $H^{(i)}$, $\mathcal{E}^{(i)}$ is the set of directed edges in $H^{(i)}$, and $B^{(i)} \in \mathbb{R}^{F \times F}$ is a weighted adjacency matrix for refining $\tilde{X}_{i,:}$. To refine $\tilde{x}_{i,d}$ of the $i$-th node via inter-channel propagation, we assign $\tilde{x}_{i,d}$ to each $v_d^{(i)}$ as a scalar node feature for the $d$-th channel ($d \in \{1, \dots, F\}$). The weights in $\mathcal{E}^{(i)}$ are given by $B^{(i)}$ in (8). We design $B^{(i)}$ for inter-channel propagation in each node to achieve three goals: (1) highly correlated channels should exchange more information with each other than less correlated channels, (2) a low-PC channel feature should receive more information from other channels for refinement than a high-PC channel feature, and (3) a high-PC channel feature should propagate more information to other channels than a low-PC channel feature. Based on these design goals, the weight of the directed edge from the $b$-th channel to the $a$-th channel ($B^{(i)}_{a,b}$) is designed as

$$B^{(i)}_{a,b} = \begin{cases} \beta\,(1 - \alpha^{S_{i,a}})\,\alpha^{S_{i,b}}\,R_{a,b} & \text{if } a \ne b, \\ 0 & \text{if } a = b, \end{cases} \tag{8}$$

where $R_{a,b}$, $(1 - \alpha^{S_{i,a}})$, and $\alpha^{S_{i,b}}$ are the terms for meeting design goals (1), (2), and (3), respectively. $\alpha$ is the hyper-parameter for pseudo-confidence in Definition 2, and $\beta$ is a scaling hyper-parameter.
Node-wise inter-channel propagation on $H^{(i)}$ outputs the final imputed features for $X_{i,:}$. We define node-wise inter-channel propagation as

$$\hat{X}_{i,:}^{\top} = \tilde{X}_{i,:}^{\top} + B^{(i)} \big(\tilde{X}_{i,:} - [m_1, m_2, \cdots, m_F]\big)^{\top}, \tag{9}$$

where $\tilde{X}_{i,:}$ and $\hat{X}_{i,:}$ are row vectors. Preserving the pre-recovered channel feature values (as self-loops), message passing among different channel features is conducted along the directed edges of $B^{(i)}$. After calculating $\hat{X}_{i,:}$ for $i \in \{1, \dots, N\}$, we obtain the final recovered features by concatenating them, i.e., $\hat{X} = [\hat{X}_{1,:}^{\top}\ \hat{X}_{2,:}^{\top}\ \cdots\ \hat{X}_{N,:}^{\top}]^{\top}$. Moreover, since $R$ is calculated via the recovered features $\tilde{X}$ for all nodes in $G$, channel correlation propagation injects global information into the recovered features for $X$. In turn, $\hat{X}$ is the final output of PC-based feature imputation and is fed to a GNN to solve a downstream task.
EXPERIMENTS
To validate our method, we conducted experiments for two main graph learning tasks: semi-supervised node classification and link prediction.
EXPERIMENTAL SETUP
Datasets. We experimented with six benchmark datasets from two different domains: citation networks (Cora, CiteSeer, PubMed (Sen et al., 2008) and OGBN-Arxiv (Hu et al., 2020)) and recommendation networks (Amazon-Computers and Amazon-Photo (Shchur et al., 2018)). For link prediction, we evaluated all methods on the five benchmark datasets except OGBN-Arxiv, which caused out-of-memory errors. The datasets are described in Appendix A.4.1.

Compared Methods. For semi-supervised node classification, we compared our method to two baselines and four state-of-the-art methods. We set Baseline 1 to a simple scheme that directly feeds the graph data with missing features to a GNN without recovery, where all missing values in the feature matrix are set to zero. We set Baseline 2 to label propagation (LP) (Zhu & Ghahramani, 2002), which does not use node features and propagates only partially known labels to infer the remaining labels; that is, LP corresponds to the case of 100% missing features. The four state-of-the-art methods can be categorized into two approaches: GCN-variant models = {GCNMF (Taguchi et al., 2021), PaGNN (Jiang & Zhang, 2020)} and feature imputation = {sRMGCNN (Monti et al., 2017), FP (Rossi et al., 2021)}. While GCN-variant models were designed to perform node classification directly with partially known features, feature imputation methods combine with GNN models for downstream tasks. In Baseline 1, sRMGCNN, FP, and our method, we commonly used vanilla GCN (Kipf & Welling, 2016a) for the downstream task. For link prediction, we compared our method with sRMGCNN and FP, which follow the feature imputation approach. To perform link prediction on the features imputed by each method, graph autoencoder (GAE) (Kipf & Welling, 2016b) models were adopted; we used the features inferred by each method as input to the GAE models. We further compared against GCNMF (Taguchi et al., 2021) for link prediction. We report the detailed implementation in Appendix A.3.

Data Settings. Regardless of task type, we removed features according to a missing rate $r_m$ ($0 < r_m < 1$). Missing features were selected in two ways.
• Structural missing. We first randomly selected nodes in a ratio of $r_m$ among all nodes. Then, we assigned all features of the selected nodes as missing (unknown) values (zero).
• Uniform missing. We randomly selected features in a ratio of $r_m$ from the node feature matrix $X$, and set the selected features as missing (unknown) values (zero).
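A small sketch of the two selection schemes above (our own illustration of the setting, not the official split code; `mask` marks known features with True):

```python
import torch

def make_mask(num_nodes, num_feats, r_m, missing_type, seed=0):
    """Boolean known-feature mask under structural or uniform missing."""
    g = torch.Generator().manual_seed(seed)
    mask = torch.ones(num_nodes, num_feats, dtype=torch.bool)
    if missing_type == "structural":
        # remove ALL features of a randomly chosen r_m fraction of nodes
        n_drop = int(round(r_m * num_nodes))
        drop = torch.randperm(num_nodes, generator=g)[:n_drop]
        mask[drop] = False
    else:  # "uniform"
        # remove a randomly chosen r_m fraction of individual entries of X
        n_drop = int(round(r_m * num_nodes * num_feats))
        drop = torch.randperm(num_nodes * num_feats, generator=g)[:n_drop]
        mask.view(-1)[drop] = False
    return mask

mask = make_mask(2485, 1433, r_m=0.995, missing_type="structural")
# x_missing = x * mask  (missing values are set to zero)
```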
For semi-supervised node classification, we randomly generated 10 different training/validation/test splits, except for OGBN-Arxiv, where the split was fixed according to the specified criteria. For link prediction, we also randomly generated 10 different training/validation/test splits for each dataset. We describe the generated splits in detail in Appendix A.4.2.
Hyper-parameters. Across all the compared methods, we tuned hyper-parameters based on the validation set. For PCFI, we analyzed the influence of $\alpha$ and $\beta$ in Appendix A.3.2. We used grid search to find the two hyper-parameters in the ranges $0 < \alpha < 1$ and $0 < \beta \le 1$ on validation sets. For node classification, $(\alpha, \beta)$ was determined as the best pair from $\{(\alpha, \beta) \mid \alpha \in \{0.1, 0.2, \cdots, 0.9\},\ \beta \in \{10^{-6}, 10^{-5.5}, \cdots, 1\}\}$. For link prediction, the best $(\alpha, \beta)$ was searched from $\{(\alpha, \beta) \mid \alpha \in \{0.1, 0.2, \cdots, 0.9\},\ \beta \in \{10^{-6}, 10^{-5}, \cdots, 1\}\}$, as shown in Figures 3 and 4 in the Appendix.
Ablation Study. We present an ablation study that shows the effectiveness of each component (row-stochastic transition matrix, channel-wise inter-node diffusion, and node-wise inter-channel propagation) of PCFI in Appendix A.4.4.

SEMI-SUPERVISED NODE CLASSIFICATION RESULTS

Figure 2 demonstrates the trend of the average accuracy of the compared methods for node classification on the six datasets with different $r_m$. The performance gain of PCFI is remarkable at $r_m = 0.995$. In contrast, the average accuracy of existing methods decreases rapidly as $r_m$ increases and is overtaken by LP, which does not utilize features. In the case of uniform missing features, FP exhibits better resistance than LP, but its gap from ours increases as $r_m$ increases. Table 1 shows the detailed results of node classification with $r_m = 0.995$. sRMGCNN and GCNMF show significantly low performance in all experiments in this extremely challenging environment. Baseline 2 (LP) outperforms PaGNN in general, and even FP shows worse accuracy than Baseline 2 (LP) in certain settings. For all the datasets, PCFI performed in a manner superior to the other methods at $r_m = 0.995$.

LINK PREDICTION RESULTS

Table 2 demonstrates the results for the link prediction task at $r_m = 0.995$. PCFI achieves state-of-the-art performance across all settings except PubMed with structural missing. Based on the results on semi-supervised node classification and link prediction, which are representative graph learning tasks, PCFI shows its effectiveness at a very high rate of missing features.
CONCLUSION
We introduced a novel concept of channel-wise confidence to impute missing features in a graph even at high missing rates. To replace the unavailable true confidence, we designed a pseudo-confidence obtainable from the shortest path distance of each channel feature on a node. Using the pseudo-confidence, we developed a new framework for missing feature imputation that consists of channel-wise inter-node diffusion and node-wise inter-channel propagation. As validated in the experiments, the proposed method demonstrates superior performance on both node classification and link prediction. The channel-wise confidence approach for missing feature imputation can be straightforwardly applied to various graph-related downstream tasks with missing node features.
ETHICS STATEMENT
The intentionally removed private or confidential information could be recovered using the proposed method, and the recovered information could be misused. Therefore, this work is suggested to be used for positive impacts on society in areas such as health care (Wang et al., 2020; Deng et al., 2020), crime prediction (Wang et al., 2021), and weather forecasting (Han et al., 2022).
REPRODUCIBILITY STATEMENT
For theoretical results, we explain the assumptions and provide the complete proofs of all theoretical results in Sections 3.4 and 3.5 and in the Appendix. In addition, we include the data and implementation details needed to reproduce the experimental results in Section 4 and Appendix A.3. The code is available at https://github.com/daehoum1/pcfi.
A APPENDIX

A.1 PROOF OF PROPOSITION 1

Proposition 1. If $S_{i,d} = m \ge 1$, i.e., $v_i$ is a missing node, then $\xi_{j/i,d}$ for $v_j \in \mathcal{N}(v_i)$ is given by
$$\xi_{j/i,d} = \begin{cases} \alpha^{-1} & \text{if } S_{i,d} > S_{j,d}, \\ 1 & \text{if } S_{i,d} = S_{j,d}, \\ \alpha & \text{if } S_{i,d} < S_{j,d}. \end{cases}$$
Otherwise, if $v_i$ is a source node ($S_{i,d} = 0$), then $\xi_{j/i,d}$ for $v_j \in \mathcal{N}(v_i)$ is given by
$$\xi_{j/i,d} = \begin{cases} 1 & \text{if } v_j \text{ is a source node } (S_{j,d} = 0), \\ \alpha & \text{if } v_j \text{ is a missing node } (S_{j,d} = 1). \end{cases}$$
Proof. Let $v_a$ and $v_b$ be arbitrary nodes, and let $\delta(v_a, v_b)$ denote the number of edges in the shortest path between $v_a$ and $v_b$. The shortest path distance from $v_i$ to its nearest source node for the $d$-th feature channel, $S_{i,d}$, is given by $S_{i,d} = \min\{\delta(v_i, v_s) \mid v_s \in \mathcal{V}_k^{(d)}\}$.

Claim 1: $S_{i,d} = 0 \Leftrightarrow v_i \in \mathcal{V}_k^{(d)}$. Proof: If $v_i$ is a source node, its nearest source node is itself, so $S_{i,d} = 0$, and conversely.

Claim 2: $S_{i,d} \ge 1 \Leftrightarrow v_i \notin \mathcal{V}_k^{(d)} \Leftrightarrow v_i \in \mathcal{V}_u^{(d)}$. Proof: Since $v_i$ is not a known node ($v_i \notin \mathcal{V}_k^{(d)}$) if and only if $v_i$ is an unknown node ($v_i \in \mathcal{V}_u^{(d)}$), $S_{i,d} \ge 1$ is obvious.

Let $v_s$ be a known node such that $\delta(v_s, v_i) = m$, which exists because $S_{i,d} = m$. Then $\delta(v_s, v_j) \le \delta(v_s, v_i) + \delta(v_i, v_j) = m + 1$ holds by the triangle inequality, since the shortest path distance is a metric on the graph. This proves that $S_{j,d} \le m + 1$, and also includes the case of $S_{i,d} = 0$ as a special case.
Assume that there is some known node $v_{s'}$ such that $\delta(v_j, v_{s'}) \le m - 2$. Then $\delta(v_i, v_{s'}) \le \delta(v_i, v_j) + \delta(v_j, v_{s'}) \le 1 + m - 2 = m - 1$ by the triangle inequality. However, this contradicts $S_{i,d} = m$. Therefore, for every source node $v_{s'}$, $\delta(v_j, v_{s'}) \ge m - 1$, which implies $S_{j,d} \ge m - 1$. Then, the following Claim 3 also holds.
Claim 3: If $S_{i,d} = m \ge 1$, then $S_{j,d} - S_{i,d} \in \{-1, 0, 1\}$ for $v_j \in \mathcal{N}(v_i)$. Otherwise, if $S_{i,d} = 0$, then $S_{j,d} - S_{i,d} \in \{0, 1\}$ for $v_j \in \mathcal{N}(v_i)$.
According to Claim 3 and $\xi_{j/i,d} = \alpha^{S_{j,d} - S_{i,d}}$ in Definition 3 of the main text, Proposition 1 holds trivially.
A.2 CONVERGENCE OF CHANNEL-WISE INTER-NODE DIFFUSION
The convergence of the proposed Channel-wise Inter-node Diffusion is presented in the following Proposition.
Proposition A.1. The channel-wise inter-node diffusion matrix for the $d$-th channel, $\overline{W}^{(d)}$, is expressed as
$$\overline{W}^{(d)} = \begin{bmatrix} I & 0_{ku} \\ \widetilde{W}^{(d)}_{uk} & \widetilde{W}^{(d)}_{uu} \end{bmatrix},$$
where $\widetilde{W}^{(d)}$ is the row-stochastic matrix calculated by normalizing $W^{(d)}$. The recursion in channel-wise inter-node diffusion for the $d$-th channel is defined by
$$\tilde{x}^{(d)}(0) = \begin{bmatrix} x_k^{(d)} \\ 0_u \end{bmatrix}, \qquad \tilde{x}^{(d)}(t) = \overline{W}^{(d)} \tilde{x}^{(d)}(t-1).$$
Then, $\lim_{K \to \infty} \tilde{x}_u^{(d)}(K)$ converges to $(I - \widetilde{W}^{(d)}_{uu})^{-1} \widetilde{W}^{(d)}_{uk} x_k^{(d)}$, where $x_k^{(d)}$ is the known feature vector of the $d$-th channel.
The proof of this proposition follows that of (Rossi et al., 2021), which proves the case of a symmetrically-normalized diffusion matrix; in our proof, the diffusion matrix is not symmetric. For the proof of Proposition A.1, we first give Lemmas A.1 and A.2.
Lemma A.1. Let $\widetilde{W}^{(d)} = (D^{(d)})^{-1} W^{(d)}$ be the row-stochastic matrix calculated by normalizing $W^{(d)}$, the weighted adjacency matrix of the connected graph $G$, where $D^{(d)}_{ii} = \sum_j W^{(d)}_{i,j}$. Let $\widetilde{W}^{(d)}_{uu}$ be the $|x_u^{(d)}| \times |x_u^{(d)}|$ bottom-right submatrix of $\widetilde{W}^{(d)}$. Then $\rho(\widetilde{W}^{(d)}_{uu}) < 1$, where $\rho(\cdot)$ denotes the spectral radius.

Proof. Define
$$\widetilde{W}^{(d)}_{uu0} = \begin{bmatrix} 0_{kk} & 0_{ku} \\ 0_{uk} & \widetilde{W}^{(d)}_{uu} \end{bmatrix},$$
where $0_{kk} \in \{0\}^{|x_k^{(d)}| \times |x_k^{(d)}|}$, $0_{ku} \in \{0\}^{|x_k^{(d)}| \times |x_u^{(d)}|}$, and $0_{uk} \in \{0\}^{|x_u^{(d)}| \times |x_k^{(d)}|}$. Since $W^{(d)}$ is the weighted adjacency matrix of the connected graph $G$, $\widetilde{W}^{(d)}_{uu0} \le \widetilde{W}^{(d)}$ element-wise and $\widetilde{W}^{(d)}_{uu0} \ne \widetilde{W}^{(d)}$. Moreover, since $\widetilde{W}^{(d)}_{uu0} + \widetilde{W}^{(d)}$ is a weighted adjacency matrix of a strongly connected graph, $\widetilde{W}^{(d)}_{uu0} + \widetilde{W}^{(d)}$ is irreducible by Theorem 2.2.7 of (Berman & Plemmons, 1994). Then, by Corollary 2.1.5 of (Berman & Plemmons, 1994), $\rho(\widetilde{W}^{(d)}_{uu0}) < \rho(\widetilde{W}^{(d)})$. Since the spectral radius of a stochastic matrix is one (Theorem 2.5.3 in (Berman & Plemmons, 1994)), $\rho(\widetilde{W}^{(d)}) = 1$. Furthermore, since $\widetilde{W}^{(d)}_{uu0}$ and $\widetilde{W}^{(d)}_{uu}$ share the same non-zero eigenvalues, $\rho(\widetilde{W}^{(d)}_{uu0}) = \rho(\widetilde{W}^{(d)}_{uu})$. Finally, $\rho(\widetilde{W}^{(d)}_{uu}) = \rho(\widetilde{W}^{(d)}_{uu0}) < \rho(\widetilde{W}^{(d)}) = 1$.

Lemma A.2. $I - \widetilde{W}^{(d)}_{uu}$ is invertible, where $I$ is the $|x_u^{(d)}| \times |x_u^{(d)}|$ identity matrix.

Proof. Since $\rho(\widetilde{W}^{(d)}_{uu}) < 1$ by Lemma A.1, 1 is not an eigenvalue of $\widetilde{W}^{(d)}_{uu}$; hence $I - \widetilde{W}^{(d)}_{uu}$ is invertible.

In the following, we give the proof of Proposition A.1.

Proof of Proposition A.1. Unfolding the recurrence relation gives us
$$\tilde{x}^{(d)}(t) = \begin{bmatrix} \tilde{x}_k^{(d)}(t) \\ \tilde{x}_u^{(d)}(t) \end{bmatrix} = \begin{bmatrix} I & 0_{ku} \\ \widetilde{W}^{(d)}_{uk} & \widetilde{W}^{(d)}_{uu} \end{bmatrix} \begin{bmatrix} \tilde{x}_k^{(d)}(t-1) \\ \tilde{x}_u^{(d)}(t-1) \end{bmatrix} = \begin{bmatrix} \tilde{x}_k^{(d)}(t-1) \\ \widetilde{W}^{(d)}_{uk} \tilde{x}_k^{(d)}(t-1) + \widetilde{W}^{(d)}_{uu} \tilde{x}_u^{(d)}(t-1) \end{bmatrix}.$$
Since $\tilde{x}_k^{(d)}(t) = \tilde{x}_k^{(d)}(t-1)$ in the first $|x_k^{(d)}|$ rows, $\tilde{x}_k^{(d)}(K) = \cdots = x_k^{(d)}$; that is, $\tilde{x}_k^{(d)}(K)$ remains $x_k^{(d)}$. Hence $\lim_{K \to \infty} \tilde{x}_k^{(d)}(K)$ converges to $x_k^{(d)}$.
Now, we just consider the convergence of $\lim_{K \to \infty} \tilde{x}_u^{(d)}(K)$. Unrolling the recursion of the last $|x_u^{(d)}|$ rows,
$$\begin{aligned} \tilde{x}_u^{(d)}(K) &= \widetilde{W}^{(d)}_{uk} x_k^{(d)} + \widetilde{W}^{(d)}_{uu} \tilde{x}_u^{(d)}(K-1) \\ &= \widetilde{W}^{(d)}_{uk} x_k^{(d)} + \widetilde{W}^{(d)}_{uu}\big(\widetilde{W}^{(d)}_{uk} x_k^{(d)} + \widetilde{W}^{(d)}_{uu} \tilde{x}_u^{(d)}(K-2)\big) \\ &\;\;\vdots \\ &= \Big(\sum_{t=0}^{K-1} (\widetilde{W}^{(d)}_{uu})^t\Big) \widetilde{W}^{(d)}_{uk} x_k^{(d)} + (\widetilde{W}^{(d)}_{uu})^K \tilde{x}_u^{(d)}(0). \end{aligned}$$
Since $\lim_{K \to \infty} (\widetilde{W}^{(d)}_{uu})^K = 0$ by Lemma A.1, $\lim_{K \to \infty} (\widetilde{W}^{(d)}_{uu})^K \tilde{x}_u^{(d)}(0) = 0$ regardless of the initial state $\tilde{x}_u^{(d)}(0)$. (We replace $\tilde{x}_u^{(d)}(0)$ with a zero column vector for simplicity.) Thus, it remains to consider $\lim_{K \to \infty} \big(\sum_{t=0}^{K-1} (\widetilde{W}^{(d)}_{uu})^t\big) \widetilde{W}^{(d)}_{uk} x_k^{(d)}$.
Since $\rho(\widetilde{W}^{(d)}_{uu}) < 1$, the series converges and, by Lemma A.2,
$$\lim_{K \to \infty} \tilde{x}_u^{(d)}(K) = \lim_{K \to \infty} \Big(\sum_{t=0}^{K-1} (\widetilde{W}^{(d)}_{uu})^t\Big) \widetilde{W}^{(d)}_{uk} x_k^{(d)} = (I - \widetilde{W}^{(d)}_{uu})^{-1} \widetilde{W}^{(d)}_{uk} x_k^{(d)}.$$
Thus, the recursion in channel-wise inter-node diffusion converges.

A.3 IMPLEMENTATION DETAILS

We used Pytorch (Paszke et al., 2017) and Pytorch Geometric (Fey & Lenssen, 2019) for the experiments on an NVIDIA GTX 2080 Ti GPU with 11GB of memory. For all the compared methods, we followed the hyper-parameters in the original papers or codes where feasible. If the hyper-parameters (the number of layers and hidden dimension) of a model for certain datasets are not specified in the papers, we searched for them using grid search. In that case, we searched the number of layers from {2, 3} and the hidden dimension from {16, 32, 64, 128, 256}.

Node classification. We trained GCN-variant models (GCNMF, PaGNN) and GCN models for the feature imputation methods (Baseline 1, sRMGCNN, FP, PCFI) as follows. We used the Adam optimizer (Kingma & Ba, 2014), set the maximal number of epochs to 10000, and used an early stopping strategy with patience of 200 epochs. By grid search on each validation set, learning rates were chosen from {0.01, 0.005, 0.001, 0.0001}, and dropout (Srivastava et al., 2014) was applied with $p$ selected from {0.0, 0.25, 0.5}.

Link prediction. For GCNMF and the GAE models used as common downstream models for the feature imputation methods, we trained the models with the Adam optimizer for 200 iterations. By grid search on the validation set, learning rates were searched from {0.1, 0.01, 0.005, 0.001, 0.0001} for each dataset, and dropout was applied to each layer with $p$ searched from {0.0, 0.25, 0.5}. As specified in (Kipf & Welling, 2016b) and (Taguchi et al., 2021), we used a 32-dim hidden layer and 16-dim latent variables for all auto-encoder models.

We present the pseudo-code of our PCFI in Sec. A.6. Our code will be available upon publication. We set $K$ to 100 throughout all the experiments. PCFI has two hyper-parameters $\alpha$ and $\beta$: $\alpha$ is used to calculate PC for channel-wise inter-node diffusion and node-wise inter-channel propagation, and $\beta$ controls the degree of node-wise inter-channel propagation. To analyze the effects of these hyper-parameters, we conducted experiments with various $\alpha$ and $\beta$; Figures 3 and 4 demonstrate their influence.
A.3.2 PCFI HYPER-PARAMETERS
For node classification, we set the search ranges of $\alpha$ and $\beta$ as in Figure 3: we chose $\alpha$ from {0.1, 0.2, ..., 0.9} and $\beta$ from $\{10^{-6}, 10^{-5.5}, 10^{-5}, \dots, 1\}$ using a validation set. For link prediction, we set the search ranges as in Figure 4, selecting $\alpha$ from {0.1, 0.2, ..., 0.9} and $\beta$ from $\{10^{-6}, 10^{-5}, 10^{-4}, \dots, 1\}$. To find the best hyper-parameters, we used grid search on a validation set. The detailed settings of the hyper-parameters for all datasets used in our paper are listed in Table 3 and Table 4.

To train the downstream GCNs added to PCFI for node classification, we fix the learning rate to 0.005. To train the downstream GAE added to PCFI for link prediction, we set the learning rate to 0.01 and 0.001 for {Cora, CiteSeer, PubMed} and {Photo, Computers}, respectively.
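For instance, the search could be run as a simple grid over the validation metric (a sketch; `evaluate_on_validation` is a hypothetical helper, not part of the released code):

```python
import itertools

def evaluate_on_validation(alpha, beta):
    # Hypothetical stand-in: train PCFI + a downstream GCN with (alpha, beta)
    # and return validation accuracy. A dummy score keeps the sketch runnable.
    return -abs(alpha - 0.5) - abs(beta - 1e-3)

alphas = [round(0.1 * i, 1) for i in range(1, 10)]   # 0.1, 0.2, ..., 0.9
betas = [10 ** (-6 + 0.5 * i) for i in range(13)]    # 1e-6, 10^-5.5, ..., 1
best_alpha, best_beta = max(itertools.product(alphas, betas),
                            key=lambda ab: evaluate_on_validation(*ab))
```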
Downstream GCN for node classification. We set the number of layers to 3 and fix the dropout rate to $p = 0.5$. The hidden dimension was set to 64 for all datasets except OGBN-Arxiv, where 256 is used. For OGBN-Arxiv, as the Jumping Knowledge scheme (Xu et al., 2018) with max aggregation was applied to FP, we also utilized this scheme.
A.3.3 IMPLEMENTATION OF BASELINES
GCNMF (Taguchi et al., 2021). We used the publicly released code by the authors (https://github.com/marblet/GCNmf). The code for GCNMF is MIT licensed.

FP (Rossi et al., 2021). We used the publicly released code by the authors (https://github.com/twitter-research/feature-propagation). The code for FP is Apache-2.0 licensed.

PaGNN (Jiang & Zhang, 2020). We used re-implemented Apache-2.0-licensed code (https://github.com/twitter-research/feature-propagation), since we could not find officially released code by the authors for PaGNN.

sRMGCNN (Monti et al., 2017). Due to a compatibility problem with the old-version Tensorflow (Abadi et al., 2016) of the code, we only updated the version of the publicly released code (https://github.com/fmonti/mgcnn) to Tensorflow 2.3.0. The code is GPL-3.0 licensed.
Label propagation (Zhu & Ghahramani, 2002). We employed the re-implemented code included in MIT-licensed Pytorch Geometric. We tuned the hyper-parameter $\alpha$ of LP in {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95} by grid search.
A.4 EXPERIMENTS
A.4.1 DATASETS

All the datasets used in our experiments are publicly available from the MIT-licensed Pytorch Geometric package. We conducted all the experiments on the largest connected component of each given graph. For a disconnected graph, we can simply apply PCFI to each connected component independently. The description of the datasets is summarized in Table 5.
A.4.2 RANDOM SPLIT GENERATION
Node classification. We randomly generated 10 different training/validation/test splits, except for OGBN-Arxiv, where the split was given according to the published year of papers. As in the setting of (Klicpera et al., 2019), in each generated split, 20 nodes per class were assigned as training nodes. The number of validation nodes was then set such that the training and validation nodes total 1500. As test nodes, we used all nodes except the training and validation nodes.
Link prediction. We randomly generated 10 different training/validation/test splits for each dataset, applying the same split ratio regardless of dataset. As in the setting of (Kipf & Welling, 2016b), we used 85% of the edges for training, 5% for validation, and 10% for testing.
A.4.3 GAIN OF PCFI ACCORDING TO FEATURE HOMOPHILY
In this section, since pseudo-confidence is based on the assumption that linked nodes have similar features, we explored how the feature homophily of graphs impacts the performance of PCFI, and further analyzed the gain of PCFI over FP in terms of feature homophily. For the experiments, we generated features with different feature homophily on graphs from a synthetic dataset, where each graph contains 5000 nodes that belong to one of 10 classes.
We selected two graphs (graphs with class homophily of 0.3 and 0.5) from the dataset. Preserving the graph structure and class distribution, we newly generated multiple sets of node features for each graph so that each generated graph has different feature homophily. The features for nodes were sampled from 10 overlapping 5-dimensional Gaussians, which correspond to the classes and whose means are set to be the same distance from each other. For the covariance matrix of each Gaussian, the diagonal elements were set to the same value and the other elements were set to 0.1 times the diagonal element; that is, the covariance between different channels was set to 0.1 times the variance of a channel. For each feature generation of a graph, we changed the scale of the covariance matrix so that features are created with different feature homophily. As the scale of the covariance matrix decreases, the overlapping area between Gaussians decreases; more similar features are thereby generated within the same class, and the graph has higher feature homophily. Then, for semi-supervised node classification, we randomly generated 10 different training/validation/test splits. For each split, we set the numbers of nodes in the training, validation, and test sets to be equal. Figure 5 demonstrates the trend of the accuracy of PCFI and FP under a structural-missing setting with $r_m = 0.995$. We can confirm that the gain of PCFI over FP exceeds 10% on both graphs with high feature homophily. Furthermore, PCFI shows superior performance regardless of the level of feature homophily. This is because confidence based on feature homophily is a valid concept on graphs with missing features, and pseudo-confidence is designed properly to replace confidence.
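A sketch of this feature generation (our own illustration; the placement of class means here is an assumption, since exactly equidistant means are not reproduced):

```python
import torch

def generate_features(labels, num_classes=10, dim=5, scale=1.0, seed=0):
    """Per-class Gaussian features; a smaller `scale` shrinks the covariance,
    reduces the overlap between class Gaussians, and raises feature homophily."""
    torch.manual_seed(seed)
    means = torch.randn(num_classes, dim)              # assumption: random means
    means = means / means.norm(dim=1, keepdim=True)
    # covariance: equal diagonal entries, off-diagonal = 0.1 * diagonal
    cov = scale * (0.9 * torch.eye(dim) + 0.1 * torch.ones(dim, dim))
    L = torch.linalg.cholesky(cov)
    noise = torch.randn(len(labels), dim) @ L.T        # noise with covariance cov
    return means[labels] + noise

labels = torch.randint(0, 10, (5000,))
features = generate_features(labels, scale=0.5)       # higher homophily than scale=1.0
```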
A.4.4 ABLATION STUDY
To verify the effectiveness of each element of PCFI, we carried out an ablation study. We started by measuring the performance of FP, which performs simple graph diffusion with a symmetrically-normalized transition matrix. First, we changed the normalization type of the transition matrix: we replaced the symmetrically-normalized transition matrix with a row-stochastic transition matrix (row-ST in Table 6 and Table 7). The row-stochastic transition matrix leads to feature recovery at the same scale as the actual features. Second, we introduced PC into the diffusion process, which means PC-based channel-wise inter-node diffusion was performed (CID in Table 6 and Table 7). Last, we performed node-wise inter-channel propagation on the recovered features obtained by channel-wise inter-node diffusion (NIP in Table 6 and Table 7). We compared the performance of these four cases.
In this experiment, for CiteSeer, we performed node classification under a structural-missing setting with $r_m = 0.995$. Then, for Cora and PubMed, we performed link prediction with $r_m = 0.995$. We applied structural missing and uniform missing to Cora and PubMed, respectively. As shown in Table 6 and Table 7, each component of PCFI contributes to performance improvement throughout the various settings.

A.4.5 RESISTANCE OF PCFI TO MISSING FEATURES

To verify the resistance of PCFI against missing features, we compared the average node classification accuracy over the 6 datasets while increasing $r_m$, as shown in Table 8. We compared the average accuracy at different $r_m$ from $r_m = 0$ to $r_m = 0.995$. For structural missing, PCFI loses only 2.23% of relative average accuracy with $r_m = 0.9$, and 4.82% with $r_m = 0.995$. In the case of uniform missing, PCFI loses only 1.81% of relative average accuracy with $r_m = 0.9$, and 2.66% with $r_m = 0.995$. Note that classification with $r_m = 0.995$ is the extreme case in which only 0.5% of features are known. This result demonstrates that PCFI is robust to missing features.
Even at the same $r_m$, we observed that the average accuracy for structural missing is lower than that for uniform missing. We analyzed this observation from the perspective of confidence. Since features are missing in node units for structural missing, all channel features of a node have the same SPD-S, i.e., $S_{i,1} = \cdots = S_{i,F}$. Hence, in a node far from its nearest source node, every channel feature has low confidence, and no missing feature of the node can be improved via node-wise inter-channel propagation due to the absence of highly confident channel features in the node; the node is thus likely to be misclassified. In contrast, for uniform missing, the channel features of a node have various PC, where known channel features have high confidence and propagate their feature information to unknown channel features in the node. Hence, most nodes can be classified well owing to features recovered from highly confident channel features. We claim that this observation shows the validity of the concept of channel-wise confidence. The tables containing accuracy at different $r_m$ for the two missing types are in Section A.5.
A.4.6 CLASSIFICATION ACCURACY ACCORDING TO PSEUDO-CONFIDENCE

Figure 6: Node classification accuracy (%) according to SPD-S of test nodes, on (a) Cora and (b) CiteSeer. For both Cora and CiteSeer, structural missing with $r_m = 0.995$ is applied. PCFI* denotes PCFI without node-wise inter-channel propagation. PCFI shows a noticeable performance gain especially for nodes with low-PC missing features (large-SPD-S nodes). Also, node-wise inter-channel propagation shows its effectiveness on nodes with low-PC features.
Under a structural missing setting, the node features in a node have the same SPD-S, i.e., $S_{i,1} = \cdots = S_{i,F}$ for the $i$-th node. Since pseudo-confidence $\xi_{i,d}$ is calculated by $\xi_{i,d} = \alpha^{S_{i,d}}$, the node features within a node also have the same pseudo-confidence (PC) for structural missing, i.e., $\xi_{i,1} = \cdots = \xi_{i,F}$. We refer to the SPD-S of the node features within a node as the SPD-S of the node, and similarly to the PC of the node features within a node as the PC of the node. The test nodes are divided into groups according to the SPD-S of the nodes. Then, to observe the relationship between PC and classification accuracy, we calculated the classification accuracy of the nodes in each group. We conducted experiments on Cora and CiteSeer under a structural missing setting with $r_m = 0.995$, and compared PCFI with sRMGCNN and FP. Figure 6 shows node classification accuracy according to the SPD-S of test nodes. For both datasets, as the SPD-S of nodes increases, the accuracy of FP tends to decrease. However, for large-SPD-S nodes, PCFI gains noticeable performance improvement compared to FP. Furthermore, the results on Cora show that PCFI outperforms PCFI without node-wise inter-channel propagation on large-SPD-S nodes. Since large SPD-S means low PC, we can observe that PCFI imputes low-PC missing features effectively.
A.4.7 DEGREE OF FEATURE RECOVERY ACCORDING TO PSEUDO-CONFIDENCE

Figure 7: Cosine similarity between $X_{i,:}$ (original features in a node) and $\hat{X}_{i,:}$ (its recovered features) according to the SPD-S of the features within the node. For both datasets, we randomly selected 99.5% of nodes and removed all features of the selected nodes (structural missing) so that all channel features within a node have the same SPD-S. The imputed feature similarity between two nodes tends to decrease as the shortest path distance between the two nodes increases.
We further conducted experiments to observe the degree of feature recovery according to the SPD-S of nodes. To evaluate the degree of feature recovery for each node, we measured the cosine similarity between the recovered node features and the original node features. The setting for the experiments is the same as in Section A.4.6. Figure 7 demonstrates the results on Cora and CiteSeer. As the SPD-S of nodes increases from zero, which means the PC of nodes decreases, the cosine similarity tends to decrease; in other words, as the PC of nodes increases, the cosine similarity tends to increase. This shows that the pseudo-confidence is properly designed based on SPD-S, reflecting the confidence.
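This analysis could be reproduced with a few lines (a sketch under the structural-missing setting, where every channel of a node shares one SPD-S value):

```python
import torch
import torch.nn.functional as F

def recovery_by_spds(x_true, x_hat, node_spds):
    """Mean cosine similarity between original and recovered node features,
    grouped by each node's SPD-S. x_true, x_hat: [N, F]; node_spds: [N] ints."""
    cos = F.cosine_similarity(x_true, x_hat, dim=1)   # one value per node
    return {int(s): cos[node_spds == s].mean().item()
            for s in torch.unique(node_spds)}
```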
Unlike the tendency in node classification accuracy, PCFI shows almost the same degree of feature recovery as PCFI without node-wise inter-channel propagation. This means that node-wise inter-channel propagation improves performance with very little refinement; that is, higher classification accuracy on nodes does not necessarily mean a higher degree of feature recovery for those nodes. We leave a detailed analysis of this observation for future work.

To compare computational cost, we measured the total training time on a single split of Cora. We performed node classification under a structural-missing setting with $r_m = 0.995$. The training time of each method is shown in Figure 8. The training time for the feature imputation methods (sRMGCNN, FP, PCFI) includes both the time for feature imputation and the training of a downstream GCN. PCFI shows less training time than the other methods except for FP; even compared to FP, PCFI takes only 15.4% more time. For PCFI under a uniform-missing setting, the time for computing SPD-S increases with the number of channels. PCFI outperforms the state-of-the-art methods with less or comparable computational cost.
A.6 PYTORCH-STYLE PSEUDO-CODE OF PSEUDO-CONFIDENCE-BASED FEATURE IMPUTATION (PCFI)

Algorithm 1 PyTorch-style pseudo-code of PCFI

    import torch

    class PCFI(torch.nn.Module):
        def __init__(self, K, alpha, beta):
            super(PCFI, self).__init__()
            self.K = K
            self.alpha = alpha
            self.beta = beta

        # edge_index has shape [2, |E|]
        # mask is a boolean tensor indicating known features with True
        def propagate(self, x, edge_index, mask, mask_type):
            nv, feat_dim = x.shape
            # Channel-wise inter-node diffusion
            out = torch.zeros_like(x)
            out[mask] = x[mask]
            # structural missing case: one SPD-S (and one diffusion matrix)
            # shared by all channels
            if mask_type == "structural":
                SPDS = self.compute_SPDS(edge_index, mask, mask_type)
                Wbar = self.compute_Wbar(edge_index, mask, mask_type)
                for t in range(self.K):
                    out = torch.sparse.mm(Wbar, out)
                    out[mask] = x[mask]
                SPDS = SPDS.repeat(feat_dim, 1)
            # uniform missing case: per-channel SPD-S and diffusion matrix
            if mask_type == "uniform":
                SPDS = self.compute_SPDS(edge_index, mask, mask_type, feat_dim)
                for d in range(feat_dim):
                    Wbar = self.compute_Wbar(edge_index, SPDS[d])
                    for i in range(self.K):
                        out[:, d] = torch.sparse.mm(
                            Wbar, out[:, d].reshape(-1, 1)).reshape(-1)
                    out[mask[:, d], d] = x[mask[:, d], d]
            # Node-wise inter-channel propagation
            cor = torch.corrcoef(out.T).nan_to_num().fill_diagonal_(0)
            a1 = (self.alpha ** SPDS.T) * (out - torch.mean(out, dim=0))
            a2 = torch.matmul(a1, cor)
            out1 = self.beta * (1 - self.alpha ** SPDS.T) * a2
            out = out + out1
            return out

Algorithm 2 PyTorch-style pseudo-code for SPD-S

    # get SPD-S of each node by computing the k-hop subgraph
    # around the source nodes
    # In this code, SPDS has shape [F, N]
Figure 2: Average accuracy (%) on the six datasets with $r_m \in \{0, 0.5, 0.9, 0.995\}$. sRMGCNN and GCNMF are excluded due to OOM results on certain datasets and significantly poor performance on all the available datasets, as Table 1 shows.

Figure 3: Node classification accuracy on CiteSeer with different $\alpha$ and $\beta$. The experiments are conducted under a structural-missing setting with $r_m = 0.995$.

Figure 4: Link prediction results on Cora with different $\alpha$ and $\beta$. The experiments are conducted under a structural-missing setting with $r_m = 0.995$.

Figure 8: Training time (in seconds) of methods on Cora under a structural-missing setting with $r_m = 0.995$.
Table 1: Node classification accuracy (%) at missing rate $r_m = 0.995$. OOM denotes out of memory. * denotes an incalculable average for six datasets due to OOM results.

| Missing type | Dataset | Baseline 1 | Baseline 2 (LP) | sRMGCNN | GCNMF | PaGNN | FP | PCFI |
|---|---|---|---|---|---|---|---|---|
| Structural missing | Cora | 44.15 ± 8.44 | 74.52 ± 1.60 | 29.31 ± 0.71 | 29.20 ± 1.13 | 30.55 ± 8.85 | 72.84 ± 2.85 | 75.49 ± 2.10 |
| | CiteSeer | 31.68 ± 4.50 | 65.89 ± 2.29 | 24.21 ± 1.35 | 24.50 ± 1.52 | 25.69 ± 3.98 | 59.76 ± 2.47 | 66.18 ± 2.75 |
| | PubMed | 48.20 ± 3.65 | 72.25 ± 3.78 | OOM | 40.19 ± 0.95 | 50.82 ± 4.61 | 72.69 ± 2.66 | 74.66 ± 2.26 |
| | Photo | 79.68 ± 2.17 | 82.42 ± 2.57 | 26.10 ± 1.89 | 26.82 ± 6.33 | 66.91 ± 3.99 | 86.57 ± 1.50 | 87.70 ± 1.29 |
| | Computers | 72.03 ± 1.91 | 76.28 ± 1.43 | 37.15 ± 0.12 | 30.59 ± 9.81 | 56.50 ± 3.29 | 77.45 ± 1.59 | 79.25 ± 1.19 |
| | OGBN-Arxiv | 54.52 ± 0.63 | 67.56 ± 0.00 | OOM | OOM | 57.43 ± 0.36 | 68.23 ± 0.27 | 68.72 ± 0.28 |
| | Average | 55.04 | 73.15 | * | * | 47.98 | 72.92 | 75.33 |
| Uniform missing | Cora | 62.63 ± 2.64 | 74.52 ± 1.60 | 29.32 ± 0.74 | 27.85 ± 2.27 | 53.75 ± 2.03 | 77.55 ± 2.01 | 78.53 ± 1.39 |
| | CiteSeer | 63.19 ± 1.83 | 65.89 ± 2.29 | 24.66 ± 1.90 | 24.29 ± 1.47 | 44.95 ± 2.59 | 68.00 ± 2.16 | 69.40 ± 1.85 |
| | PubMed | 54.70 ± 3.03 | 72.25 ± 3.78 | OOM | 39.47 ± 0.76 | 60.24 ± 3.78 | 73.88 ± 2.35 | 76.44 ± 1.64 |
| | Photo | 85.40 ± 1.33 | 82.42 ± 2.57 | 26.58 ± 1.68 | 25.98 ± 3.90 | 85.30 ± 1.05 | 87.75 ± 1.07 | 88.60 ± 1.30 |
| | Computers | 79.49 ± 1.21 | 76.28 ± 1.43 | 37.16 ± 0.12 | 34.78 ± 4.69 | 78.04 ± 1.18 | 81.47 ± 0.91 | 81.79 ± 0.70 |
| | OGBN-Arxiv | 58.12 ± 0.46 | 67.56 ± 0.00 | OOM | OOM | 65.30 ± 0.22 | 68.67 ± 0.38 | 70.19 ± 0.15 |
| | Average | 67.26 | 73.15 | * | * | 64.6 | 76.22 | 77.49 |
Table 2: Link prediction results (%) at missing rate $r_m = 0.995$. (S) and (U) denote structural and uniform missing, respectively. OOM denotes out of memory.

| Dataset | Metric | Full features | sRMGCNN (S) | GCNMF (S) | FP (S) | PCFI (S) | sRMGCNN (U) | GCNMF (U) | FP (U) | PCFI (U) |
|---|---|---|---|---|---|---|---|---|---|---|
| Cora | AUC | 92.05 ± 0.75 | 66.34 ± 5.78 | 68.26 ± 1.07 | 83.74 ± 1.05 | 86.45 ± 1.15 | 66.46 ± 5.63 | 67.25 ± 1.10 | 86.31 ± 1.40 | 87.30 ± 1.33 |
| | AP | 92.58 ± 0.86 | 68.80 ± 6.44 | 71.09 ± 0.87 | 86.12 ± 1.04 | 88.26 ± 0.97 | 68.87 ± 6.36 | 70.78 ± 0.86 | 88.73 ± 1.16 | 89.24 ± 1.08 |
| CiteSeer | AUC | 90.50 ± 0.92 | 67.75 ± 1.95 | 67.75 ± 1.98 | 79.74 ± 1.71 | 80.12 ± 1.59 | 64.35 ± 5.19 | 65.71 ± 1.80 | 82.02 ± 1.95 | 82.98 ± 2.30 |
| | AP | 91.65 ± 0.99 | 69.08 ± 1.88 | 69.10 ± 1.95 | 83.24 ± 1.43 | 83.88 ± 1.30 | 66.30 ± 5.65 | 68.55 ± 1.72 | 85.81 ± 1.47 | 86.28 ± 1.77 |
| PubMed | AUC | 95.82 ± 0.27 | OOM | 87.14 ± 0.28 | 78.93 ± 1.51 | 82.65 ± 0.91 | OOM | 81.67 ± 2.27 | 77.05 ± 3.54 | 85.26 ± 0.36 |
| | AP | 95.95 ± 0.26 | OOM | 86.07 ± 0.31 | 84.30 ± 0.98 | 87.02 ± 0.41 | OOM | 82.70 ± 1.39 | 83.26 ± 2.24 | 88.52 ± 0.20 |
| Photo | AUC | 95.76 ± 0.38 | 81.48 ± 0.29 | 81.45 ± 0.30 | 94.05 ± 1.18 | 96.40 ± 0.42 | 81.53 ± 0.27 | 81.48 ± 0.30 | 95.97 ± 0.21 | 97.07 ± 0.21 |
| | AP | 95.34 ± 0.42 | 81.07 ± 0.33 | 81.03 ± 0.34 | 93.57 ± 1.06 | 96.01 ± 0.49 | 81.14 ± 0.29 | 81.07 ± 0.33 | 95.54 ± 0.24 | 96.89 ± 0.23 |
| Computers | AUC | 93.78 ± 1.16 | 83.37 ± 0.17 | 83.33 ± 0.17 | 90.57 ± 1.23 | 94.65 ± 0.40 | 83.39 ± 0.18 | 83.36 ± 0.17 | 93.96 ± 0.24 | 95.98 ± 0.21 |
| | AP | 93.79 ± 1.09 | 83.66 ± 0.24 | 83.62 ± 0.24 | 90.92 ± 1.05 | 94.67 ± 0.43 | 83.68 ± 0.26 | 83.65 ± 0.25 | 93.90 ± 0.24 | 96.03 ± 0.22 |
REFERENCES

Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. TensorFlow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 265-283, 2016.

Paul D Allison. Missing data. Sage publications, 2001.

James Atwood and Don Towsley. Diffusion-convolutional neural networks. Advances in Neural Information Processing Systems, 29, 2016.

Abraham Berman and Robert J Plemmons. Nonnegative matrices in the mathematical sciences. SIAM, 1994.

Halil Bisgin, Nitin Agarwal, and Xiaowei Xu. Investigating homophily in online social networks. In 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, volume 1, pp. 533-536. IEEE, 2010.

Halil Bisgin, Nitin Agarwal, and Xiaowei Xu. A study of homophily on social media. World Wide Web, 15(2):213-232, 2012.

Xu Chen, Siheng Chen, Jiangchao Yao, Huangjie Zheng, Ya Zhang, and Ivor W Tsang. Learning on attribute-missing graphs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.

Fan Chung. The heat kernel as the pagerank of a graph. Proceedings of the National Academy of Sciences, 104(50):19735-19740, 2007.

Ronald R Coifman and Stéphane Lafon. Diffusion maps. Applied and Computational Harmonic Analysis, 21(1):5-30, 2006.

Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. Advances in Neural Information Processing Systems, 29, 2016.

Songgaojun Deng, Shusen Wang, Huzefa Rangwala, Lijing Wang, and Yue Ning. Cola-GNN: Cross-location attention based graph neural networks for long-term ILI prediction. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pp. 245-254, 2020.

Matthias Fey and Jan Eric Lenssen. Fast graph representation learning with PyTorch Geometric. arXiv preprint arXiv:1903.02428, 2019.

GDPR. General data protection regulation. https://gdpr.eu/. Accessed: 2022-09-28.

Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 855-864, 2016.

Adrien Guille, Hakim Hacid, Cecile Favre, and Djamel A Zighed. Information diffusion in online social networks: A survey. ACM Sigmod Record, 42(2):17-28, 2013.

Jindong Han, Hao Liu, Haoyi Xiong, and Jing Yang. Semi-supervised air quality forecasting via self-supervised hierarchical graph neural network. IEEE Transactions on Knowledge and Data Engineering, 2022.

Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. Advances in Neural Information Processing Systems, 33:22118-22133, 2020.

Houye Ji, Cheng Yang, Chuan Shi, and Pan Li. Heterogeneous graph neural network with distance encoding. In 2021 IEEE International Conference on Data Mining (ICDM), pp. 1138-1143. IEEE, 2021.

Bo Jiang and Ziyan Zhang. Incomplete graph representation and learning via partial graph neural networks. arXiv preprint arXiv:2003.10130, 2020.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016a.

Thomas N Kipf and Max Welling. Variational graph auto-encoders. arXiv preprint arXiv:1611.07308, 2016b.

Johannes Klicpera, Aleksandar Bojchevski, and Stephan Günnemann. Predict then propagate: Graph neural networks meet personalized pagerank. arXiv preprint arXiv:1810.05997, 2018.

Johannes Klicpera, Stefan Weißenberger, and Stephan Günnemann. Diffusion improves graph learning. arXiv preprint arXiv:1911.05485, 2019.

Hady Lauw, John C Shafer, Rakesh Agrawal, and Alexandros Ntoulas. Homophily in the digital world: A LiveJournal case study. IEEE Internet Computing, 14(2):15-23, 2010.

Pan Li, I Chien, and Olgica Milenkovic. Optimizing generalized pagerank methods for seed-expansion community detection. Advances in Neural Information Processing Systems, 32, 2019.

Pan Li, Yanbang Wang, Hongwei Wang, and Jure Leskovec. Distance encoding: Design provably more powerful neural networks for graph representation learning. Advances in Neural Information Processing Systems, 33:4465-4478, 2020.

Jongin Lim, Daeho Um, Hyung Jin Chang, Dae Ung Jo, and Jin Young Choi. Class-attentive diffusion network for semi-supervised classification. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI, pp. 2-9, 2021.

Roderick JA Little and Donald B Rubin. Statistical analysis with missing data, volume 793. John Wiley & Sons, 2019.

Po-Ling Loh and Martin J Wainwright. High-dimensional regression with noisy and missing data: Provable guarantees with non-convexity. Advances in Neural Information Processing Systems, 24, 2011.

Jiaxuan You, Xiaobai Ma, Yi Ding, Mykel J Kochenderfer, and Jure Leskovec. Handling missing data with graph representation learning. Advances in Neural Information Processing Systems, 33:19075-19087, 2020.

Muhan Zhang and Yixin Chen. Link prediction based on graph neural networks. Advances in Neural Information Processing Systems, 31, 2018.

Muhan Zhang, Pan Li, Yinglong Xia, Kai Wang, and Long Jin. Labeling trick: A theory of using graph neural networks for multi-node representation learning. Advances in Neural Information Processing Systems, 34, 2021.

Jie Zhou, Ganqu Cui, Shengding Hu, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. Graph neural networks: A review of methods and applications. AI Open, 1:57-81, 2020.

Xiaojin Zhu and Zoubin Ghahramani. Learning from labeled and unlabeled data with label propagation. 2002.
Table 3: Hyper-parameters of PCFI used in node classification.

| Dataset | | Structural $r_m$=0.995 | 0.9 | 0.5 | Uniform $r_m$=0.995 | 0.9 | 0.5 |
|---|---|---|---|---|---|---|---|
| Photo | α | 0.2 | 0.4 | 0.6 | 0.2 | 0.2 | 0.5 |
| | β | 10^-4 | 10^-6 | 10^-2.5 | 10^-1.5 | 10^-6 | 10^-4.5 |
| Computers | α | 0.1 | 0.1 | 0.3 | 0.1 | 0.2 | 0.4 |
| | β | 10^-3.5 | 10^-4 | 10^-5.5 | 10^-2.5 | 10^-4 | 10^-5.5 |
| OGBN-Arxiv | α | 0.1 | 0.4 | 0.2 | 0.2 | 0.8 | 0.8 |
| | β | 10^-6 | 10^-6 | 10^-6 | 10^-4 | 10^-6 | 10^-2.5 |
Table 4: Hyper-parameters of PCFI used in link prediction.

| Missing type | | Cora | CiteSeer | PubMed | Photo | Computers |
|---|---|---|---|---|---|---|
| Structural | α | 0.9 | 0.9 | 0.2 | 0.1 | 0.1 |
| | β | 1 | 10^-1 | 1 | 10^-1 | 10^-1 |
| Uniform | α | 0.9 | 0.6 | 0.3 | 0.1 | 0.1 |
| | β | 1 | 1 | 1 | 1 | 1 |
Table 5: Dataset statistics.

| Dataset | # Nodes | # Edges | # Features | # Classes |
|---|---|---|---|---|
| Cora | 2,485 | 5,069 | 1,433 | 7 |
| CiteSeer | 2,120 | 3,679 | 3,703 | 6 |
| PubMed | 19,717 | 44,324 | 500 | 3 |
| Photo | 7,487 | 119,043 | 745 | 8 |
| Computers | 13,381 | 245,778 | 767 | 10 |
| OGBN-Arxiv | 169,343 | 1,166,243 | 128 | 40 |
Table 6: Ablation study of PCFI. row-ST, CID, and NIP denote a row-stochastic transition matrix, channel-wise inter-node diffusion, and node-wise inter-channel propagation, respectively.

| row-ST | CID | NIP | CiteSeer |
|---|---|---|---|
| ✗ | ✗ | ✗ | 59.76 ± 2.47 |
| ✓ | ✗ | ✗ | 64.80 ± 2.60 |
| ✓ | ✓ | ✗ | 65.40 ± 2.77 |
| ✓ | ✓ | ✓ | 66.18 ± 2.75 |
Table 7: Ablation study of PCFI. row-ST, CID, and NIP denote a row-stochastic transition matrix, channel-wise inter-node diffusion, and node-wise inter-channel propagation, respectively.

| row-ST | CID | NIP | Cora AUC (%) | Cora AP (%) | PubMed AUC (%) | PubMed AP (%) |
|---|---|---|---|---|---|---|
| ✗ | ✗ | ✗ | 83.74 ± 1.05 | 86.12 ± 1.04 | 77.05 ± 3.54 | 83.26 ± 2.24 |
| ✓ | ✗ | ✗ | 83.96 ± 1.02 | 86.14 ± 1.07 | 80.72 ± 1.28 | 84.99 ± 0.68 |
| ✓ | ✓ | ✗ | 84.16 ± 1.23 | 86.24 ± 1.24 | 82.88 ± 1.23 | 87.20 ± 0.40 |
| ✓ | ✓ | ✓ | 86.45 ± 1.15 | 88.26 ± 0.97 | 85.26 ± 0.36 | 88.52 ± 0.20 |
Table 8: Node classification accuracy (%) of PCFI at different missing rates for the two missing types. For each experiment, we report the mean with standard deviation (mean ± std). For each missing type, we report average accuracy with the relative drop (%p) compared to a full-feature setting (average (drop)).

| Missing type | Dataset | Full features | 50% missing | 90% missing | 99.5% missing |
|---|---|---|---|---|---|
| Structural missing | Cora | 82.35 ± 1.49 | 80.37 ± 1.55 | 78.88 ± 1.43 | 75.49 ± 2.10 |
| | CiteSeer | 70.98 ± 1.46 | 70.10 ± 2.02 | 69.76 ± 1.96 | 66.18 ± 2.75 |
| | PubMed | 77.49 ± 2.05 | 75.93 ± 1.44 | 76.12 ± 1.87 | 74.66 ± 2.26 |
| | Photo | 92.14 ± 0.62 | 91.81 ± 0.54 | 89.96 ± 0.68 | 87.70 ± 1.29 |
| | Computers | 85.67 ± 1.41 | 84.91 ± 0.88 | 82.40 ± 1.38 | 79.25 ± 1.19 |
| | OGBN-Arxiv | 72.28 ± 0.11 | 71.64 ± 0.19 | 70.39 ± 0.19 | 68.72 ± 0.28 |
| | Average | 80.15 | 79.13 (−1.02) | 77.92 (−2.23) | 75.33 (−4.82) |
| Uniform missing | Cora | 82.35 ± 1.49 | 81.28 ± 1.59 | 79.55 ± 1.32 | 78.53 ± 1.39 |
| | CiteSeer | 70.98 ± 1.46 | 71.68 ± 1.92 | 69.92 ± 1.68 | 69.40 ± 1.85 |
| | PubMed | 77.49 ± 2.05 | 76.88 ± 2.09 | 76.56 ± 2.08 | 76.44 ± 1.64 |
| | Photo | 92.14 ± 0.62 | 91.83 ± 0.58 | 89.84 ± 1.00 | 88.60 ± 1.30 |
| | Computers | 85.67 ± 1.41 | 84.96 ± 1.15 | 83.14 ± 0.72 | 81.79 ± 0.70 |
| | OGBN-Arxiv | 72.28 ± 0.11 | 71.78 ± 0.09 | 70.91 ± 0.17 | 70.19 ± 0.15 |
| | Average | 80.15 | 79.73 (−0.42) | 78.34 (−1.81) | 77.49 (−2.66) |
1 https://github.com/marblet/GCNmf
2 https://github.com/twitter-research/feature-propagation
3 https://github.com/twitter-research/feature-propagation
4 https://github.com/fmonti/mgcnn
ACKNOWLEDGMENTS

Algorithm 2: PyTorch-style pseudo-code for SPD-S

# get SPD-S of each node by computing the k-hop subgraph
# around the source nodes
# In this code, SPDS has shape [F, N]
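The body of Algorithm 2 is not shown above, so the following is a minimal reconstruction sketch matching its comments (a multi-source BFS per feature channel from the nodes whose value in that channel is known); it is not the authors' code:

import torch
from collections import deque

def compute_SPDS(edge_index, mask):
    # edge_index: [2, |E|]; mask: [N, F] boolean, True where features are known.
    # Returns SPDS of shape [F, N]: the shortest-path distance of each node
    # to its nearest source node, computed independently per channel.
    num_nodes, feat_dim = mask.shape
    adj = [[] for _ in range(num_nodes)]
    for u, v in edge_index.t().tolist():
        adj[u].append(v)
    spds = torch.full((feat_dim, num_nodes), float("inf"))
    for d in range(feat_dim):
        queue = deque()
        for v in torch.nonzero(mask[:, d]).flatten().tolist():
            spds[d, v] = 0.0  # source nodes are at distance 0
            queue.append(v)
        while queue:  # multi-source breadth-first search
            u = queue.popleft()
            for w in adj[u]:
                if spds[d, w] == float("inf"):
                    spds[d, w] = spds[d, u] + 1
                    queue.append(w)
    return spds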
A.5 NODE CLASSIFICATION ACCURACY ACCORDING TO MISSING RATE FOR EACH SETTING

Figure 9: Average accuracy (%) for each setting (missing type, dataset).
| [
"https://github.com/daehoum1/pcfi.",
"https://github.com/daehoum1/pcfi.",
"https://github.com/marblet/GCNmf",
"https://github.com/twitter-research/feature-propagation",
"https://github.com/twitter-research/feature-propagation",
"https://github.com/fmonti/mgcnn"
]
|
[
"Stecformer: Spatio-temporal Encoding Cascaded Transformer for Multivariate Long-term Time Series Forecasting",
"Stecformer: Spatio-temporal Encoding Cascaded Transformer for Multivariate Long-term Time Series Forecasting"
]
| [
"Zheng Sun \nAlibaba Group\n\n",
"Yi Wei \nAlibaba Group\n\n",
"Wenxiao Jia [email protected] \nAlibaba Group\n\n",
"Long Yu \nAlibaba Group\n\n"
]
| [
"Alibaba Group\n",
"Alibaba Group\n",
"Alibaba Group\n",
"Alibaba Group\n"
]
| []
Multivariate long-term time series forecasting is of great application across many domains, such as energy consumption and weather forecasting. With the development of transformer-based methods, the performance of multivariate long-term time series forecasting has been significantly improved; however, the study of spatial feature extraction in transformer-based models is rare, and the consistency of different prediction periods is unsatisfactory due to the large span. In this work, we propose a complete solution to address these problems in terms of feature extraction and target prediction. For extraction, we design an efficient spatio-temporal encoding extractor including a semi-adaptive graph to acquire sufficient spatio-temporal information. For prediction, we propose a Cascaded Decoding Predictor (CDP) to strengthen the correlation between different intervals, which can also be utilized as a generic component to improve the performance of transformer-based methods. The proposed method, termed Spatio-temporal Encoding Cascaded Transformer (Stecformer), achieves a notable gap over the baseline model and is comparable with the state-of-the-art performance of transformer-based methods on five benchmark datasets. We hope our attempt will serve as a regular configuration in multivariate long-term time series forecasting in the future.
"https://export.arxiv.org/pdf/2305.16370v1.pdf"
]
| 258,947,320 | 2305.16370 | 2bec5382993cfba2066c03940676de92917f4ef6 |
Stecformer: Spatio-temporal Encoding Cascaded Transformer for Multivariate Long-term Time Series Forecasting
Zheng Sun
Alibaba Group
Yi Wei
Alibaba Group
Wenxiao Jia [email protected]
Alibaba Group
Long Yu
Alibaba Group
Stecformer: Spatio-temporal Encoding Cascaded Transformer for Multivariate Long-term Time Series Forecasting
Spatio-temporal encoding extractor · Cascaded decoding predictor · Multivariate long-term time series
Multivariate long-term time series forecasting is of great application across many domains, such as energy consumption and weather forecasting. With the development of transformer-based methods, the performance of multivariate long-term time series forecasting has been significantly improved; however, the study of spatial feature extraction in transformer-based models is rare, and the consistency of different prediction periods is unsatisfactory due to the large span. In this work, we propose a complete solution to address these problems in terms of feature extraction and target prediction. For extraction, we design an efficient spatio-temporal encoding extractor including a semi-adaptive graph to acquire sufficient spatio-temporal information. For prediction, we propose a Cascaded Decoding Predictor (CDP) to strengthen the correlation between different intervals, which can also be utilized as a generic component to improve the performance of transformer-based methods. The proposed method, termed Spatio-temporal Encoding Cascaded Transformer (Stecformer), achieves a notable gap over the baseline model and is comparable with the state-of-the-art performance of transformer-based methods on five benchmark datasets. We hope our attempt will serve as a regular configuration in multivariate long-term time series forecasting in the future.
Introduction
The application of time series forecasting in energy consumption, retail management and disease propagation analysis has increased dramatically in recent years. Meanwhile, in the field of multivariate long-term time series forecasting (MLTSF), there is a growing demand for automatic prediction with deep learning tools [3,13,12,7], especially transformer-based methods [9]. Due to the high computational complexity and memory requirements of the transformer [15], many works are devoted to reducing the time and memory cost without sacrificing too much performance [8,6,20,16].

Despite the great achievements of transformer-based methods in MLTSF, they tend to ignore the spatial information contained in multivariate features and fail to ensure the consistency of different prediction periods. The drawbacks of existing transformer-based methods are therefore as follows: (i) Point-wise information is considered as a single entity in the process of constructing point-wise [20] or series-wise [6] attention matrices, which inevitably dismisses the spatial relations between features. (ii) The relations between points that are far away are not given special attention, which means the prediction results of different periods vary greatly over the long time span. As for the former, recent works [2,17,18] focus on utilizing a graph to build spatial connections between features. For instance, Cui et al. [2] propose a generic framework with multi-scale temporal graph neural networks, which models dynamic and cross-scale variable correlations simultaneously. As for the latter, some recent works [12,10] attempt to use multiple stacked blocks to predict different intervals. However, the consistency of different prediction intervals with transformer-based methods has not been studied, to the best of our knowledge. Therefore, there is a strong demand to rethink the implementation of the transformer structure in MLTSF.
In order to address the aforementioned obstacles, contributions have been made in this paper to spatial feature extraction and to the consistency of different prediction periods. For the encoding process, we propose a spatio-temporal encoding extractor that incorporates a vanilla self-attention module and an extra graph convolution module. The former captures temporal correlations between series points. The latter prompts the model to focus on the spatial details in point-wise features through a semi-adaptive graph structure. Different from the learned dynamic graphs in METRO [2], our graph convolution module combines a learned graph and a computed graph, called the semi-adaptive graph, to enhance the robustness of the model to abnormal data. For the decoding process, we try to eliminate the influence of the long time span on the prediction results. Under this motivation, we propose the Cascaded Decoding Predictor (CDP) to balance the prediction accuracy of different time periods. As shown in Figure 1, the proposed CDP consists of a series of concatenated decoders, each of which is responsible for a specified prediction interval. Each decoder is customized by means of intermediate supervision and the input of a pre-query. The tightly cascaded decoders can effectively alleviate the prediction volatility of the model in the long term. The main contributions of this paper can be summarized as follows:

- We propose an effective semi-adaptive graph structure called the spatio-temporal encoding extractor, which assists the transformer encoder to dig deeply into the spatial correlations inside the point-wise features.
- We analyze the discrepancies between short- and long-term prediction intervals. The designed Cascaded Decoding Predictor is customized to narrow the prediction gap and can be utilized as a generic component to improve the performance of different transformer-based models.
- We conduct extensive experiments over 5 benchmark datasets across many domains, such as energy, economics, weather and disease. With the above techniques, our Stecformer achieves a notable gap over the baseline model and is comparable with the state-of-the-art performances of transformer-based methods on public benchmark datasets.
-We propose an effective semi-adaptive graph structure called spatio-temporal encoding extractor, which assists the transformer encoder to dig deeply into the spatial correlations inside the point-wise features. -We analyze the discrepancies between short and long-term prediction intervals. The designed Cascaded Decoding Predictor is customized to narrow the prediction gap and can be utilized as a generic component to improve the performance of different transformer-based models. -We conduct extensive experiments over 5 benchmark datasets across many domains, such as energy, economics, weather and disease. With the above techniques, our Stecformer achieves a notable gap over the baseline model and is comparable with the state-of-the-art performances of transformerbased methods on public benchmark datasets.
Related Works
Transformer-based Model
As one of the most important attention mechanisms in deep learning, the transformer has demonstrated great superiority in MLTSF [1,11,21]. Observations in time series data are treated as points in the transformer, and the correlations between different points are built through self-attention and cross-attention mechanisms. However, quadratic computation complexity is inherent in such a point-wise setting, which has led to the emergence of many excellent works that reduce the time and memory cost of the transformer. Li et al. [8] propose LogTrans, which consists of several variants of the self-attention mechanism, such as Restart Attention, Local Attention and LogSparse Attention. The points involved in the attention matrix are selected according to distances of exponential length, and the selection is heuristic. Kitaev et al. [6] use locality-sensitive hashing attention to replace the global dot-product attention to reduce the complexity. Similarly, Zhou et al. [20] employ KL-divergence for top-k point selection, which accelerates the computation of the attention matrix. Both of these works utilize hand-designed metrics to construct a sparser attention matrix. Cirstea et al. [1] develop an efficient attention mechanism, namely Patch Attention, which ensures an overall linear complexity along with a triangular, multi-layer structure. Liu et al. [11] introduce Pyraformer to simultaneously capture temporal dependencies of different ranges in a compact multi-resolution fashion. Another emerging strategy is to discover a more suitable representation that replaces the original input sequence. In Wu et al. [16], the sequence is decomposed into trend-cyclical and seasonal representations, accounting for the mainstream forecast and seasonal fluctuations, respectively. Zhou et al. [21] focus on input sequence denoising and use the Fourier Transform to retain low-frequency information. In this work, we inherit some components of Autoformer [16], and the implementation of the transformer structure for multivariate long-term time series forecasting is reconsidered on this basis.
Graph-based Model
Graphs are usually used to establish spatial dependencies between different nodes, with the weights of edges indicating the closeness between nodes. In recent years, some works have applied graph structures to multivariate time series forecasting, especially to describing the relationships among variables. Guo et al. [4] propose a novel attention-based spatial-temporal graph convolution network (ASTGCN) to model the dynamic correlations of traffic data. Similarly, Yao et al. [19] introduce a flow gating mechanism to learn the dynamic similarity between locations. Wu et al. [17] propose a general graph neural network to automatically extract the uni-directed relations among variables through a graph learning module, into which external knowledge like variable attributes can be easily integrated. Beyond these, Cui et al. [2] develop a generic multi-scale temporal graph neural network framework that leverages both dynamic and cross-scale variable correlations, which shows that previous graph-based models can be interpreted as specific instances.
Methods
As depicted in Figure 1, the proposed Stecformer contains two key components: 1) A spatio-temporal encoding extractor to generate features that contain both spatial and temporal information; 2) A Cascaded Decoding Predictor (CDP) to predict the results of predetermined intervals. It is worth noting that all the cascaded decoders in CDP share the same features output from the encoding stage. At the end of this section, we give a concise description of the loss function of the proposed Stecformer.
Spatio-temporal Encoding Extractor
As shown in Figure 1, the spatio-temporal encoding extractor consists of several spatio-temporal encoder layers. In each layer, two parallel branches, a vanilla self-attention module and an extra graph convolution module, are attached to the shared input embedding. In this paper, we replace self-attention with the auto-correlation of Autoformer [16] unless otherwise specified. Therefore, the vanilla self-attention module captures temporal correlations between series points as Autoformer does. Inspired by the adaptive graph for skeleton-based action recognition [14] in computer vision, we redesign a customized graph convolution module and place it at a suitable position for transformer-based multivariate time series forecasting. As depicted in Figure 2, the proposed graph convolution module contains a semi-adaptive graph, which combines the learned graph G_l and the computed graph G_c to prompt the model to focus on the spatial details in point-wise features.
Semi-adaptive Graph
Consider V time series, denoted as x = [x_1(t), ..., x_V(t)], t = 1, 2, ..., T, where T stands for the number of points corresponding to the observations entered into the model, and V is the number of variables. In our graph convolution module, we rethink the point-wise variables on the spatial dimension and utilize a convolution operator f to expand the single channel (1) into multiple channels (C_in). Then, we apply the normalized embedded Gaussian function to get the similarity of two nodes in the computed graph G_c:
G_c(v_i, v_j) = e^{ϕ(v_i)^T φ(v_j)} / ∑_{j=1}^{V} e^{ϕ(v_i)^T φ(v_j)}    (1)
where v_i, v_j stand for the i-th and j-th node tensors after the transformation of f, and ϕ and φ are two convolution operators that change the number of channels from C_in to C_mid. Meanwhile, we randomly initialize a matrix A as the learned graph G_l and set the sum of G_c and G_l as our final semi-adaptive graph G_sa. Therefore, the whole process of the graph convolution module is described below:
z = W_h W_f x + W_g W_f x G_sa    (2)
where W_g, W_h, W_f are the weights of different convolution operators, and the shape transformations are omitted for simplicity. Finally, we acquire spatio-temporal feature maps by the weighted sum of the graph convolution module and the self-attention module. It is worth noting that the output of the spatio-temporal encoding extractor will run through the whole Cascaded Decoding Predictor, so it is necessary to get fully expressed feature maps.
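To make the module above concrete, the following is a minimal PyTorch sketch of the graph convolution branch with the semi-adaptive graph; the 1x1-convolution realization of the weight operators, the temporal mean pooling used to form node embeddings, and all layer names are our own assumptions rather than the authors' released code:

import torch
import torch.nn as nn

class SemiAdaptiveGraphConv(nn.Module):
    def __init__(self, num_vars, c_in, c_mid):
        super().__init__()
        self.f = nn.Conv2d(1, c_in, kernel_size=1)        # expand 1 channel to C_in
        self.phi = nn.Conv2d(c_in, c_mid, kernel_size=1)  # C_in -> C_mid embeddings
        self.psi = nn.Conv2d(c_in, c_mid, kernel_size=1)
        self.g = nn.Conv2d(c_in, 1, kernel_size=1)
        self.h = nn.Conv2d(c_in, 1, kernel_size=1)
        self.G_l = nn.Parameter(torch.randn(num_vars, num_vars))  # learned graph G_l

    def forward(self, x):
        # x: [B, T, V] point-wise series values treated as a single channel
        feat = self.f(x.unsqueeze(1))                     # [B, C_in, T, V]
        # Computed graph G_c via the normalized embedded Gaussian of Eq. (1),
        # pooling over time to obtain one embedding per variable (node).
        q = self.phi(feat).mean(dim=2).transpose(1, 2)    # [B, V, C_mid]
        k = self.psi(feat).mean(dim=2)                    # [B, C_mid, V]
        G_c = torch.softmax(torch.bmm(q, k), dim=-1)      # [B, V, V]
        G_sa = G_c + self.G_l                             # semi-adaptive graph
        # Eq. (2): z = W_h W_f x + W_g W_f x G_sa
        z = self.h(feat) + torch.einsum("bctv,bvw->bctw", self.g(feat), G_sa)
        return z.squeeze(1)                               # [B, T, V]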
Cascaded Decoding Predictor (CDP)
As shown in Figure 1, all the cascaded decoders in the decoding process are attached to the shared feature maps output from the encoding phase. Each decoder takes the output from the previous decoder as the query, and takes the output from the encoding phase as the key and the value. In the subsequent sections, we introduce the details of the proposed CDP.
Consistency of adjacent intervals. The whole interval to be predicted is decomposed into many small continuous sub-intervals, and the prediction accuracy of different intervals is narrowed by the cascaded decoders. Consider N decoders predicting the results of N intervals, denoted as D = {D_1, D_2, ..., D_N} → I = {I_1, I_2, ..., I_N}. In the first decoder D_1, we predict the results of intervals from I_1 to I_N; the result of I_1 has two destinations. On the one hand, it is used as an intermediate output of the model. On the other hand, it is used as a part of the start-token of the second decoder, and forms a new query with the results of the remaining intervals (from I_2 to I_N) input into the second decoder. We get the output of all intervals in the same way. With the help of the natural structure of query, key, and value in the transformer decoder, we can use the features extracted in the encoding stage (key and value) and the results of the previous interval (query) to predict the later interval. The whole process of CDP can be expressed as the following recursive formula:
q_i = Concat(X^i_token, Q_{i-1}[I_i : I_N])
Q_i = D_i(q_i, k, v)
y_i = Q_i[I_i]    (3)
where q_i, Q_i stand for the input query and the output of decoder D_i, and k, v are the outputs from the spatio-temporal encoding extractor. Q_i[I_i] means selecting the results of interval I_i in Q_i, and I_i : I_N represents the set of intervals from I_i to I_N. Notably, y_i is part of the output Q_i and performs intermediate supervision with the ground-truth labels. The decoder D_i simplifies the auto-correlation module and series decomposition block in Autoformer [16] as follows:
s^0_i, t^0_i = SeriesDecomp(q_i)
s^1_i, t^1_i = SeriesDecomp(Auto-Correlation(s^0_i) + s^0_i)
s^2_i, t^2_i = SeriesDecomp(Auto-Correlation(s^1_i, k, v) + s^1_i)
s^3_i, t^3_i = SeriesDecomp(FeedForward(s^2_i) + s^2_i)
Q_i = s^3_i + t^0_i + t^1_i + t^2_i + t^3_i    (4)
The motivation for such a cascaded paradigm is that results over a shorter period of time can be considered accurate enough, or even close to the truth. When predicting a later interval, the "real" results of the previous interval can be used to predict the next adjacent one. In the experimental section, this cascaded structure is shown to ensure consistent prediction accuracy across different intervals.
Forward start-token. The query of each decoder consists of a start-token and the predicted results of the previous decoder. In the first decoder D_1, we sample an earlier slice before the output sequence. Taking the prediction of 96 points as an example, we take the known 48 points before the target sequence as the start-token, and pad the remaining 96 points with 0 to get the first input query q_1 = Concat(X^1_token, X_0). When it comes to decoder D_i, we use a small number of real points (or none) and the predicted results y_1, y_2, ..., y_{i-1} of the previous decoders as the start-token X^i_token. The rest is padded with Q_{i-1}[I_i : I_N], which is closer to the real results of the intervals I_i to I_N than 0 padding. The reason for such a forward start-token setting is that the results output by the previous decoder have a higher confidence probability of being a guide for predicting the results of the subsequent one.
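A schematic PyTorch sketch of the Eq. (3) recursion together with the forward start-token; the decoder interface, the interval bookkeeping via Python slices, and the [batch, length, variables] tensor layout are simplifying assumptions:

import torch

def cascaded_decode(decoders, intervals, x_token, k, v, pad):
    # decoders: decoder modules D_1..D_N, each mapping (q, k, v) to an output
    #           assumed to be aligned with the full prediction window
    # intervals: slices I_1..I_N partitioning the prediction positions
    # x_token: known slice sampled before the output sequence (start-token)
    # k, v: shared feature maps from the spatio-temporal encoding extractor
    # pad: zero placeholder covering the full prediction length
    outputs = []
    q = torch.cat([x_token, pad], dim=1)        # q_1 = Concat(X^1_token, X_0)
    for i, (dec, itv) in enumerate(zip(decoders, intervals)):
        Q = dec(q, k, v)                        # Q_i = D_i(q_i, k, v)
        outputs.append(Q[:, itv])               # y_i = Q_i[I_i], supervised
        if i + 1 < len(decoders):
            # Forward start-token: predictions up to I_i guide the next decoder,
            # while the remaining positions are padded with Q_i[I_{i+1} : I_N].
            done = torch.cat(outputs, dim=1)
            rest = Q[:, intervals[i + 1].start:]
            q = torch.cat([x_token, done, rest], dim=1)
    return outputs                              # intermediate outputs y_1..y_N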
Loss Function
Following the spirit of transformer-based methods [16,21] in multivariate time series forecasting, we choose the MSE loss function to train all the modules in Stecformer jointly. The overall loss function is as follows:
L = ∑_{i}^{N} λ_i MSE(y_i, ŷ_i)    (5)
where λ_i, y_i, ŷ_i are the control parameter, the predicted labels, and the ground-truth labels for the interval I_i, respectively.
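Eq. (5) expressed in code; the example lambda schedule mentioned in the comment follows the step-down rule given later in the implementation details and is illustrative:

import torch.nn.functional as F

def cdp_loss(outputs, targets, lambdas):
    # outputs/targets: per-interval predictions and ground truths
    # lambdas: control parameters, e.g. [0.9, 1.0] with lambda_N = 1
    return sum(lam * F.mse_loss(y_hat, y)
               for lam, y_hat, y in zip(lambdas, outputs, targets))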
Experiments
Datasets and Evaluation Protocols
We conduct extensive experiments on five popular public datasets, covering energy, economics, weather and disease, to verify the effectiveness of the proposed method. The details of the experiment datasets are summarized as follows: 1) ETTm2 [20] contains 2 years of electric power deployment data. Each record consists of 6 power load features and the target value "oil temperature", collected every 15 minutes from July 2016 to July 2018. 2) ECL 1 contains 3 years of electricity consumption data (kWh) from July 2016 to July 2019. Each record covers 321 clients and is collected every 1 hour. 3) Exchange [7] contains the daily exchange rates of 8 countries from January 1990 to October 2010. 4) Weather 2 contains 21 meteorological indicators, such as humidity and air temperature, recorded every 10 minutes from January 2020 to December 2020 in Germany. 5) Illness 3 contains the number of influenza-like illness patients from the Centers for Disease Control and Prevention in the United States [16], recorded weekly from January 2002 to July 2020. All datasets are split into training, validation and test sets by the ratio of 7:1:2. The mean square error (MSE) and mean absolute error (MAE) are used as evaluation protocols.
Implementation Details
All the experiments are implemented in PyTorch v1.9.0 and conducted on a workstation with 2 Nvidia Tesla M40 12GB GPUs. All models are optimized by the ADAM [5] algorithm with batch size 16. The learning rate is set to 1e-4. An early stopping counter is employed to stop the training process after 3 epochs if no loss degradation on the validation set is observed. With respect to the spatio-temporal encoding extractor, we use only 1 spatio-temporal encoder to extract the features, and the weight of the graph convolution module is empirically set to 0.5. In the Cascaded Decoding Predictor, we set up 4 decoders and predict the intermediate results every 2 decoders in all experiments, which prevents too many decoders from leading to overfitting. The lengths of the prediction intervals are 1/4 and 3/4 of the output sequence, respectively. The control parameters in the loss function take a step-down reduction of 0.1 at a time from the furthest interval to the nearest one, and the parameter λ_N is set to 1.
Comparisons with Prior Arts
We compare our proposed Stecformer with several state-of-the-art methods on the public ETTm2, ECL, Exchange, Weather and ILI datasets. As reported in Table 1, our Stecformer achieves the best performance on the five benchmark datasets at almost all horizons. The only flaw is the MAE metric at the 720 horizon on the ETTm2 dataset. We suspect that this is because the ETTm2 dataset is noisy, and the large number of parameters may cause overfitting. This also explains why our Stecformer does not improve much over other models on ETTm2.
Ablation Studies
We conduct a series of experiments to evaluate the effectiveness of the different modules of our approach on the Exchange dataset, and verify the universality and consistency of the proposed CDP on the ECL and Weather datasets.
Effectiveness of different modules. The baseline in the ablation studies is formed by replacing the self-attention and cross-attention modules with the auto-correlation and series decomposition blocks of Autoformer [16]. The results of Exp 2 reported in Table 2 show that the performance is improved by an average 15.6% MSE reduction with the help of CDP. With GCM alone, the MSE reduction of Exp 3 is 13.6% compared to the baseline. This shows that our GCM helps the model extract more comprehensive features. When the baseline model is equipped with all components in Exp 5, including CDP and GCM, the MSE reduction finally reaches 20.0% over all horizons. Notably, the results of Exp 3 and Exp 4 imply the necessity of the semi-adaptive graph, which is superior to the common graph structure.
Universality and consistency of CDP. We conduct several experiments on the ECL and Weather datasets to verify the universality and consistency of the proposed CDP. We select several representative transformer-based works in the multivariate time series forecasting field, including a variant of Informer [20], Autoformer [16] and FEDformer [21]. The variant of Informer replaces all the full self-attention modules with ProbSparse self-attention [20] in the original Informer. As reported in Table 3, the performance of each baseline model is improved with the help of CDP. Beyond these, we investigate the mechanism of CDP by dividing the forecast period into six equal parts and calculating the MSE metrics of Informer over the six sub-periods. As shown in Figure 3, the metrics of CDP-based models change more smoothly over different periods, unlike the sudden jitter of the CDP-free models. On the ECL dataset, the metrics of the model with CDP over the periods are close to a straight line, which shows that CDP can prompt the model to acquire consistent results over different periods. Especially, in Figure 3(b) and (d), CDP forces the model to sacrifice short-term prediction accuracy to ensure more accurate long-term results.
Conclusion
In this paper, we present Stecformer, a new approach for multivariate long-term time series forecasting, which contains two effective components: a semi-adaptive-graph-based extractor for generating fully expressed spatio-temporal feature maps, and a cascaded-decoder-based predictor to narrow the prediction gaps between different time periods. Our Stecformer achieves a notable gap over the baseline model and is comparable with state-of-the-art transformer-based methods on the public datasets. We further validate the effectiveness of the individual components of our approach. Especially, the proposed Cascaded Decoding Predictor can be applied to various transformer-based methods to ensure higher accuracy and consistency across different prediction periods.
Fig. 1. An overview of the proposed Stecformer.

Fig. 2. Graph convolution module, including a semi-adaptive graph.
Table 1. Multivariate long-term time series forecasting results on five benchmark datasets. * denotes the Fourier version of FEDformer. The input length is fixed to 96 and the prediction lengths are fixed to 96, 192, 336, and 720, respectively (for the ILI dataset, we set the input length to 36 and the prediction lengths to 24, 36, 48, 60).

Methods | Stecformer | FEDformer* | Autoformer | Informer  | LogTrans  | Reformer
Metric  | MSE  MAE   | MSE  MAE   | MSE  MAE   | MSE  MAE  | MSE  MAE  | MSE  MAE
Table 2. Ablation studies of several modules in Stecformer on the Exchange dataset. GCM stands for graph convolution module; GCM* means the common graph without the learned graph G_l.

Exp ID | baseline | CDP | GCM | GCM* | Metric | 96    | 192   | 336   | 720
Exp 1  | ✓        |     |     |      | MSE    | 0.180 | 0.273 | 0.481 | 1.213
       |          |     |     |      | MAE    | 0.311 | 0.383 | 0.517 | 0.861
Exp 2  | ✓        | ✓   |     |      | MSE    | 0.129 | 0.233 | 0.420 | 1.132
       |          |     |     |      | MAE    | 0.259 | 0.350 | 0.478 | 0.824
Exp 3  | ✓        |     | ✓   |      | MSE    | 0.130 | 0.242 | 0.440 | 1.133
       |          |     |     |      | MAE    | 0.256 | 0.353 | 0.491 | 0.826
Exp 4  | ✓        |     |     | ✓    | MSE    | 0.152 | 0.279 | 0.449 | 1.143
       |          |     |     |      | MAE    | 0.282 | 0.386 | 0.492 | 0.828
Exp 5  | ✓        | ✓   | ✓   |      | MSE    | 0.113 | 0.229 | 0.399 | 1.095
       |          |     |     |      | MAE    | 0.243 | 0.349 | 0.463 | 0.804
Table 3. Effectiveness of the Cascaded Decoding Predictor on different transformer-based methods. * denotes the Fourier version of FEDformer.

Methods    | Metric | ECL 96 | ECL 192 | ECL 336 | ECL 720 | Weather 96 | Weather 192 | Weather 336 | Weather 720
Informer   | MSE    | 0.345  | 0.367   | 0.376   | 0.396   | 0.398      | 0.520       | 0.684       | 1.159
           | MAE    | 0.423  | 0.443   | 0.453   | 0.459   | 0.436      | 0.510       | 0.582       | 0.790
+CDP       | MSE    | 0.303  | 0.321   | 0.327   | 0.352   | 0.341      | 0.487       | 0.641       | 1.022
           | MAE    | 0.387  | 0.408   | 0.412   | 0.426   | 0.404      | 0.497       | 0.570       | 0.745
Autoformer | MSE    | 0.201  | 0.222   | 0.231   | 0.254   | 0.266      | 0.307       | 0.359       | 0.419
           | MAE    | 0.317  | 0.334   | 0.338   | 0.361   | 0.336      | 0.367       | 0.395       | 0.428
+CDP       | MSE    | 0.191  | 0.210   | 0.215   | 0.248   | 0.256      | 0.283       | 0.344       | 0.410
           | MAE    | 0.305  | 0.325   | 0.329   | 0.354   | 0.336      | 0.348       | 0.391       | 0.419
FEDformer* | MSE    | 0.193  | 0.201   | 0.214   | 0.246   | 0.217      | 0.276       | 0.339       | 0.403
           | MAE    | 0.308  | 0.315   | 0.329   | 0.355   | 0.296      | 0.336       | 0.380       | 0.428
+CDP       | MSE    | 0.187  | 0.200   | 0.210   | 0.238   | 0.200      | 0.265       | 0.319       | 0.389
           | MAE    | 0.302  | 0.314   | 0.326   | 0.349   | 0.279      | 0.323       | 0.365       | 0.408

Fig. 3. Consistency of Informer on different time periods. We demonstrate the consistency over six periods of the same length based on the ECL and Weather datasets. The solid lines indicate that CDP exists, while the dotted lines indicate the opposite. The x-axis is the different periods, and the y-axis is the corresponding MSE metrics. T stands for the prediction length.
1 https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014
2 https://www.bgc-jena.mpg.de/wetter/
3 https://gis.cdc.gov/grasp/fluview/fluportaldashboard.html
1. Cirstea, R.G., Guo, C., Yang, B., Kieu, T., Dong, X., Pan, S.: Triformer: Triangular, variable-specific attentions for long sequence multivariate time series forecasting-full version. arXiv preprint arXiv:2204.13767 (2022)
2. Cui, Y., Zheng, K., Cui, D., Xie, J., Deng, L., Huang, F., Zhou, X.: Metro: a generic graph neural network framework for multivariate time series forecasting. Proceedings of the VLDB Endowment 15(2), 224-236 (2021)
3. Deng, J., Chen, X., Jiang, R., Song, X., Tsang, I.W.: St-norm: Spatial and temporal normalization for multi-variate time series forecasting. In: Proceedings of the 27th ACM SIGKDD conference on knowledge discovery & data mining. pp. 269-278 (2021)
4. Guo, S., Lin, Y., Feng, N., Song, C., Wan, H.: Attention based spatial-temporal graph convolutional networks for traffic flow forecasting. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33, pp. 922-929 (2019)
5. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
6. Kitaev, N., Kaiser, L., Levskaya, A.: Reformer: The efficient transformer. arXiv preprint arXiv:2001.04451 (2020)
7. Lai, G., Chang, W.C., Yang, Y., Liu, H.: Modeling long- and short-term temporal patterns with deep neural networks. In: The 41st international ACM SIGIR conference on research & development in information retrieval. pp. 95-104 (2018)
8. Li, S., Jin, X., Xuan, Y., Zhou, X., Chen, W., Wang, Y.X., Yan, X.: Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting. Advances in neural information processing systems 32 (2019)
9. Lim, B., Arık, S.Ö., Loeff, N., Pfister, T.: Temporal fusion transformers for interpretable multi-horizon time series forecasting. International Journal of Forecasting 37(4), 1748-1764 (2021)
10. Liu, M., Zeng, A., Xu, Z., Lai, Q., Xu, Q.: Time series is a special sequence: Forecasting with sample convolution and interaction. arXiv preprint arXiv:2106.09305 (2021)
11. Liu, S., Yu, H., Liao, C., Li, J., Lin, W., Liu, A.X., Dustdar, S.: Pyraformer: Low-complexity pyramidal attention for long-range time series modeling and forecasting. In: International Conference on Learning Representations (2021)
12. Oreshkin, B.N., Carpov, D., Chapados, N., Bengio, Y.: N-beats: Neural basis expansion analysis for interpretable time series forecasting. arXiv preprint arXiv:1905.10437 (2019)
13. Salinas, D., Flunkert, V., Gasthaus, J., Januschowski, T.: Deepar: Probabilistic forecasting with autoregressive recurrent networks. International Journal of Forecasting 36(3), 1181-1191 (2020)
14. Shi, L., Zhang, Y., Cheng, J., Lu, H.: Two-stream adaptive graph convolutional networks for skeleton-based action recognition. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. pp. 12026-12035 (2019)
15. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017)
16. Wu, H., Xu, J., Wang, J., Long, M.: Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. Advances in Neural Information Processing Systems 34, 22419-22430 (2021)
17. Wu, Z., Pan, S., Long, G., Jiang, J., Chang, X., Zhang, C.: Connecting the dots: Multivariate time series forecasting with graph neural networks. In: Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining. pp. 753-763 (2020)
18. Xu, H., Huang, Y., Duan, Z., Feng, J., Song, P.: Multivariate time series forecasting based on causal inference with transfer entropy and graph neural network. arXiv preprint arXiv:2005.01185 (2020)
19. Yao, H., Tang, X., Wei, H., Zheng, G., Li, Z.: Revisiting spatial-temporal similarity: A deep learning framework for traffic prediction. In: Proceedings of the AAAI conference on artificial intelligence. vol. 33, pp. 5668-5675 (2019)
20. Zhou, H., Zhang, S., Peng, J., Zhang, S., Li, J., Xiong, H., Zhang, W.: Informer: Beyond efficient transformer for long sequence time-series forecasting. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 35, pp. 11106-11115 (2021)
21. Zhou, T., Ma, Z., Wen, Q., Wang, X., Sun, L., Jin, R.: Fedformer: Frequency enhanced decomposed transformer for long-term series forecasting. arXiv preprint arXiv:2201.12740 (2022)
| []
|
[
"RAMP: Retrieval and Attribute-Marking Enhanced Prompting for Attribute-Controlled Translation",
"RAMP: Retrieval and Attribute-Marking Enhanced Prompting for Attribute-Controlled Translation"
]
| [
"Gabriele Sarti [email protected] \nUniversity of Groningen ‡ AWS AI Labs\n\n",
"PhuMon Htut \nUniversity of Groningen ‡ AWS AI Labs\n\n",
"Xing Niu \nUniversity of Groningen ‡ AWS AI Labs\n\n",
"Benjamin Hsu \nUniversity of Groningen ‡ AWS AI Labs\n\n",
"Anna Currey \nUniversity of Groningen ‡ AWS AI Labs\n\n",
"Georgiana Dinu \nUniversity of Groningen ‡ AWS AI Labs\n\n",
"Maria Nadejde [email protected] \nUniversity of Groningen ‡ AWS AI Labs\n\n"
]
| [
"University of Groningen ‡ AWS AI Labs\n",
"University of Groningen ‡ AWS AI Labs\n",
"University of Groningen ‡ AWS AI Labs\n",
"University of Groningen ‡ AWS AI Labs\n",
"University of Groningen ‡ AWS AI Labs\n",
"University of Groningen ‡ AWS AI Labs\n",
"University of Groningen ‡ AWS AI Labs\n"
]
| []
| Attribute-controlled translation (ACT) is a subtask of machine translation that involves controlling stylistic or linguistic attributes (like formality and gender) of translation outputs. While ACT has garnered attention in recent years due to its usefulness in real-world applications, progress in the task is currently limited by dataset availability, since most prior approaches rely on supervised methods. To address this limitation, we propose Retrieval and Attribute-Marking enhanced Prompting (RAMP), which leverages large multilingual language models to perform ACT in few-shot and zero-shot settings. RAMP improves generation accuracy over the standard prompting approach by (1) incorporating a semantic similarity retrieval component for selecting similar in-context examples, and (2) marking in-context examples with attribute annotations. Our comprehensive experiments show that RAMP is a viable approach in both zero-shot and few-shot settings. | 10.48550/arxiv.2305.17131 | [
"https://export.arxiv.org/pdf/2305.17131v1.pdf"
]
| 258,947,531 | 2305.17131 | abcda41679759e17a7bd89ddf1ed069391f9fbea |
RAMP: Retrieval and Attribute-Marking Enhanced Prompting for Attribute-Controlled Translation
Gabriele Sarti [email protected]
University of Groningen ‡ AWS AI Labs
PhuMon Htut
University of Groningen ‡ AWS AI Labs
Xing Niu
University of Groningen ‡ AWS AI Labs
Benjamin Hsu
University of Groningen ‡ AWS AI Labs
Anna Currey
University of Groningen ‡ AWS AI Labs
Georgiana Dinu
University of Groningen ‡ AWS AI Labs
Maria Nadejde [email protected]
University of Groningen ‡ AWS AI Labs
RAMP: Retrieval and Attribute-Marking Enhanced Prompting for Attribute-Controlled Translation
Attribute-controlled translation (ACT) is a subtask of machine translation that involves controlling stylistic or linguistic attributes (like formality and gender) of translation outputs. While ACT has garnered attention in recent years due to its usefulness in real-world applications, progress in the task is currently limited by dataset availability, since most prior approaches rely on supervised methods. To address this limitation, we propose Retrieval and Attribute-Marking enhanced Prompting (RAMP), which leverages large multilingual language models to perform ACT in few-shot and zero-shot settings. RAMP improves generation accuracy over the standard prompting approach by (1) incorporating a semantic similarity retrieval component for selecting similar in-context examples, and (2) marking in-context examples with attribute annotations. Our comprehensive experiments show that RAMP is a viable approach in both zero-shot and few-shot settings.
Introduction
Text style transfer (TST) is a task that aims to control stylistic attributes of an input text without affecting its semantic content (Jin et al., 2022). Research in TST has largely focused on English, thanks to the availability of large monolingual English datasets covering stylistic attributes like formality and simplicity (Rao and Tetreault 2018, Zhu et al. 2010, inter alia). In recent years, however, multilingual and cross-lingual applications of TST have seen a steady gain in popularity (Briakou et al., 2021; Garcia et al., 2021; Krishna et al., 2022). A notable instance of cross-lingual TST is attribute-controlled translation (ACT), in which attribute 1 conditioning is performed alongside machine translation (MT) to ensure that translations are not only correct but match user-specified preferences, such as formality/honorifics (Sennrich et al., 2016; Niu et al., 2017; Michel and Neubig, 2018; Niu and Carpuat, 2020; Nadejde et al., 2022; Wang et al., 2022), gender (Rabinovich et al., 2017; Vanmassenhove et al., 2018; Saunders and Byrne, 2020), and length (Lakew et al., 2019; Schioppa et al., 2021). ACT is especially important for sectors like customer service and business communication, where stylistic differences can have an impact on user perception (e.g., misgendering customers or speaking to them in an inappropriately informal tone can be offensive or disconcerting). Table 1 gives examples of ACT for formality and gender. Most prior work on ACT relies on a supervised adaptation component that conditions the generative model on the selected attribute. However, few annotated ACT datasets are available, and they generally cover only a limited set of languages and attributes. Thus, enabling few-shot or zero-shot ACT would facilitate applying attribute control to less-resourced attributes and languages.

* Work conducted during an internship at Amazon.
1 In this paper, we prefer the term attribute rather than style, since not all the attributes addressed here (e.g., gender) can be considered styles.
In this paper, we introduce a new approach for ACT: Retrieval and Attribute-Marking enhanced Prompting (RAMP).

[Figure 1 and Table 1 examples:]
EN: I wish you welcome and enjoy your stay.
IT formal: Le do il benvenuto e si goda il soggiorno.
Here is a sentence: {You will always be welcome here.} Here is its Spanish translation written in a formal style: {Siempre será bienvenido aquí.} The translated sentence conveys a formal style by using words such as 'será'.
----
Here is a sentence: {I wish you welcome and enjoy your stay.} Here is its Italian translation written in a formal style: {Le do il benvenuto e si goda il soggiorno.} The translated sentence conveys a formal style by using words such as 'Le', 'si goda'.
Method
Preliminaries
Attribute-Controlled Translation. ACT takes two inputs, a sentence x and a desired target attribute a ∈ A (with A being the space of attributes), and outputs a translation y that complies with the specified attribute. It can be formulated as a function f : (x, a) → y. In our experiments, we use attribute values provided by the COCOA-MT formality translation dataset and the MT-GENEVAL gender translation dataset, i.e., A = {formal, informal} or {female, male}. 2

Prompting. In the prompting paradigm for decoder-only LLMs, inputs are given as decoding prefixes to the model, usually combined with natural language instructions for output generation. In style-controlled translation, we formulate the prompt for target language l and attribute a using the text "Here is a sentence: {x} Here is its l translation written in a a style:" to produce the output y. 3 In the few-shot setting, we provide a sequence of k labeled in-context examples before the unlabeled input, which can be formulated as a function f : {(x_1, l_1, a, y_1), ..., (x_{k+1}, l_{k+1}, a)} → y_{k+1}.

2 See Section 5 for ethical considerations.
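A minimal sketch of the prompt construction just described; the template strings follow the paper, while the function names and the list-based example format are our own:

def format_block(src, tgt_lang, attribute, tgt=None):
    # One (x, l, a, y) block; omitting tgt yields the inference suffix.
    prompt = (f"Here is a sentence: {{{src}}} "
              f"Here is its {tgt_lang} translation written in a {attribute} style:")
    return f"{prompt} {{{tgt}}}" if tgt is not None else prompt

def build_few_shot_prompt(examples, test_src, tgt_lang, attribute):
    # examples: k labeled in-context (source, target) pairs for the attribute
    blocks = [format_block(s, tgt_lang, attribute, t) for s, t in examples]
    blocks.append(format_block(test_src, tgt_lang, attribute))
    return "\n".join(blocks)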
Table 2: Language coverage over AR, ES, FR, HI, PT, DE, IT, JA, RU, NL.

COCOA-MT    ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
MT-GENEVAL  ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
XGLM        ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
BLOOM       ✓ ✓ ✓ ✓ ✓
Cross-Lingual Prompting
The similarity retrieval component of RAMP requires a large pool D_A from which to find appropriate in-context examples for prompting. Low-resource attributes or language pairs may have insufficient or no annotated data from which to retrieve such examples. To mitigate this issue, we introduce cross-lingual prompting, in which the target side of the in-context examples differs from the desired target language of the translation task. As demonstrated in Figure 1, we study whether the system can leverage examples in one language (e.g., attribute indicators in Spanish) to produce the same attribute in another (e.g., French). Two main features of our RAMP model allow us to perform cross-lingual prompting: (1) the use of multilingual LLMs, and (2) the example retrieval step, which is done on the source language only.
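A minimal sketch of RAMP's similarity retrieval over the example pool D_A, using the sentence-transformers package to load the all-MiniLM-L6-v2 encoder named in the paper; the pool format, helper names, and the marking-sentence wiring are our own assumptions:

from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def retrieve_examples(test_src, pool, attribute, k=16):
    # pool: list of dicts with "src", "tgt" and gold attribute "spans"
    corpus_emb = encoder.encode([ex["src"] for ex in pool], convert_to_tensor=True)
    query_emb = encoder.encode(test_src, convert_to_tensor=True)
    hits = util.semantic_search(query_emb, corpus_emb, top_k=k)[0]
    examples = []
    for hit in hits:  # already sorted by descending cosine similarity
        ex = pool[hit["corpus_id"]]
        spans = ", ".join(f"'{s}'" for s in ex["spans"])
        marking = (f"The translated sentence conveys a {attribute} style "
                   f"by using words such as {spans}.")
        examples.append((ex["src"], ex["tgt"], marking))
    return examples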
Experiments
Datasets
We experiment on two multilingual ACT datasets:

• COCOA-MT (Nadejde et al., 2022), a formality translation dataset.
• MT-GENEVAL, a gender translation dataset instead explicitly controlling target gender.

Both datasets have gold annotations for attribute-marked target spans, and both cover translation from English into multiple diverse target languages. We list their target languages in Table 2.
Large Language Models (LLMs)
We select three massively multilingual decoder-only LLMs for the prompting experiments: XGLM (Lin et al., 2022), BLOOM (BigScience, 2022) and GPT-NEOX (Black et al., 2022). The selected models span three orders of magnitude in terms of number of parameters and differ in the languages that they cover (see Table 2). Appendix D motivates our choice of models in more detail. GPT-3 is not included because it is not freely accessible and is not intended for multilingual use cases.
Baseline
Attribute tagging is a standard method for ACT, so we include a baseline following the approach and configuration used by Nadejde et al. (2022): a transformer MT model (Vaswani et al., 2017) pre-trained on public parallel data and further finetuned on contrastive training pairs with attribute tags (from either COCOA-MT or MT-GENEVAL). We refer to this as adapted MT.
Evaluation Metrics
We measure translation quality with BLEU (Papineni et al., 2002) and COMET (Rei et al., 2020). For attribute accuracy, we use both (1) the lexical matching metrics provided with COCOA-MT and MT-GENEVAL (Lexical-Accuracy) and (2) sentence encoders trained on contrastive examples (Sentential-Accuracy). For (2), we train multilingual classifiers on top of the mDeBERTa-v3 encoder (He et al., 2021). High-performance pretrained classifiers have been shown to produce attribute accuracy estimates closer to human judgments for style transfer (Lai et al., 2022). Table 3 presents the accuracy of the classification models on the test sets of their respective datasets, averaged over all languages. 4
Table 4: BLEU, COMET, Lexical- and Sentential-Accuracy of selected LLMs using 16 same-language in-context examples on two tasks, alongside adapted MT models. Scores are aggregated across seen languages (w.r.t. BLOOM pre-training) and both attributes for each task. (Decomposed results are included in Tables 6-9.)

         | COCOA-MT                 | MT-GENEVAL
         | BLEU COMET L-Acc S-Acc   | BLEU COMET L-Acc S-Acc
Unlike lexical accuracy, the multilingual attribute classifier does not penalize text generated in incorrect languages. Thus, in cross-lingual prompting experiments, we include a step of language detection 5 so that generated sentences not in the requested target language are considered incorrect.
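A sketch of this filtering step; langdetect is the package cited in footnote 5, while the accuracy wrapper and the classifier callback are hypothetical:

from langdetect import detect
from langdetect.lang_detect_exception import LangDetectException

def sentential_accuracy(hypotheses, target_lang, classify_fn, target_attr):
    # A hypothesis counts as correct only if it is in the requested target
    # language and the attribute classifier predicts the requested attribute.
    correct = 0
    for hyp in hypotheses:
        try:
            in_lang = detect(hyp) == target_lang
        except LangDetectException:
            in_lang = False
        if in_lang and classify_fn(hyp) == target_attr:
            correct += 1
    return correct / len(hypotheses)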
Results: Same-Language Prompting
We first evaluate the effectiveness of RAMP for formality- and gender-controlled translation where the language pair used for in-context examples is the same as the one used in the prompt candidate (e.g., EN→ES formality-controlled translation using EN→ES in-context examples). We test XGLM 7.5B and BLOOM 175B with 16 in-context examples on both tasks. 6 Table 4 presents our results alongside the adapted MT baseline. The base model uses in-context examples that are sampled randomly from the pool of labeled examples. We also include an ablation that adds attribute marking only on top of base, without similarity retrieval (+mark).
Using just attribute marking consistently improves attribute accuracy of the generated text, but it leads to degradation of COMET on COCOA-MT. The complete RAMP with similarity retrieval not only compensates for the COMET degradation but also improves quality and attribute metrics across the board, especially for the high-capacity BLOOM 175B model.
Adapted MT outperforms BLOOM 175B on MT-GENEVAL in all metrics, but underperforms it on COCOA-MT. This suggests that it is challenging to do fine-grained comparison between LLMs and standard MT systems as they might have different domain coverage. BLOOM 175B consistently outperforms XGLM 7.5B in both generic translation quality and attribute control accuracy, so we proceed with using BLOOM 175B in the cross-lingual prompting setting.

5 https://pypi.org/project/langdetect/
6 We proceed with this setting based on a preliminary evaluation of 3 LLMs and 4 numbers of examples in Appendix E.
Results: Cross-Lingual Prompting
We have demonstrated the effectiveness of selecting similar same-language examples to build the prompt, echoing contemporary work (Liu et al., 2022; Agrawal et al., 2022). In this section, we evaluate the cross-lingual prompting option, i.e., retrieving in-context examples from other target languages besides the desired language of translation. We test this zero-shot setting using the leave-one-out strategy, and the results of tested language pairs are averaged. 7 Table 4 presents our results using BLOOM 175B. On both test sets, compared to the baseline, we observe improved attribute accuracy and comparable or better generic translation quality when using RAMP with cross-lingual prompting.
We do observe translation quality degradation with RAMP on some target languages of COCOA-MT, e.g., ES. Manual analysis shows that repeated inaccurate retrieval results could lead to hallucinations. 8 For example, RAMP retrieves multiple sentences containing "million" for the input "If you got it why not? He is worth over 20 billion dollars after all.". This results in mistranslation of billion to million (millionario): "Si lo tienes, ¿por qué no? Es millonario después de todo.". We give detailed examples in Appendix H.
We introduced the new RAMP in-context learning approach to leverage attribute annotations and similar same-language or cross-lingual examples for better prompting quality. We demonstrated its effectiveness with multilingual LLMs for both formalitycontrolled and gender-controlled translation. We use gold annotations for attribute marking, but we leave unsupervised automatic attribute span extraction as future work.
Limitations
• We currently rely on gold annotations for attribute marking, which are not always available depending on the dataset. However, RAMP could be easily extended to unsupervised settings through LLM feature attribution (Sarti et al., 2023), i.e., extracting salient tokens driving the attribute prediction. This approach builds upon recent techniques in unsupervised language generation metrics (Fomicheva et al., 2021, 2022; Leiter et al., 2022). We leave an empirical evaluation of its effectiveness to future work.
• Besides the choice of in-context examples, prompting is also sensitive to their ordering (Lu et al., 2022) and the design of the template (Jiang et al., 2020). We refrain from tuning example orders and templates to avoid introducing too many variables.
• Multilingual LLMs perform competitive MT out of the box for languages seen during their pre-training. However, we noticed that BLOOM 175B produces better EN-IT translations than XGLM 7.5B even though IT is not listed as a training language of BLOOM. This could possibly be due to typological similarity between Italian and the Romance languages included in BLOOM training. We leave experiments of unseen languages as future work.
• Multilingual LLMs like the ones used in this paper require larger GPU resources for inference than standard bilingual MT systems.
• One test set we use (MT-GENEVAL) provides only two gender values (female and male), but we do not intend to imply that other genders do not exist.
References
spanning 46 natural languages (and 13 programming languages). However, many of the test set languages are not part of its pre-training corpus (see Table 2). We evaluate two variants of the model (7.1B and 175B parameters) to assess how it is affected by a massive scaling in model parameters. The larger variant has a parameter count comparable to that of GPT-3, while it is presently the largest publicly available multilingual LLM. GPT-NEOX (Black et al., 2022) is a 20B-parameter model trained on The Pile (Gao et al., 2021), a large English-centric corpus covering a broad range of domains. While the model saw mainly English data during pre-training and as such is not intended for multilingual usage, it exhibits interesting generalization performance for many of our target languages.
E Preliminary Evaluation of Same-Language Prompting
We conduct preliminary evaluations aimed at reducing the number of experimental settings. We perform formality-controlled translation using COCOA-MT, and evaluate LLMs by varying the number of in-context examples (i.e., 4-8-16-32, selected based on the feasible context length 10 ). Figure 2 presents results averaged across all four languages seen by BLOOM during its pretraining. 11 Observations:
• RAMP generally outperforms base prompting (i.e., random in-context examples and no attribute marking) across most LLMs and example settings for both BLEU and formality accuracy.
• BLEU and formality accuracy improve with increased model size and with the number of examples, until this number reaches 16.
Based on these results we move forward with the XGLM 7.5B and BLOOM 175B models and 16 examples.
F Detailed Scores of Aggregated Results
• Table 5: Detailed scores of same-language prompting on COCOA-MT (preliminary evaluation). 12
• Table 6: Decomposed results of same-language prompting on COCOA-MT (full evaluation).
• Table 7: Decomposed results of same-language prompting on MT-GENEVAL (full evaluation).
• Table 8: Decomposed results of cross-lingual prompting on COCOA-MT.
• Table 9: Decomposed results of cross-lingual prompting on MT-GENEVAL.
10 BLOOM 175B encountered out-of-memory errors with 32 in-context examples using eight 40GB A100 GPUs.
11 Detailed scores are included in Table 5.
12 We set the maximum output length to 50 tokens in the preliminary evaluation, while we use 100 tokens in the main evaluation. Early truncating leads to slightly lower scores in Table 5 than in Table 4.
G Amended Details of Cross-Lingual Prompting
We test the zero-shot setting using the leave-one-out strategy, i.e., we retrieve in-context examples from every language except the desired language of translation. We ensure that we retrieve an equal number of examples from all languages: the number of examples retrieved from each language is the total desired number of in-context examples divided by the number of training languages. In COCOA-MT, we retrieve 14 in-context examples from 7 languages. In MT-GENEVAL, we retrieve 8 in-context examples from 8 languages. We reduced the number of in-context examples in this setting to avoid out-of-memory errors with BLOOM 175B.
H Error Analysis of Cross-Lingual Prompting
Table 1: Examples of attribute triplets from COCOA-MT and MT-GENEVAL. Attribute markers in the attribute-controlled translations are underlined.

EN: After retiring from teaching, Cook became a novelist.
Feminine Ref (NL): Nadat ze stopte met lesgeven, werd Cook schrijfster.
Masculine Ref (NL): Nadat hij stopte met lesgeven, werd Cook schrijver.
In this work we propose Retrieval and Attribute-Marking enhanced Prompting (RAMP). Recent studies have shown that large language models (LLMs) can perform MT out of the box using the prompting paradigm (Brown et al., 2020; Lin et al., 2022; Chowdhery et al., 2022). We build on this, prompting LLMs to perform attribute-controlled MT through two innovations: (1) retrieval of similar examples and (2) explicit attribute marking.
Approach: RAMP

RAMP builds on the success of the prompting paradigm on few-shot generation tasks such as monolingual text style transfer (Reif et al., 2022) and MT (Garcia and Firat, 2022; Agrawal et al., 2022) by creating more informative prompts through similarity retrieval and attribute marking. See Figure 1 for an illustration of RAMP.

Similarity Retrieval. In standard prompting, in-context examples are sampled randomly from the pool of labeled examples D_A. In RAMP, we select examples based on their similarity with the input text. We first embed both the input text and the source texts of D_A using all-MiniLM-L6-v2 (Wang et al., 2020). Then, the top-k most similar examples are retrieved for the input text based on cosine similarity. These are then used in descending order w.r.t. similarity as the in-context examples in the inference prompt; a sketch follows below. As demonstrated in
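The retrieval step can be sketched with the sentence-transformers library; only the embedding model name (all-MiniLM-L6-v2) comes from the text, and the function and variable names are illustrative assumptions rather than the authors' code:

```python
# A minimal sketch of RAMP's similarity retrieval, assuming `pool` is a list
# of labeled (source, target) example pairs.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

def retrieve_in_context_examples(input_text, pool, k=16):
    """Return the top-k pool entries most similar to `input_text`,
    ordered by descending cosine similarity (most similar first)."""
    sources = [src for src, _ in pool]
    emb_pool = encoder.encode(sources, convert_to_tensor=True)
    emb_query = encoder.encode(input_text, convert_to_tensor=True)
    scores = util.cos_sim(emb_query, emb_pool)[0]   # similarity to each source
    top = scores.topk(k=min(k, len(pool)))
    return [pool[i] for i in top.indices.tolist()]
```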
Attribute Marking. In standard prompting, in-context examples are provided without explicit information on why they satisfy the prompting objective. Inspired by recent studies that have shown that decomposition of complex tasks can improve prompting quality (Nye et al., 2021; Wei et al., 2022).
As a supervised baseline, we use the adapted MT model of Nadejde et al. (2022): a transformer MT model (Vaswani et al., 2017) pre-trained on public parallel data and further finetuned on contrastive training pairs with attribute tags (from either COCOA-MT or MT-GENEVAL).
We measure translation quality with BLEU (Papineni et al., 2002) and COMET (Rei et al., 2020). For attribute accuracy, we use both (1) the lexical matching metrics provided with COCOA-MT and MT-GENEVAL (Lexical-Accuracy) and (2) sentence encoders trained on contrastive examples (Sentential-Accuracy). For (2), we train multilingual classifiers on top of the mDeBERTa-v3 encoder (He et al., 2021). High-performance pre-trained classifiers have been shown to produce attribute accuracy estimates closer to human judgments for style transfer (Lai et al., 2022).
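A hedged sketch of how these metrics could be computed is shown below; sacrebleu and the transformers pipeline are our tooling assumptions, and `clf_dir` is a placeholder path for the fine-tuned mDeBERTa classifier:

```python
# Illustrative metric computation: corpus BLEU plus sentential attribute
# accuracy from a fine-tuned text classifier. Not the authors' exact code.
import sacrebleu
from transformers import pipeline

def evaluate(hypotheses, references, clf_dir, target_label):
    bleu = sacrebleu.corpus_bleu(hypotheses, [references]).score
    clf = pipeline("text-classification", model=clf_dir)
    preds = clf(hypotheses, truncation=True)
    # fraction of outputs classified as carrying the requested attribute
    s_acc = sum(p["label"] == target_label for p in preds) / len(preds)
    return {"BLEU": bleu, "Sentential-Accuracy": s_acc}
```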
Figure 2: BLEU and sentential formality accuracy of prompt outputs on the COCOA-MT test set for different numbers of in-context examples. Confidence intervals are obtained for the base setting by sampling in-context examples using 3 seeds.
Recent works adopting the prompting paradigm for text style transfer have mainly focused on the generalization capabilities of large English-centric LMs for zero-shot style transfer using previously unseen style descriptions (Suzgun et al., 2022; Reif et al., 2022). However, prior work on other NLP tasks has shown that cross-lingual prompting of multilingual LLMs can be effective (Zhao and Schütze, 2021; Zhou et al., 2022; Huang et al., 2022). As such, we leverage multilingual LLMs and extend their ACT capabilities cross-lingually to languages not covered by the in-context examples, thus enabling zero-shot ACT.

Figure 1: An example of RAMP using 2 in-context examples. (Left) The input sentence (EN: "You're welcome.") is embedded by a sentence similarity model, and the top-k most similar labeled examples (source & target, language & attribute, attribute marker) are retrieved from a pool of training data to build the prompt context. (Right) Labeled cross-lingual examples are used to fill in the English prompt template ("Here is a sentence: {You're welcome.} Here is its French translation written in a formal style: {"), which is then provided to the LLM to generate the output (FR formal: "Je vous en prie.").
Table 2: Target languages in the test sets (AR, ES, FR, HI, PT, DE, IT, JA, RU, NL) and languages seen by LLMs in pre-training. We report results on languages seen by both LLMs. Language codes are defined in Appendix B.
Table 3: Dataset statistics. We report the number of triplets in the train/test split aggregated across all languages and the classification accuracy on the test split of the classifiers.
References

Sweta Agrawal, Chunting Zhou, Mike Lewis, Luke Zettlemoyer, and Marjan Ghazvininejad. 2022. In-context examples selection for machine translation. CoRR, abs/2212.02437.

BigScience. 2022. BLOOM: A 176B-parameter open-access multilingual language model. CoRR, abs/2211.05100.

Sidney Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, Usvsn Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, and Samuel Weinbach. 2022. GPT-NeoX-20B: An open-source autoregressive language model. In Proceedings of BigScience Episode #5 - Workshop on Challenges & Perspectives in Creating Large Language Models, pages 95-136, virtual+Dublin. Association for Computational Linguistics.

Eleftheria Briakou, Di Lu, Ke Zhang, and Joel Tetreault. 2021. Olá, bonjour, salve! XFORMAL: A benchmark for multilingual formality style transfer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3199-3216, Online. Association for Computational Linguistics.

the 2nd Workshop on Evaluation and Comparison of NLP Systems, pages 165-178, Punta Cana, Dominican Republic. Association for Computational Linguistics.

Marina Fomicheva, Lucia Specia, and Nikolaos Aletras. 2022. Translation error detection as rationale extraction. In Findings of the Association for Computational Linguistics: ACL 2022, pages 4148-4159, Dublin, Ireland. Association for Computational Linguistics.

Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. 2021. The Pile: An 800GB dataset of diverse text for language modeling. CoRR, abs/2101.00027.

Xavier Garcia, Noah Constant, Mandy Guo, and Orhan Firat. 2021. Towards universality in multilingual text rewriting. CoRR, abs/2107.14749.

Xavier Garcia and Orhan Firat. 2022. Using natural language prompts for machine translation. CoRR, abs/2202.11822.

Pengcheng He, Jianfeng Gao, and Weizhu Chen. 2021. DeBERTaV3: Improving DeBERTa using ELECTRA-style pre-training with gradient-disentangled embedding sharing. CoRR, abs/2111.09543.

Lianzhe Huang, Shuming Ma, Dongdong Zhang, Furu Wei, and Houfeng Wang. 2022. Zero-shot cross-lingual transfer of prompt-based tuning with a unified multilingual prompt. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11488-11497, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. How can we know what language models know? Transactions of the Association for Computational Linguistics, 8:423-438.

Di Jin, Zhijing Jin, Zhiting Hu, Olga Vechtomova, and Rada Mihalcea. 2022. Deep learning for text style transfer: A survey. Computational Linguistics, 48(1):155-205.

Kalpesh Krishna, Deepak Nathani, Xavier Garcia, Bidisha Samanta, and Partha Talukdar. 2022. Few-shot controllable style transfer for low-resource multilingual settings. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7439-7468, Dublin, Ireland. Association for Computational Linguistics.

Huiyuan Lai, Jiali Mao, Antonio Toral, and Malvina Nissim. 2022. Human judgement as a compass to navigate automatic metrics for formality transfer. In Proceedings of the 2nd Workshop on Human Evaluation of NLP Systems (HumEval), pages 102-115, Dublin, Ireland. Association for Computational Linguistics.

Surafel Melaku Lakew, Mattia Di Gangi, and Marcello Federico. 2019. Controlling the output length of neural machine translation. In Proceedings of the 16th International Conference on Spoken Language Translation, Hong Kong. Association for Computational Linguistics.

Christoph Leiter, Piyawat Lertvittayakumjorn, Marina Fomicheva, Wei Zhao, Yang Gao, and Steffen Eger. 2022. Towards explainable evaluation metrics for natural language generation. CoRR, abs/2203.11131.

Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, and Xian Li. 2022. Few-shot learning with multilingual generative language models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9019-9052, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2022. What makes good in-context examples for GPT-3? In Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, pages 100-114, Dublin, Ireland and Online. Association for Computational Linguistics.

Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2022. Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8086-8098, Dublin, Ireland. Association for Computational Linguistics.

Paul Michel and Graham Neubig. 2018. Extreme adaptation for personalized neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 312-318, Melbourne, Australia. Association for Computational Linguistics.

Maria Nadejde, Anna Currey, Benjamin Hsu, Xing Niu, Marcello Federico, and Georgiana Dinu. 2022. CoCoA-MT: A dataset and benchmark for contrastive controlled MT with application to formality. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 616-632, Seattle, United States. Association for Computational Linguistics.

Xing Niu and Marine Carpuat. 2020. Controlling neural machine translation formality with synthetic supervision. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, New York, NY, USA, February 7-12, 2020, pages 8568-8575. AAAI Press.

Xing Niu, Marianna Martindale, and Marine Carpuat. 2017. A study of style in machine translation: Controlling the formality of machine translation output. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2814-2819, Copenhagen, Denmark. Association for Computational Linguistics.

Maxwell I. Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, and Augustus Odena. 2021. Show your work: Scratchpads for intermediate computation with language models. CoRR, abs/2112.00114.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.

Ella Rabinovich, Raj Nath Patel, Shachar Mirkin, Lucia Specia, and Shuly Wintner. 2017. Personalized machine translation: Preserving original author traits. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1074-1084, Valencia, Spain. Association for Computational Linguistics.

Sudha Rao and Joel Tetreault. 2018. Dear sir or madam, may I introduce the GYAFC dataset: Corpus, benchmarks and metrics for formality style transfer. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 129-140, New Orleans, Louisiana. Association for Computational Linguistics.

Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2685-2702, Online. Association for Computational Linguistics.

Emily Reif, Daphne Ippolito, Ann Yuan, Andy Coenen, Chris Callison-Burch, and Jason Wei. 2022. A recipe for arbitrary text style transfer with large language models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 837-848, Dublin, Ireland. Association for Computational Linguistics.

Gabriele Sarti, Nils Feldhus, Ludwig Sickert, and Oskar van der Wal. 2023. Inseq: An interpretability toolkit for sequence generation models. CoRR, abs/2302.13942.

Danielle Saunders and Bill Byrne. 2020. Reducing gender bias in neural machine translation as a domain adaptation problem. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7724-7736, Online. Association for Computational Linguistics.

Andrea Schioppa, David Vilar, Artem Sokolov, and Katja Filippova. 2021. Controlling machine translation for multiple attributes with additive interventions. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6676-6696, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Controlling politeness in neural machine translation via side constraints. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 35-40, San Diego, California. Association for Computational Linguistics.

Mirac Suzgun, Luke Melas-Kyriazi, and Dan Jurafsky. 2022. Prompt-and-rerank: A method for zero-shot and few-shot arbitrary textual style transfer with small language models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2195-2222, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Eva Vanmassenhove, Christian Hardmeier, and Andy Way. 2018. Getting gender right in neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3003-3008, Brussels, Belgium. Association for Computational Linguistics.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.

David Vilar, Markus Freitag, Colin Cherry, Jiaming Luo, Viresh Ratnakar, and George F. Foster. 2022. Prompting PaLM for translation: Assessing strategies and performance. CoRR, abs/2211.09102.

Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020. MiniLM: Deep self-attention distillation for task-agnostic compression of pre-trained transformers. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.

Yifan Wang, Zewei Sun, Shanbo Cheng, Weiguo Zheng, and Mingxuan Wang. 2022. Controlling styles in neural machine translation with activation prompt. CoRR, abs/2212.08909.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. In NeurIPS.

Mengjie Zhao and Hinrich Schütze. 2021. Discrete and soft prompting for multilingual models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8547-8555, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Meng Zhou, Xin Li, Yue Jiang, and Lidong Bing. 2022. Enhancing cross-lingual prompting with mask token augmentation. CoRR, abs/2202.07255.

Zhemin Zhu, Delphine Bernhard, and Iryna Gurevych. 2010. A monolingual tree-based translation model for sentence simplification. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 1353-1361, Beijing, China. Coling 2010 Organizing Committee.
Table 10 shows two examples where RAMP performs significantly worse than the base model in terms of COMET. In the first example, having multiple in-context examples containing "million" led the model to mis-translate "billion" as "million". In the second example, we observe that the color-related in-context examples led the model to produce hallucinated output about clothing colors. Repeated misleading in-context examples are less observed on MT-GENEVAL and in the same-language setting because (1) COCOA-MT translates the same set of English sentences to different languages while MT-GENEVAL collects English sentences independently, and (2) there are no duplicated source (English) sentences for each language. (Therefore, if RAMP retrieves duplicated English sentences as in Table 10, their reference translations are guaranteed to be in different languages.)
Table 5: Detailed scores of same-language prompting on COCOA-MT (preliminary evaluation). Numbers in the header represent the number of in-context examples used for prompting, including zero-shot prompting (0). Scores are averaged across the two available formality values (formal, informal) and languages (ES, FR, HI, PT).

Table 6: Decomposed results of same-language prompting on COCOA-MT (full evaluation). Columns are language-formality pairs (F = formal, I = informal).

                    ES-F   ES-I   FR-F   FR-I   HI-F   HI-I   PT-F   PT-I    AVG
XGLM 7.5B, base
  BLEU              30.1   33.0   30.7   28.8   18.5   16.9   35.7   35.4   28.6
  COMET            0.500  0.527  0.348  0.350  0.454  0.425  0.547  0.554  0.463
  L-Acc            0.524  0.966  0.977  0.633  0.976  0.744  0.931  0.928  0.835
  S-Acc            0.507  0.958  0.953  0.840  0.963  0.748  0.888  0.912  0.846
XGLM 7.5B, +mark
  BLEU              31.0   33.2   29.4   27.4   19.2   18.6   35.7   35.5   28.7
  COMET            0.498  0.541  0.207  0.188  0.439  0.409  0.552  0.552  0.423
  L-Acc            0.728  0.972  0.985  0.923  0.986  0.860  0.960  0.947  0.920
  S-Acc            0.697  0.958  0.963  0.917  0.983  0.838  0.927  0.937  0.902
XGLM 7.5B, RAMP
  BLEU              32.8   33.5   32.7   31.0   21.0   20.3   34.2   34.4   30.0
  COMET            0.480  0.511  0.314  0.302  0.502  0.491  0.488  0.522  0.451
  L-Acc            0.842  0.963  0.989  0.926  0.993  0.885  0.961  0.943  0.938
  S-Acc            0.803  0.952  0.975  0.922  0.980  0.873  0.928  0.948  0.923
BLOOM 175B, base
  BLEU              44.3   45.0   42.9   41.0   27.1   25.8   47.3   45.7   39.9
  COMET            0.728  0.759  0.611  0.600  0.673  0.645  0.762  0.750  0.691
  L-Acc            0.795  0.960  0.987  0.890  0.978  0.885  0.987  0.954  0.930
  S-Acc            0.889  0.963  0.987  0.888  0.980  0.863  0.987  0.960  0.940
BLOOM 175B, +mark
  BLEU              45.8   44.5   43.3   41.8   28.4   27.1   46.4   45.3   40.3
  COMET            0.726  0.745  0.610  0.594  0.677  0.659  0.751  0.745  0.688
  L-Acc            0.930  0.987  0.996  0.958  0.995  0.936  0.989  0.972  0.970
  S-Acc            0.942  0.985  0.992  0.957  0.992  0.925  0.990  0.977  0.970
BLOOM 175B, RAMP
  BLEU              46.4   46.2   43.9   42.9   30.8   29.2   48.8   47.4   41.9
  COMET            0.718  0.759  0.611  0.610  0.721  0.713  0.782  0.771  0.711
  L-Acc            0.956  0.984  0.998  0.952  0.991  0.947  0.993  0.962  0.973
  S-Acc            0.957  0.982  0.995  0.945  0.993  0.935  0.990  0.967  0.970
Adapted MT
  BLEU              44.4   43.7   43.4   37.8   19.1   17.0   53.0   49.9   38.5
  COMET            0.712  0.724  0.559  0.547 -0.191 -0.263  0.783  0.764  0.454
  L-Acc            0.697  0.598  0.822  0.377  0.869  0.449  0.972  0.744  0.691
  S-Acc            0.700  0.600  0.810  0.400  0.680  0.600  0.950  0.800  0.693
Table 7: Decomposed results of same-language prompting on MT-GENEVAL (full evaluation). Columns are language-gender pairs (F = feminine, M = masculine).

                    AR-F   AR-M   ES-F   ES-M   FR-F   FR-M   HI-F   HI-M   PT-F   PT-M    AVG
XGLM 7.5B, base
  BLEU               7.6    7.5   35.5   38.2   27.1   28.6   13.8   16.4   29.2   33.1   23.7
  COMET           -0.040 -0.012  0.694  0.738  0.509  0.555  0.304  0.332  0.661  0.713  0.445
  L-Acc            0.848  0.947  0.688  0.808  0.715  0.880  0.585  0.956  0.621  0.855  0.790
  S-Acc            0.617  0.866  0.651  0.938  0.581  0.920  0.303  0.962  0.494  0.934  0.727
XGLM 7.5B, +mark
  BLEU               7.7    7.8   35.4   38.2   27.5   28.7   14.0   16.7   29.1   32.4   23.
Table 8: Decomposed results of cross-lingual prompting on COCOA-MT.

Table 9: Decomposed results of cross-lingual prompting on MT-GENEVAL. Columns are language-gender pairs (F = feminine, M = masculine).

                    AR-F   AR-M   ES-F   ES-M   FR-F   FR-M   HI-F   HI-M   PT-F   PT-M    AVG
BLOOM 175B, base
  BLEU              10.6   11.6   43.3   47.4   34.2   38.2   11.4   15.0   34.4   38.6   28.5
  COMET              0.
We adopt prompt templates similar to the one used by Reif et al. (2022), and we write the prompt template in English. Complete templates are provided in Appendix A.
More details of datasets and classifiers are in Appendix C.
Languages that are not seen during the LLM pre-training are included in the prompt but not tested.
Vilar et al. (2022) also observe hallucinations when the retrieved examples have bad translations (i.e., non-parallel sentences).
https://huggingface.co/microsoft/mdeberta-v3-base
A Prompt Templates

Formality-Controlled Translation:
Here is a sentence: {x} Here is its {l} translation written in a {a} style: {y} The translated sentence conveys a {a} style by using words such as '{w1}', '{w2}'.

Gender-Controlled Translation:
Here is a sentence: {x} Here is its {l} translation in which the person is a {a}: {y} In the translation, the {a} gender of the person is made explicit by words such as '{w1}', '{w2}'.

B Language Codes

C Additional Details of Dataset Splits and Pre-Trained Attribute Classifiers

We use the original train/test split provided by the COCOA-MT dataset. Each split contains telephony and topical_chat domains. We use the topical_chat domain in our experiments. MT-GENEVAL contains a dev and test split, and we use the dev split as training data for the classification model and the prompting experiments. We finetune the MDEBERTA-V3-BASE model on the contrastive examples in the respective training sets to obtain the attribute classifiers. We finetune each classifier for 2 epochs with a batch size of 8, a learning rate of 2e-5, 500 warm-up steps, a max sequence length of 256, and save a checkpoint every 500 steps. We do not do hyperparameter tuning, and thus a validation set is not used.

D Selection of Large Language Models

XGLM (Lin et al., 2022) is a 7.5B-parameter model trained on a balanced corpus containing 30 languages (excluding NL). It was shown to outperform much larger models such as GPT-3 on tasks related to machine translation and cross-lingual language understanding. We select it due to its broad linguistic coverage and its manageable size. BLOOM (BigScience, 2022) is a model available in multiple sizes, trained on a curated corpus
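A sketch of how the formality template above could be filled with retrieved examples is shown below; the joining and formatting details are assumptions, not the authors' exact code:

```python
# Illustrative assembly of a RAMP formality prompt from retrieved
# (source, translation, markers) triplets. Wording follows the template above.
def build_formality_prompt(examples, input_text, lang="French", style="formal"):
    parts = []
    for src, tgt, markers in examples:
        parts.append(
            f"Here is a sentence: {{{src}}} "
            f"Here is its {lang} translation written in a {style} style: {{{tgt}}} "
            f"The translated sentence conveys a {style} style by using words "
            "such as " + ", ".join(f"'{w}'" for w in markers) + "."
        )
    # final, unanswered query for the LLM to complete
    parts.append(
        f"Here is a sentence: {{{input_text}}} "
        f"Here is its {lang} translation written in a {style} style: {{"
    )
    return "\n".join(parts)

# e.g., build_formality_prompt(
#     [("You're welcome.", "Je vous en prie.", ["vous"])], "How are you?")
```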
Example 1 (ES)
In-context examples (EN):
1. Yeah that makes sense, did you heard about the $10 million bunker he has?
2. I have. I heard that he started a library in 1895 with 32,000 books in it. All from his personal collection. Can you imagine?
3. Yeah that makes sense, did you heard about the $10 million bunker he has?
4. Yeah that makes sense, did you heard about the $10 million bunker he has?
5. Maybe he should. What did you think about that guy findin 3 million dollars worth of old baseball cards in his grandpas attic.
6. That is really expensive I agree, did you watch the Lego Batman movie?
8. Yeah that makes sense, did you heard about the $10 million bunker he has?
9. That is crazy. Do you like Tom Hanks, he's grossed over 8.5 billion at the box office
10. That is really expensive I agree, did you watch the Lego Batman movie?
11. That is crazy. Do you like Tom Hanks, he's grossed over 8.5 billion at the box office
12. That is crazy. Do you like Tom Hanks, he's grossed over 8.5 billion at the box office
13. He doesnt look like he has 56 years! I heard he made 75000000 from Mission Impossible 3
14. Really? I guess he made a valuable contribution to science and also to medicine, did you hear of that species of flying snakes
Input (EN): If you got it why not? He is worth over 20 billion dollars after all.
Reference (ES): Si lo tiene, ¿por qué no? Al fin y al cabo, vale más de 20 000 millones de dólares.
RAMP (ES): Si lo tienes, ¿por qué no? Es millonario después de todo.
base (ES): Si lo tienes, ¿por qué no? Él vale más de 20 mil millones de dólares después de todo.

Example 2 (PT)
In-context examples (EN):
1. thats such a fun thing to have in your code of conduct. are you more of a dog person than cat person?
2. For sure lol, it was so nice talking with you, say hi to your cats for me!
3. thats such a fun thing to have in your code of conduct. are you more of a dog person than cat person?
4. What can't dogs do! I know they aren't color blind like we were taught when young. It was so nice chatting with you, tell yuki hi!
5. For sure lol, it was so nice talking with you, say hi to your cats for me!
6. Oh yeah. That sucker was mean let me tell you. I think I would have a rabid dog chase me, which by the way dogs are not color blind!
7. Thanks you too! And watch out for dogs, they're not actually colorblind but they just don't see as many colors as a normal human can
8. LOL That's funny! Google prefers dogs over cats! The wrote in their code of conduct that they are a dog company. It's been fun chatting with you!
9. Such gorgeous dogs! Do you & your dogs live in a northern climate?
10. LOL That's funny! Google prefers dogs over cats! The wrote in their code of conduct that they are a dog company. It's been fun chatting with you!
11. thats such a fun thing to have in your code of conduct. are you more of a dog person than cat person?
12. Such gorgeous dogs! Do you & your dogs live in a northern climate?
13. haha me too! I heard that they wore clothes that were full of color. Surprised me because I always thought of them as wearing black and white.
14. For sure lol, it was so nice talking with you, say hi to your cats for me!
Input (EN): lol I had an African grey that could say everything I said around the house. Like clean your room! as well any loud sounds.
Reference (PT): lol Tinha um papagaio-cinzento que conseguia dizer tudo o que ouvia em casa. Tipo "Limpe o seu quarto!" e também quaisquer sons altos
RAMP (PT): haha eu também! Eu ouvi dizer que eles usam roupas de todas as cores. Surpreendeu-me porque eu sempre pensei neles usando preto e branco.
base (PT): hahaha eu tinha um papagaio cinza africano que dizia tudo o que eu dizia em casa. Como limpar o quarto! Bem como qualquer som alto.

Table 10: Examples of COCOA-MT (formal) where RAMP performs worse than the base model in the cross-lingual zero-shot setting. Potentially problematic in-context examples leading to mistranslations or hallucinations are highlighted.
| []
|
[
"Action Sensitivity Learning for Temporal Action Localization",
"Action Sensitivity Learning for Temporal Action Localization"
]
| [
"Jiayi Shao \nReLER Lab, CCAI\nZhejiang University\n\n",
"Xiaohan Wang \nReLER Lab, CCAI\nZhejiang University\n\n",
"Ruijie Quan \nReLER Lab, CCAI\nZhejiang University\n\n",
"Junjun Zheng \nAlibaba Group\n\n",
"Jiang Yang \nAlibaba Group\n\n",
"Yi Yang \nReLER Lab, CCAI\nZhejiang University\n\n"
]
| [
"ReLER Lab, CCAI\nZhejiang University\n",
"ReLER Lab, CCAI\nZhejiang University\n",
"ReLER Lab, CCAI\nZhejiang University\n",
"Alibaba Group\n",
"Alibaba Group\n",
"ReLER Lab, CCAI\nZhejiang University\n"
]
| []
| Temporal action localization (TAL), which involves recognizing and locating action instances, is a challenging task in video understanding. Most existing approaches directly predict action classes and regress offsets to boundaries, while overlooking the discrepant importance of each frame. In this paper, we propose an Action Sensitivity Learning framework (ASL) to tackle this task, which aims to assess the value of each frame and then leverage the generated action sensitivity to recalibrate the training procedure. We first introduce a lightweight Action Sensitivity Evaluator to learn the action sensitivity at the class level and instance level, respectively. The outputs of the two branches are combined to reweight the gradient of the two sub-tasks. Moreover, based on the action sensitivity of each frame, we design an Action Sensitive Contrastive Loss to enhance features, where the action-aware frames are sampled as positive pairs to push away the action-irrelevant frames. The extensive studies on various action localization benchmarks (i.e., MultiThumos, Charades, Ego4D-Moment Queries v1.0, Epic-Kitchens 100, Thumos14 and Activi-tyNet1.3) show that ASL surpasses the state-of-the-art in terms of average-mAP under multiple types of scenarios, e.g., single-labeled, densely-labeled and egocentric. | 10.48550/arxiv.2305.15701 | [
"https://export.arxiv.org/pdf/2305.15701v1.pdf"
]
| 258,887,395 | 2305.15701 | f2d601df600f6ca21aeb5f1a0ff22e242ead24bc |
Action Sensitivity Learning for Temporal Action Localization
Jiayi Shao
ReLER Lab, CCAI
Zhejiang University
Xiaohan Wang
ReLER Lab, CCAI
Zhejiang University
Ruijie Quan
ReLER Lab, CCAI
Zhejiang University
Junjun Zheng
Alibaba Group
Jiang Yang
Alibaba Group
Yi Yang
ReLER Lab, CCAI
Zhejiang University
Action Sensitivity Learning for Temporal Action Localization
Temporal action localization (TAL), which involves recognizing and locating action instances, is a challenging task in video understanding. Most existing approaches directly predict action classes and regress offsets to boundaries, while overlooking the discrepant importance of each frame. In this paper, we propose an Action Sensitivity Learning framework (ASL) to tackle this task, which aims to assess the value of each frame and then leverage the generated action sensitivity to recalibrate the training procedure. We first introduce a lightweight Action Sensitivity Evaluator to learn the action sensitivity at the class level and instance level, respectively. The outputs of the two branches are combined to reweight the gradient of the two sub-tasks. Moreover, based on the action sensitivity of each frame, we design an Action Sensitive Contrastive Loss to enhance features, where the action-aware frames are sampled as positive pairs to push away the action-irrelevant frames. The extensive studies on various action localization benchmarks (i.e., MultiThumos, Charades, Ego4D-Moment Queries v1.0, Epic-Kitchens 100, Thumos14 and Activi-tyNet1.3) show that ASL surpasses the state-of-the-art in terms of average-mAP under multiple types of scenarios, e.g., single-labeled, densely-labeled and egocentric.
Introduction
With an increasing number of videos appearing online, video understanding has become a prominent research topic in computer vision. Temporal action localization (TAL), which aims to temporally locate and recognize human actions with a set of categories in a video clip, is a challenging yet fundamental task in this area, owing to its various applications such as sports highlighting, human action analysis and security monitoring [23,56,40,16,14].
We have recently witnessed significant progress in TAL, where most methods can be mainly divided into two parts: 1) Two-stage approaches [67,77] tackle this task accompanied by the generation of class-agnostic action proposals and then perform classification and proposal boundary refinement at the proposal level. 2) One-stage approaches [71,64,28] simultaneously recognize and localize action instances in a single-shot manner. Typical methods [68,25] of this type predict categories as well as locate the corresponding temporal boundaries at the frame level, which achieve stronger TAL results currently. In training, they classify every frame as one action category or background, and regress the boundaries of frames inside ground-truth action segments. However, these works treat each frame within action segments equally in training, leading to sub-optimal performance.

Figure 1: The motivation of our method. We show an action instance of clothes drying (with sub-actions take out laundry basket, take out clothes from laundry basket, hang clothes on the hanger) and depict the possible importance of each frame for recognizing the action category (what helps to recognize this action?) and for locating action boundaries (what helps to find the boundary?). Each frame's importance is different.
When humans intend to locate action instances, they rely on the discrepant information carried by each frame. For the instance of the action clothes drying, as depicted in Fig. 1, frames in the purple box contribute most to recognizing clothes drying, as they describe the intrinsic sub-action hang clothes on the hanger. Analogously, frames in the red and gray boxes depict take out clothes from laundry basket and lift laundry basket, which are more informative for locating the precise start and end timestamps respectively. In a word, each frame's contribution is quite different, due to the intrinsic patterns of actions as well as the existence of transitional or blurred frames.

Can we discover informative frames for classifying and localizing respectively? To this end, we first introduce a concept, Action Sensitivity, to measure a frame's importance. It is disentangled into two parts: action sensitivity to the classification sub-task and action sensitivity to the localization sub-task. For a given sub-task, the higher action sensitivity a frame has, the more important it is for this sub-task. With this concept, intuitively, more attention should be paid to action-sensitive frames in training.
Therefore, in this paper we propose a lightweight Action Sensitivity Evaluator (ASE) for each sub-task to better exploit frame-level information. Essentially, for a specific sub-task, ASE learns the action sensitivity of each frame from two perspectives: the class level and the instance level. The class-level perspective models the coarse action sensitivity distribution of each action category and is achieved by incorporating gaussian weights. The instance-level perspective is complementary to class-level modeling and is supervised in a prediction-aware manner. The training weight of each frame is then dynamically adjusted depending on its action sensitivity, making model training more reasonable and effective.

With the proposed ASE, we build our novel Action Sensitivity Learning framework, dubbed ASL, to tackle the temporal action localization (TAL) task effectively. Moreover, to further enhance the features and improve the discrimination between actions and backgrounds, we design a novel Action Sensitive Contrastive Loss (ASCL) based on ASE. It is implemented by elaborately generating various types of action-related and action-irrelevant features and contrasting them, which brings multiple merits for TAL.
By conducting extensive experiments on 6 datasets and detailed ablation studies, we demonstrate ASL is able to classify and localize action instances better. In a nutshell, our main contributions can be summarized as follows:
• We propose a novel framework with an Action Sensitivity Evaluator component to boost training by discovering action-sensitive frames for specific sub-tasks, modeled from the class level and the instance level.
• We design an Action Sensitive Contrastive Loss to do feature enhancement and to increase the discrimination between actions and backgrounds.
• We verify ASL on various action localization datasets of multiple types: i) densely-labeled (i.e., MultiThumos [66] and Charades [46]); ii) egocentric (Ego4D-Moment Queries v1.0 [18] and Epic-Kitchens 100 [11]); iii) nearly single-labeled (Thumos14 [50] and ActivityNet1.3 [2]), and achieve superior results.
Related Works
Temporal Action Localization. Temporal action localization is a long-standing research topic. Contemporary approaches mostly fall into two categories, i.e., two-stage and one-stage paradigms. Previous two-stage methods usually focused on action proposal generation [27,29,49,51,58]. Others have integrated action proposals, calibrated backbones, classification and boundary regression or refinement modules into one single model [44,61,43,73]. Recent efforts have investigated proposal relations [67,77,59], utilized graph modeling [64,67], or designed fine-grained temporal representations [38,48]. One-stage approaches usually perform frame-level or segment-level classification with direct localization or segment merging [43,72,28]. [71,34] process the video with the assistance of pre-defined anchors or learned proposals, while others utilize existing information and are totally anchor-free [25,68,70]. Currently, some works introduce the pretrain-finetune paradigm to the TAL task [62,63] or attempt to train the model in an efficient end-to-end manner [33,7,32]. Others focus on the densely-labeled setting [54,10,9,22,52,8]. With the success of DETR [3] in object detection, query-based methods have also been proposed [42,51,52,33]. Our method falls into the one-stage TAL paradigm and performs frame-level classification and localization. Notably, [37,34] incorporate Gaussian kernels to improve receptive fields and optimize the temporal scale of action proposals, and [22] uses fixed gaussian-like weights to fuse the coarse and fine stages. We also utilize gaussian weights as one part of ASE, but our approach differs in that: i) our gaussian-like weights in ASE serve to model class-level action sensitivity and to boost effective training, while [22,37,34] use them only to better encode the videos; ii) our learned gaussian weights describe frames' contributions to each sub-task and can be easily visualized, whereas the semantic meaning of the gaussian weights in [22,37,34] is unclear; iii) our gaussian-like weights are totally learnable, category-aware and disentangled across sub-tasks.
One-stage Object Detection. Analogous to the TAL task, object detection shares a few similarities. As a counterpart in object detection, the one-stage paradigm has surged recently. Some works remain anchor-based [31], while others are anchor-free, utilizing a feature pyramid network [30,53] and improved label-assignment strategies [69,75,76,45]. Moreover, some works define key points in different ways (e.g., corner [24], center [13,53] or learned points [65]). These methods bring some inspiration and prior knowledge for designing a better TAL framework.
Contrastive Learning. Contrastive learning [6,19,21] is a type of unsupervised learning objective that aims to bring similar examples closer together in feature space while pushing dissimilar examples apart. NCE [20] and InfoNCE [36] are two typical methods that mine data features by distinguishing between data and noise or negative samples. In TAL, [25] leverages a ranking loss to boost discrimination between foreground and background, while [42] contrasts different actions with a global representation of action segments. In contrast, we design a new contrastive loss applied both across different actions and between actions and backgrounds. Moreover, it is also performed on different parts within an action to mitigate the misalignment of sub-tasks.
Figure 2: The overview of ASL. Given a video clip, we first leverage a pre-trained 3D-CNN to extract the video feature and then utilize a Transformer encoder to encode the feature. We then use ground-truth location sampling to sample all ground-truth segments and feed these into the Action Sensitivity Evaluator. In this module, we model the sub-task-specific action sensitivity of each frame from the class level and the instance level. The former is learned by incorporating learnable gaussian-like weights and the latter is learned with an instance-level evaluator. Then each frame's weight in training is adjusted based on its action sensitivity. Moreover, we propose an Action Sensitive Contrastive Loss to better enhance the feature and alleviate misalignment problems.
Method

Problem Formulation. The task of temporal action localization (TAL) is to predict a set of action instances $\{(t^s_m, t^e_m, c_m)\}_{m=1}^{M}$ for a given video clip, where $M$ is the number of predicted action instances and $t^s_m$, $t^e_m$, $c_m$ are the start timestamp, end timestamp and action category of the $m$-th predicted action instance. ASL is built on an anchor-free representation that classifies each frame as one action category or background, and regresses the distances from this frame to the start time and end time.
Overview. The overall architecture of ASL is shown in Fig. 2. ASL is composed of four parts: a video feature extractor, a feature encoder, an action sensitivity evaluator, and two sub-task heads. Concretely, given a video clip, we first extract the video feature using a pre-trained 3D-CNN model. Then we apply a feature encoder involving a pyramid network to better represent the temporal features at multiple levels. We propose an action sensitivity evaluator module to assess the action sensitivity of frames for a specific sub-task. The pyramid features combined with the frames' action sensitivity are further processed by the sub-task heads to generate predictions. We now describe the details of ASL.
Feature Encoder
With the success of [68,25], ASL utilizes a Transformer encoder and a feature pyramid network to encode feature sequences into a multiscale representation. To enhance features, in the Transformer encoder we design a new attention mechanism that performs temporal attention and channel attention in parallel and then fuses the two outputs.
For temporal attention, which is performed along the temporal dimension, the input features generate query, key and value tensors $(Q_t, K_t, V_t) \in \mathbb{R}^{T \times D}$, where $T$ is the number of frames and $D$ is the embedding dimension; the output is calculated as:

$$f'_{ta} = \mathrm{softmax}\!\left(\frac{Q_t K_t^{\top}}{\sqrt{D}}\right) V_t \tag{1}$$
For channel attention, which is conducted along the channel dimension, the input features generate query, key and value tensors $(Q_c, K_c, V_c) \in \mathbb{R}^{C \times D}$, where $C$ is the number of channels; the output is calculated as:

$$f'_{ca} = \mathrm{softmax}\!\left(\frac{Q_c K_c^{\top}}{\sqrt{D}}\right) V_c \tag{2}$$
The two outputs are then combined with a coefficient $\theta$: $f' = (1-\theta)\, f'_{ta} + \theta\, f'_{ca}$. The result is processed by layer normalization and a feed-forward network to obtain the encoded video representation $f \in \mathbb{R}^{T \times D}$.
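A simplified PyTorch sketch of the fused attention in Eqs. (1)-(2) follows; the channel branch below reuses the transposed input as query/key/value and adapts the scaling factor, so the projection details of the actual model are omitted assumptions:

```python
# Minimal sketch of the fused temporal/channel attention (Eqs. 1-2).
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusedAttention(nn.Module):
    def __init__(self, dim, theta=0.2):
        super().__init__()
        self.theta = theta
        self.qkv_t = nn.Linear(dim, 3 * dim)   # temporal-branch Q/K/V projection

    def forward(self, x):                      # x: (T, D) encoded frame features
        T, D = x.shape
        # temporal attention (Eq. 1): tokens are frames
        q, k, v = self.qkv_t(x).chunk(3, dim=-1)
        f_ta = F.softmax(q @ k.T / D ** 0.5, dim=-1) @ v   # (T, D)
        # channel attention (Eq. 2, simplified): tokens are channels of x^T
        xc = x.T                                            # (D, T)
        attn_c = F.softmax(xc @ xc.T / T ** 0.5, dim=-1)    # (D, D)
        f_ca = (attn_c @ xc).T                              # back to (T, D)
        # fuse the two branches with coefficient theta
        return (1 - self.theta) * f_ta + self.theta * f_ca
```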
Action Sensitivity Evaluator
As discussed in Section 1, not all frames inside ground-truth segments contribute equally to a sub-task (i.e., localization or classification). Thus we design an Action Sensitivity Evaluator (ASE) module, the core idea of which is to determine the sub-task-specific action sensitivity of each frame and help the model pay more attention to valuable frames. Besides, this module is lightweight, leading to efficient and effective training.
Digging into action instances, a key observation is that actions of a particular category often share a similar pattern, but they appear a little different in diverse scenarios or under different behavior agents. For example, action instances of category:wash vegetables inherently contain sub-actions: turn the tap on, take vegetables, wash, turn the tap off, where frames depicting washing are more sensitive to classification, frames depicting turning the tap on and turning the tap off are more sensitive to localization. But the respective duration or proportion of these sub-actions are dependent on the scenes and context of each action instance, thus making sensitive frames a little different.
This motivates us that action sensitivity of every frame should be decoupled into class-level and instance-level modeling and then recombined from these two parts. Meanwhile, action sensitivity should also be disentangled to subtasks (i.e, classification and localization).
For a given ground truth $G = \{\hat{t}^s, \hat{t}^e, \hat{c}\}$, indicating the start time, end time and category of one action, we denote $N_f$ as the number of frames within this action and $N_c$ as the number of all pre-defined action categories. Our goal is to model the class-level action sensitivity $p$ (disentangled into $p^{cls}$ and $p^{loc}$ for classification and localization respectively) and the instance-level action sensitivity $q$ (disentangled into $q^{cls}$ and $q^{loc}$). We now delve into the details of action sensitivity learning.

Class-level Modeling. Class-level sensitivity poses a fundamental prior for action sensitivity learning. Two key observations are that: i) video frames are often consecutive; ii) there often exist keyframes that have a peak value of sensitivity among all frames. In this case, we incorporate gaussian-like weights with learnable parameters $\mu, \sigma \in \mathbb{R}^{N_c}$ to model the class-level action sensitivity $p$.
For the classification sub-task, we model the corresponding action sensitivity $p^{cls}_i$ of the $i$-th frame as:

$$p^{cls}_i = \exp\left\{-\frac{(d(i)-\mu_{\hat{c}})^2}{2\sigma_{\hat{c}}^2}\right\} \tag{3}$$
where $d(i)$ is the distance from the $i$-th frame to the central frame of the ground-truth segment, normalized by $N_f$. In this case $d(i) \in [-0.5, 0.5]$: when $i = 1$ (i.e., the start frame), $d(i) = -0.5$; when $i = N_f$ (i.e., the end frame), $d(i) = 0.5$. The learnable parameters $\mu_c, \sigma_c$ denote the mean and variance of category $c$'s action sensitivity distribution. For the localization sub-task, different frames are sensitive to locating the start time and the end time, so the action sensitivity $p^{loc}$ is the combination of two parts. We explicitly allocate one set of gaussian-like weights $p^{sot}$ to model start-time locating sensitivity and another set $p^{eot}$ to model end-time locating sensitivity. $p^{loc}$ is calculated as:

$$p^{loc}_i = \underbrace{\exp\left\{-\frac{(d(i)-\mu_{\hat{c},1})^2}{2\sigma_{\hat{c},1}^2}\right\}}_{p^{sot}_i} + \underbrace{\exp\left\{-\frac{(d(i)-\mu_{\hat{c},2})^2}{2\sigma_{\hat{c},2}^2}\right\}}_{p^{eot}_i} \tag{4}$$
In this way, the class-level action sensitivities $p^{cls}, p^{loc} \in \mathbb{R}^{N_f \times N_c}$ of all categories are learned along with the optimization of model training. In addition, the initialization of $\mu_c$ and $\sigma_c$ matters, since prior knowledge [68,53] differs across sub-tasks. For the classification sub-task, near-center frames are more sensitive, so we initialize $\mu_c$ as 0. For the localization sub-task, near-start and near-end frames are more sensitive, so we initialize $\mu_1$ as -0.5 and $\mu_2$ as 0.5. All $\sigma$ are initialized as 1.
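The class-level sensitivity of Eqs. (3)-(4), together with the initialization above, can be sketched in PyTorch as follows (shapes and parameter grouping are implementation assumptions):

```python
# Learnable per-class Gaussian weights for class-level sensitivity (Eqs. 3-4).
import torch
import torch.nn as nn

class ClassLevelSensitivity(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        # classification: mean 0; localization: start mean -0.5, end mean 0.5
        self.mu_cls = nn.Parameter(torch.zeros(num_classes))
        self.mu_sot = nn.Parameter(torch.full((num_classes,), -0.5))
        self.mu_eot = nn.Parameter(torch.full((num_classes,), 0.5))
        self.sigma = nn.Parameter(torch.ones(3, num_classes))  # one sigma per weight

    @staticmethod
    def _gauss(d, mu, sigma):
        return torch.exp(-(d - mu) ** 2 / (2 * sigma ** 2))

    def forward(self, d, c):
        """d: (N_f,) normalized distances in [-0.5, 0.5]; c: class index."""
        p_cls = self._gauss(d, self.mu_cls[c], self.sigma[0, c])
        p_loc = (self._gauss(d, self.mu_sot[c], self.sigma[1, c]) +   # p_sot
                 self._gauss(d, self.mu_eot[c], self.sigma[2, c]))    # p_eot
        return p_cls, p_loc
```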
Instance-level Modeling. Instance-level sensitivity poses an indispensable bias on top of the class-level action sensitivity. At the instance level, as more information about the frame contexts of each instance is referred to, we obtain the instance-level action sensitivity $q \in \mathbb{R}^{N_f}$ using an instance-level evaluator operated directly on each frame, composed of a 1D temporal convolution network that better encodes temporal contexts, a fully connected layer and a Sigmoid activation function. Denoting $\Phi^{cls}$ and $\Phi^{loc}$ as the two sub-task-specific instance-level evaluators, $q^{cls}$ and $q^{loc}$ are computed as:

$$q^{cls}_i = \Phi^{cls}(f_i), \qquad q^{loc}_i = \Phi^{loc}(f_i) \tag{5}$$
Unlike class-level modeling, which contains some prior knowledge, the instance-level sensitivity $q$ is hard to learn in an unsupervised manner. Intuitively, at the instance level a sensitive frame is one that results in fine predictions. Hence we utilize the quality $\{\hat{Q}_i\}_{i=1}^{N_f}$ of each frame's prediction to supervise the learning of $q$. For localization, a higher tIoU indicates a higher degree of overlap between two segments, so the tIoU between the predicted segment and the ground-truth segment can measure the quality of the prediction. For classification, the probability of the ground-truth category serves as the quality of the prediction. Therefore, the qualities $\hat{Q}^{cls}$ and $\hat{Q}^{loc}$ are defined as:

$$\hat{Q}^{cls}_i = \varphi(s_i[\hat{c}]), \qquad \hat{Q}^{loc}_i = \mathrm{tIoU}(\Delta_i, \hat{\Delta}) \tag{6}$$

where $s$ denotes the classification logits, $\Delta_i$ is the predicted segment $(t^s, t^e)$ of the $i$-th frame, $\hat{\Delta}$ is the corresponding ground-truth segment, and $\varphi(\cdot)$ is the Sigmoid function. We use an MSE loss to supervise the computation of $q$. For $q^{cls}$, the optimization objective is formed as Eq. (7); $q^{loc}$ is optimized in a similar way:

$$L_s = \mathrm{MSE}(q^{cls}, \hat{Q}^{cls}) \tag{7}$$
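A sketch of the instance-level evaluator $\Phi$ (Eq. 5) and the tIoU quality target of Eqs. (6)-(7) is given below; the Conv1d kernel size and channel widths are assumptions:

```python
# Instance-level evaluator (Eq. 5) and tIoU quality target (Eqs. 6-7).
import torch
import torch.nn as nn
import torch.nn.functional as F

class InstanceEvaluator(nn.Module):
    """Conv1d over time -> FC -> Sigmoid, producing one score per frame."""
    def __init__(self, dim):
        super().__init__()
        self.temporal = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
        self.fc = nn.Linear(dim, 1)

    def forward(self, f):                                  # f: (T, D)
        h = self.temporal(f.T.unsqueeze(0)).squeeze(0).T   # (T, D)
        return torch.sigmoid(self.fc(h)).squeeze(-1)       # (T,)

def tiou_1d(pred, gt):
    """Temporal IoU between (T, 2) predicted segments and one gt segment (2,)."""
    inter = (torch.min(pred[:, 1], gt[1]) - torch.max(pred[:, 0], gt[0])).clamp(min=0)
    union = (pred[:, 1] - pred[:, 0]) + (gt[1] - gt[0]) - inter
    return inter / union.clamp(min=1e-6)

# L_s (Eq. 7) for the localization branch, as an example:
# loss_s = F.mse_loss(q_loc, tiou_1d(pred_segments, gt_segment))
```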
Optimization with Action Sensitivity. Combining the class level and the instance level, we obtain the final action sensitivity $h(\hat{c}) \in \mathbb{R}^{N_f}$ (disentangled into the classification and localization sub-tasks: $h(\hat{c}) \rightarrow \{h^{cls}(\hat{c}), h^{loc}(\hat{c})\}$) for the ground truth $G = \{\hat{t}^s, \hat{t}^e, \hat{c}\}$:

$$h^{cls}(\hat{c}) = p^{cls}\,\mathbf{1}[\hat{c}] + q^{cls}, \qquad h^{loc}(\hat{c}) = p^{loc}\,\mathbf{1}[\hat{c}] + q^{loc} \tag{8}$$
where $\mathbf{1}[\hat{c}] \in \mathbb{R}^{N_c}$ denotes the one-hot vector of $\hat{c}$. The action sensitivity $h$ is further used in training. For the classification sub-task, we use a focal loss [31] to classify each frame, weighted by the classification action sensitivity $h^{cls}$:

$$L_{cls} = \frac{1}{N_{pos}} \sum_i \left( \mathbb{1}_{in_i}\, h^{cls}_i(\hat{c}_i)\, L^{focal}_i + \mathbb{1}_{bg_i}\, L^{focal}_i \right) \tag{9}$$
where 1 ini , 1 bgi are indicators that denote if the i-th frame is within one ground-truth action or if is background, N pos is the number of frames within action segments,c i denotes the action category of the i-th frame.
For the localization sub-task, we use a DIoU loss [74] on frames within any ground-truth action instance to regress the offsets from the current frame to the boundaries, weighted by the localization action sensitivity $h^{loc}$:

$$L_{loc} = \frac{1}{N_{pos}} \sum_i \mathbb{1}_{in_i}\, h^{loc}_i(\hat{c}_i)\, L^{DIoU}_i \tag{10}$$
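The sensitivity-weighted localization loss of Eq. (10) can be sketched with a 1-D adaptation of DIoU; the exact formulation in [74] is for 2-D boxes, so the 1-D center-distance penalty below is an assumption:

```python
# Sensitivity-weighted 1-D DIoU localization loss (sketch of Eq. 10).
import torch

def weighted_diou_loss(pred, gt, h_loc, inside):
    """pred, gt: (T, 2) start/end times; h_loc: (T,) sensitivities;
    inside: (T,) bool mask of frames lying inside ground-truth actions."""
    inter = (torch.minimum(pred[:, 1], gt[:, 1]) -
             torch.maximum(pred[:, 0], gt[:, 0])).clamp(min=0)
    union = (pred[:, 1] - pred[:, 0]) + (gt[:, 1] - gt[:, 0]) - inter
    iou = inter / union.clamp(min=1e-6)
    # squared distance between segment centers, normalized by enclosing span
    center_dist = ((pred.sum(-1) - gt.sum(-1)) / 2) ** 2
    enclose = (torch.maximum(pred[:, 1], gt[:, 1]) -
               torch.minimum(pred[:, 0], gt[:, 0])).clamp(min=1e-6)
    diou = 1 - iou + center_dist / enclose ** 2
    n_pos = inside.sum().clamp(min=1)
    return (h_loc * diou * inside.float()).sum() / n_pos
```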
Action Sensitive Contrastive Loss
Now, with ASE, each frame is equipped with an action sensitivity and valuable frames for specific sub-tasks are discovered. We further boost training from the perspective of feature enhancement. Delving into the feature representation, three shortcomings may hinder performance: i) classification-sensitive and localization-sensitive frames are quite different, resulting in misalignment between the two sub-tasks; ii) features of actions in different categories are not sufficiently discriminable; iii) features within actions and outside boundaries are not yet well distinguished.
Therefore, on the basis of ASE, we propose an Action Sensitive Contrastive Loss (ASCL) to tackle the above issues. Specifically, for a given video feature $\{f_t\}_{t=1}^{T}$ and a ground-truth action instance $G = \{\hat{t}^s, \hat{t}^e, \hat{c}\}$, we generate two action-related features and one action-irrelevant feature. First, to generate more valuable action-related features, we aim to find the frames sensitive to the two sub-tasks. Since ASCL contrasts action instances of different classes, where class-level discrimination matters most, we utilize the class-level sensitivity $p$ to parse the sensitive frame ranges $T^{cls}$ for classification and $T^{loc}$ for localization. Given the ground-truth category $\hat{c}$, we obtain the most sensitive frames $a^{cls}$, $a^{sot}$, $a^{eot}$ for classification, start-time localization and end-time localization respectively. Take $a^{eot}$ as an example:

$$a^{eot} = \arg\max_i \left( p^{eot}_i\, \mathbf{1}[\hat{c}] \right) \tag{11}$$

$a^{cls}$ and $a^{sot}$ are obtained in a similar way. Then, centered on $a$ and extending forward and backward with a range of $\delta N_f$, where $\delta$ is the sampling length ratio, we obtain the sensitive frame ranges $T^{cls}$ for classification and $T^{loc}$ for localization (both limited to lie inside the action instance). Furthermore, we utilize the class-level sensitivity to compute the sensitive features $f^{cls}$ for classification and $f^{loc}$ for localization:

$$f^{cls} = \frac{1}{T}\sum_{t \in T^{cls}} p^{cls}_t\, \mathbf{1}[\hat{c}]\, f_t, \qquad f^{loc} = \frac{1}{T}\sum_{t \in T^{loc}} p^{loc}_t\, \mathbf{1}[\hat{c}]\, f_t \tag{12}$$
Second, we aim to simultaneously discriminate actions and backgrounds better. Consequently, we generate boundary-related background features $f^{bg}$:

$$f^{bg} = \frac{1}{T}\sum_{t} f_t, \qquad t \in [\hat{t}^s - \delta N_f,\, \hat{t}^s] \cup [\hat{t}^e,\, \hat{t}^e + \delta N_f] \tag{13}$$
The learning objective of ASCL is based on a contrastive loss. As Figure 2 shows, the positive samples $P$ are constructed from the $f^{cls}$ and $f^{loc}$ of action instances of the same category, while the negative samples $N$ come from: i) the $f^{cls}$ and $f^{loc}$ of action instances of different categories; ii) all background features $f^{bg}$. ASCL is computed for each batch $B$ with $N$ samples:

$$L_{ASCL} = \frac{1}{N} \sum_{B} -\log \frac{\sum_{f_x \in P_{f^*}} \mathrm{sim}(f^*, f_x)}{\sum_{f_x \in P_{f^*}} \mathrm{sim}(f^*, f_x) + \sum_{f_x \in N_{f^*}} \mathrm{sim}(f^*, f_x)} \tag{14}$$
Optimizing ASCL helps tackle the corresponding issues above: i) it alleviates the misalignment of the two sub-tasks by pulling the features of their respective sensitive frames closer; ii) it discriminates actions and backgrounds better by pushing action features of the same category closer and those of different categories apart, while also pushing actions and backgrounds apart. ASCL thus enhances the feature representation and further boosts training.
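An InfoNCE-style sketch of ASCL (Eq. 14) is given below; $\mathrm{sim}(\cdot,\cdot)$ is taken to be an exponentiated, temperature-scaled cosine similarity, which is an assumption since the equation leaves $\mathrm{sim}(\cdot,\cdot)$ unspecified:

```python
# InfoNCE-style sketch of ASCL (Eq. 14).
import torch
import torch.nn.functional as F

def ascl_loss(feats, labels, tau=0.07):
    """feats: (N, D) pooled f_cls / f_loc / f_bg features; labels: (N,) action
    category per feature, with -1 marking background features."""
    f = F.normalize(feats, dim=-1)
    sim = torch.exp(f @ f.T / tau)                 # assumed sim(., .)
    sim = sim - torch.diag(torch.diag(sim))        # exclude self-pairs
    # positives: same non-background category; everything else is negative
    pos_mask = (labels[:, None] == labels[None, :]) & (labels[:, None] >= 0)
    pos_mask.fill_diagonal_(False)
    anchor = pos_mask.any(dim=1)                   # anchors with >=1 positive
    pos = (sim * pos_mask.float()).sum(dim=1)
    denom = sim.sum(dim=1)                         # positives + negatives
    return -torch.log(pos[anchor] / denom[anchor]).mean()
```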
Training and Inference
Training. In the training process, our final loss function is designed as:

$$L = L_{cls} + L_{loc} + L_s + \lambda L_{ASCL} \tag{15}$$

where $L_{cls}$, $L_{loc}$ and $L_s$ are defined in Eq. (9), Eq. (10) and Eq. (7), and $\lambda$ denotes the weight of the Action Sensitive Contrastive Loss.
Inference. At inference time, our model outputs predictions $(t^s, t^e, c)$ for every frame across all pyramid levels, where $t^s$ and $t^e$ denote the start and end times of the action and $c$ denotes the predicted action category; $c$ also serves as the action confidence score. SoftNMS [1] is then applied to these results to suppress redundant predictions.
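A minimal 1-D Soft-NMS sketch in the spirit of [1] with Gaussian score decay follows; sigma and the kept-prediction budget are illustrative hyper-parameters:

```python
# 1-D Soft-NMS sketch with Gaussian score decay.
import torch

def soft_nms_1d(segments, scores, sigma=0.5, keep_top=200):
    """segments: (N, 2) start/end times; scores: (N,). Returns kept indices,
    ordered by decayed score."""
    segs, sc = segments.clone(), scores.clone()
    order, keep = list(range(len(sc))), []
    while order and len(keep) < keep_top:
        i = max(order, key=lambda j: sc[j].item())   # highest remaining score
        order.remove(i)
        keep.append(i)
        for j in order:                              # decay overlapping segments
            inter = (torch.minimum(segs[i, 1], segs[j, 1]) -
                     torch.maximum(segs[i, 0], segs[j, 0])).clamp(min=0)
            union = (segs[i, 1] - segs[i, 0]) + (segs[j, 1] - segs[j, 0]) - inter
            iou = (inter / union.clamp(min=1e-6)).item()
            sc[j] = sc[j] * torch.exp(torch.tensor(-iou ** 2 / sigma))
    return keep
```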
Experiments
Datasets and Evaluation Metric
Datasets. To validate the efficacy of ASL, we conduct extensive experiments on 6 datasets of 3 types: MultiThumos [66], Charades [46], Ego4D-Moment Queries v1.0 [18], Epic-Kitchens 100 [11], Thumos14 [50] and ActivityNet1.3 [2].
MultiThumos is a densely labeled dataset including 413 sports videos of 65 classes. Charades is a large multi-label dataset containing 9848 videos of 157 action classes. These two datasets are both densely labeled and hence have multiple action instances in each video clip, where different actions may occur concurrently.
Table 1. Results on MultiThumos and Charades. We report detection-mAP at different tIoU thresholds. Average mAP in [0.1:0.1:0.9] is reported on MultiThumos and Charades. Best results are in bold. ‡ indicates results trained with stronger image augmentation [52, 33]. I3D denotes using I3D [4] features and E2E indicates results trained in an end-to-end manner.

Ego4D-Moment Queries v1.0 (Ego4D-MQ1.0 for short) is a large-scale egocentric benchmark with 2,488 video clips and 22.2K action instances from 110 pre-defined action categories, which is densely labeled and composed of long clips. EPIC-Kitchens 100 is a large egocentric action dataset containing 100 hours of videos from 700 sessions capturing cooking activities in different kitchens. These two datasets are both large, egocentric and densely labeled. Thumos14 is composed of 200 validation videos and 212 testing videos from 20 action classes, while ActivityNet has 19,994 videos with 200 action classes. These two datasets are singly labeled, and thus most of their video clips contain one action instance each.

Evaluation Metric. Since ASL focuses on action detection, we take mean Average Precision (mAP) at certain tIoU thresholds as the evaluation metric. For all six datasets, we also report average mAP over several tIoU thresholds as the main metric. The tIoU thresholds are set consistent with the official setup or previous methods, as detailed in the caption of Table 1.
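For reference, the temporal IoU underlying this metric, on which prediction-to-ground-truth matching and thus mAP are based, reduces to a few lines (a generic sketch, not the official evaluation code):

```python
def tiou(pred, gt):
    """Temporal IoU between two (ts, te) segments; a prediction counts as a
    true positive at threshold t when its tIoU with a ground truth >= t."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

# Average mAP: per-class AP is computed at each tIoU threshold (e.g.
# [0.1:0.1:0.9] on MultiThumos/Charades), then averaged over thresholds.
```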
Implementation Details.
We follow the practice of using off-the-shelf pre-extracted features as input: specifically, I3D [4] RGB features for MultiThumos, Charades, Thumos14 and ActivityNet; EgoVLP [26], SlowFast [15] and Omnivore [17] features for Ego4D-MQ1.0; and SlowFast features [15, 12] for Epic-Kitchens 100. We train our model with a batch size of 2, 16, 2, 2 for 60, 30, 15, 25 epochs on MultiThumos, Charades, Ego4D-MQ1.0 and Epic-Kitchens 100 respectively, where the learning rate is set to 2e-4. On ActivityNet and Thumos, we train our model with batch sizes of 16, 2 and learning rates of 1e-3, 1e-4 for 15, 30 epochs. We set λ as 0.3 and θ as 0.2.
In post-processing, we apply SoftNMS [1] to suppress redundant predictions. For fair comparison, we keep 200, 100, 2000 and 2000 predictions on Thumos14, ActivityNet, Ego4D-MQ1.0 and Epic-Kitchens 100, respectively. On MultiThumos and Charades, considering that PointTAD [52] splits a video clip into more than 4 parts and generates 48 predictions for each part, we keep 200 predictions on these two datasets.
Main Results
MultiThumos and Charades: We compare ASL with state-of-the-art methods under detection-mAP on these two densely-labeled TAL benchmarks. PDAN [10], Coarse-Fine [22], MLAD [54] and MS-TCT [9] are based on frame-level representation, while PointTAD [52] is query-based. As shown in Table 1, ASL reaches the highest mAP over all tIoU thresholds, outperforming the previous best method (i.e., PointTAD) by a 2.0% absolute increase of average mAP on MultiThumos and 3.3% on Charades. Notably, PointTAD is further trained in an end-to-end manner with strong image augmentation while ASL is feature-based, indicating that ASL performs more accurate TAL with more efficiency on densely-labeled datasets.

Ego4D-MQ1.0 and Epic-Kitchens 100: These two datasets are both challenging as they are large-scale, egocentric, densely labeled and composed of longer clips. Table 2 reports the results on Ego4D-MQ1.0. The state-of-the-art methods are all based on Actionformer [68] and perform frame-level recognition and localization with strong features. Using the same EgoVLP feature [26], ASL surpasses the current best entry [35]. Using the combined EgoVLP, SlowFast [15] and Omnivore [17] features, ASL gains a 2.06% improvement of average mAP on the Val set and 2.21% on the Test set. Moreover, ASL performs better than [5], which uses a stronger but not open-sourced InternVideo [5] feature. Meanwhile, on Epic-Kitchens 100, as Table 3 shows, ASL outperforms the strong performance of Actionformer [68], BMN [27] and G-TAD [64] with the same SlowFast feature [15, 12]. The above results demonstrate the advantage of ASL on challenging, egocentric and densely labeled benchmarks.

Thumos14 and ActivityNet1.3: These two datasets are popular and nearly single-labeled, with approximately one action instance in each clip. Table 4 compares the results of ASL with various state-of-the-art methods (e.g., two-stage methods: BSN [29], G-TAD [64], P-GCN [67], RTD-Net [51]; one-stage methods: AFSD [25], SSN [73], Actionformer [68]). On Thumos14, across all tIoU thresholds, ASL achieves the best results and gains a 1.1% improvement of average mAP (67.9% vs. 66.8%). On ActivityNet, ASL also outperforms previous methods on [email protected] and average mAP, though the gap is slight. One possible reason is that, due to the success of action recognition on ActivityNet, we follow the common practice [68, 71, 77] of fusing external video-level classification scores [60]; in this case, class-level sensitivity will not play an important role in training. Another reason may be that, since each video in ActivityNet is nearly single-labeled, our proposed ASCL will be short of positive and negative samples, leading to a non-significant increase compared to the improvements on densely labeled datasets in Tables 1 and 2.
Ablation Study
To further verify the efficacy of our contributions, we analyze the main components of ASL on MultiThumos.

Action Sensitive Evaluator. Our proposed ASE can be divided into class-level and instance-level modeling. We first investigate the effect of these parts. In Table 5, baseline 1 denotes using our proposed framework without ASE and ASCL. After being equipped with class-level modeling, it boosts the performance by 1.1% of average mAP (baseline 2 vs. baseline 1). When further adding the instance-level bias, it gains a 0.5% absolute increase (baseline 6 vs. baseline 2). In total, our ASE contributes an improvement of 1.6% on average mAP (baseline 7 vs. baseline 1). It is obvious that action sensitivity modeling from both the class level and the instance level is beneficial to the TAL task.

Gaussian Weights. Then we analyze the effect of learnable Gaussian weights in class-level action sensitivity learning. Table 6 demonstrates that, compared to baseline 1 which does not use any Gaussian weights to learn action sensitivity, fixed Gaussian weights with prior knowledge do bring benefits (baselines 2, 3 vs. baseline 1). Meanwhile, learnable Gaussian weights are more favored (baseline 4 vs. baseline 3, baseline 7 vs. baseline 6). Moreover, learnable Gaussian weights for both sub-tasks achieve the best results.

Action Sensitive Contrastive Loss. Moreover, we delve into our proposed ASCL. As shown in Table 5, ASCL improves average mAP by around 0.6% on the basis of the class-level prior (baseline 5 vs. baseline 2) and by 0.5% on the basis of ASE (baseline 7 vs. baseline 6). Baseline 4, where using ASCL alone denotes sampling near the center frame to form f_cls and f_loc directly, also gains an improvement of 0.3% compared to the vanilla framework (baseline 4 vs. baseline 1). This indicates the effectiveness of the contrast between actions and backgrounds. When performing ASCL on the basis of ASE, it facilitates the final performance more, because it can alleviate the misalignment as discussed in Section 3.3.
Finally, we discuss the hyperparameters in ASCL. Fig. 3(a) shows the performance curve of average mAP with respect to the ASCL weight λ. Average mAP on MultiThumos generally improves as λ increases and slightly drops as λ reaches 0.4. Fig. 3(b) reports the average mAP for different sampling length ratios δ. When δ equals 0.2, our method achieves the best result. We therefore set λ to 0.3 and δ to 0.2.
Qualitative Experiment
To better illustrate the effectiveness of ASL, we visualize some qualitative results on the Ego4D-MQ1.0 benchmark in Fig. 4. We show that frames depicting an action's main sub-action (e.g., hang clothes on the hanger, water runs through hands) have higher action sensitivity for classification. Frames depicting near-start and near-end sub-actions (e.g., turn the tap on, lift the laundry basket, etc.) have higher action sensitivity for localization. Moreover, the action sensitivity of frames is not continuous, as our proposed instance-level action sensitivity is discrete, partly because blurred or transitional frames exist in video clips.
Conclusion
In this paper, we introduce an Action Sensitivity Learning framework (ASL) for temporal action localization (TAL). ASL models the action sensitivity of each frame and dynamically changes frame weights during training. Together with the proposed Action Sensitive Contrastive Loss (ASCL), which further enhances features and alleviates misalignment, ASL is able to recognize and localize action instances effectively. For accurate TAL, fine-grained (e.g., frame-level) information should be considered, and we believe that ASL is a step further in this direction. In the future, effort could be devoted to more complicated sensitivity modeling. Besides, ASL could also be redesigned as a plug-and-play component that would benefit various TAL methods.
Figure 3. Ablation of hyperparameters in ASCL: (a) average mAP (%) vs. the ASCL loss weight λ; (b) average mAP (%) vs. the sampling length ratio δ.
Figure 4. Visualization of (top) the frame sensitivity to sub-tasks of Action: hang clothes to dry and (bottom) Action: wash hands. Please zoom in for the best view.
Table 3. Results on EPIC-Kitchens 100 val set. We report mAP at different tIoU thresholds and average mAP in [0.1:0.1:0.5]. All methods use the same SlowFast [15, 12] features.

Sub-Task  Method             0.1    0.3    0.5    Avg
Verb      BMN [27]           10.8   8.4    5.6    8.4
          G-TAD [64]         12.1   9.4    6.5    9.4
          Actionformer [68]  26.6   24.2   19.1   23.5
          ASL                27.9   25.5   19.8   24.6
Noun      BMN [27]           10.3   6.2    3.4    6.5
          G-TAD [64]         11.0   8.6    5.4    8.4
          Actionformer [68]  25.2   22.7   17.0   21.9
          ASL                26.0   23.4   17.7   22.6
Table 4. Results on Thumos14 and ActivityNet1.3. We report mAP at different tIoU thresholds. Average mAP in [0.3:0.1:0.7] is reported on THUMOS14 and [0.5:0.05:0.95] on ActivityNet1.3. Best results are in bold.

                                     Thumos14                            ActivityNet1.3
Model              Feature    0.3   0.4   0.5   0.6   0.7   Avg.   0.5   0.75  0.95  Avg.
BSN [29]           TSN [57]   53.5  45.0  36.9  28.4  20.0  36.8   46.5  30.0  8.0   30.0
BMN [27]           TSN [57]   56.0  47.4  38.8  29.7  20.5  38.5   50.1  34.8  8.3   33.9
G-TAD [64]         TSN [57]   54.5  47.6  40.3  30.8  23.4  39.3   50.4  34.6  9.0   34.1
P-GCN [67]         I3D [4]    63.6  57.8  49.1  -     -     -      48.3  33.2  3.3   31.1
TCANet [38]        TSN [57]   60.6  53.2  44.6  36.8  26.7  44.3   52.3  36.7  6.9   35.5
ContextLoc [77]    I3D [4]    68.3  63.8  54.3  41.8  26.2  50.9   56.0  35.2  3.6   34.2
VSGN [71]          TSN [57]   66.7  60.4  52.4  41.0  30.4  50.2   52.4  36.0  8.4   35.1
RTD-Net [51]       I3D [4]    68.3  62.3  51.9  38.8  23.7  49.0   47.2  30.7  8.6   30.8
SSN [73]           TS [47]    51.0  41.0  29.8  -     -     -      43.2  28.7  5.6   28.3
GTAN [34]          P3D [39]   57.8  47.2  38.8  -     -     -      52.6  34.1  8.9   34.3
AFSD [25]          I3D [4]    67.3  62.4  55.5  43.7  31.1  52.0   52.4  35.3  6.5   34.4
React [42]         I3D [4]    69.2  65.0  57.1  47.8  35.6  55.0   49.6  33.0  8.6   32.6
TadTR [33]         I3D [4]    62.4  57.4  49.2  37.8  26.3  46.6   49.1  32.6  8.5   32.3
Actionformer [68]  I3D [4]    82.1  77.8  71.0  59.4  43.9  66.8   54.2  36.9  7.6   36.0
ASL                I3D [4]    83.1  79.0  71.7  59.7  45.8  67.9   54.1  37.4  8.0   36.2
Table 5. Ablation studies of components. ASE: Action Sensitivity Evaluator. class.: class-level modeling. inst.: instance-level modeling. ASCL: Action Sensitive Contrastive Loss. We report mAP at different tIoUs.

#  ASE class.  ASE inst.  ASCL  0.2   0.5   0.7   Avg.
1                               39.6  25.9  11.6  23.4
2  ✓                            41.0  26.5  12.9  24.5
3              ✓                40.5  26.2  12.0  23.9
4                         ✓     40.2  26.1  11.8  23.7
5  ✓                      ✓     41.9  27.0  13.6  25.1
6  ✓           ✓                41.8  27.2  13.3  25.0
7  ✓           ✓          ✓     42.4  27.8  13.7  25.5

Table 6. Ablation studies of Gaussian weights. cls and loc denote the classification and localization sub-tasks. For Gaussian weights in class-level action sensitivity learning, learnable/fixed denotes whether the parameters are learnable. None denotes not using Gaussian weights.

#  cls.       loc.       0.1   0.3   0.5   Avg.
1  None       None       40.9  26.3  12.3  24.2
2  fixed      None       40.9  26.5  12.4  24.4
3  fixed      fixed      41.0  26.6  12.7  24.6
4  fixed      learnable  41.7  26.8  13.0  24.9
5  learnable  None       41.9  27.1  13.0  24.9
6  learnable  fixed      42.0  26.9  13.4  25.1
7  learnable  learnable  42.4  27.8  13.7  25.5
References

[1] Navaneeth Bodla, Bharat Singh, Rama Chellappa, and Larry S. Davis. Soft-NMS: Improving object detection with one line of code. In Proceedings of the IEEE International Conference on Computer Vision, pages 5561-5569, 2017.
[2] Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem, and Juan Carlos Niebles. ActivityNet: A large-scale video benchmark for human activity understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 961-970, 2015.
[3] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In Computer Vision - ECCV 2020, Part I, pages 213-229. Springer, 2020.
[4] Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? A new model and the Kinetics dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6299-6308, 2017.
[5] Guo Chen, Sen Xing, Zhe Chen, Yi Wang, Kunchang Li, Yizhuo Li, Yi Liu, Jiahao Wang, Yin-Dong Zheng, Bingkun Huang, Zhiyu Zhao, Junting Pan, Yifei Huang, Zun Wang, Jiashuo Yu, Yinan He, Hongjie Zhang, Tong Lu, Yali Wang, Limin Wang, and Yu Qiao. InternVideo-Ego4D: A pack of champion solutions to Ego4D challenges. 2022.
[6] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, pages 1597-1607. PMLR, 2020.
[7] Feng Cheng and Gedas Bertasius. TallFormer: Temporal action localization with a long-memory transformer. In Computer Vision - ECCV 2022, Part XXXIV, pages 503-521. Springer, 2022.
[8] Rui Dai, Srijan Das, and Francois Bremond. CTRN: Class-temporal relational network for action detection. arXiv preprint arXiv:2110.13473, 2021.
[9] Rui Dai, Srijan Das, Kumara Kahatapitiya, Michael S. Ryoo, and François Brémond. MS-TCT: Multi-scale temporal convtransformer for action detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20041-20051, 2022.
[10] Rui Dai, Srijan Das, Luca Minciullo, Lorenzo Garattoni, Gianpiero Francesca, and François Bremond. PDAN: Pyramid dilated attention network for action detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 2970-2979, 2021.
[11] Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Sanja Fidler, Antonino Furnari, Evangelos Kazakos, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, and Michael Wray. Scaling egocentric vision: The EPIC-Kitchens dataset. In European Conference on Computer Vision (ECCV), 2018.
[12] Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Antonino Furnari, Jian Ma, Evangelos Kazakos, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, and Michael Wray. Rescaling egocentric vision: Collection, pipeline and challenges for EPIC-Kitchens-100. International Journal of Computer Vision (IJCV), 130:33-55, 2022.
[13] Kaiwen Duan, Song Bai, Lingxi Xie, Honggang Qi, Qingming Huang, and Qi Tian. CenterNet: Keypoint triplets for object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6569-6578, 2019.
[14] Lijie Fan, Wenbing Huang, Chuang Gan, Stefano Ermon, Boqing Gong, and Junzhou Huang. End-to-end learning of motion representation for video understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6016-6025, 2018.
[15] Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. SlowFast networks for video recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019.
[16] Chuang Gan, Naiyan Wang, Yi Yang, Dit-Yan Yeung, and Alex G. Hauptmann. DevNet: A deep event network for multimedia event detection and evidence recounting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2568-2577, 2015.
[17] Rohit Girdhar, Mannat Singh, Nikhila Ravi, Laurens van der Maaten, Armand Joulin, and Ishan Misra. Omnivore: A single model for many visual modalities. In CVPR, 2022.
[18] Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, et al. Ego4D: Around the world in 3,000 hours of egocentric video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18995-19012, 2022.
[19] Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent: A new approach to self-supervised learning. Advances in Neural Information Processing Systems, 33:21271-21284, 2020.
[20] Michael Gutmann and Aapo Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 297-304. JMLR Workshop and Conference Proceedings, 2010.
[21] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9729-9738, 2020.
[22] Kumara Kahatapitiya and Michael S. Ryoo. Coarse-fine networks for temporal activity detection in videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8385-8394, 2021.
[23] Dahun Kim, Donghyeon Cho, and In So Kweon. Self-supervised video representation learning with space-time cubic puzzles. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 8545-8552, 2019.
[24] Hei Law and Jia Deng. CornerNet: Detecting objects as paired keypoints. In Proceedings of the European Conference on Computer Vision (ECCV), pages 734-750, 2018.
[25] Chuming Lin, Chengming Xu, Donghao Luo, Yabiao Wang, Ying Tai, Chengjie Wang, Jilin Li, Feiyue Huang, and Yanwei Fu. Learning salient boundary feature for anchor-free temporal action localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3320-3329, 2021.
[26] Kevin Qinghong Lin, Alex Jinpeng Wang, Mattia Soldan, Michael Wray, Rui Yan, Eric Zhongcong Xu, Difei Gao, Rongcheng Tu, Wenzhe Zhao, Weijie Kong, et al. Egocentric video-language pretraining. arXiv preprint arXiv:2206.01670, 2022.
[27] Tianwei Lin, Xiao Liu, Xin Li, Errui Ding, and Shilei Wen. BMN: Boundary-matching network for temporal action proposal generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3889-3898, 2019.
[28] Tianwei Lin, Xu Zhao, and Zheng Shou. Single shot temporal action detection. In Proceedings of the 25th ACM International Conference on Multimedia, pages 988-996, 2017.
[29] Tianwei Lin, Xu Zhao, Haisheng Su, Chongjing Wang, and Ming Yang. BSN: Boundary sensitive network for temporal action proposal generation. In European Conference on Computer Vision, 2018.
[30] Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2117-2125, 2017.
[31] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, pages 2980-2988, 2017.
[32] Shuming Liu, Mengmeng Xu, Chen Zhao, Xu Zhao, and Bernard Ghanem. ETAD: Training action detection end to end on a laptop. 2022.
[33] Xiaolong Liu, Qimeng Wang, Yao Hu, Xu Tang, Shiwei Zhang, Song Bai, and Xiang Bai. End-to-end temporal action detection with transformer. IEEE Transactions on Image Processing, 31:5427-5441, 2022.
[34] Fuchen Long, Ting Yao, Zhaofan Qiu, Xinmei Tian, Jiebo Luo, and Tao Mei. Gaussian temporal awareness networks for action localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 344-353, 2019.
[35] Fangzhou Mu, Sicheng Mo, Gillian Wang, and Yin Li. Where a strong backbone meets strong features: ActionFormer for Ego4D moment queries challenge. 2022.
[36] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
[37] AJ Piergiovanni and Michael Ryoo. Temporal Gaussian mixture layer for videos. In International Conference on Machine Learning, pages 5152-5161. PMLR, 2019.
[38] Zhiwu Qing, Haisheng Su, Weihao Gan, Dongliang Wang, Wei Wu, Xiang Wang, Yu Qiao, Junjie Yan, Changxin Gao, and Nong Sang. Temporal context aggregation network for temporal action proposal refinement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 485-494, 2021.
[39] Zhaofan Qiu, Ting Yao, and Tao Mei. Learning spatio-temporal representation with pseudo-3D residual networks. 2017.
[40] Zhaofan Qiu, Ting Yao, Chong-Wah Ngo, Xinmei Tian, and Tao Mei. Learning spatio-temporal representation with local and global diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12056-12065, 2019.
[41] Jiayi Shao, Xiaohan Wang, and Yi Yang. ReLER@ZJU submission to the Ego4D moment queries challenge 2022. 2022.
[42] Dingfeng Shi, Yujie Zhong, Qiong Cao, Jing Zhang, Lin Ma, Jia Li, and Dacheng Tao. ReAct: Temporal action detection with relational queries. In European Conference on Computer Vision, 2022.
[43] Zheng Shou, Jonathan Chan, Alireza Zareian, Kazuyuki Miyazawa, and Shih-Fu Chang. CDC: Convolutional-de-convolutional networks for precise temporal action localization in untrimmed videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5734-5743, 2017.
[44] Zheng Shou, Dongang Wang, and Shih-Fu Chang. Temporal action localization in untrimmed videos via multi-stage CNNs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1049-1058, 2016.
[45] Abhinav Shrivastava, Abhinav Gupta, and Ross Girshick. Training region-based object detectors with online hard example mining. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 761-769, 2016.
[46] Gunnar A. Sigurdsson, Gül Varol, Xiaolong Wang, Ali Farhadi, Ivan Laptev, and Abhinav Gupta. Hollywood in homes: Crowdsourcing data collection for activity understanding. In Computer Vision - ECCV 2016, Part I, pages 510-526. Springer, 2016.
[47] Karen Simonyan and Andrew Zisserman. Two-stream convolutional networks for action recognition in videos. 2014.
[48] Deepak Sridhar, Niamul Quader, Srikanth Muralidharan, Yaoxin Li, Peng Dai, and Juwei Lu. Class semantics-based attention for action detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 13739-13748, 2021.
[49] Haisheng Su, Weihao Gan, Wei Wu, Yu Qiao, and Junjie Yan. BSN++: Complementary boundary regressor with scale-balanced relation modeling for temporal action proposal generation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 2602-2610, 2021.
[50] Yu-Gang Jiang, Jingen Liu, A. Roshan Zamir, George Toderici, Ivan Laptev, Mubarak Shah, and Rahul Sukthankar. THUMOS challenge: Action recognition with a large number of classes. 2014.
[51] Jing Tan, Jiaqi Tang, Limin Wang, and Gangshan Wu. Relaxed transformer decoders for direct action proposal generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 13526-13535, 2021.
[52] Jing Tan, Xiaotong Zhao, Xintian Shi, Bin Kang, and Limin Wang. PointTAD: Multi-label temporal action detection with learnable query points. In Advances in Neural Information Processing Systems.
[53] Zhi Tian, Chunhua Shen, Hao Chen, and Tong He. FCOS: Fully convolutional one-stage object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9627-9636, 2019.
[54] Praveen Tirupattur, Kevin Duarte, Yogesh S. Rawat, and Mubarak Shah. Modeling multi-label action dependencies for temporal action localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1460-1470, 2021.
[55] Zhan Tong, Yibing Song, Jue Wang, and Limin Wang. VideoMAE: Masked autoencoders are data-efficient learners for self-supervised video pre-training. In Advances in Neural Information Processing Systems, 2022.
[56] Heng Wang, Dan Oneata, Jakob Verbeek, and Cordelia Schmid. A robust and efficient video representation for action recognition. International Journal of Computer Vision, 119:219-238, 2016.
[57] Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, and Luc Van Gool. Temporal segment networks: Towards good practices for deep action recognition. In European Conference on Computer Vision, pages 20-36. Springer, 2016.
[58] Qiang Wang, Yanhao Zhang, Yun Zheng, and Pan Pan. RCL: Recurrent continuous localization for temporal action detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13566-13575, 2022.
[59] Xiang Wang, Zhiwu Qing, Ziyuan Huang, Yutong Feng, Shiwei Zhang, Jianwen Jiang, Mingqian Tang, Changxin Gao, and Nong Sang. Proposal relation network for temporal action detection. 2021.
[60] Yuanjun Xiong, Limin Wang, Zhe Wang, Bowen Zhang, Hang Song, Wei Li, Dahua Lin, Yu Qiao, Luc Van Gool, and Xiaoou Tang. CUHK & ETHZ & SIAT submission to ActivityNet challenge 2016. 2016.
[61] Huijuan Xu, Abir Das, and Kate Saenko. R-C3D: Region convolutional 3D network for temporal activity detection. In Proceedings of the International Conference on Computer Vision (ICCV), 2017.
[62] Mengmeng Xu, Juan-Manuel Pérez-Rúa, Victor Escorcia, Brais Martinez, Xiatian Zhu, Li Zhang, Bernard Ghanem, and Tao Xiang. Boundary-sensitive pre-training for temporal localization in videos. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7220-7230, 2021.
[63] Mengmeng Xu, Juan Manuel Perez Rua, Xiatian Zhu, Bernard Ghanem, and Brais Martinez. Low-fidelity video encoder optimization for temporal action localization. Advances in Neural Information Processing Systems, 34:9923-9935, 2021.
[64] Mengmeng Xu, Chen Zhao, David S. Rojas, Ali Thabet, and Bernard Ghanem. G-TAD: Sub-graph localization for temporal action detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
[65] Ze Yang, Shaohui Liu, Han Hu, Liwei Wang, and Stephen Lin. RepPoints: Point set representation for object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9657-9666, 2019.
[66] Serena Yeung, Olga Russakovsky, Ning Jin, Mykhaylo Andriluka, Greg Mori, and Li Fei-Fei. Every moment counts: Dense detailed labeling of actions in complex videos. International Journal of Computer Vision, 126:375-389, 2018.
[67] Runhao Zeng, Wenbing Huang, Mingkui Tan, Yu Rong, Peilin Zhao, Junzhou Huang, and Chuang Gan. Graph convolutional networks for temporal action localization. In ICCV, 2019.
[68] Chen-Lin Zhang, Jianxin Wu, and Yin Li. ActionFormer: Localizing moments of actions with transformers. In European Conference on Computer Vision, volume 13664 of LNCS, pages 492-510, 2022.
[69] Shifeng Zhang, Cheng Chi, Yongqiang Yao, Zhen Lei, and Stan Z. Li. Bridging the gap between anchor-based and anchor-free detection via adaptive training sample selection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9759-9768, 2020.
[70] Chen Zhao, Merey Ramazanova, Mengmeng Xu, and Bernard Ghanem. SegTAD: Precise temporal action detection via semantic segmentation. 2022.
[71] Chen Zhao, Ali Thabet, and Bernard Ghanem. Video self-stitching graph network for temporal action localization. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 13638-13647, 2021.
[72] Peisen Zhao, Lingxi Xie, Chen Ju, Ya Zhang, Yanfeng Wang, and Qi Tian. Bottom-up temporal action localization with mutual regularization. In Computer Vision - ECCV 2020, Part VIII, pages 539-555. Springer, 2020.
[73] Yue Zhao, Yuanjun Xiong, Limin Wang, Zhirong Wu, Xiaoou Tang, and Dahua Lin. Temporal action detection with structured segment networks. In ICCV, 2017.
[74] Zhaohui Zheng, Ping Wang, Wei Liu, Jinze Li, Rongguang Ye, and Dongwei Ren. Distance-IoU loss: Faster and better learning for bounding box regression. In The AAAI Conference on Artificial Intelligence (AAAI), 2020.
[75] Benjin Zhu, Jianfeng Wang, Zhengkai Jiang, Fuhang Zong, Songtao Liu, Zeming Li, and Jian Sun. AutoAssign: Differentiable label assignment for dense object detection. arXiv preprint arXiv:2007.03496, 2020.
[76] Chenchen Zhu, Yihui He, and Marios Savvides. Feature selective anchor-free module for single-shot object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 840-849, 2019.
[77] Zixin Zhu, Wei Tang, Le Wang, Nanning Zheng, and G. Hua. Enriching local and global contexts for temporal action localization. In ICCV, 2021.
| []
|
[
"Deeply Coupled Cross-Modal Prompt Learning",
"Deeply Coupled Cross-Modal Prompt Learning",
"Deeply Coupled Cross-Modal Prompt Learning",
"Deeply Coupled Cross-Modal Prompt Learning"
]
| [
"Xuejing Liu [email protected] \nSenseTime Research\n\n",
"Wei Tang [email protected] \nNanjing University of Science and Technology\n\n",
"Jinghui Lu [email protected] \nSenseTime Research\n\n",
"Rui Zhao [email protected] \nSenseTime Research\n\n",
"Zhaojun Guo [email protected] \nFudan University\n\n",
"Fei Tan [email protected] \nSenseTime Research\n\n",
"Xuejing Liu [email protected] \nSenseTime Research\n\n",
"Wei Tang [email protected] \nNanjing University of Science and Technology\n\n",
"Jinghui Lu [email protected] \nSenseTime Research\n\n",
"Rui Zhao [email protected] \nSenseTime Research\n\n",
"Zhaojun Guo [email protected] \nFudan University\n\n",
"Fei Tan [email protected] \nSenseTime Research\n\n"
]
| [
"SenseTime Research\n",
"Nanjing University of Science and Technology\n",
"SenseTime Research\n",
"SenseTime Research\n",
"Fudan University\n",
"SenseTime Research\n",
"SenseTime Research\n",
"Nanjing University of Science and Technology\n",
"SenseTime Research\n",
"SenseTime Research\n",
"Fudan University\n",
"SenseTime Research\n"
]
| []
| Recent advancements in multimodal foundation models (e.g., CLIP) have excelled in zeroshot generalization. Prompt tuning involved in the knowledge transfer from foundation models to downstream tasks has gained significant attention recently. Existing prompttuning methods in cross-modal learning, however, either solely focus on language branch, or learn vision-language interaction in a shallow mechanism. In this context, we propose a Deeply coupled Cross-modal Prompt learning (DCP) method based on CLIP. DCP flexibly accommodates the interplay between vision and language with a Cross-Modal Prompt Attention (CMPA) mechanism, which enables the mutual exchange of respective representation through a well-connected multi-head attention module progressively and strongly. We then conduct comprehensive few-shot learning experiments on 11 image classification datasets and analyze the robustness to domain shift as well. Thorough experimental analysis evidently demonstrates the superb few-shot generalization and compelling domain adaption capacity of a well-executed DCP. The code can be found at https://github.com/GingL/CMPA. | 10.48550/arxiv.2305.17903 | [
"https://export.arxiv.org/pdf/2305.17903v2.pdf"
]
| 258,959,021 | 2305.17903 | f0d172b41055b0e3d6c5ac2d4f880d037dc10387 |
Deeply Coupled Cross-Modal Prompt Learning
Xuejing Liu [email protected]
SenseTime Research
Wei Tang [email protected]
Nanjing University of Science and Technology
Jinghui Lu [email protected]
SenseTime Research
Rui Zhao [email protected]
SenseTime Research
Zhaojun Guo [email protected]
Fudan University
Fei Tan [email protected]
SenseTime Research
Deeply Coupled Cross-Modal Prompt Learning
Recent advancements in multimodal foundation models (e.g., CLIP) have excelled in zeroshot generalization. Prompt tuning involved in the knowledge transfer from foundation models to downstream tasks has gained significant attention recently. Existing prompttuning methods in cross-modal learning, however, either solely focus on language branch, or learn vision-language interaction in a shallow mechanism. In this context, we propose a Deeply coupled Cross-modal Prompt learning (DCP) method based on CLIP. DCP flexibly accommodates the interplay between vision and language with a Cross-Modal Prompt Attention (CMPA) mechanism, which enables the mutual exchange of respective representation through a well-connected multi-head attention module progressively and strongly. We then conduct comprehensive few-shot learning experiments on 11 image classification datasets and analyze the robustness to domain shift as well. Thorough experimental analysis evidently demonstrates the superb few-shot generalization and compelling domain adaption capacity of a well-executed DCP. The code can be found at https://github.com/GingL/CMPA.
Introduction
Large foundation models pre-trained on web-scale image-text pairs such as CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021) have shown promising performance on zero-shot image classification. Research has repeatedly shown that the general knowledge learned by the foundation models can also be transferred to diverse downstream tasks, such as few-shot image classification (Zhou et al., 2022b,a), visual grounding (Subramanian et al., 2022), visual question answering and so on. They have exhibited a significant potential in open-vocabulary scenarios. Thus, the + Work was done during internship at SenseTime Research * Corresponding author challenge associated with how to efficiently and effectively adapt large pre-trained models to downstream tasks has garnered increasing attention especially in low-resource training scenarios.
Directly fine-tuning the foundation model is infeasible due to the massive number of training parameters and the catastrophic forgetting caused by overfitting (Kirkpatrick et al., 2016). In contrast, the parameter-efficient prompt-tuning approach explored in natural language processing has yielded significant success (Lester et al., 2021), leading to an increased examination of this technique within the realm of multi-modality, especially in the language branch of CLIP. For example, CoOp (Zhou et al., 2022b) and ProDA (Lu et al., 2022b) explore vanilla few-shot learning based on CLIP by adjusting the embedding or distribution of the text prompt. CoCoOp and ProGrad focus more on unseen classes: they either contextualize the text prompt under the supervision of visual clues or tweak the gradient direction to improve the generalization ability of the model.
The aforementioned approaches, however, only adjust the text embedding of CLIP and neglect the visual branch. The success of VPT (Jia et al., 2022) demonstrates the effectiveness of visual prompt learning. Inspired by this work, UPT (Zang et al., 2022) and MaPLe (Khattak et al., 2022) synergize the visual and textual prompts. Specifically, UPT improves the few-shot learning ability by generating visual and text prompts initially. MaPLe achieves better performance in the classification of unseen classes. They uncover the underlying rationale and limitations of dual-branch prompt tuning.
Concretely, the dual-branch CLIP learns visual-language synergy only through contrastive learning, whereas both branches lack mutual communication at the early stages of the network. Multi-modal prompt learning techniques, such as MaPLe and UPT, incorporate language-vision interactions into the network and achieve substantially improved performance, highlighting the significance of cross-modal interactions. However, previous studies have leveraged language-vision interactions only at a superficial level. For example, UPT generates visual and text prompts before they are fed into the corresponding encoders. MaPLe generates visual prompts conditioned on their language counterparts by a mapping function. Many studies (Dosovitskiy et al., 2021; Wang et al., 2022a) have shown that neural networks, especially transformer-based models, can leverage the deep fusion of information from multiple views to improve their performance, but this remains less explored in the thread of multi-modal few-shot learning. To this end, we design Deeply coupled Cross-modal Prompt learning (DCP) to enhance the language-vision interaction. Specifically, DCP is built upon CLIP, with additional text and visual prompts across multiple layers. Different from previous methods with deep prompt tuning (Jia et al., 2022; Zang et al., 2022; Khattak et al., 2022), DCP only initializes the first layer of visual and text prompts randomly. The subsequent prompts are generated by the Cross-Modal Prompt Attention (CMPA) module, which elegantly integrates the prompts from the preceding cross-modal layer. CMPA is characterized by stronger connections in two respects, i.e., depth and breadth. 1) Depth means that CMPA intensifies the correlation of the prompts among different layers. 2) Breadth refers to the fact that CMPA amplifies the interaction between the visual and language modalities. CMPA is the core module that realizes the deep coupling between the two modalities. Essentially, DCP empowered by CMPA amalgamates the uni-branch and dual-branch multi-modal pre-training paradigms in a favorable way, in an attempt to bridge the discrepancy between visual and textual knowledge without introducing too much overhead.
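To make the idea of CMPA concrete, a minimal sketch in PyTorch is given below. This is our reading of the mechanism described above (next-layer prompts of each branch attending to the concatenated visual and text prompts of the previous layer); the hidden size, head count, and any projection between the two branches' dimensions are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class CMPA(nn.Module):
    """Cross-Modal Prompt Attention (sketch): generates layer-(l+1) prompts
    from the layer-l visual and text prompts of both branches."""
    def __init__(self, dim, n_heads=8):
        super().__init__()
        self.attn_v = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.attn_t = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, vis_prompt, txt_prompt):
        # vis_prompt: (B, Nv, dim), txt_prompt: (B, Nt, dim); a shared dim
        # is assumed (in practice a projection between branches may be needed).
        ctx = torch.cat([vis_prompt, txt_prompt], dim=1)  # cross-modal context
        next_vis, _ = self.attn_v(vis_prompt, ctx, ctx)   # visual queries both modalities
        next_txt, _ = self.attn_t(txt_prompt, ctx, ctx)   # text queries both modalities
        return next_vis, next_txt
```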
To conclude, the contributions of this work are as follows:
• We develop a deeply coupled cross-modal prompt learning (DCP) method with a core module, cross-modal prompt attention (CMPA). CMPA reinforces the interaction between the visual and language modalities across different layers.
• We benchmark our method on 11 image classification datasets consisting of generic objects, scenes, actions and fine-grained categories. Our method surpasses visual prompt tuning, text prompt tuning and existing competitive multi-modal prompt tuning methods under the few-shot setting.
• We conduct experiments on domain adaptation tasks. Our method achieves comparable performance to the state-of-the-art methods, indicating the robustness of our method to domain shift.
Related Work
Vision-language Pre-trained Models
The advent of the Transformer (Vaswani et al., 2017) has accelerated the development of large-scale pre-training. The application of the Transformer in the multi-modal domain is divided into two schools of thought. One is the single-stream model, in which language and vision information are fused at the beginning and fed directly into the encoder together; the other is the dual-stream model, in which language and vision information first pass through two separate encoder modules, and the different modal information is then fused through a cross-modal Transformer. Early on, the basic architecture of much contemporaneous work was BERT: region features are extracted from images with Faster R-CNN (Ren et al., 2015) and fed into BERT along with the text to align the two modalities. Following the same process as BERT, these methods first pre-train and then fine-tune on the corresponding tasks. Single-stream networks (Alberti et al., 2019; Chen et al., 2019; Zhou et al., 2020; Qi et al., 2020) fuse information from different modalities directly through one encoder, whereas dual-stream models (Lu et al., 2019; Tan and Bansal, 2019) integrate the different modal information through a cross-modal Transformer. Empirically, single-stream networks fuse information more thoroughly, while dual-stream networks can be more efficient to train due to fewer training parameters. In the design of our method, we aim to combine the advantages of the single-stream and dual-stream paradigms, so as to enhance cross-modal integration without introducing many training parameters.
Recent cross-modal large-scale pre-training models have made greater breakthroughs in training data scale and tasks by devising various model architectures and training objectives, and have achieved impressive performance on many downstream tasks. CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021) obtained remarkable zero-shot results after being pre-trained on millions or billions of (image, text) pairs collected from the internet. CoCa (Yu et al., 2022) combined the advantages of the contrastive learning method (Radford et al., 2021) and the generative model SimVLM (Wang et al., 2022b) by adding a captioning loss to the contrastive loss of CLIP. OFA (Wang et al., 2022a), Unified-IO (Lu et al., 2022a) and Florence (Yuan et al., 2021) unified vision, language and multi-modal tasks by pre-training on both cross-modal and uni-modal data. These methods have achieved state-of-the-art results on many downstream tasks. Some methods are dedicated to improving the performance of certain specific tasks: UniTAB (Yang et al., 2022) focuses on grounded vision-language tasks such as grounded captioning and visual grounding, and GLIP (Li et al., 2022) unifies object detection and phrase grounding for pre-training. Pre-training models have opened up a situation where the scale and performance of deep learning models grow in tandem, becoming a revolutionary breakthrough in artificial intelligence and deep learning.
Prompt Learning
For a long time, first pre-training and then fine-tuning was the dominant approach to applying large foundation models to downstream tasks. However, fine-tuning large models is inefficient and may cause catastrophic forgetting (Kirkpatrick et al., 2016). Prompt learning has been proposed to address these problems. The prompt is usually a series of trainable parameters inserted into the input. The success of prompt learning in NLP (Lester et al., 2021) has inspired its application in other modalities; VPT (Jia et al., 2022) is a typical successful application of prompt learning in computer vision. Prompt learning has since attracted more attention and made great progress in cross-modal learning.
SoftCPT (Ding et al., 2022) and CPL (He et al., 2022) applied prompt tuning to different vision and language tasks and outperformed single-task prompt tuning methods. CoOp (Zhou et al., 2022b), ProDA (Lu et al., 2022b) and UPT (Zang et al., 2022) adapted prompt learning to traditional few-shot visual recognition with CLIP as the backbone.
CoCoOp (Zhou et al., 2022a), ProGrad (Zhu et al., 2022) and MaPLe (Khattak et al., 2022) improved the classification performance of pre-trained models on novel categories through prompt learning. Different from previous methods, our approach brings a stronger connection between modalities and layers with the proposed cross-modal prompt attention. The stronger interaction between vision and language enables our method to achieve state-of-the-art performance in few-shot learning.
Method
In this section, we first introduce the preliminaries, including CLIP (Radford et al., 2021), CoOp (Zhou et al., 2022b) and VPT (Jia et al., 2022). Then, we describe our deeply coupled cross-modal prompt learning (DCP) and detail its underlying module, CMPA.
Preliminaries
CLIP is a dual-encoder pre-trained model consisting of a text encoder and an image encoder. The text and image are independently encoded by the corresponding encoders and then projected into the same embedding space by a projection layer. Specifically, the backbone of the image encoder is ResNet (He et al., 2016) (d=256) or ViT (d=512), which maps the high-dimensional image into a low-dimensional embedding. The text encoder is built on the decoder of the Transformer (Vaswani et al., 2017), also known as GPT (Brown et al., 2020), to generate a vectorized representation for a sequence of words. The model uses a contrastive loss to align the two modalities during the training stage. The training objective is to maximize the cosine similarity of matched image-text pairs and minimize that of unmatched ones.
In zero-shot image recognition, the image encoder of CLIP encodes the image into a feature representation $x$. The input text is usually of the form "a photo of a {class}." (a discrete prompt), where the "{class}" token is the name of each category. For each dataset containing $K$ categories, a set of text prompts $\{w_i\}_{i=1}^{K}$ is generated by the text encoder. The prediction probability is computed as
$$p(y \mid x) = \frac{\exp\left(\cos(x, w_y)/\tau\right)}{\sum_{i=1}^{K} \exp\left(\cos(x, w_i)/\tau\right)}, \qquad (1)$$
where τ is a temperature parameter.
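To make Eq. (1) concrete, the following is a minimal sketch of the zero-shot prediction step, assuming the image feature x and the per-class text features w have already been produced by the two encoders; the tensors below are random placeholders rather than actual CLIP outputs.

```python
# A minimal sketch of Eq. (1): zero-shot classification from
# precomputed CLIP-style embeddings (hypothetical inputs).
import torch
import torch.nn.functional as F

def zero_shot_probs(x: torch.Tensor, w: torch.Tensor, tau: float = 0.01) -> torch.Tensor:
    """x: (d,) image embedding; w: (K, d) class text embeddings."""
    x = F.normalize(x, dim=-1)      # cosine similarity = dot product of unit vectors
    w = F.normalize(w, dim=-1)
    logits = (w @ x) / tau          # (K,) cosine similarities scaled by temperature
    return logits.softmax(dim=-1)   # p(y | x) as in Eq. (1)

# Usage with random placeholders for a 10-class problem, d = 512:
probs = zero_shot_probs(torch.randn(512), torch.randn(10, 512))
```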
CoOp adapts CLIP to downstream tasks with prompt tuning. Specifically, CoOp learns prompt embeddings (continuous prompts) during few-shot training to avoid manual prompt design. The prompt fed into the text encoder is designed as $t = [V]_1[V]_2\ldots[V]_M[\mathrm{CLASS}]$, where each $[V]_m$ ($m \in \{1, \ldots, M\}$) is initialized with the same dimension as the word embeddings. The parameters of the CLIP model are frozen while the prompt is trainable. The prediction probability of CoOp is
$$p(y \mid x) = \frac{\exp\left(\cos(x, g(t_y))/\tau\right)}{\sum_{i=1}^{K} \exp\left(\cos(x, g(t_i))/\tau\right)}, \qquad (2)$$
where g(·) denotes the text encoder.
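The snippet below is a minimal sketch of CoOp-style prompt construction: M shared, learnable context vectors are prepended to each class-name embedding before the frozen text encoder. The module name and the assumption that class-name word embeddings are precomputed are ours, not CoOp's reference implementation.

```python
# A minimal sketch of continuous prompts: t = [V]_1..[V]_M [CLASS].
import torch
import torch.nn as nn

class CoOpPrompt(nn.Module):
    def __init__(self, n_ctx: int = 16, dim: int = 512):
        super().__init__()
        self.ctx = nn.Parameter(torch.randn(n_ctx, dim) * 0.02)  # [V]_1..[V]_M

    def forward(self, class_embeds: torch.Tensor) -> torch.Tensor:
        # class_embeds: (K, L_cls, dim) word embeddings of each class name (frozen).
        K = class_embeds.shape[0]
        ctx = self.ctx.unsqueeze(0).expand(K, -1, -1)  # share context across classes
        return torch.cat([ctx, class_embeds], dim=1)   # fed into the frozen text encoder
```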
VPT is an efficient and effective way to adapt large-scale Transformer models in vision with only a small number of trainable parameters. The backbone of VPT is ViT, which is the same as the image encoder of CLIP. There are two variants of VPT: VPT-Shallow and VPT-Deep. VPT-Shallow only inserts prompts into the first layer of the Transformer. The visual prompt can be defined as $p = [P]_1[P]_2\ldots[P]_N$, where each $[P]_n$ ($n \in \{1, \ldots, N\}$) keeps the same dimension as the image embedding. The input of VPT-Shallow is $[x_{cls}, p, x]$, where $x_{cls}$ is the classification token [CLS]. VPT-Deep introduces visual prompts at every Transformer layer and can be formulated as
$$[x^{i}_{cls}, \ldots, x^{i}] = L_i([x^{i-1}_{cls}, p^{i-1}, x^{i-1}]), \quad i = 1, 2, \ldots, L, \qquad y = \mathrm{Head}(x^{L}_{cls}), \qquad (3)$$
where $L$ denotes the number of Transformer layers and Head is the classification head. Only the prompts and the classification head are learned during training. VPT achieves impressive performance on 24 downstream recognition tasks.
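A minimal sketch of the VPT-Deep recursion in Eq. (3) follows; `blocks` stands for a hypothetical list of frozen ViT layers, each mapping a token sequence to a token sequence, and the outputs at the prompt positions are discarded between layers as in VPT.

```python
# A minimal sketch of VPT-Deep: layer-specific prompts p^{i-1} are inserted
# between [CLS] and patch tokens; only prompts (and a head, omitted) train.
import torch
import torch.nn as nn

class VPTDeep(nn.Module):
    def __init__(self, blocks: nn.ModuleList, n_prompts: int, dim: int):
        super().__init__()
        self.blocks = blocks  # frozen transformer layers (assumption)
        self.prompts = nn.Parameter(torch.randn(len(blocks), n_prompts, dim) * 0.02)

    def forward(self, x_cls: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # x_cls: (B, 1, dim) class token; x: (B, N, dim) patch tokens.
        n = self.prompts.shape[1]
        for i, blk in enumerate(self.blocks):
            p = self.prompts[i].unsqueeze(0).expand(x.shape[0], -1, -1)
            out = blk(torch.cat([x_cls, p, x], dim=1))  # [x_cls, p^{i-1}, x^{i-1}]
            x_cls, x = out[:, :1], out[:, 1 + n:]       # drop prompt outputs
        return x_cls.squeeze(1)                         # fed to the classification head
```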
Cross-modal Prompt Attention
Inspired by the advances of prompt learning in vision and language, recent studies have started to explore multi-modal prompt learning (Zang et al., 2022; Khattak et al., 2022). These methods update the visual and text prompts simultaneously to achieve a balance in the learning of the visual and text embeddings. Although the visual and text embeddings are adapted to the few-shot data, the interaction between vision and text is still insufficient. Hence we propose deeply coupled cross-modal prompt learning (DCP), which enhances the communication between prompts across different layers and modalities. The essential module of DCP is cross-modal prompt attention, which fuses the visual and text prompts with multi-head cross-modal attention. Figure 1 depicts the pipeline of DCP and the detailed architecture of cross-modal prompt attention (CMPA). Our method follows the implementation of CLIP, which is also a dual-encoder model. Differently, we add prompts to every branch and enable information fusion between vision and language during training through CMPA. Specifically, CMPA is a multi-head attention module with visual and text prompts as inputs. The language prompts of the first layer are initialized with the pre-trained CLIP word embeddings of the template 'a photo of a <class>', whereas the visual prompts inserted into the first layer are randomly initialized from a normal distribution. Then, the prompts of the next layer are generated by CMPA based on the prompts from the preceding layer. Formally, CMPA can be formulated as
$$P^{l+1}_t = \mathrm{softmax}\left(\frac{P^l_v (P^l_t)^T}{\sqrt{d_k}}\right) P^l_t, \qquad (4)$$
$$P^{l+1}_v = \mathrm{softmax}\left(\frac{P^l_t (P^l_v)^T}{\sqrt{d_k}}\right) P^l_v, \qquad (5)$$
$$l = 1, 2, \ldots, N-1, \qquad (6)$$
where $P^l_t$ and $P^l_v$ denote the text prompt and the visual prompt at the $l$-th layer of the corresponding encoder, respectively. $N$ is the depth of CMPA, which is smaller than the depth of the text and visual encoders. $d_k$ is the dimension of the keys.
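As a concrete illustration, the following is a minimal single-head sketch of the CMPA update in Eqs. (4)-(6); the projection matrices of a full multi-head attention are omitted for clarity, equal text/visual prompt lengths are assumed so that shapes stay consistent across layers, and one step function is reused for all layers in the spirit of the parameter-sharing variant studied later.

```python
# A minimal single-head sketch of CMPA: each modality's next-layer prompt
# attends over the other modality's current prompt (Eqs. (4)-(5)).
import math
import torch

def cmpa_step(p_t: torch.Tensor, p_v: torch.Tensor):
    """p_t: (M, d) text prompt; p_v: (M, d) visual prompt at layer l."""
    d_k = p_t.shape[-1]
    attn_t = torch.softmax(p_v @ p_t.T / math.sqrt(d_k), dim=-1)  # visual queries, text keys
    attn_v = torch.softmax(p_t @ p_v.T / math.sqrt(d_k), dim=-1)  # text queries, visual keys
    return attn_t @ p_t, attn_v @ p_v                             # (P_t^{l+1}, P_v^{l+1})

# Unrolling the prompt stack for an N-layer CMPA with placeholder prompts:
p_t, p_v = torch.randn(16, 512), torch.randn(16, 512)
for _ in range(9 - 1):   # prompt depth N = 9, l = 1, ..., N-1
    p_t, p_v = cmpa_step(p_t, p_v)
```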
Different from previous methods, only the prompts of the first layer are generated directly; the subsequent prompts are conditioned on the prompts from both the visual and language modalities. CMPA thus enables information communication between vision and text through the corresponding prompts. Overall, CMPA brings stronger feature fusion along two axes: layers and modalities. Note that CMPA shares parameters across different layers, so the additional trainable parameters amount to only a small overhead.
Experiments
In this section, we conduct experiments to evaluate the effectiveness of our method under two settings. One is few-shot visual recognition on 11 different datasets covering generic objects, scenes, actions and fine-grained categories. The other is domain adaptation, where we train our model on ImageNet and evaluate it on four other datasets.
Few-shot Learning
Datasets
Following CoOp (Zhou et al., 2022b), we evaluate our method on 11 public visual recognition datasets: ImageNet (Deng et al., 2009), Caltech101 (Fei-Fei et al., 2004), OxfordPets (Parkhi et al., 2012), StanfordCars (Krause et al., 2013), Flowers102 (Nilsback and Zisserman, 2008), Food101 (Bossard et al., 2014), FGVCAircraft (Maji et al., 2013), SUN397 (Xiao et al., 2010), DTD (Cimpoi et al., 2014), EuroSAT (Helber et al., 2019) and UCF101 (Soomro et al., 2012). We also use the same 1, 2, 4, 8 and 16 shots as CoOp for training and the full test set for evaluation. The reported results are the average over three runs with different random seeds.
Implementation Details
We use the pre-trained ViT-B/16 CLIP model as our backbone. The lengths of the prompt tokens for the visual and textual contexts are both 16. The prompt depth is 9 as a trade-off between accuracy and training efficiency. We set the batch size to 4 with a learning rate of 0.0035 via the SGD optimizer. We train for 20 epochs on most datasets, except ImageNet, SUN397 and Food101: a 5-epoch setting is used for all shots of Food101, for 1/2/4-shot ImageNet, and for 1/2-shot SUN397.
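For reference, the hyperparameters above can be collected into a plain configuration dictionary; the key names are illustrative and not tied to any specific codebase.

```python
# A hypothetical configuration dict mirroring the few-shot training setup.
config = {
    "backbone": "ViT-B/16",   # pre-trained CLIP image encoder
    "prompt_length": 16,      # both visual and textual prompts
    "prompt_depth": 9,        # number of CMPA-coupled layers
    "batch_size": 4,
    "optimizer": "SGD",
    "lr": 0.0035,
    "epochs": 20,             # 5 for Food101, 1/2/4-shot ImageNet, 1/2-shot SUN397
}
```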
Main Results
Baseline Methods. We compare our method with the original zero-shot CLIP, text prompt learning (CoOp), visual prompt learning (VPT) and multi-modal prompt learning (MaPLe), all with ViT-B/16 as the visual backbone. Basically, we follow the implementation of MaPLe (Khattak et al., 2022). The prompt length of CoOp is set to 16, VPT uses a prompt length of 8, and the visual and text prompt length of MaPLe is 2. The number of training epochs is 10 for CoOp and 5 for VPT and MaPLe. We use the deep variant of VPT in the few-shot experiments. The prompt depth of MaPLe is 9, as in its original setting.
Performance Analysis. Figure 2 compares our results with those of the other methods. The top-left sub-figure shows the average performance of the four methods, from which we have the following findings. 1) Overall, cross-modal prompt learning (DCP and MaPLe) obtains a large performance gain compared with single-modal prompt learning methods (VPT and CoOp), which achieve comparable performance across different shots. These results demonstrate the superiority of cross-modal prompt learning over uni-modal prompt learning. 2) Although both are multi-modal prompt learning methods, our method still outperforms MaPLe on the 1/2/4/8/16-shot settings by 1.72/3.18/3.19/2.20/2.76%. MaPLe utilizes a linear layer to generate visual prompts from text prompts, whereas our proposed DCP enhances the interaction between vision and language with cross-modal prompt attention, which can not only guide visual embedding learning through text prompts but also influence the language embedding with visual prompts. 3) Compared with the 2/4/8/16-shot settings, our approach achieves a lower performance gain on one shot. We also find that on the individual datasets, our method achieves the best performance in almost all 16-shot cases (except for Food101). This phenomenon indicates that our method is more effective when the number of shots is relatively large, probably because the alignment between different modalities is more challenging with a small number of samples per category.
For individual datasets, we find that our approach obtains significant performance improvements on Flowers102, StanfordCars, FGVCAircraft, and EuroSAT. However, on datasets of general categories such as ImageNet and Caltech101, our method does not achieve satisfactory performance when the number of shots is less than 16. We can conclude that our method is more robust for fine-grained classification datasets, while more shots are needed for general category classification. On Food101, our method performs slightly below MaPLe, and all methods underperform zero-shot CLIP under the 1-shot setting, which we attribute to the noisy training data of Food101 (Bossard et al., 2014).
Ablation Study
There are two important settings in CMPA: the feature fusion method for the prompts and the parameter sharing of CMPA across different layers. We conduct the corresponding ablation experiments in this section to find the optimal setting.
Feature Fusion in Prompts. Before the visual and text prompts are fed into CMPA, their batch dimensions must be consistent. The defined batch size only affects the visual prompt, while the batch size of the text prompt is actually the number of categories in the dataset due to the implementation of CLIP. The dimension transformation of the visual and text prompts is shown in Figure 3. We experiment with three settings to align the batch sizes of the visual and text prompts. Figure 4 reports the average accuracy over three runs on different shots (1/2/4/8/16) of 10 datasets (without ImageNet, for time efficiency). 'Avg' means that we use the average of the visual and text prompts across the batch dimension. 'Max' stands for using the features with the highest response across the batch dimension as the visual and text prompts. 'First' means that we select the first embedding along the batch dimension of the visual and text prompts to feed into CMPA. Overall, the 'avg' setting achieves better performance than 'max' and 'first'.
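A minimal sketch of the three batch-alignment options is given below, reducing a prompt tensor of shape (batch, M, d) over its batch dimension before it enters CMPA; the function is an illustration of the compared settings, not the exact implementation.

```python
# Illustrative 'avg' / 'max' / 'first' reductions over the batch dimension.
import torch

def fuse(prompt: torch.Tensor, mode: str = "avg") -> torch.Tensor:
    """prompt: (batch, M, d); returns a (M, d) prompt for CMPA."""
    if mode == "avg":     # average over the batch dimension (best in Figure 4)
        return prompt.mean(dim=0)
    if mode == "max":     # highest response across the batch dimension
        return prompt.max(dim=0).values
    if mode == "first":   # first embedding along the batch dimension
        return prompt[0]
    raise ValueError(mode)
```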
Parameter Sharing. We intend to learn as few parameters as possible to transfer large-scale pre-trained models to downstream tasks. Setting the prompt depth to 9 means that there are 9 CMPA modules, which greatly increases the number of trainable parameters of the model. Hence we conduct an experiment in which the parameters of CMPA are shared across different layers. Table 1 shows the average results over the 11 datasets for different shots, where 'PS' is short for 'parameter sharing'. It can be observed that on most shots (except for 8 shots) the performance with parameter sharing is higher than that of the non-sharing setting.
Domain Generalization
After prompt tuning on specific datasets, we do not want to lose the general knowledge of the pre-trained large model. In this section, we conduct domain adaptation experiments to evaluate the generalization ability of our model, DCP.
Datasets and Implementation Details
Following (Zhou et al., 2022b), we use ImageNet (Deng et al., 2009) as the source domain, and ImageNetV2 (Recht et al., 2019), ImageNet-Sketch (Wang et al., 2019), ImageNet-A (Hendrycks et al., 2021b) and ImageNet-R (Hendrycks et al., 2021a) as target domains. We train our model on the 16 shots of ImageNet and test it on the other four datasets. Different from the few-shot setting, the number of training epochs on 16-shot ImageNet in the cross-domain task is set to 5, and we decrease the prompt length to 8.
Main Results
Table 2 compares our method DCP with other prompt learning methods on cross-domain tasks. The compared methods include zero-shot CLIP, uni-modal prompt learning methods (CoOp, CoCoOp and VPT-Deep) and multi-modal prompt learning methods (MaPLe and UPT). The best results on the different datasets are in bold, and the second-best results are underlined. We can observe that 1) prompt learning does not corrupt the generalization ability of pre-trained large models; 2) multi-modal prompt learning methods outperform uni-modal prompt learning methods in generalization performance; and 3) our method achieves performance comparable to the state-of-the-art methods.
Discussion and Conclusion
This paper proposes a deeply coupled cross-modal prompt learning method with a core module, cross-modal prompt attention. Our method focuses on optimizing the interaction across different modalities and layers to address the alignment between vision and language. Experiments on few-shot image classification and domain adaptation show that our method can transfer the general knowledge learned by pre-trained foundation models to downstream tasks without sacrificing the original generalization ability, providing a strong baseline for few-shot image classification. The deep fusion between visual and language information may give our approach greater potential for complex cross-modal tasks, such as referring expression comprehension (Subramanian et al., 2022), image retrieval (Baldrati et al., 2022) and visual question answering (Liu et al., 2022). We will apply our method to such complicated cross-modal tasks to evaluate its effectiveness in future work.
Limitations
We discover that for datasets with a relatively large number of categories, our method requires a more delicate setting of the number of epochs under different shots. Figure 5 shows the average results on SUN397 and ImageNet for different epochs. It can be observed that for datasets with a large number of categories (such as SUN397 and ImageNet), as the number of shots decreases, the performance deteriorates with an increase in the number of epochs, which is not evident on datasets with a small number of categories. We will delve further into this problem to find the cause and a solution.
Figure 1: The architecture of deeply coupled prompt learning and the cross-modal prompt attention module.
Figure 2: Main results of few-shot image classification on 11 datasets. The accuracy (%) is the average over three runs on 1/2/4/8/16 shots. Overall, our DCP (red line) outperforms other methods by a large margin on the average results of the 11 datasets.
Figure 3: The illustration of feature fusion (FF). The left branch represents the text prompt, and the right shows the visual prompt.
Figure 4: The comparison of different feature fusion methods on 10 datasets without ImageNet.
Figure 5: Accuracy comparison of different epochs on SUN397 and ImageNet.
Table 1: The performance comparison with and without parameter sharing. The results are the average accuracy on 11 datasets for different shots.

Variant   1-shot   2-shot   4-shot   8-shot   16-shot
w/ PS     68.99    72.56    75.69    78.42    80.55
w/o PS    67.42    71.34    75.27    78.49    80.53
Table 2: Domain generalization comparison of DCP with existing approaches. The winners and runners-up are marked in bold font and underlined, respectively.
Acknowledgement
We would like to thank the anonymous reviewers for their insightful comments that helped improve the paper. This publication has emanated from research conducted with the support of SenseTime Research and the Hetao Shenzhen-Hong Kong Science and Technology Innovation Cooperation Zone (HZQB-KCZYZ-2021045).
References

Chris Alberti, Jeffrey Ling, Michael Collins, and David Reitter. 2019. Fusion of detected objects in text for visual question answering. In EMNLP-IJCNLP 2019, pages 2131-2140. Association for Computational Linguistics.

Alberto Baldrati, Marco Bertini, Tiberio Uricchio, and Alberto Del Bimbo. 2022. Effective conditioned and composed image retrieval combining CLIP-based features. In CVPR 2022, pages 21434-21442. IEEE.

Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. 2014. Food-101 - mining discriminative components with random forests. In ECCV 2014, pages 446-461. Springer.

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In NeurIPS 2020.

Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2019. UNITER: Learning universal image-text representations. CoRR, abs/1909.11740.

Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, and Andrea Vedaldi. 2014. Describing textures in the wild. In CVPR 2014, pages 3606-3613. IEEE.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. ImageNet: A large-scale hierarchical image database. In CVPR 2009, pages 248-255. IEEE.

Kun Ding, Ying Wang, Pengzhang Liu, Qiang Yu, Haojian Zhang, Shiming Xiang, and Chunhong Pan. 2022. Prompt tuning with soft context sharing for vision-language models. CoRR, abs/2208.13474.

Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. 2021b. Natural adversarial examples. In CVPR 2021, pages 15262-15271. IEEE.

Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. 2021. Scaling up visual and vision-language representation learning with noisy text supervision. In ICML 2021, pages 4904-4916. PMLR.

Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge J. Belongie, Bharath Hariharan, and Ser-Nam Lim. 2022. Visual prompt tuning. In ECCV 2022, pages 709-727. Springer.

Muhammad Uzair Khattak, Hanoona Abdul Rasheed, Muhammad Maaz, Salman Khan, and Fahad Shahbaz Khan. 2022. MaPLe: Multi-modal prompt learning. CoRR, abs/2210.03117.

James Kirkpatrick, Razvan Pascanu, Neil C. Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. 2016. Overcoming catastrophic forgetting in neural networks. CoRR, abs/1612.00796.

Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 2013. 3D object representations for fine-grained categorization. In ICCV Workshops 2013, pages 554-561. IEEE.

Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In EMNLP 2021, pages 3045-3059. Association for Computational Linguistics.

Gen Li, Nan Duan, Yuejian Fang, Ming Gong, and Daxin Jiang. 2020. Unicoder-VL: A universal encoder for vision and language by cross-modal pre-training. In AAAI 2020, pages 11336-11344. AAAI Press.

Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019. VisualBERT: A simple and performant baseline for vision and language. CoRR, abs/1908.03557.

Liunian Harold Li, Pengchuan Zhang, Haotian Zhang, Jianwei Yang, Chunyuan Li, Yiwu Zhong, Lijuan Wang, Lu Yuan, Lei Zhang, Jenq-Neng Hwang, Kai-Wei Chang, and Jianfeng Gao. 2022. Grounded language-image pre-training. In CVPR 2022, pages 10955-10965. IEEE.

Yuhang Liu, Wei Wei, Daowan Peng, and Feida Zhu. 2022. Declaration-based prompt tuning for visual question answering. In IJCAI 2022, pages 3264-3270. ijcai.org.

Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. ViLBERT: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In NeurIPS 2019, pages 13-23.

Jiasen Lu, Christopher Clark, Rowan Zellers, Roozbeh Mottaghi, and Aniruddha Kembhavi. 2022a. Unified-IO: A unified model for vision, language, and multi-modal tasks. CoRR, abs/2206.08916.

Jiasen Lu, Vedanuj Goswami, Marcus Rohrbach, Devi Parikh, and Stefan Lee. 2020. 12-in-1: Multi-task vision and language representation learning. In CVPR 2020, pages 10434-10443. Computer Vision Foundation / IEEE.

Yuning Lu, Jianzhuang Liu, Yonggang Zhang, Yajing Liu, and Xinmei Tian. 2022b. Prompt distribution learning. In CVPR 2022, pages 5196-5205. IEEE.

Subhransu Maji, Esa Rahtu, Juho Kannala, Matthew B. Blaschko, and Andrea Vedaldi. 2013. Fine-grained visual classification of aircraft. CoRR, abs/1306.5151.

Maria-Elena Nilsback and Andrew Zisserman. 2008. Automated flower classification over a large number of classes. In ICVGIP 2008, pages 722-729. IEEE.

Omkar M. Parkhi, Andrea Vedaldi, Andrew Zisserman, and C. V. Jawahar. 2012. Cats and dogs. In CVPR 2012, pages 3498-3505. IEEE.

Di Qi, Lin Su, Jia Song, Edward Cui, Taroon Bharti, and Arun Sacheti. 2020. ImageBERT: Cross-modal pre-training with large-scale weak-supervised image-text data. CoRR, abs/2001.07966.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. In ICML 2021, pages 8748-8763. PMLR.

Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. 2019. Do ImageNet classifiers generalize to ImageNet? In ICML 2019, pages 5389-5400. PMLR.

Shaoqing Ren, Kaiming He, Ross B. Girshick, and Jian Sun. 2015. Faster R-CNN: Towards real-time object detection with region proposal networks. In NeurIPS 2015, pages 91-99.

Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. 2012. UCF101: A dataset of 101 human actions classes from videos in the wild. CoRR, abs/1212.0402.

Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. 2020. VL-BERT: Pre-training of generic visual-linguistic representations. In ICLR 2020. OpenReview.net.

Sanjay Subramanian, William Merrill, Trevor Darrell, Matt Gardner, Sameer Singh, and Anna Rohrbach. 2022. ReCLIP: A strong zero-shot baseline for referring expression comprehension. In ACL 2022, pages 5198-5215. Association for Computational Linguistics.

Hao Tan and Mohit Bansal. 2019. LXMERT: Learning cross-modality encoder representations from transformers. In EMNLP-IJCNLP 2019, pages 5099-5110. Association for Computational Linguistics.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS 2017, pages 5998-6008.

Haohan Wang, Songwei Ge, Zachary C. Lipton, and Eric P. Xing. 2019. Learning robust global representations by penalizing local predictive power. In NeurIPS 2019, pages 10506-10518.

Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. 2022a. OFA: Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework. In ICML 2022, pages 23318-23340. PMLR.

Zirui Wang, Jiahui Yu, Adams Wei Yu, Zihang Dai, Yulia Tsvetkov, and Yuan Cao. 2022b. SimVLM: Simple visual language model pretraining with weak supervision. In ICLR 2022. OpenReview.net.

Jianxiong Xiao, James Hays, Krista A. Ehinger, Aude Oliva, and Antonio Torralba. 2010. SUN database: Large-scale scene recognition from abbey to zoo. In CVPR 2010, pages 3485-3492. IEEE.

Zhengyuan Yang, Zhe Gan, Jianfeng Wang, Xiaowei Hu, Faisal Ahmed, Zicheng Liu, Yumao Lu, and Lijuan Wang. 2022. UniTAB: Unifying text and box outputs for grounded vision-language modeling. In ECCV 2022, pages 521-539. Springer.

Jiahui Yu, Zirui Wang, Vijay Vasudevan, Legg Yeung, Mojtaba Seyedhosseini, and Yonghui Wu. 2022. CoCa: Contrastive captioners are image-text foundation models. CoRR, abs/2205.01917.

Lu Yuan, Dongdong Chen, Yi-Ling Chen, Noel Codella, Xiyang Dai, Jianfeng Gao, Houdong Hu, Xuedong Huang, Boxin Li, Chunyuan Li, Ce Liu, Mengchen Liu, Zicheng Liu, Yumao Lu, Yu Shi, Lijuan Wang, Jianfeng Wang, Bin Xiao, Zhen Xiao, Jianwei Yang, Michael Zeng, Luowei Zhou, and Pengchuan Zhang. 2021. Florence: A new foundation model for computer vision. CoRR, abs/2111.11432.

Yuhang Zang, Wei Li, Kaiyang Zhou, Chen Huang, and Chen Change Loy. 2022. Unified vision and language prompt learning. CoRR, abs/2210.07225.

Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. 2022a. Conditional prompt learning for vision-language models. In CVPR 2022, pages 16795-16804. IEEE.

Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. 2022b. Learning to prompt for vision-language models. International Journal of Computer Vision, 130(9):2337-2348.

Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason J. Corso, and Jianfeng Gao. 2020. Unified vision-language pre-training for image captioning and VQA. In AAAI 2020, pages 13041-13049. AAAI Press.

Beier Zhu, Yulei Niu, Yucheng Han, Yue Wu, and Hanwang Zhang. 2022. Prompt-aligned gradient for prompt tuning. CoRR, abs/2205.14865.
| [
"https://github.com/GingL/CMPA."
]
|
[
"Spatio-Temporal Wildfire Prediction using Multi-Modal Data",
"Spatio-Temporal Wildfire Prediction using Multi-Modal Data"
]
| [
"Chen Xu \nMilton Stewart School of Industrial and Systems Engineering\nGeorgia Institute of Technology\n\n",
"Yao Xie \nMilton Stewart School of Industrial and Systems Engineering\nGeorgia Institute of Technology\n\n",
"Daniel A Zuniga Vazquez \nEnergy Systems Division\nArgonne National Laboratory\n\n",
"Rui Yao \nEnergy Systems Division\nArgonne National Laboratory\n\n",
"Feng Qiu \nEnergy Systems Division\nArgonne National Laboratory\n\n"
]
| [
"Milton Stewart School of Industrial and Systems Engineering\nGeorgia Institute of Technology\n",
"Milton Stewart School of Industrial and Systems Engineering\nGeorgia Institute of Technology\n",
"Energy Systems Division\nArgonne National Laboratory\n",
"Energy Systems Division\nArgonne National Laboratory\n",
"Energy Systems Division\nArgonne National Laboratory\n"
]
| []
| Due to severe societal and environmental impacts, wildfire prediction using multi-modal sensing data has become a highly sought-after data-analytical tool by various stakeholders (such as state governments and power utility companies) to achieve a more informed understanding of wildfire activities and plan preventive measures. A desirable algorithm should precisely predict fire risk and magnitude for a location in real time. In this paper, we develop a flexible spatio-temporal wildfire prediction framework using multi-modal time series data. We first predict the wildfire risk (the chance of a wildfire event) in real-time, considering the historical events using discrete mutually exciting point process models. Then we further develop a wildfire magnitude prediction set method based on the flexible distribution-free time-series conformal prediction (CP) approach. Theoretically, we prove a risk model parameter recovery guarantee, as well as coverage and set size guarantees for the CP sets. Through extensive real-data experiments with wildfire data in California, we demonstrate the effectiveness of our methods, as well as their flexibility and scalability in large regions. | 10.1109/jsait.2023.3276054 | [
"https://export.arxiv.org/pdf/2207.13250v3.pdf"
]
| 252,846,279 | 2207.13250 | 708170be4ac4ea60b15083a3e37c091396a6c308 |
Spatio-Temporal Wildfire Prediction using Multi-Modal Data
Chen Xu
Milton Stewart School of Industrial and Systems Engineering
Georgia Institute of Technology
Yao Xie
Milton Stewart School of Industrial and Systems Engineering
Georgia Institute of Technology
Daniel A Zuniga Vazquez
Energy Systems Division
Argonne National Laboratory
Rui Yao
Energy Systems Division
Argonne National Laboratory
Feng Qiu
Energy Systems Division
Argonne National Laboratory
Due to severe societal and environmental impacts, wildfire prediction using multi-modal sensing data has become a highly sought-after data-analytical tool by various stakeholders (such as state governments and power utility companies) to achieve a more informed understanding of wildfire activities and plan preventive measures. A desirable algorithm should precisely predict fire risk and magnitude for a location in real time. In this paper, we develop a flexible spatio-temporal wildfire prediction framework using multi-modal time series data. We first predict the wildfire risk (the chance of a wildfire event) in real-time, considering the historical events using discrete mutually exciting point process models. Then we further develop a wildfire magnitude prediction set method based on the flexible distribution-free time-series conformal prediction (CP) approach. Theoretically, we prove a risk model parameter recovery guarantee, as well as coverage and set size guarantees for the CP sets. Through extensive real-data experiments with wildfire data in California, we demonstrate the effectiveness of our methods, as well as their flexibility and scalability in large regions.
I. INTRODUCTION
In recent years, widespread large-scale wildfires have caused severe consequences, including direct property damage and economic losses, community evacuations, and fatalities, as well as impacts on nature such as higher CO2 emissions [1]. Monitoring and preventing the severe consequences caused by large-scale wildfires raises an imperative challenge: how to utilize multi-modal data collected through various sensing technologies so as to precisely predict wildfire risk and magnitude for a local region and monitor the predictions in real time.
Wildfire risk prediction is particularly important for power utility companies, which need to make precise location-wise wildfire risk predictions. To prevent damage and economic losses, utility companies also perform scheduled utility shutdowns for high-wildfire-risk regions [2]. Despite this urgent and essential need, utility companies often only leverage simple models/metrics for risk assessment, such as the burning index (BI) [3] and the fire load index [4], which are static metrics that do not take into account the contributions of historical wildfire incidents and auxiliary environmental information. Imprecise wildfire risk prediction causes sub-optimal power operator actions (such as unnecessary shutdowns) that significantly disrupt reliable power delivery to customers.
Meanwhile, thanks to the development of sensing technology, abundant multi-modal data have been collected through a variety of sensing mechanisms to gather wildfire information [5], which provides a unique opportunity to perform precise location-wise real-time wildfire prediction. Common approaches to identifying wildfire incidents include reports from human observers, wireless sensing [6], and infrared technology. Additional environmental information (e.g., weather and environmental conditions) has been integrated with each record, thus providing excellent opportunities for subsequent statistical analyses. As a result, each wildfire record is multi-modal: we know not only when and where it occurred but also its magnitude, the condition of the surroundings (e.g., infrastructure type), current weather information, and so on.
Nevertheless, most existing wildfire modeling approaches [7]-[10] have not been designed to utilize such abundant multi-modal data.
In this paper, we present a framework for predicting wildfire risk and magnitude using multi-modal sensing data, based on a mutually exciting point process model and time-series conformal prediction sets. Our model can capture the complex spatio-temporal dependence of the multi-modal data through mutually exciting point processes, which form a natural framework for real-time prediction, since the conditional probability can be used to capture fire risk given the past observations. In addition, we present a fire magnitude prediction algorithm based on time-series CP sets. Theoretically, we first prove model parameter recovery guarantees of the point process model for risk prediction, and we then present coverage guarantees for the fire magnitude prediction sets. Through extensive real-data experiments, we verify our models' competitive performance against baseline methods regarding the precision of wildfire risk prediction.
Our prediction framework has the following features: (i) Predicting the wildfire risk, i.e., the chance of a binary fire event (no fire versus fire) at given locations and times, given historical observations and available multi-modal data (which can be treated as marks of the point processes), using a flexible marked spatio-temporal Hawkes process model [11]. Specifically, we model the mutually exciting property whereby historical and neighboring occurrences likely affect the occurrence likelihood, with certain occurrences increasing the chance while others inhibit it. The model parameters are efficiently estimated using an alternating optimization approach, in contrast to the more expensive expectation-maximization method [12]. (ii) Exploiting the interdependence among different geographic regions through a highly interpretable mutually exciting point process model. (iii) Predicting fire magnitude using time-series CP sets, which are guaranteed to contain the true fire magnitude with a user-specified high probability.
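For illustration, a minimal sketch of such a discrete-time mutually exciting intensity is given below; the exponential kernel and the parameter names (mu, alpha, beta) are simplifying assumptions, and the full model in Section III additionally incorporates marks from the multi-modal features.

```python
# A minimal sketch of a discrete-time mutually exciting (Hawkes-type)
# fire-risk intensity over K spatial cells: a baseline rate plus
# exponentially decaying excitation from past events at other cells.
import numpy as np

def discrete_hawkes_intensity(events, t, mu, alpha, beta):
    """events: list of (time, cell) pairs with time < t;
    mu: (K,) baseline rates; alpha: (K, K) influence matrix;
    beta: decay rate. Returns lambda_k(t) for every cell k."""
    lam = mu.copy()
    for t_i, k_i in events:
        lam += alpha[:, k_i] * np.exp(-beta * (t - t_i))  # excitation from event at cell k_i
    return lam
```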
The rest of the paper is organized as follows. Section II describes background on sensing and the wildfire dataset. Section III contains our proposed methods. In particular, Section III-A introduces the proposed spatio-temporal Hawkes process models, which either linearly (i.e., LinearSTHawkes) or nonlinearly (i.e., NonLinearSTHawkes) quantify feature contributions to fire hazards. Section III-B describes the objective function, the estimation procedure, and how to yield binary predictions based on predicted risks. Section III-C describes the CP sets for wildfire magnitude prediction. Section IV has two parts. We first present the theoretical analyses regarding the accuracy of fire risk prediction as a consequence of the model recovery guarantee in Section IV-A. Section IV-B then verifies the coverage guarantee of the prediction sets, whose sizes also converge to the true fire sizes asymptotically. Section V first validates the proposed model on a small-scale real-data experiment, where Section V-B compares LinearSTHawkes with baseline methods and Section V-C demonstrates the further advantage of NonLinearSTHawkes. Section VI then shows the scalability of our methods on a significantly larger region, where Section VI-B further examines the empirical coverage of prediction sets by the CP method. Finally, Section VII concludes the work with a discussion on future steps. The appendix contains additional derivations and algorithms.
A. Related work
Wildfire prediction and modeling is an essential procedure for analyzing the occurrence of wildfire events. There are many indices, such as the BI [13] and the fire danger index [14], for general awareness of fire risks. Despite their popularity, these indices often fail to account for events' interactions. Meanwhile, regression-based approaches [9], [15], [16] are more flexible and often yield satisfactory predictions. However, their performance can be sensitive to the number of available observations per location and is thus not applicable under arbitrary spatial granularity with a fixed amount of training data. Lastly, stochastic point-process models [17]-[19] have been leveraged to examine the conditional fire risk given past data and allow a deeper understanding of the underlying stochastic mechanism. However, most current works focus on model evaluation through the Akaike information criterion (AIC) rather than predicting the binary occurrence of wildfire events using one-class data. In practice, making a binary prediction is essential for forestry managers and utility owners to understand the fire risk.
Since our proposed fire occurrence model is based on the Hawkes process, we briefly survey existing methods in a wider context. Initially proposed in [11], the Hawkes process is a stochastic temporal point-process model for rates of events conditioned on historical ones. There have been many extensions that take into account spatial interactions [20]-[22] and influences by auxiliary features (i.e., marks) [23]-[25]. Neural-network-based Hawkes process models [26]-[28] have also been proposed for greater model expressiveness. These models have shown great promise in fields such as financial markets [29], social networks [30], disease modeling [31], and neurophysiological studies [32]. Despite their emerging popularity and flexibility, how to make predictions based on rate estimates, and how such predictions compare against predictive models, have been less well studied.
We briefly survey CP, the primary tool used for constructing prediction sets that quantify uncertainty in fire magnitude prediction. Originating in the seminal work [33], CP has gained wide popularity for uncertainty quantification [34]. It is particularly appealing as the methods are distribution-free, model-agnostic, and easily implementable. The only assumption is that observations are exchangeable (e.g., i.i.d.). On a high level, CP methods assign non-conformity scores to potential outcomes of the response variable. The outcomes that have small non-conformity scores are included in the prediction set. Many methods follow this logic with promising results [35]-[39]. More recently, works have also relaxed the exchangeability assumption [40]-[45], but time-series CP methods are still limited, and their applications to wildfire predictions remain largely unexplored.
II. SENSING FOR WILDFIRE AND REAL-DATA ILLUSTRATION
The latest technology provides multi-modal data for wildfire risk prediction and monitoring.
Below, we briefly describe a few common sensing and data collection techniques [5], [46].
• Air patrols: Patrollers typically consist of a pilot and a trained aerial observer. To identify and report observed wildfire phenomena, the plane flies over predetermined areas during periods associated with elevated fire danger. Wildfire activities are also commonly reported by commercial or recreational pilots.
• Infrared technology: Thermal imaging technology is commonly used to detect fire-risk hot spots. It is also used to detect wildfire progression, contour the fire impact, and identify residual fire during extinguishment.
• Computer technology: Various management systems are used to obtain well-rounded multimodal information. Such systems obtain up-to-date weather information, predict the fire probability and spread rate, and report moisture levels in the natural surroundings.
A feature of our work is that we validate our model on a large-scale multi-modal dataset, 2014-2019 fire incident data collected by the California public utilities commission [46]. The wildfire occurrence dataset is publicly available and associated with three large utility companies: PG&E, SCE, and SDG&E. A total of 3191 fire incidents are recorded, where the latitude-longitude coordinates of each incident are enclosed within the coordinate rectangle [32.24, −124.38] × [41.28, −114.67]. The wildfire data is multi-modal and collected using various sensing mechanisms. Each incident comes with additional information, which we call marks in our model. Marks can be categorized as being discrete/continuous and dynamic/static. Static marks do not change at a given location, and all discrete marks are one-hot encoded to be utilized in the model. Static and discrete marks include (1) existing vegetation type and physiology (EVT PHYS) [47], such as the road condition and agricultural condition, (2) the name of the three utility companies, and (3) the fire threat zone, which is classified into three levels indicating increasing levels of static fire danger [46]. Dynamic and discrete marks include seasonal information (e.g., spring, summer, autumn, and winter). Dynamic and continuous marks include (1) relative humidity in % of the surrounding [48], (2) temperature in Celsius [48], (3) large fire probability (LFP) [49], and (4) fire potential index (FPI) [49]. In particular, LFP and FPI are forecasted by the United States geological survey (USGS) to indicate the risks associated with a region.
To pre-process the multi-modal data, we interpolate missing entries of each continuous mark using a spline function of degree 5. Each feature is also standardized to have unit variance and zero mean, and further scaled to lie within the interval [0, 1] so that estimated parameters for different marks are on the same scale. The unit for risk prediction is days, while we allow fractional time values during training since the exact hour and minute are recorded for each incident.
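As an illustration, one way to implement this preprocessing is sketched below, assuming the continuous marks are columns of a pandas DataFrame; the function name and column layout are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy.interpolate import UnivariateSpline
from sklearn.preprocessing import minmax_scale

def preprocess_continuous_marks(marks: pd.DataFrame) -> pd.DataFrame:
    """Fill missing entries with a degree-5 spline, standardize each mark
    to zero mean / unit variance, then rescale it into [0, 1]."""
    out = marks.copy()
    for col in out.columns:
        y = out[col].to_numpy(dtype=float)
        obs = ~np.isnan(y)
        # Fit the spline only on observed entries, then evaluate at the gaps.
        spline = UnivariateSpline(np.flatnonzero(obs), y[obs], k=5)
        y[~obs] = spline(np.flatnonzero(~obs))
        y = (y - y.mean()) / y.std()   # zero mean, unit variance
        out[col] = minmax_scale(y)     # map onto [0, 1]
    return out
```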
III. WILDFIRE PREDICTION FRAMEWORK
A. Wildfire risk prediction: Mutually exciting spatio-temporal point processes
We observe a sequence of n fire incidents over a time horizon [0, T ], where each observation consists of time t i , location u i , and a mark m i ∈ R p (where p is the number of features):
x_i = (t_i, u_i, m_i), i = 1, . . . , n. (1)
Note that we specify u i ∈ {1, . . . , K} for K locations under space discretization.
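For concreteness, a minimal sketch of how the event stream (1) and the space discretization might be represented in code is given below; the grid layout and field names are illustrative assumptions, not part of the model.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class FireEvent:
    t: float        # occurrence time in (fractional) days
    u: int          # discretized location index in {1, ..., K}
    m: np.ndarray   # mark vector in R^p (weather, vegetation, ...)

def to_grid_index(lat, lon, lat0, lon0, cell=0.24, n_cols=6):
    """Map a latitude-longitude pair to a cell index on a regular grid
    with cell-degree sides (row-major layout is an assumption here)."""
    row = int((lat - lat0) // cell)
    col = int((lon - lon0) // cell)
    return row * n_cols + col + 1   # 1-based to match u_i in (1)
```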
We model these event data using a marked spatio-temporal Hawkes process. Given the σ-algebra H t that denotes all historical fire occurrence before time t, the conditional intensity function is the probability of an event occurring at time t and location k, with current mark m:
λ(t, k, m | H_t) = lim_{Δt,Δk,Δm→0} E[N([t, t+Δt) × B(k, Δk) × B(m, Δm)) | H_t] / (Δt × |B(k, Δk)| × |B(m, Δm)|), (2)
where B(a, r) is a ball centered at a with radius r and N is the counting measure. For notation simplicity, we drop H t in (2) from now on.
We can use the conditional intensity function (2) above to quantify the fire risk. For mutually exciting point processes, the conditional intensity function depends on the past events, which typically increase the chance of a future event in the neighborhood. This mutual excitation can be modeled by representing the conditional intensity function (2) as (see, e.g., [12]):

λ(t, k, m) = λ_g(t, k) f(m|t, k) = (μ(k) + Σ_{j: t_j<t} K(u_j, k, t_j, t)) f(m|t, k), (3)
which factors the conditional intensity into the product of the ground process λ_g(t, k) and the conditional density f(m|t, k). In (3), μ(k) is the scalar baseline intensity, and K(u_j, k, t_j, t) measures the spatial and temporal influence of an event happening at time t_j in location u_j on the current time t through a kernel function. In general, the functions μ(k), K(u_j, k, t_j, t), and f(m|t, k) can take many possible forms. Such choices often depend on the application of interest. For computational simplicity and model interpretability, here we parametrize the model in (3) as
μ(k) = μ_k, K(u_j, k, t_j, t) = α_{u_j,k} β e^{−β(t−t_j)}. (4)
In equation (4), the parameters µ k represent the baseline rate of fire risk at location k. The parameters α u j ,k capture the spatial influence of fire incidents that occurred at location u j and time t j on the fire risk at location k and time t. To simplify the design of K(u j , k, t j , t) in (4), we use a negative exponential model. This choice is motivated by two key factors. Firstly, it results in an optimization problem whose parameters can be efficiently estimated with a performance guarantee (refer to Section IV). Secondly, domain experts have observed that past fire incidents can affect the risk of future fire incidents, but the impact of past events diminishes quickly over time.
Furthermore, we assume the distribution of the mark is either in linear form or, more generally, through a non-linear function g
f(m|t, k) = γ^T m, (LinearSTHawkes) (5)
f(m|t, k) = g(m|t, k). (NonLinearSTHawkes) (6)
Even though (5) is linear, it implicitly incorporates the spatial-temporal information through the mark m, which is collected in location k at time t. Meanwhile, g(m|t, k) in (6) can be any feature extractor (e.g., neural networks) that outputs the score of m. Regarding the formulation differences of (5) and (6), note that LinearSTHawkes based on (5) is more interpretable, and also leads to more computationally efficient sequential convex optimization scheme with guarantees (see Section IV-A). On the other hand, NonLinearSTHawkes can be more expressive in terms of capturing the dependency of fire risks on marks through the feature extractor g(m|t, k) in (6).
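As a sanity check of the model equations, (3)-(5) translate directly into code; the sketch below assumes fitted parameters mu, alpha, beta, and gamma, and is not an optimized implementation.

```python
import numpy as np

def linear_st_hawkes_rate(t, k, m, events, mu, alpha, beta, gamma):
    """Evaluate lambda(t, k, m) in (3) under the parametrization (4)
    and the linear mark model (5).

    events: iterable of (t_j, u_j) pairs; mu: (K,) baselines;
    alpha: (K, K) interactions; gamma: (p,) mark weights.
    """
    ground = mu[k]
    for t_j, u_j in events:
        if t_j < t:
            # Exponentially decaying influence of past event j on location k.
            ground += alpha[u_j, k] * beta * np.exp(-beta * (t - t_j))
    return ground * float(gamma @ m)  # f(m | t, k) = gamma^T m
```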
B. Point process parameter estimation and real-time prediction
We estimate the parameters in the model through maximum likelihood. For LinearSTHawkes, denote all parameters using θ = {μ, A, β, γ}, where μ = {μ_k}_{k=1}^K and A = [α_{i,j}]_{i,j=1}^K. We can derive and simplify the log-likelihood of x_1, . . . , x_n as follows, similar to [12] (the full derivation can be found in Appendix B-A):

ℓ(θ) = Σ_{i=1}^n log(λ_g(t_i, u_i)) + Σ_{i=1}^n log(f(m_i|t_i, u_i)) − Σ_{k=1}^K ∫_0^T λ_g(τ, k) dτ
     = Σ_{i=1}^n log(μ(u_i) + Σ_{j: t_j<t_i} α_{u_j,u_i} β e^{−β(t_i−t_j)}) + Σ_{i=1}^n log(f(m_i|t_i, u_i))
       − Σ_{k=1}^K T μ(k) − Σ_{i=1}^n Σ_{k=1}^K α_{u_i,k} (1 − e^{−β(T−t_i)}). (7)
Note that the likelihood term of the marks decouples from the rest. Thus, when using NonLinearSTHawkes based on (6), we first fit a feature extractor on the marks and then employ maximum likelihood estimation to estimate the remaining parameters. To achieve better model estimation stability (since we believe only a few features should be effective in the model), we further add ℓ1 regularization on γ:

min_{θ={μ,A,β,γ}}  − Σ_{i=1}^n log(μ(u_i) + Σ_{j: t_j<t_i} α_{u_j,u_i} β e^{−β(t_i−t_j)}) − Σ_{i=1}^n log(γ^T m_i)
                   + Σ_{k=1}^K T μ(k) + Σ_{i=1}^n Σ_{k=1}^K α_{u_i,k} (1 − e^{−β(T−t_i)}) + ∥γ∥_1 (8)
subject to  α_{i,j} = 0 if |i − j| ≥ τ, (9)
            ∥μ∥_2 ≤ 1, ∥A∥_2 ≤ 1, ∥γ∥_2 ≤ 1, (10)
            β ≥ 0, μ(u_i) ≥ 0 ∀u_i. (11)
The purpose of constraints (9)-(11) can be explained as follows: (9) introduces sparsity in the interaction matrix and reduces the total number of parameters in the model for computational efficiency; (10) ensures the objective (8) is bounded and is reasonable since the rate λ(t, k, m) is typically very small; (11) is introduced since baseline rates (i.e. µ(u i )) and interaction propagation over time (i.e. β) are non-negative. Note that the constraints define a convex feasible region.
In addition, we can show that ℓ(θ) is concave in all other parameters for a fixed scalar β. Thus, we can devise a method to solve (8) to the global optimal solution: for a grid of β values, solve the corresponding convex optimization problem using solvers such as [50] to high numerical accuracy, and then choose the optimal β that gives the best overall objective value. The description of the algorithm, as well as its computational efficiency, is in Algorithm 2 of Appendix B-B. In our experiments, we observe that the algorithm usually terminates in a small number of iterations (e.g., three), and each iteration only takes a few seconds to minutes, depending on the problem size. Hence, it is computationally friendly.
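A minimal sketch of this grid-search scheme using cvxpy is given below. It assumes events are stored as numpy arrays t (times), u (zero-based location indices), and M (marks), and, for brevity, it substitutes the Frobenius norm for ∥A∥_2 in (10); all names are illustrative.

```python
import cvxpy as cp
import numpy as np

def fit_linear_st_hawkes(t, u, M, K, T, betas, tau=4):
    """Grid-search beta; for each fixed beta, solve the convex problem (8)."""
    n = len(t)
    far = (np.abs(np.subtract.outer(np.arange(K), np.arange(K))) >= tau
           ).astype(float)                       # sparsity pattern (9)
    best_params, best_val = None, -np.inf
    for beta in betas:
        mu = cp.Variable(K, nonneg=True)         # baseline rates, cf. (11)
        A, g = cp.Variable((K, K)), cp.Variable(M.shape[1])
        # w[i, k]: accumulated kernel weight on location k from past events.
        w = np.zeros((n, K))
        for i in range(n):
            past = t < t[i]
            np.add.at(w[i], u[past], beta * np.exp(-beta * (t[i] - t[past])))
        rates = cp.hstack([mu[u[i]] + w[i] @ A[:, u[i]] for i in range(n)])
        coef = np.zeros(K)                       # compensator weights
        np.add.at(coef, u, 1.0 - np.exp(-beta * (T - t)))
        loglik = (cp.sum(cp.log(rates)) + cp.sum(cp.log(M @ g))
                  - T * cp.sum(mu) - cp.sum(coef @ A) - cp.norm1(g))
        cons = [cp.multiply(A, far) == 0, cp.norm(mu) <= 1,
                cp.norm(A, "fro") <= 1, cp.norm(g) <= 1]   # cf. (9)-(10)
        prob = cp.Problem(cp.Maximize(loglik), cons)
        prob.solve()
        if prob.value is not None and prob.value > best_val:
            best_params = (mu.value, A.value, beta, g.value)
            best_val = prob.value
    return best_params
```

For fixed β, the objective is concave and the constraints are convex, so each inner solve is a convex program, matching the scheme described above.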
C. Fire magnitude prediction: Conformal prediction set
Besides predicting when and where fire occurs, fire magnitude prediction is also desirable: knowing the possible fire magnitude can better inform decision-makers of potential losses from such disasters so they can plan accordingly. The dataset described in Section II treats fire magnitude as discrete categories in its catalog. In principle, this can thus be achieved by variants of LinearSTHawkes and NonLinearSTHawkes for categorical data. However, making categorical predictions based on the estimated risks requires us to construct multi-class thresholds, which can greatly increase model design complexity. In addition, it is unclear how to quantify uncertainty in the resulting categorical estimates.
Thus, we treat fire magnitude prediction as a classification problem: given multi-modal features X_i ∈ R^p as in (1), we would like to build a multi-class classifier that outputs Ŷ_i ∈ {1, . . . , C} as the fire magnitude prediction (assuming C magnitude levels). Denote π_i := P_{Y_i|X_i} as the true conditional distribution of Y_i|X_i, whose properties are unknown. In a typical classification setting, we assume the first N data are known to us as training data, and the goal is to construct an estimator π̂ := A({(X_i, Y_i)}_{i=1}^N), which satisfies Σ_{c=1}^C π̂_{X_i}(c) = 1 and π̂_{X_i}(c) ≥ 0 for any i ≥ 1. Here, A is any classification algorithm, from the simplest multinomial logistic regression to a complex deep neural network. Then, the point prediction Ŷ_i := arg max_{c∈[C]} π̂_{X_i}(c) is obtained for any test index i > N.
However, point predictions are often insufficient in such settings - there are inherent uncertainties in these predictions, which arise due to randomness in data generation, during the collection of multi-modal data, and when fitting the multi-class classifier. Therefore, a confident fire magnitude prediction is essential, which quantifies uncertainties in the point predictions and contains all the possible high-probability outcomes. One way for uncertainty quantification in classification is the construction of prediction sets around Ŷ_i that contain the actual observations Y_i with high probability before their realization. Formally, given a significance level α ∈ (0, 1), we construct a prediction set C(X_i, α) ⊂ {1, . . . , C} such that

P(Y_i ∈ C(X_i, α)) ≥ 1 − α. (12)
We note that the significance level α in conformal prediction should be distinguished from the interaction parameters α ij in the point-process model, the latter of which has double subscripts as in (4). A set satisfying (12) thus confidently predicts the actual fire magnitude Y i with high probability. Note that a trivial construction that always satisfies (12) is C(X i , α) = {1, . . . , C}, so we also want the prediction set to be as small as possible. This is a challenging question because fire incidents are highly correlated and non-stationary, and classifiers can be very complex (e.g., neural network classifiers).
To build prediction sets that satisfy (12) in practice, we produce uncertainty sets using recent advances in CP [36], [42], [51]. CP methods require two ingredients. First, they define non-conformity scores, which quantify the dissimilarity of a potential fire magnitude. Second, they specify the prediction set based on non-conformity scores. As a result, CP methods assign non-conformity scores to each possible fire magnitude, and the prediction set contains the fire magnitudes whose non-conformity scores are small compared to past ones.
We first specify a particular form of non-conformity score recently developed in [36] using any estimator π̂. The notations are very similar, and we include the descriptions for a self-contained exposition. Given the estimator π̂, for each possible label c at test feature X_i, i > N, we make two definitions:

m_{X_i}(c) := Σ_{c′=1}^C π̂_{X_i}(c′) · I(π̂_{X_i}(c′) > π̂_{X_i}(c)), (13)
r_{X_i}(c) := Σ_{c′=1}^C I(π̂_{X_i}(c′) > π̂_{X_i}(c)) + 1, (14)
where I is the indicator function. In other words, (13) calculates the total probability mass of labels deemed more likely than c by π̂. It strictly increases as c becomes less probable. Meanwhile, (14) calculates the rank of c within the order statistics. It is also larger for less probable c. Given a random variable U_i ∼ Unif[0, 1] and pre-specified regularization parameters {λ, k_reg}, we define the non-conformity score as

τ̂_i(c) := m_{X_i}(c) + π̂_{X_i}(c) · U_i + λ(r_{X_i}(c) − k_reg)_+, (15)

where the second summand is term (i) and the third is term (ii).
We interpret terms (i) and (ii) in (15) as follows. Term (i) randomizes the uncertainty set and accounts for discrete probability jumps when new labels are considered. A similar randomization factor is used in [35, Eq. (5)]. In term (ii), (z)_+ := max(z, 0). Meanwhile, the regularization parameters {λ, k_reg} force the non-conformity score to increase when λ increases and/or k_reg decreases. In words, λ denotes the additional penalty when the label is less probable by one rank, and k_reg denotes when this penalty takes place. This term ensures that the sets are adaptive, returning smaller sets for easier cases and larger ones for harder cases.
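In code, (13)-(15) reduce to a few array operations; the sketch below assumes probs is the estimated probability vector π̂_X for one test point, and the function name is our own.

```python
import numpy as np

def nonconformity_score(probs, c, U, lam=1.0, k_reg=2):
    """tau_hat(c) in (15) from an estimated probability vector.

    probs: (C,) estimated class probabilities; c: candidate label;
    U: one Unif[0,1] draw; lam, k_reg: regularization parameters.
    """
    higher = probs > probs[c]
    m = probs[higher].sum()   # (13): mass of labels more likely than c
    r = higher.sum() + 1      # (14): rank of label c
    return m + probs[c] * U + lam * max(r - k_reg, 0)  # term (i) + term (ii)
```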
Then, the prediction set based on (15) is

C(X_i, α) := {c ∈ [C] : (1/N) Σ_{j=i−N}^{i−1} I(τ̂_j ≤ τ̂_i(c)) < 1 − α}, (16)
where τ̂_j := τ̂_j(Y_j). The set in (16) includes all the labels whose non-conformity scores are no greater than the (1 − α) fraction of the previous N non-conformity scores. Following (15) and (16), we thus propose the ensemble regularized adaptive prediction set (ERAPS) in Algorithm 1. In particular, ERAPS aggregates probability predictions from bootstrap multi-class classifiers to yield more accurate point predictions and leverages new feedback of Y_i to ensure adaptiveness in the prediction sets.
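The set construction in (16) is then a quantile comparison against the past N calibration scores; a minimal sketch, reusing nonconformity_score from the sketch above, is:

```python
import numpy as np

def prediction_set(probs, past_scores, alpha, U, lam=1.0, k_reg=2):
    """C(X, alpha) in (16): keep every label whose score falls below the
    (1 - alpha) empirical quantile of the past N calibration scores."""
    past = np.asarray(past_scores)
    return [c for c in range(len(probs))
            if np.mean(past <= nonconformity_score(probs, c, U, lam, k_reg))
            < 1 - alpha]
```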
IV. THEORETICAL GUARANTEE
In this section, we establish some theoretical performance guarantees for the proposed algorithms.
Section IV-A provides a parameter recovery guarantee for the point-process model defined in (3). Section IV-B provides a coverage guarantee (see Eq. (12)) and the tightness of the fire magnitude prediction set by ERAPS.
A. Parameter recovery for point process model
Note that for fixed β, the problem of estimating the rest of the parameters in θ via (7) for LinearSTHawkes is convex (it can be shown that the log-likelihood is concave in θ other than β, and the constraints induce a convex feasible domain). We can establish the following bound using a similar technique as in [52], [53]. We do not consider the bound for NonLinearSTHawkes in (6) because it is impossible to verify convexity for a generic feature extractor g.
We first obtain a parameter recovery bound for minimizing a generic continuously differentiable strictly convex function f(θ) : Θ → R, where Θ ⊂ R^p is a convex set. Let F(θ) := ∇f(θ) be the gradient of f on Θ. We know that F(θ) is monotone [52]:

[F(θ) − F(θ′)]^T [θ − θ′] ≥ 0 ∀θ, θ′ ∈ Θ.
Algorithm 1 Ensemble Regularized Adaptive Prediction Set (ERAPS)
Require: Training data {(X_i, Y_i)}_{i=1}^N, classification algorithm A, significance level α, regularization parameters {λ, k_reg}, aggregation function φ (e.g., mean), number of bootstrap models B, batch size s, and test data {(X_i, Y_i)}_{i=N+1}^{N+N_1}, with Y_i revealed only after the batch of s prediction sets containing index i is constructed.
Ensure: Ensemble uncertainty sets {C(X_i, α)}_{i=N+1}^{N+N_1}
1: for b = 1, . . . , B do ▷ Train Bootstrap Estimators
2:  Sample with replacement an index set S_b = (b_1, . . . , b_N) from the indices (1, . . . , N).
3:  Compute π̂^b = A({(X_i, Y_i) | i ∈ S_b}).
4: end for
5: Initialize τ = {} and sample {U_i}_{i=1}^{N+N_1} i.i.d. ∼ Unif[0, 1].
6: for i = 1, . . . , N do ▷ Leave-One-Out (LOO) Calibration Scores
7:  Compute π̂^φ_{−i} := φ({π̂^b : i ∉ S_b}) such that, for each c ∈ {1, . . . , C}, π̂^φ_{−i,X_i}(c) = φ({π̂^b_{X_i}(c) : i ∉ S_b}).
8:  Compute τ̂^φ_i := τ̂_{X_i}(Y_i) using (15) and π̂^φ_{−i}.
9:  τ = τ ∪ {τ̂^φ_i}.
10: end for
11: for i = N + 1, . . . , N + N_1 do ▷ Build Uncertainty Sets
12:  Compute τ̂^φ_{i,cal} := q_{τ,1−α}(τ) as the (1 − α)-empirical quantile of τ.
13:  Compute π̂^φ_{−i} := φ({π̂^φ_{−j}}_{j=1}^N) so that, for each c ∈ {1, . . . , C}, π̂^φ_{−i,X_i}(c) := φ({π̂^φ_{−j,X_i}(c)}_{j=1}^N).
14:  Compute C(X_i, α) in (16) using π̂^φ_{−i} and τ̂^φ_{i,cal}.
15:  if i − N = 0 mod s then ▷ Slide Scores Forward
16:   for j = i − s, . . . , i − 1 do
17:    Compute τ̂^φ_j := τ̂_{X_j}(Y_j) using (15) and π̂^φ_{−j}, and update τ accordingly.

Let θ* ∈ Θ be the unique global minimizer of f, which exists as f is strictly convex. To estimate θ*, we use the projected gradient descent procedure, starting at an arbitrary θ_0 ∈ Θ:
θ_k := Proj_Θ(θ_{k−1} − t_k F(θ_{k−1})), (17)

where t_k > 0 determines the step size and Proj_Θ(θ̃) := arg min_{θ∈Θ} ∥θ − θ̃∥_2. To analyze the error ∥θ_k − θ*∥_2 after k iterations, we need the following conditions:
Assumption 1: Assume that there exist D, κ, M > 0 such that
(i) ∥θ − θ′∥_2 ≤ D ∀θ, θ′ ∈ Θ, (18)
(ii) [F(θ) − F(θ′)]^T [θ − θ′] ≥ κ ∥θ − θ′∥_2^2 ∀θ, θ′ ∈ Θ, (19)
(iii) ∥F(θ)∥_2 ≤ M ∀θ ∈ Θ. (20)
We now have the following lemma that yields the error bound in (22). The proof is contained in Appendix A-A.

Lemma 1: Under Assumption 1 (18)-(20) and with the step sizes

t_k := [κ(k + 1)]^{−1}, (21)

the estimates θ_k obtained through (17) obey the error bound

∥θ_k − θ*∥_2^2 ≤ M^2 / (κ^2 (k + 1)). (22)
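For completeness, a minimal sketch of the iteration (17) with the step-size rule (21) is given below; the gradient oracle and projection are assumed to be supplied by the caller.

```python
def projected_gradient_descent(grad, proj, theta0, kappa, n_iters):
    """The iteration (17) with step sizes t_k = 1 / (kappa * (k + 1)).

    grad: gradient oracle F(theta); proj: Euclidean projection onto Theta.
    """
    theta = theta0
    for k in range(1, n_iters + 1):
        theta = proj(theta - grad(theta) / (kappa * (k + 1)))
    return theta
```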
We can now use Lemma 1 to obtain the parameter recovery guarantee for minimizing ℓ(θ) via solving (7). For a fixed β > 0, let

θ[β] := θ − {β} (23)

contain all the model parameters except β when solving (7). We thus know that under Lemma 1, the estimate θ̂[β] converges to the global minimum θ*[β] at rate 1/k. Meanwhile, since the optimal parameter β* is a non-negative scalar, we can estimate it up to arbitrary precision using a one-dimensional grid search. In particular, assume β* ∈ [β_0, β_1] with known values of β_0, β_1. For a fixed integer J ≥ 1, divide the region [β_0, β_1] into J + 1 points β_0, . . . , β_J, where

β_j := β_0 + (j/J)(β_1 − β_0), j = 0, . . . , J. (24)

Then, we can obtain estimates θ̂[β_j] via solving (7) using the projected gradient descent procedure (17) at the fixed β_j. Given J + 1 pairs of estimates (β_j, θ̂[β_j]), we define

θ̂ := (β_{j*}, θ̂[β_{j*}]), (25)
j* := arg min_{j=0,...,J} ℓ([β_j, θ̂[β_j]]), (26)
which denotes the estimate that attains the smallest objective value out of these J + 1 estimates. We then bound in the following theorem the parameter estimation error of θ̂ in (25). The proof is contained in Appendix A-B.
Theorem 1 (LinearSTHawkes parameter recovery guarantee): Let θ* be a minimizer of ℓ(θ) in (7) under LinearSTHawkes in (5). Under Assumption 1 (18)-(20), the estimate θ̂ in (25) obeys the bound

∥θ̂ − θ*∥_2^2 = O(1/J^2 + 1/(k + 1)). (27)

In (27), J is the number of grid searches for β* in [β_0, β_1] and k is the number of projected gradient descent steps (17) of θ[β_j] in (23) at each search point β_j.
The implication of Theorem 1 is that we can recover the true model of λ(t, k, m) in (3) for LinearSTHawkes in (5). This is because LinearSTHawkes reaches the smallest negative log-likelihood under θ*, and the log-likelihood is also the highest under the true model. Thus, when the estimates θ̂ approach the true parameters θ* in ℓ2 norm, the corresponding model estimate also recovers the true model.
B. Conformal prediction set guarantee
Note that in the existing CP literature, it is typically assumed that observations (X_i, Y_i) are exchangeable. This assumption is unrealistic in our setting when strong correlation exists within data. Instead, we impose assumptions on the quality of estimating the non-conformity scores and on the dependency of non-conformity scores in order to bound the coverage gap of (12). Most of the assumptions and proof techniques extend our earlier work [42], but we extend it to the classification setting under arbitrary definitions of non-conformity scores. In particular, we allow arbitrary dependency to exist within features X_i or responses Y_i.
Given any feature X, a possible label c, and a probability mapping p such that Σ_{c=1}^C p_X(c) = 1 and p_X(c) ≥ 0, we denote G : (X, c, p) → R as an arbitrary non-conformity mapping and τ^p_X(c) := G(X, c, p) as the non-conformity score at label c. For instance, we may consider

G(X, c, p) = Σ_{c′=1}^C p_X(c′) · I{p_X(c′) > p_X(c)}, (28)
which computes the total probability mass of labels that are deemed more likely than c by p. The less likely c is, the greater τ^p_X(c) is, indicating the non-conformity of label c. For notation simplicity, the oracle (resp. estimated) non-conformity score of each training datum (X_i, Y_i), i = 1, . . . , N, under the true conditional distribution π := P_{Y|X} (resp. any estimator π̂) is abbreviated as τ_i = τ^π_{X_i}(Y_i) (resp. τ̂_i). We now impose two assumptions that are sufficient for bounding the coverage gap of (12).
First, we make assumptions about the quality of estimation by the chosen classifier:
Assumption 2 (Error bound on estimation): Assume there is a real sequence
{ϑ i } where 1 N i−1 j=i−N (τ j − τ j ) 2 ≤ ϑ 2 N .
Then we make an assumption about the property of the true non-conformity scores:

Assumption 3 (Regularity of non-conformity scores): Assume {τ_j}_{j=i−N}^i are independent and identically distributed (i.i.d.) according to a common cumulative density function (CDF) F with Lipschitz continuity constant L > 0.

We briefly remark on the implications of the assumptions above. Note that Assumption 2 essentially reduces to the point-wise estimation quality of π by π̂, which may fail under data overfitting - all N training data are used to train the estimator. In this case, π̂ tends to over-concentrate on the empirical conditional distribution under (X_i, Y_i), i = 1, . . . , N, which may not be representative of the true conditional distribution P_{Y|X}. A common way to avoid this in the CP literature is through data-splitting - train the estimator on a subset of the training data and compute the estimated non-conformity scores τ̂ only on the remaining training data (i.e., calibration data). However, doing so likely results in a poor estimate of π, and, as we will see, the theoretical guarantee heavily depends on the size of estimated non-conformity scores. On the other hand, Assumption 3 can be relaxed as stated in [42]. For instance, the oracle non-conformity scores can either follow linear processes with additional regularity conditions [42, Corollary 1] or be strongly mixing with a bounded sum of mixing coefficients [42, Corollary 2]. The proof techniques directly carry over, except for slower convergence rates.
Lastly, define the empirical distributions using oracle and estimated non-conformity scores:
F̃(x) := (1/N) Σ_{j=i−N}^{i−1} I(τ_j ≤ x),  [Oracle]
F̂(x) := (1/N) Σ_{j=i−N}^{i−1} I(τ̂_j ≤ x).  [Estimated]
We then have the following coverage results at the prediction index i > N.

Lemma 2 ([42, Lemma 2]): Suppose Assumptions 2 and 3 hold. Then,

sup_x |F̂(x) − F̃(x)| ≤ (L + 1) ϑ_N^{2/3} + 2 sup_x |F̃(x) − F(x)|.
The proof of Lemma 2 appears in Appendix A-C.

Lemma 3 ([42, Lemma 1]): Suppose Assumption 3 holds. Then, for any training size N, there is an event A within the probability space of non-conformity scores {τ_j}_{j=1}^N such that, when A occurs,

sup_x |F̃(x) − F(x)| ≤ √(log(16N)/N).

In addition, the complement of event A occurs with probability P(A^C) ≤ √(log(16N)/N). The proof of Lemma 3 appears in Appendix A-D.

As a consequence of Lemmas 2 and 3, the following bound on the coverage gap of (12) holds:

Theorem 2 (Coverage guarantee, [42, Theorem 1]): Suppose Assumptions 2 and 3 hold. For any training size N and significance level α ∈ (0, 1), we have

|P(Y_i ∉ C(X_i, α)) − α| ≤ 24 √(log(16N)/N) + 4(L + 1) ϑ_N^{2/3}. (29)
The proof of Theorem 2 appears in Appendix A-E. Note that Theorem 2 holds uniformly over all α ∈ [0, 1] because Lemmas 2 and 3 bound the sup-norm of differences of distributions.
Hence, users in practice can select desired parameters α after constructing the non-conformity scores. Such a bound is also useful when building multiple prediction intervals simultaneously, under which α is corrected to reach nearly valid coverage [54].
In addition to the coverage guarantee, we can analyze the convergence of C(X_i, α) to the oracle prediction set C*(X_i, α) under further assumptions. Given the true conditional distribution function π := P_{Y|X}, we first order the labels so that π_{X_i}(i) ≥ π_{X_i}(j) if i ≤ j. Then, we have C*(X_i, α) = {1, . . . , c*}, where c* := min{c ∈ [C] : Σ_{k=1}^c π_{X_i}(k) ≥ 1 − α}.

Theorem 3 (Set size convergence guarantee): Suppose Lemmas 2 and 3 hold and denote F^{−1} as the inverse CDF of {τ_j}_{j=i−N}^i. Further assume that
(1) c*_1 = c*_2, where
c*_1 := arg min_c {Σ_{k=1}^c π_{X_i}(k) ≥ 1 − α}, c*_2 := arg max_c {τ_i(c) < F^{−1}(1 − α)}.
(2) There exists a sequence ϑ′_i converging to zero with respect to N such that ∥τ̂_i − τ_i∥_∞ ≤ ϑ′_i, where the ∞-norm is taken over class labels.
Then, there exists N large enough such that for all i > N,

|C(X_i, α) ∆ C*(X_i, α)| ≤ 1, (30)
where ∆ in (30) denotes set difference.
The proof of Theorem 3 appears in Appendix A-F. Note that if the non-conformity score at any label c is defined as in (28), which is the total probability mass of labels c′ ≠ c that are more likely than c based on a conditional probability mapping p, then the first additional assumption (i.e., c*_1 = c*_2) in Theorem 3 can be verified to hold. In general, whether this assumption is satisfied depends on the particular form of the non-conformity score.
V. MODEL VALIDATION BY REAL-DATA
We apply the proposed models on the 2014-2019 California wildfire data described in Section II. The experiment is organized as follows. Section V-A describes the setup details, including the dataset and evaluation metrics. Section V-B compares LinearSTHawkes with competing baselines on data from a small region. Section V-C compares LinearSTHawkes and NonLinearSTHawkes on the same region to highlight their performance differences.
A. Evaluation metrics
We use the F1 score for performance assessment, which is a standard metric for classification when data are imbalanced - note that the number of non-occurrences of fire (denoted as 0) significantly outweighs the occurrences (denoted as 1). The goal is to predict as many fire occurrences as possible without making too many false positives. In our case, a false positive at a given location refers to a predicted fire incident on a specific date t when there is no fire incident. Quantitatively, we define the set of fire occurrences as U and our predicted set as V.
Then the precision P and recall R are defined as

P = |U ∩ V| / |V|, R = |U ∩ V| / |U|, (31)

where the notation |·| denotes the size of the set. In the definition (31), we set P and/or R to be 1 if the ratio is 0/0 (i.e., there is no fire incident at a specific location and the model correctly predicts none). The F1 score is thus a combination: F1 = 2/(P^{−1} + R^{−1}) = 2PR/(P + R), where a high F1 score indicates both a large number of true detections and a small number of false positives. In general, when one of P and R is more important, one can consider a weighted F1 that assigns imbalanced weights to precision and recall. We use non-weighted F1 scores in all our experiments.
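For clarity, the convention in (31) can be implemented directly; the sketch below takes the sets of fire days (true and predicted) for a single location.

```python
def f1_per_location(true_days: set, pred_days: set) -> float:
    """Precision/recall as in (31), with the 0/0 -> 1 convention, and
    F1 = 2PR / (P + R)."""
    hits = len(true_days & pred_days)
    P = hits / len(pred_days) if pred_days else 1.0
    R = hits / len(true_days) if true_days else 1.0
    return 2 * P * R / (P + R) if P + R > 0 else 0.0
```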
We construct dynamic thresholds to make binary predictions based on the estimated fire risk λ(t, k, m) defined in Eq. (3). The detailed Algorithm 3 is provided in Appendix B-C. In particular, we observe that the rate estimates λ̂(t, k, m) have clear seasonality (e.g., a sharp drop from summer to fall and a sharp rise from spring to summer). At the same time, fire incidents often occur when rate estimates suddenly increase on certain days. For instance, Figure 4 illustrates the performance of our model based on the observations above.

B. LinearSTHawkes vs. Baselines

We first focus on a small region because the distribution of fire incidents within the region and the performance of our model can be visualized clearly. The model is trained with incidents between 2014 and 2017 and examined on validation data in 2018. There were 238 fire occurrences in 2014-2017 and 70 in 2018. Upon consulting domain experts, we set the sides of the discretized cells to be 0.24 degrees in both longitude and latitude directions so that 36 non-overlapping cells cover the region. Figure 1 visualizes both the training and validation data, from which it is clear that the validation data have a much smaller number of actual fires; only a few grids have fires that occurred near them.

Estimated parameters. In practice, our feature m_i includes both temporal dynamic features m_d (e.g., weather information) and location-specific information m_l (e.g., road condition), so that we re-write γ^T m as
γ^T m = γ_d^T m_d + γ_l^T m_l, (32)
which decomposes the contribution of m into the sum of both terms. Based on (32), we interpret the feature and interaction parameters of LinearSTHawkes, estimated via Algorithm 2. First, Table I shows the estimated parameters for the features (i.e., marks), whose magnitudes indicate feature importance. Estimates of higher magnitude contribute more significantly to the growth of fire risk. Noticeably, the top two features in γ_d (excluding summer, the seasonality parameter) are also factors in defining the Fire Danger Index, which is the most commonly used index for fire hazard monitoring [55]. Therefore, the model estimates of the feature parameters are physically meaningful.

Next, Figure 2 examines the location-to-location interaction parameters α_ij, which are forced to be zero if the centroids of two cells are more than 4 × 0.24 degrees apart. Values of α_ij above or below zero indicate excitatory or inhibitory effects from nearby and past events. The distribution of interaction effects closely aligns with the 2014-2017 training data in Figure 1. For instance, we see clusters of fire incidents in the 2014-2017 training data in Figure 1 around location 20, and as a result, location 20 in Figure 2 also interacts intensively with its nearby neighbors. Quantitatively, if we use α_ij to roughly measure the amount of influence of location i on location j:
• The amount of positive influence into location 20 (i.e., Σ_{j: α_{j,20}>0} α_{j,20}) is 0.40.
• The amount of negative influence into location 20 (i.e., Σ_{j: α_{j,20}<0} α_{j,20}) is -0.30.
• The amount of positive influence from location 20 (i.e., Σ_{j: α_{20,j}>0} α_{20,j}) is 0.29.
• The amount of negative influence from location 20 (i.e., Σ_{j: α_{20,j}<0} α_{20,j}) is -1.44.
In addition, we can perform counterfactual analyses using the estimated parameters: suppose a decision-maker wants to know the increase in risk when an external condition changes from A to B (e.g., a fire tier zone shift, changes in vegetation types, etc.). Then, the change in risk at a certain location and time is ∆(A, B) := λ(t, k, B) − λ(t, k, A). Similar analyses can be performed for a change in location from k to k_1. Such analyses can help one better study the effect of different factors on fire risks, making risk management more effective; see the sketch after this subsection.

Prediction results. We first compare LinearSTHawkes with several one-class classification baselines. We choose the isolation forest [56], one-class SVM [57], local outlier factor [58], and elliptic envelope [59] due to their popularity and generality. These classifiers use the same data as LinearSTHawkes, including static and dynamic marks. Figure 3a visualizes the histograms of F1 scores by each method, which show that LinearSTHawkes outperforms competing methods by yielding fewer zero F1 scores and more one F1 scores. Note that zero (resp. one) F1 scores appear at locations that are the hardest (resp. easiest) to predict, as discussed earlier. In addition, LinearSTHawkes can yield non-trivial fractional F1 scores at other locations by capturing a decent number of true positives. Nevertheless, our model also yields many zero F1 scores because the task is inherently challenging: it makes 365 daily predictions at each of 36 locations, for a total of 13140 predictions, when there are only 70 actual fire occurrences across all 36 locations.

We now illustrate the location-wise prediction results of LinearSTHawkes. Figures 3b-3d visualize the F1 score, recall, and precision at each of the 36 locations. The result helps us assess the prediction difficulty at various locations, where we suspect the difficulty arises partially due to the distribution shift of data in 2018 compared to data in 2014-17 (cf. Figure 1). To better illustrate how LinearSTHawkes makes a prediction, we further visualize in Figure 4 the trajectory of rate prediction on top of actual incidents. Dynamic thresholds are obtained using Algorithm 3. The figure shows that sharp increases in predicted fire risks tend to occur near true fire events, which helps us make correct predictions. In the future, to reduce the number of false positives, we may refit the model parameters during validation using newly observed incidents.
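Returning to the counterfactual analysis above: under the linear mark model (5), the change ∆(A, B) only alters the mark term, so it can be evaluated without refitting. A minimal sketch (assuming numpy arrays and a fitted ground rate) is:

```python
def counterfactual_risk_change(ground_rate, gamma, m_A, m_B):
    """Delta(A, B) = lambda(t, k, B) - lambda(t, k, A) under the linear
    mark model (5); ground_rate is the fitted lambda_g(t, k)."""
    return ground_rate * float(gamma @ (m_B - m_A))
```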
C. Compare LinearSTHawkes vs. NonLinearSTHawkes
We now compare LinearSTHawkes and NonLinearSTHawkes on 2019 test data (cf. Figure 1, right), where we train the feature extractor g(m|t, k) in (6) using the one-class SVM. In principle, one can use any feature extractor, but we choose the SVM due to the flexibility of the kernel function. Based on earlier results, we only include seasonal and weather information, LFP, and FPI in the dynamic marks. Figure 5 compares the performance of both methods, and there are several observations. First, the histograms of F1 scores (cf. Figures 5a & 5b) show that NonLinearSTHawkes performs better than LinearSTHawkes, as the former yields more non-zero F1 scores. To explain the improvement, we found the empirical distribution of the estimates g(m|t, k) by NonLinearSTHawkes to closely match the Frechet distribution, a classic example from extreme value theory [60]. Although the Frechet distribution is not used to aid modeling, the connection allows NonLinearSTHawkes to make a more accurate prediction because many rare events (e.g., fire incidents) follow the Frechet distribution. Further discussions appear in Appendix B-D. Second, the trajectory of predicted fire risks by NonLinearSTHawkes (cf. Figure 5, lower right) fluctuates much more than that of LinearSTHawkes (cf. Figure 5, top right). For this prediction task, such fluctuation enables better detection because actual fire incidents are often associated with sudden risk increases.
Remark 1 (History-dependent mark in NonLinearSTHawkes): Accumulated weather conditions can often induce fire events (e.g., several dry days earlier can lead to elevated fire risks).
Thus, it seems natural to include in each m i additional spatio-temporal marks to account for accumulation effects. However, doing so has two drawbacks:
1) Data acquisition and storage are much more expensive. One must collect a complete record of historical marks at each grid to fit the models. The issue mainly arises when the number of grids is large (e.g., hundreds) and marks arrive frequently (e.g., hourly).
2) The curse of dimensionality arises when each mark contains longer historical values. Note that the total number of fire incidents is fixed and typically small (e.g., hundreds over multiple years). Therefore, parameter estimation can be more difficult as the feature dimension increases. How to choose historical values appropriately to reduce the effect of this issue would also increase the difficulty in training.

VI. LARGE-SCALE DATA VALIDATION

We now show that our LinearSTHawkes and NonLinearSTHawkes are scalable to a large region with many more fire incidents and locations. There are a total of 2011 fire occurrences in this region, comprising 63% of total wildfire incidents in California from 2014 to 2019. Figure 6a visualizes fire incidents within the region on the map, and Figure 6b illustrates the resulting 453 grids after discretization into squares with side lengths equal to 0.24 degrees; we remove regions that lie inside the ocean. Most grids have no fire in the 5-year horizon since fire incidents seem to cluster near the coastal line with large populations. We remark that the setup and hyperparameter choices are the same as those in Section V-B. The distribution of estimated interaction parameters α_ij (cf. Figure 6c) still closely aligns with that of the actual data. For instance, Figure 6a shows clusters of true fire incidents around the coastal line on the west side and few incidents in the mid-south side. As a result, estimates in Figure 6c are much denser in distribution around the west side than around the mid-south side. As a concrete example, location 140 is on the west side along the coastal line, where there are clusters of fire incidents. Quantitatively, if we use α_ij to roughly measure the amount of influence of location i on location j:
• The amount of positive influence into location 140 (i.e., Σ_{j: α_{j,140}>0} α_{j,140}) is 0.17.
• The amount of negative influence into location 140 (i.e., Σ_{j: α_{j,140}<0} α_{j,140}) is -0.30.
• The amount of positive influence from location 140 (i.e., Σ_{j: α_{140,j}>0} α_{140,j}) is 0.23.
• The amount of negative influence from location 140 (i.e., Σ_{j: α_{140,j}<0} α_{140,j}) is -0.47.
In comparison, location 20 is in the mid-south region with few clusters of fire incidents. Quantitatively, if we use α_ij to roughly measure the total influence of location i on location j:
• The amount of positive influence into location 20 (i.e., Σ_{j: α_{j,20}>0} α_{j,20}) is 0.00.
• The amount of negative influence into location 20 (i.e., Σ_{j: α_{j,20}<0} α_{j,20}) is -0.09.
• The amount of positive influence from location 20 (i.e., Σ_{j: α_{20,j}>0} α_{20,j}) is 0.00.
• The amount of negative influence from location 20 (i.e., Σ_{j: α_{20,j}<0} α_{20,j}) is 0.00.
A. Real-time fire risk prediction

Figure 7a compares the prediction performances of NonLinearSTHawkes, LinearSTHawkes, IForest, and OneClassSVM. We see that NonLinearSTHawkes performs better than both LinearSTHawkes and the isolation forest by yielding more non-zero F1 scores and a large number of F1 scores equal to one. Due to its flexible feature extractor, NonLinearSTHawkes is also competitive against the one-class SVM; importantly, it yields more F1 scores between zero and one, making it more informative than the one-class SVM at certain locations. Hence, NonLinearSTHawkes maintains improved performance over other models even when the number of grids significantly increases. Figure 7b further visualizes the real-time prediction behavior of NonLinearSTHawkes, where the peaks identified as fire incidents closely align with the actual incidents.
B. Fire magnitude conformal prediction sets
We show that prediction sets by ERAPS maintain the desired coverage defined in (12). Data in 2014-2018 are training data, and data in 2019 are test data, where there are a total of five possible fire magnitudes. Both the random forest classifier (RF) and the neural network classifier (NN) are used as prediction algorithms; their setup is the same as in [51]. We let the regularization parameters (λ, k_reg) = (1, 2), as suggested in [51]. Figure 8 shows the marginal coverage under both classifiers, where we also compare ERAPS against a competing method titled split regularized adaptive prediction set (SRAPS) [36]. The details of SRAPS are described in [51, Algorithm 1]. We have two findings. First, ERAPS performs very similarly under both classifiers and always maintains 1 − α coverage, whereas SRAPS tends to lose coverage at different values of α. Thus, ERAPS is more robust and consistent in terms of coverage. Second, both methods return prediction sets with almost the same sizes, but ERAPS is preferable due to its ability to maintain near 1 − α coverage.
VII. CONCLUSION AND DISCUSSIONS
We have developed a predictive framework for wildfire risk and magnitude using multi-modal sensing data, based on a mutually exciting spatio-temporal point process model as well as a time-series CP set. We established performance guarantees of the proposed methods and demonstrated good performance in large-scale real-data experiments. Overall, our method is parameter-efficient, enjoys interpretability, and yields accurate predictions against existing methods. There are several directions for future work. Regarding the point process model, we can consider going beyond the parametric forms in (4) and (5), such as more general neural network-based formulations. The development of the dynamic marks in Algorithm 3 can also be refined. Regarding conformal uncertainty quantification, remaining questions include how to better utilize the existing time-series method when data have an additional spatial dimension.
From our numerical results, we observe that distribution shifts may sometimes exist for wildfire prediction. Although our LinearSTHawkes and NonLinearSTHawkes are not designed to explicitly consider distribution shift, they still yield improved performance against baseline models on real data. In particular, as shown in Fig. 3a on small-scale data and Fig. 7 on large-scale data, our proposed models always outperform the baseline one-class classifiers. As a result, although the performance of our proposed framework may vary from year to year, it is still preferable in terms of predictive ability. We believe this is due to the model design to capture spatial-temporal information (e.g., past fire incidents around neighbors) and mark contribution (e.g., how multi-modal sensor information contributes to fire risks). To mitigate the adverse effects of distribution shifts, one approach is to introduce uncertainty into the model parameters. For instance, instead of specifying the parameters in the optimization problem (8) as unknown constants in our models, one could allow them to vary within a pre-specified range (or even treat them as random variables). With accurate parameter estimation, the estimated model could better address model shifts that arise from distribution shifts in test data. However, we do not explore this model design in this work, as our goal is to propose simple yet effective models for capturing fire risks using multi-modal data and to establish theoretical guarantees based on the proposed models (see Section IV-A).

ACKNOWLEDGEMENT

This work is partially supported by an NSF CAREER CCF-1650913, NSF DMS-2134037, CMMI-2015787, DMS-1938106, and DMS-1830210, and an Argonne National Lab grant.

APPENDIX A

A. Proof of Lemma 1

Under the projected gradient descent (17), we have
∥θ_k − θ*∥_2^2 = ∥Proj_Θ(θ_{k−1} − t_k F(θ_{k−1})) − θ*∥_2^2
 ≤ ∥θ_{k−1} − t_k F(θ_{k−1}) − θ*∥_2^2
 = ∥θ_{k−1} − θ*∥_2^2 − 2 t_k F(θ_{k−1})^T [θ_{k−1} − θ*] + t_k^2 ∥F(θ_{k−1})∥_2^2.
By assumptions (19) and (20) on the monotone operator F and the fact that F(θ*) = 0 when θ* is the minimizer of f, we have

∥θ_k − θ*∥_2^2 ≤ (1 − 2 t_k κ) ∥θ_{k−1} − θ*∥_2^2 + t_k^2 M^2. (33)
Define d_k := ∥θ_k − θ*∥_2^2. If S := M^2/κ^2 and t_k = [κ(k + 1)]^{−1}, we show by induction that

d_k ≤ S/(k + 1) = M^2/(κ^2 (k + 1)). (34)
Base case k = 0. Pick θ, θ′ such that ∥θ − θ′∥_2 = D, where D in (18) denotes the diameter of the parameter set Θ. Observe that

2MD ≥ [F(θ) − F(θ′)]^T [θ − θ′] ≥ κ ∥θ − θ′∥_2^2 = κ D^2.

Thus, D ≤ 2M/κ. By assumption (18), we thus have √d_0 = ∥θ_0 − θ*∥_2 ≤ D, so that the base case is proven.
Induction step from k − 1 to k, k ≥ 1. Observe that by the choice of t_k, κ t_k = (k + 1)^{−1} ≤ 1/2. Thus,

d_k ≤ (1 − 2 t_k κ) d_{k−1} + t_k^2 M^2      [by (33)]
    ≤ (1 − 2 t_k κ) S/k + t_k^2 M^2          [by the induction hypothesis and κ t_k ≤ 1/2]
    = (1 − 2/(k + 1)) S/k + S/(k + 1)^2
    = ((k − 1)/(k + 1)) S/k + S/(k + 1)^2
    ≤ S/(k + 1).
B. Proof of Theorem 1
First, note that after searching over the J + 1 grid points β_j in the region [β_0, β_1], we obtain

∥β_{j*} − β*∥_2^2 ≤ ((β_1 − β_0)/J)^2. (35)

Meanwhile, we know that for each fixed value of β_j, the negative log-likelihood −ℓ(β_j, θ[β_j]) is convex in θ[β_j]. Because the constraints when solving (7) are also convex, Lemma 1 implies

∥θ̂[β_{j*}] − θ*[β_{j*}]∥_2^2 = O((k + 1)^{−1}) (36)

after k projected gradient descent steps (17). Putting (35) and (36) together, we thus have

∥θ̂ − θ*∥_2^2 = O(1/J^2) + O(1/(k + 1)) = O(1/J^2 + 1/(k + 1)).

C. Proof of Lemma 2
The proof is identical to that of [42, Lemma 2], so we omit the mathematical details. The gist of the proof proceeds by bounding the size of the set of past N estimated non-conformity scores which deviate too much from the oracle ones. The set is denoted as

S_N := {i ∈ [N] : |τ̂_i − τ_i| > ϑ_N^{2/3}}.

Then, one can relate the difference |F̃(x) − F̂(x)| at each x to a sum of two terms of indicator variables - ones whose index belongs to S_N and ones whose index does not. The ones that do not belong to S_N can be bounded using the term |F̃(x) − F(x)| up to a multiplicative constant.
D. Proof of Lemma 3
The proof is identical to that of [42, Lemma 1], so we omit the mathematical details. In fact, this is a simple corollary of the famous Dvoretzky-Kiefer-Wolfowitz inequality [61, p.210], which states the convergence of the empirical distribution function to the actual distribution under the i.i.d. assumption.
E. Proof of Theorem 2
The proof is identical to that of [42, Theorem 1], so we omit the mathematical details. The gist of the proof proceeds by bounding the non-coverage |P(Y_i ∉ C(X_i, α)) − α| using the sum of constant multiples of sup_x |F̃(x) − F̂(x)| and sup_x |F̃(x) − F(x)|, both of which can be bounded by Lemmas 2 and 3 above.
F. Proof of Theorem 3
Based on the assumptions and the definition in (16), we now have

C*(X_i, α) = {1, . . . , c*}, c* = arg max_c {τ_i(c) < F^{−1}(1 − α)},
C(X_i, α) = {1, . . . , ĉ},  ĉ  = arg max_c {τ̂_i(c) < F̂^{−1}(1 − α)},
where F̂^{−1} is the inverse of the empirical CDF based on the estimated non-conformity scores {τ̂_{i−N}, . . . , τ̂_{i−1}}.
We now show that |C(X_i, α) ∆ C*(X_i, α)| ≤ 1 if and only if

∥τ̂_i − τ_i∥_∞ → 0 and F̂^{−1}(1 − α) → F^{−1}(1 − α).
(⇒) Without loss of generality, suppose that ĉ < c*, so that |C(X_i, α) ∆ C*(X_i, α)| > 1. Then, by the definition of the prediction sets, we must have

τ̂_i(c*) ≥ F̂^{−1}(1 − α), τ_i(c*) < F^{−1}(1 − α).

Denote δ_{τ,i} := τ̂_i(c*) − τ_i(c*) and δ_F := F^{−1}(1 − α) − F̂^{−1}(1 − α); we thus have

δ_{τ,i} + δ_F ≥ F^{−1}(1 − α) − τ_i(c*) > 0.
However, this is a contradiction when N approaches infinity: by the assumption that ∥τ̂_i − τ_i∥_∞ → 0 and the earlier result that F̂^{−1}(1 − α) → F^{−1}(1 − α), we must have δ_{τ,i} and δ_F both converging to zero.
(⇐) By the form of the estimated and true prediction sets, it is obvious that if ∥τ̂_i − τ_i∥_∞ → 0 and F̂^{−1}(1 − α) → F^{−1}(1 − α), then their set difference must converge to zero.
APPENDIX B ADDITIONAL DETAILS
A. Log-likelihood derivation
The first two terms under the log can be trivially derived upon substitution, so we only simplify the integration term:

Σ_{k=1}^K ∫_0^T λ_g(τ, k) dτ = Σ_{k=1}^K ∫_0^T (μ(k) + Σ_{j: t_j<τ} α_{u_j,k} β e^{−β(τ−t_j)}) dτ
 = Σ_{k=1}^K T μ(k) + Σ_{k=1}^K Σ_{j=1}^n ∫_0^T 1(τ > t_j) α_{u_j,k} β e^{−β(τ−t_j)} dτ
 (i)= Σ_{k=1}^K T μ(k) + Σ_{k=1}^K Σ_{j=1}^n α_{u_j,k} (1 − e^{−β(T−t_j)}),
where (i) follows from the definite integral formula for exponential functions. Interchanging the finite sums Σ_{k=1}^K Σ_{j=1}^n yields (7). Under the general formulation (3), we have λ_g(t, k) = μ(k) + Σ_{j: t_j<t} K(u_j, k, t_j, t), so that the integral simplifies to

Σ_{k=1}^K T μ(k) + Σ_{k=1}^K Σ_{j=1}^n ∫_{t_j}^T K(u_j, k, t_j, τ) dτ,
which may not have a closed-form expression. In particular, there have been many parametric and non-parametric forms for λ_g(t, k), including the neural network-based models discussed in the literature review. Although they are more flexible and potentially more effective, the log-likelihood objective becomes non-convex, requiring gradient-descent-type methods for local optimization under more computational resources.

B. Alternating minimization algorithm

Let θ[β] = θ − {β}, so that θ[β] contains all parameters except β and θ[β] ∪ β = θ. We then define Ψ(θ[β], β) := −ℓ(θ). Algorithm 2 contains the details of the alternating minimization procedure. It first finds minimizers of Ψ(θ[β], β), given β_0 as the initial value of the one-dimensional parameter β. Then, we can use a one-dimensional line search to solve for β, given the other estimates. The procedure iterates for a total of N times, and we describe the computational efficiency of the proposed approach in the remarks below. In general, we can allow β to be location-dependent, such as having the same support as α_{u_i,k}.
Remark 2 (Parameters):
• β 0 is the initial guess of the temporal influence parameter, whose value depends on problem context. It can typically be set to 1.
• The lower end β_low (in line 3, Algorithm 2) can remain constant since we know β > 0, so that a reasonably small β_low suffices.
• ϵ β determines the stopping criteria, whose choice depends on the desired degree of accuracy.
Remark 3 (Algorithm Details):
• The termination criterion (Line 4-6, Algorithm 2) can be justified: once consecutive solutions for β are close to each other, the solutions for θ[β] are likely to be close to each other in vector norm.
• Since Ψ(θ[β], β) is non-convex in β, the one-dimensional line search is only guaranteed to find a local minimum. Nevertheless, once θ[β]^{(k)} is computed by Algorithm 2, we can clearly characterize the number of local minima of Ψ(θ[β]^{(k)}, β). If it has multiple local minima within the bisection search domain, we can use the line search multiple times to find the global minimum. Doing so is efficient because the search region for β doubles every time (e.g., K is logarithmic in the width of the search region), and evaluating the derivative of Ψ(θ[β]^{(k)}, β) at each possible minimizer is a constant operation.

C. Dynamic threshold selection

We thus construct a dynamic threshold selection procedure in Algorithm 3, which leverages the current prediction and feedback.
We explain the intuitive procedures of Algorithm 3. Overall, the algorithm updates thresholds only when the current anomaly prediction is false. It does so by increasing/decreasing the threshold if an anomaly/normal datum is estimated. Then, it projects the threshold back to a target interval determined by past predicted risks. Meanwhile, we realize in practice that due to the rareness of true fire incidents and the randomness in predicted risks, there tends to be an excessive number of positive predictions, leading to a significant number of false positives. These false positives are especially undesirable and costly in the case of power system management, where power delivery facilities are mistakenly shut down to avoid further damage. Thus, to further control the number of false positives, we predict an anomaly only when the "slope" of the increase is large enough, even if a risk estimate exceeds the threshold - this procedure is highlighted in line 8: ∆_{tk} ≥ δ_k and λ(t, k, m) > τ_{tk}, where ∆_{tk} is defined in (37). We do so since true anomalies typically occur when the relative risk increase is large enough; Figure 4 shows an example of this. The choice of δ_k may be guided by historical data (e.g., the lowest/largest/average rate of increase ∆_{tk} in validation data for each k). Furthermore, to reduce false positives, line 11 (τ_{tk} := max(Π(τ_{t−1,k} + η_k Ŷ_{tk}), λ(t − 1, k, m)/a_{1k})) ensures that thresholds increase sufficiently quickly under a sharp rise in risk estimates. In addition, we make a positive detection at location k only when the following conditions hold:
1) There has been at least one fire incident at location k.
2) The number of detected fires at k has not exceeded the total number of fires that occurred at k in validation data.
3) The time since the last positive detection is no less than the average fire occurrence gap in validation data.
The procedures above aim to limit the number of false positives during detection based on the following observation: bumps/sudden rises in predicted risks often occur outside summer, when fire incidents rarely exist. To make better detections besides naively using an average or a sum as the metric, one may use historical data (training and validation) to predict a distribution of the total possible number of fires at k at test time. Then, one can decide the total number of detections based on statistical tests over this predicted distribution. Such Bayesian-type approaches can be more systematic but may also introduce additional complications that hinder computational efficiency, so we leave them as future work.
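Since Algorithm 3 itself is given in the appendix, the simplified sketch below only illustrates the two update ideas described above (the signed threshold update and the slope condition). The projection Π is passed in by the caller, and all parameter names here are illustrative assumptions, not the exact variables of Algorithm 3.

```python
def update_threshold(tau_prev, lam_prev, y_pred, y_true, eta, a1, project):
    """One simplified threshold step: move tau only after a wrong
    prediction (up after a false positive, down after a miss), then keep
    it from lagging behind a sharp rise in the last risk estimate
    (cf. line 11 of Algorithm 3)."""
    tau = tau_prev
    if y_pred != y_true:
        tau = project(tau_prev + eta * (1 if y_pred == 1 else -1))
    return max(tau, lam_prev / a1)

def detect_fire(lam_t, lam_prev, tau, delta):
    """Flag a fire only when the risk exceeds the threshold AND the
    recent increase (slope) is steep enough, to limit false positives."""
    return int((lam_t - lam_prev) >= delta and lam_t > tau)
```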
D. Empirically observed connection with extreme value distribution
We observe empirically that the distribution of estimated mark influences in NonLinearSTHawkes (cf. (6)) is similar to the Frechet distribution. This similarity is illustrated in Figure 10 for a Frechet distribution with shape parameter 1. Such a connection is useful, as the Frechet distribution belongs to the family of generalized extreme value distributions (GEV), which have been used to capture the distribution of rare events, such as catastrophes [64]. In our case, fire incidents are rare events, and it is natural to expect the dependency of fire risks on marks to also follow an extreme value distribution (e.g., only rare weather leads to a significant impact on fire risks). How to better incorporate such information as priors in the model belongs to future work.
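As a quick illustration of how one might check this similarity, scipy's invweibull distribution parametrizes the Frechet family; the shape value here is the one used in Figure 10, and the function name is our own.

```python
import numpy as np
from scipy import stats

def frechet_qq(scores, shape=1.0):
    """Empirical quantiles of estimated g(m|t, k) values versus a Frechet
    distribution (scipy's invweibull) with the given shape parameter."""
    scores = np.sort(np.asarray(scores))
    probs = (np.arange(1, len(scores) + 1) - 0.5) / len(scores)
    return scores, stats.invweibull.ppf(probs, c=shape)  # plot for a Q-Q check
```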
Fig. 1: Visualize data and grid discretization on data from different years. There are grid-wise shifts in data distribution; for instance, fire incidents cluster more closely around grid 12 in 2018 (validation) than in 2014-2017 (training) or in 2019 (test).

Fig. 2: The distribution of α_ij closely follows the data distribution in Figure 1.
Fig. 3: Comparison across methods (top) and LinearSTHawkes performance per location (bottom). Histograms of F1 scores over all locations on the top row show that our LinearSTHawkes outperforms other methods by yielding fewer zero F1 scores, a moderate number of fractional F1 scores, and more one F1 scores. The bottom row visualizes the F1 score, recall, and precision of LinearSTHawkes at each location.
Fig. 4: Real-time prediction of fire risks and incidents on top of actual incidents and dynamic thresholds. The prediction by LinearSTHawkes can closely match the actual data.
Fig. 5: Compare LinearSTHawkes with NonLinearSTHawkes on 2019 test data. Both models are trained on 2014-2018 data. The top row shows results under LinearSTHawkes, and the bottom row shows those under NonLinearSTHawkes. In comparison, NonLinearSTHawkes shows improved performance because of a more flexible feature extractor and the ability to yield fewer zero F1 scores.

Fig. 6: Data visualization. (a) shows fire events colored by season as in Figure 1, (b) shows the grid discretization, and (c) visualizes the location-location interaction matrix parameters α_ij.
Figure 7a compares the prediction performances of NonLinearSTHawkes, LinearSTHawkes, IForest, and OneClassSVM. We see that NonLinearSTHawkes performs better than both LinearSTHawkes and the isolation forest by yielding more non-zero F1 scores and a large number of F1 scores equal to one. Due to its flexible feature extractor, NonLinearSTHawkes is also competitive against the one-class SVM; importantly, it yields more F1 scores between zero and one, making it more informative than the one-class SVM at certain locations. Hence, NonLinearSTHawkes maintains improved performance over other models even when the number of grids significantly increases. Figure 7b further visualizes the real-time prediction behavior of NonLinearSTHawkes, where the peaks identified as fire incidents closely align with the actual incidents.

Fig. 7: On 2019 test data: the top row compares the histograms of F1 scores under various methods. The leftmost NonLinearSTHawkes has the largest number of non-zero F1 scores, with many being 1. The bottom row visualizes the temporal predicted risks by NonLinearSTHawkes at one grid. Overall, NonLinearSTHawkes yields the best performance among all models.
Data in 2014-2018 are training data, and data in 2019 are test data, where there are a total of five possible fire magnitudes. Both the random forest classifier (RF) and the neural network classifier (NN) are used as base classifiers.
Figure 8 shows marginal coverage under both classifiers, where we also compare ERAPS against a competing method titled split regularized adaptive prediction set (SRAPS) [36]. The details of SRAPS are described in [51, Algorithm 1]. We have two findings. First, ERAPS performs very similarly under both classifiers and always maintains 1 − α coverage, whereas SRAPS tends to lose coverage at different values of α.

Fig. 8: Marginal coverage (12) and size of prediction sets by ERAPS and SRAPS under the random forest classifier and the neural network classifier. ERAPS always maintains the desired coverage, whereas competing methods can fail to do so.
as unknown constants in our models, one could allow them to vary within a pre-specified range (or even treat them as random variables). With accurate parameter estimation, the estimated model could better address model shifts that arise from distribution shifts in test data. However, we do not explore this model design in this work, as our goal is to propose simple yet effective models for capturing fire risks using multi-modal data and to establish theoretical guarantees based on the proposed models (see Theorem IV-A).

ACKNOWLEDGEMENT

This work is partially supported by an NSF CAREER CCF-1650913, NSF DMS-2134037, CMMI-2015787, DMS-1938106, and DMS-1830210, and an Argonne National Lab grant.
Let θ[β] := θ − {β}, so that θ[β] contains all parameters except β and θ[β] ∪ {β} = θ. We then define Ψ(θ[β], β) := −ℓ(θ).
Fig. 9: Objective (8) over β ∈ [0, 2] on the small-scale example with K = 36 locations. The interval is discretized into 25 evenly-spaced grid points.
Fig. 10: Comparison of Frechet random variables with our estimated conditional intensities g(m|t, k) at the first location of the large region in test time.
, SCE, and SDG&E. A total of 3191 fire incidents are recorded, where the latitude-longitude coordinates of each incident are enclosed within the coordinate rectangle [32.24, −124.38] × [41.28, −114.67].
TABLE I: Estimated parameters of static marks γ_l and dynamic marks γ_d defined in (32). "PHYS=" indicates road type or existing vegetation type. A larger parameter estimate indicates more contribution of the feature to fire hazards. Note that Temperature and Relative Humidity in γ_d also define the widely-used Fire Danger Index, so that LinearSTHawkes selects physically meaningful features.
Algorithm 3 Location-wise Dynamic Threshold Selection
Require: Risk estimates {λ(t, k, m)}_{t=1}^T, τ_{k,min}, τ_{k,max}, η_k, δ_k, a_{1k}, a_{2k}, and true anomalies {Y_{tk}}_{t=1}^T, revealed individually after each prediction.
Ensure: Decision thresholds {τ_{tk}}_{t=1}^T, anomaly estimates {Ŷ_{tk}}_{t=1}^T.
1: Define the projection Π(x) := arg min_{y ∈ [τ_{k,min}, τ_{k,max}]} |x − y|
⋮
4: Let τ_{2k} := max(Π(τ_{1k} + η_k Ŷ_{1k}), λ(1, k, m)/a_{1k})
5: end if
6: for t = 2, . . . , T do
7: Define the increase ∆_{tk} := λ(t, k, m) − λ(t − 1, k, m)
8: if ∆_{tk} ≥ δ_k and λ(t, k, m) > τ_{tk} then
9: Let Ŷ_{tk} = 1
10: Reset τ_{tk} = λ(t, k, m)
⋮

The reset step (triggered when λ(t, k, m) ≤ λ(t − 1, k, m)/a_{2k}) ensures that, when risk estimates drop significantly at location k (e.g., under seasonal shifts from summer to fall), the algorithm resets the thresholds to capture a possible future rise in the estimates. One can achieve different performances by tuning the knobs {a_{1k}, a_{2k}} in these two lines; in practice, a larger a_{1k} implies more positive anomaly estimates, and the algorithm resets thresholds less often under a larger a_{2k}. If risk estimates are fairly constant, we recommend setting a_{1k} and a_{2k} fairly close to 1. After tuning, we set the other parameters as τ_{k,min} = λ(1, k, m)/1.8, τ_{k,max} = λ(1, k, m) × 1.8, η_k = (τ_{k,max} − τ_{k,min})/T^{1.5}, and δ_k = 0.05. In practice, fires typically cluster densely near summer (e.g., June-August), so we also apply the following screening procedure at each (t, k) before applying the algorithm. First, compute the number of fire incidents, their frequency, and the gap between fire events on validation data at k. Second, require true statements for all three screening questions and claim no fire at (t, k) if any answer is false.
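The following Python sketch (ours) illustrates the single-location logic of Algorithm 3. Since several steps are lost in extraction, the exact update rules are approximated, and the names `a1`, `a2`, `eta`, and `delta` are stand-ins for the per-location knobs a_{1k}, a_{2k}, η_k, δ_k.

```python
# Minimal single-location sketch of the dynamic threshold selection above.
import numpy as np

def dynamic_thresholds(lam, y_true, tau_min, tau_max, eta, delta, a1, a2):
    """lam: risk estimates over time; y_true: labels in {1, -1}, revealed
    one by one after each prediction. Returns anomaly estimates and thresholds."""
    T = len(lam)
    proj = lambda x: np.clip(x, tau_min, tau_max)     # projection onto [tau_min, tau_max]
    tau = proj(lam[0] / a1)                           # initial threshold
    y_hat, taus = -np.ones(T), np.zeros(T)
    for t in range(T):
        taus[t] = tau
        rise = lam[t] - lam[t - 1] if t > 0 else 0.0
        if rise >= delta and lam[t] > tau:            # sharp rise above threshold: flag fire
            y_hat[t] = 1
            tau = lam[t]                              # reset threshold to current estimate
        elif t > 0 and lam[t] <= lam[t - 1] / a2:     # sharp drop: reset downward
            tau = proj(lam[t] / a1)
        else:
            tau = proj(tau + eta * y_true[t])         # feedback step after label is revealed
    return y_hat, taus
```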
REFERENCES

[1] M. O. Andreae and P. Merlet, "Emission of trace gases and aerosols from biomass burning," Global Biogeochemical Cycles, vol. 15, pp. 955-966, 2001.
[2] "Wildfire and Wildfire Safety," https://www.cpuc.ca.gov/industries-and-topics/wildfires, [Accessed 07-Oct-2022].
[3] "Fire Weather Week 2 Forecasts," https://www.cpc.ncep.noaa.gov/products/people/mchen/fireWeather/cpc_wk2fw_index.html, [Accessed 07-Oct-2022].
[4] "NFDRS System Inputs and Outputs - NWCG," https://www.nwcg.gov/publications/pms437/fire-danger/nfdrs-system-inputs-outputs, [Accessed 07-Oct-2022].
[5] "Detecting wildfire - Environment and Natural Resources," https://www.enr.gov.nt.ca/en/services/wildfire-operations/detecting-wildfire, [Accessed 07-Oct-2022].
[6] A. Srinivasan and J. Wu, "A survey on secure localization in wireless sensor networks," Encyclopedia of Wireless and Mobile Communications, p. 126, 2007.
[7] B. S. Lee, M. E. Alexander, B. Hawkes, T. J. Lynham, B. J. Stocks, and P. Englefield, "Information systems in support of wildland fire management decision making in Canada," Computers and Electronics in Agriculture, vol. 37, pp. 185-198, 2002.
[8] B. M. Wotton, "Interpreting and using outputs from the Canadian forest fire danger rating system in research applications," Environmental and Ecological Statistics, vol. 16, pp. 107-131, 2007.
[9] P. Jain, S. C. Coogan, S. G. Subramanian, M. Crowley, S. Taylor, and M. D. Flannigan, "A review of machine learning applications in wildfire science and management," Environmental Reviews, vol. 28, no. 4, pp. 478-505, 2020.
[10] A. Jaafari, E. K. Zenner, M. Panahi, and H. Shahabi, "Hybrid artificial intelligence models based on a neuro-fuzzy system and metaheuristic optimization algorithms for spatial prediction of wildfire probability," Agricultural and Forest Meteorology, vol. 266, pp. 198-207, 2019.
[11] A. G. Hawkes, "Spectra of some self-exciting and mutually exciting point processes," Biometrika, vol. 58, pp. 83-90, 1971.
[12] A. Reinhart, "A review of self-exciting spatio-temporal point processes and their applications," Statistical Science, vol. 33, no. 3, Aug 2018. Available: http://dx.doi.org/10.1214/17-STS629
[13] F. P. Schoenberg, C.-H. Chang, J. E. Keeley, J. Pompa, J. Woods, and H. Xu, "A critical assessment of the burning index in Los Angeles County, California," International Journal of Wildland Fire, vol. 16, no. 4, pp. 473-483, 2007.
[14] L. A. Sanabria, X. Qin, J. Li, R. P. Cechet, and C. Lucas, "Spatial interpolation of McArthur's forest fire danger index across Australia: Observational study," Environ. Model. Softw., vol. 50, pp. 37-50, 2013.
[15] W. H. Frandsen, "Ignition probability of organic soils," Canadian Journal of Forest Research, vol. 27, pp. 1471-1477, 1997.
[16] M. P. Plucinski and W. R. Anderson, "Laboratory determination of factors influencing successful point ignition in the litter layer of shrubland vegetation," International Journal of Wildland Fire, vol. 17, pp. 628-637, 2008.
[17] A. A. Cunningham and D. L. Martell, "A stochastic model for the occurrence of man-caused forest fires," Canadian Journal of Forest Research, vol. 3, pp. 282-287, 1973.
[18] H. Xu and F. P. Schoenberg, "Point process modeling of wildfire hazard in Los Angeles County, California," The Annals of Applied Statistics, vol. 5, pp. 684-704, 2011.
[19] J. Koh, F. Pimont, J.-L. Dupuy, and T. Opitz, "Spatiotemporal wildfire modeling through point processes with moderate and extreme marks," The Annals of Applied Statistics, vol. 17, no. 1, pp. 560-582, 2023.
[20] E. Gabriel and P. J. Diggle, "Second-order analysis of inhomogeneous spatio-temporal point process data," Statistica Neerlandica, vol. 63, no. 1, pp. 43-51, 2009.
[21] P. J. Diggle, Statistical Analysis of Spatial and Spatio-Temporal Point Patterns. CRC Press, 2013.
[22] A. C. Miller, L. Bornn, R. P. Adams, and K. Goldsberry, "Factorized point process intensities: A spatial analysis of professional basketball," in ICML, 2014.
[23] J. D. Scargle, "An introduction to the theory of point processes, vol. I: Elementary theory and methods," Technometrics, vol. 46, pp. 257-257, 2004.
[24] S. Zhu and Y. Xie, "Spatiotemporal-textual point processes for crime linkage detection," The Annals of Applied Statistics, vol. 16, no. 2, pp. 1151-1170, 2022. Available: https://doi.org/10.1214/21-AOAS1538
[25] L. Holden, S. Sannan, and H. Bungum, "A stochastic marked point process model for earthquakes," Natural Hazards and Earth System Sciences, vol. 3, pp. 95-101, 2003.
[26] H. Mei and J. Eisner, "The neural Hawkes process: A neurally self-modulating multivariate point process," in NIPS, 2017.
[27] S. Li, S. Xiao, S. Zhu, N. Du, Y. Xie, and L. Song, "Learning temporal point processes via reinforcement learning," in Advances in Neural Information Processing Systems, vol. 31, 2018.
[28] S. Zuo, H. Jiang, Z. Li, T. Zhao, and H. Zha, "Transformer Hawkes process," in ICML, 2020.
[29] S. J. Hardiman, N. Bercot, and J.-P. Bouchaud, "Critical reflexivity in financial markets: a Hawkes process analysis," The European Physical Journal B, vol. 86, pp. 1-9, 2013.
[30] R. Kobayashi and R. Lambiotte, "TiDeH: Time-dependent Hawkes process for predicting retweet dynamics," in ICWSM, 2016.
[31] E. Choi, N. Du, R. Chen, L. Song, and J. Sun, "Constructing disease network and temporal progression model via context-sensitive Hawkes process," in 2015 IEEE International Conference on Data Mining, 2015, pp. 721-726.
[32] F. Gerhard, M. Deger, and W. A. Truccolo, "On the stability and dynamics of stochastic spiking neuron models: Nonlinear Hawkes process and point process GLMs," PLoS Computational Biology, vol. 13, 2017.
[33] G. Shafer and V. Vovk, "A tutorial on conformal prediction," Journal of Machine Learning Research, vol. 9, no. Mar, pp. 371-421, 2008.
[34] M. Fontana, G. Zeni, and S. Vantini, "Conformal prediction: a unified review of theory and new challenges," Bernoulli, vol. 29, no. 1, pp. 1-23, 2023.
[35] Y. Romano, M. Sesia, and E. Candes, "Classification with valid and adaptive coverage," in Advances in Neural Information Processing Systems, vol. 33, 2020, pp. 3581-3591.
[36] A. N. Angelopoulos, S. Bates, M. Jordan, and J. Malik, "Uncertainty sets for image classifiers using conformal prediction," in International Conference on Learning Representations, 2021. Available: https://openreview.net/forum?id=eNdiU DbM9
[37] M. Eklund, U. Norinder, S. Boyer, and L. Carlsson, "The application of conformal prediction to the drug discovery process," Annals of Mathematics and Artificial Intelligence, vol. 74, pp. 117-132, 2013.
[38] N. Bosc, F. Atkinson, E. Felix, A. Gaulton, A. Hersey, and A. R. Leach, "Large scale comparison of QSAR and conformal prediction methods and their applications in drug discovery," Journal of Cheminformatics, vol. 11, 2019.
[39] J. Smith, I. Nouretdinov, R. Craddock, C. R. Offer, and A. Gammerman, "Anomaly detection of trajectories with kernel density estimation by conformal prediction," in AIAI Workshops, 2014.
[40] R. J. Tibshirani, R. F. Barber, E. Candes, and A. Ramdas, "Conformal prediction under covariate shift," in Advances in Neural Information Processing Systems, 2019, pp. 2530-2540.
[41] S. Park, E. Dobriban, I. Lee, and O. Bastani, "PAC prediction sets under covariate shift," in International Conference on Learning Representations, 2022. Available: https://openreview.net/forum?id=DhP9L8vIyLc
[42] C. Xu and Y. Xie, "Conformal prediction for dynamic time-series," arXiv preprint arXiv:2010.09107, 2020.
[43] C. Xu and Y. Xie, "Conformal anomaly detection on spatio-temporal observations with missing data," arXiv preprint arXiv:2105.11886; accepted at the ICML 2021 Distribution-free Uncertainty Quantification workshop, 2021.
[44] K. Stankevičiūtė, A. M. Alaa, and M. van der Schaar, "Conformal time-series forecasting," in NeurIPS, 2021.
[45] R. F. Barber, E. J. Candes, A. Ramdas, and R. J. Tibshirani, "Conformal prediction beyond exchangeability," arXiv preprint arXiv:2202.13415, 2022.
[46] "California Public Utilities Commission (CPUC)," https://www.cpuc.ca.gov/wildfires, [Accessed 07-Oct-2022].
[47] "LANDFIRE Program: Home," https://www.landfire.gov/, [Accessed 07-Oct-2022].
[48] "NLDAS: North American Land Data Assimilation System - NCAR - Climate Data Guide," https://climatedataguide.ucar.edu/climate-data/nldas-north-american-land-data-assimilation-system, [Accessed 07-Oct-2022].
[49] "Fire Danger Forecast - U.S. Geological Survey," https://www.usgs.gov/fire-danger-forecast, [Accessed 07-Oct-2022].
[50] S. Diamond and S. Boyd, "CVXPY: A Python-embedded modeling language for convex optimization," Journal of Machine Learning Research, vol. 17, no. 83, pp. 1-5, 2016.
[51] C. Xu and Y. Xie, "Conformal prediction set for time-series," arXiv preprint arXiv:2206.07851; accepted at the ICML 2022 Distribution-free Uncertainty Quantification workshop, 2022.
[52] A. B. Juditsky and A. Nemirovski, "Signal recovery by stochastic optimization," Automation and Remote Control, vol. 80, no. 10, pp. 1878-1893, 2019.
[53] M. Zhang, C. Xu, A. Sun, F. Qiu, and Y. Xie, "Solar radiation anomaly events modeling using spatial-temporal mutually interactive processes," arXiv preprint arXiv:2101.11179, 2021.
[54] A. Farcomeni, "A review of modern multiple hypothesis testing, with particular attention to the false discovery proportion," Statistical Methods in Medical Research, vol. 17, pp. 347-388, 2008.
[55] "Wildland Fire Danger Index (FDI) - Florida Department of Agriculture & Consumer Services," https://www.fdacs.gov/Forest-Wildfire/Wildland-Fire/Fire-Weather/Links-and-Information/Wildland-Fire-Danger-Index-FDI, [Accessed 07-Oct-2022].
[56] F. T. Liu, K. M. Ting, and Z.-H. Zhou, "Isolation forest," in 2008 Eighth IEEE International Conference on Data Mining, 2008, pp. 413-422.
[57] C.-C. Chang and C.-J. Lin, "LIBSVM: A library for support vector machines," ACM Trans. Intell. Syst. Technol., vol. 2, pp. 27:1-27:27, 2011.
[58] M. M. Breunig, H.-P. Kriegel, R. T. Ng, and J. Sander, "LOF: identifying density-based local outliers," in SIGMOD '00, 2000.
[59] P. J. Rousseeuw and K. van Driessen, "A fast algorithm for the minimum covariance determinant estimator," Technometrics, vol. 41, pp. 212-223, 1999.
[60] L. de Haan and A. Ferreira, Extreme Value Theory: An Introduction. Springer, 2006, vol. 21.
[61] M. R. Kosorok, Introduction to Empirical Processes and Semiparametric Inference. Springer, 2008.
[62] M. Grant and S. Boyd, "Graph implementations for nonsmooth convex programs," in Recent Advances in Learning and Control, ser. Lecture Notes in Control and Information Sciences, V. Blondel, S. Boyd, and H. Kimura, Eds. Springer-Verlag Limited, 2008, pp. 95-110. Available: http://stanford.edu/~boyd/graph_dcp.html
[63] M. Raginsky, R. Willett, C. Horn, J. Silva, and R. Marcia, "Sequential anomaly detection in the presence of noise and limited feedback," IEEE Transactions on Information Theory, vol. 58, pp. 5544-5562, 2012.
[64] D. E. A. Sanders, "The modelling of extreme events," British Actuarial Journal, vol. 11, no. 3, pp. 519-557, 2005.

Algorithm 2 Alternating Minimization for Regularized Marked Spatio-Temporal Hawkes Process Model (Eq. (8))
Require: β^(0), K, β_low, ε_β
Ensure: θ[β]*, β*
1: for k = 1, . . . , K do
2: θ[β]^(k) ← arg min_{θ[β]} Ψ(θ[β], β^(k−1))
3: β^(k) ← arg min_β Ψ(θ[β]^(k), β)
4: if |β^(k) − β^(k−1)| ≤ ε_β or β^(k) ≤ β_low then break
5: end for
6: θ[β]* ← θ[β]^(k), β* ← β^(k)
Remark 4 (Computation efficiency of Algorithm 2): Algorithm 2 in essence performs coordinate descent on the non-convex optimization problem (8). Doing so in general may not exhibit fast convergence. Nevertheless, in our case, the number of iterations N is always between 3 and 5. A typical loss curve over β is given in Figure 9 below. Specifically, the consecutive iterates β^(3) = 0.76 (after three iterations) and β^(2) = 0.78 are close enough, so that Algorithm 2 terminates. In terms of clock time (measured on a 16-inch MacBook Pro 2019), the computation per iteration is ∼12
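A minimal Python sketch (ours) of this alternating scheme is given below; `Psi` is assumed to be a black-box implementation of the negative log-likelihood in Eq. (8), and the grid over β mirrors the 25-point discretization of Figure 9.

```python
# Alternating (coordinate-descent) minimization over theta and beta.
import numpy as np
from scipy.optimize import minimize

def alternating_min(Psi, theta0, beta0, K=10, beta_low=0.0, beta_high=2.0, eps_beta=0.05):
    theta, beta = np.asarray(theta0, dtype=float), beta0
    grid = np.linspace(beta_low, beta_high, 25)               # 25 evenly-spaced grid points
    for _ in range(K):
        theta = minimize(lambda th: Psi(th, beta), theta).x   # update theta with beta fixed
        beta_new = grid[np.argmin([Psi(theta, b) for b in grid])]  # update beta on the grid
        if abs(beta_new - beta) <= eps_beta:                  # consecutive iterates close: stop
            return theta, beta_new
        beta = beta_new
    return theta, beta
```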
Since fire incidents are rare, we also view Y tk = 1 as anomalies. Moreover, Y tk is fully observable after time t, so we have full feedback after identifying the anomalies. Inspired by the Hedging Algorithm. − Let Y Tk ∈ {1, 1}, 63Hegdingt ≥ 1, k ∈ [K] denote the fire occurrence status in location k at time t, where 1 indicates that a fire event occurs. Algorithm 4)Let Y tk ∈ {1, −1}, t ≥ 1, k ∈ [K] denote the fire occurrence status in location k at time t, where 1 indicates that a fire event occurs. Since fire incidents are rare, we also view Y tk = 1 as anomalies. Moreover, Y tk is fully observable after time t, so we have full feedback after identifying the anomalies. Inspired by the Hedging Algorithm [63, Hegding (Algorithm 4)], we
Tight Performance Guarantees of Imitator Policies with Continuous Actions

Davide Maran, Alberto Maria Metelli, Marcello Restelli
Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milan, Italy
Abstract

Behavioral Cloning (BC) aims at learning a policy that mimics the behavior demonstrated by an expert. The current theoretical understanding of BC is limited to the case of finite actions. In this paper, we study BC with the goal of providing theoretical guarantees on the performance of the imitator policy in the case of continuous actions. We start by deriving a novel bound on the performance gap based on the Wasserstein distance, applicable to continuous-action experts and holding under the assumption that the value function is Lipschitz continuous. Since this latter condition is hardly fulfilled in practice, even for Lipschitz Markov Decision Processes and policies, we propose a relaxed setting, proving that the value function is always Hölder continuous. This result is of independent interest and allows obtaining, in BC, a general bound for the performance of the imitator policy. Finally, we analyze noise injection, a common practice in which the expert's action is executed in the environment after the application of a noise kernel. We show that this practice allows deriving stronger performance guarantees, at the price of a bias due to the noise addition.
Introduction
The degree of interaction of the human in the ecosystem of artificial intelligence is progressively becoming more prominent (Zanzotto 2019). In this setting, the human plays the role of an expert who, with different tools, interacts with the artificial agent and allows it to leverage the expert's knowledge to make the learning process faster, more effective, and more reliable (Jeon, Milli, and Dragan 2020).
Imitation Learning (IL, Osa et al. 2018) can be considered one of the simplest forms of interaction between a human and an artificial agent. This kind of interaction is unidirectional, since the human expert provides the agent with a set of demonstrations of a behavior that is optimal w.r.t. an unknown objective. The agent, on its part, aims to learn a behavior as close as possible to the demonstrated one. Classically, we distinguish between two realizations of IL: Behavioral Cloning (BC, Bain and Sammut 1995) and Inverse Reinforcement Learning (IRL, Arora and Doshi 2021). BC aims at mimicking the behavior of the agent by recovering a policy that matches as much as possible the expert's demonstrated behavior. Instead, IRL has the more ambitious goal of reconstructing a reward function that justifies the expert's behavior. Thus, it aims at representing the expert's intent rather than their behavior. In this sense, IRL is more challenging than BC, as its output, the reward function, is a more powerful tool that succeeds in being deployed even in the presence of a modification of the environment.
Although IL techniques have been successfully applied to a large variety of real-world applications (e.g., Asfour et al. 2008; Geng, Lee, and Hülse 2011; Rozo, Jiménez, and Torras 2013; Likmeta et al. 2021), their theoretical understanding, in terms of the performance of the imitator policy, is currently limited. Recently, in (Xu, Li, and Yu 2020), a first analysis of the error bounds has been provided for BC and Generative Adversarial Imitation Learning (Ho and Ermon 2016). However, these results involve an f-divergence (Rényi et al. 1961), usually the total variation (TV) or the KL-divergence, between the expert's policy and the imitator one. Consequently, they are significant only when the action space is finite, while becoming vacuous for experts with continuous actions. To further argue on the limitations of this analysis, consider the case in which BC is reduced to minimizing the mean squared error (MSE) between the expert's action and the imitator one. Even in this simple scenario, as we shall see, the current analysis based on TV cannot relate the MSE with the performance of the imitator policy. This represents a relevant limitation, since many of the applications of IL are naturally defined in continuous-action contexts.
Original Contributions
In this paper, we aim to take a step forward towards a more comprehensive theoretical understanding of BC. Specifically, we devise error bounds that relate the performance difference $J^{\pi_E} - J^{\pi_I}$ between the expert's policy $\pi_E$ and the imitator one $\pi_I$ to their divergence. Our bounds are based on the Wasserstein distance (Villani 2009) and, for this reason, are meaningful even in the presence of continuous-action spaces (Section 3). Our work contains the following contributions:
1. We prove a performance bound for standard BC in the case of a Lipschitz reward-transition MDP (see Rachelson and Lagoudakis 2010) and Lipschitz continuity of the value function.
2. Since the latter assumption is often violated in practice,¹ we extend the result by only requiring Lipschitzness of the MDP, even if this leads to a weaker performance bound. We also show that the less regularity the value function retains, the slower the convergence of BC (Section 4).
3. Finally, we focus on a popular practice employed in imitation learning, i.e., noise injection (Laskey et al. 2017a). In this setting, the expert's action, before being executed in the environment, is corrupted with noise to make the imitation process more robust. We show that noise injection allows achieving stronger theoretical guarantees, at the price of competing against a noisy expert, which could have a lower performance (Section 5).
In particular, in the second point, we show that the value function of a Lipschitz MDP is always Hölder continuous, with a suitable choice of the exponent depending on the properties of the MDP and of the policy. This represents a result of independent interest that overcomes a well-known limitation of the Lipschitz continuity of the value function (Rachelson and Lagoudakis 2010; Pirotta, Restelli, and Bascetta 2015), with possible applications outside BC.
Preliminaries
In this section, we provide the necessary mathematical background (Section 2.1) and the foundations of Markov Decision Processes (Section 2.2).
Mathematical Background
Notation. Let $\mathcal{X}$ be a set and $\mathcal{F}$ be a $\sigma$-algebra over $\mathcal{X}$; we denote with $\mathcal{P}(\mathcal{X})$ the set of probability measures over the measurable space $(\mathcal{X}, \mathcal{F})$. Let $x \in \mathcal{X}$; we denote the Dirac delta measure centered in $x$ as $\delta_x$. Let $f : \mathcal{X} \to \mathbb{R}$ be a function; we denote the $L^\infty$-norm as $\|f\|_\infty = \sup_{x \in \mathcal{X}} |f(x)|$ and with $\|f\|_i$ the $L^i$-norm for $i \in \{1, 2\}$.
Lipschitz Continuity. Let $(\mathcal{X}, d_\mathcal{X})$ and $(\mathcal{Y}, d_\mathcal{Y})$ be two metric spaces and $L > 0$. A function $f : \mathcal{X} \to \mathcal{Y}$ is said to be $L$-Lipschitz continuous ($L$-LC) if:
$$d_\mathcal{Y}(f(x), f(x')) \le L\, d_\mathcal{X}(x, x'), \quad \forall x, x' \in \mathcal{X}.$$
We denote the Lipschitz semi-norm of a function $f$ as $\|f\|_L = \sup_{x, x' \in \mathcal{X},\, x \neq x'} d_\mathcal{Y}(f(x), f(x')) / d_\mathcal{X}(x, x')$. In the real space ($\mathcal{X} \subseteq \mathbb{R}^n$), we use the Euclidean distance, i.e., $d_\mathcal{X}(x, x') = \|x - x'\|_2$. For probability measures ($\mathcal{X} = \mathcal{P}(\Omega)$), the most intuitive distance is the total variation (TV), defined for all $\mu, \nu \in \mathcal{P}(\Omega)$ as:
$$\mathrm{TV}(\mu, \nu) = \sup_{\|f\|_\infty \le 1} \left| \int_\Omega f(\omega)\, (\mu - \nu)(\mathrm{d}\omega) \right|.$$
However, for distinct deterministic distributions over continuous spaces, the TV takes its maximum value 1 (Figure 1). Thus, we introduce the $L^1$-Wasserstein distance (Villani 2009), defined for all $\mu, \nu \in \mathcal{P}(\Omega)$ as:
$$\mathcal{W}(\mu, \nu) = \sup_{\|f\|_L \le 1} \left| \int_\Omega f(\omega)\, (\mu - \nu)(\mathrm{d}\omega) \right|.$$
It is worth noting that, for deterministic distributions, we have $\mathcal{W}(\delta_x, \delta_{x'}) = d_\mathcal{X}(x, x')$.
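The qualitative behavior of Figure 1 can be reproduced numerically; the short computation below (ours, not from the paper) uses the closed-form one-dimensional identities $\mathrm{TV} = \frac{1}{2}\int|p - q|$ and $\mathcal{W} = \int|F_\mu - F_\nu|$ for two Gaussians.

```python
# TV vs. 1-Wasserstein distance for 1-D Gaussians (numerical illustration).
import numpy as np
from scipy.stats import norm

x = np.linspace(-10, 10, 200001)
dx = x[1] - x[0]

def tv(p, q):   # TV(mu, nu) = (1/2) * integral of |p - q|
    return 0.5 * np.sum(np.abs(p.pdf(x) - q.pdf(x))) * dx

def w1(p, q):   # in 1-D, W(mu, nu) = integral of |F_mu - F_nu|
    return np.sum(np.abs(p.cdf(x) - q.cdf(x))) * dx

print(tv(norm(0, 1), norm(1, 1)), w1(norm(0, 1), norm(1, 1)))              # ~0.38, ~1.0
print(tv(norm(0, 0.01), norm(1, 0.01)), w1(norm(0, 0.01), norm(1, 0.01)))  # ~1.0,  ~1.0
```

As the two distributions become nearly deterministic, the TV saturates at 1 while the Wasserstein distance keeps tracking the gap between their means, which is exactly why the latter is the right metric for continuous actions.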
¹The Lipschitz continuity of the value function is guaranteed only under the demanding assumption that $\gamma L_p(1 + L_\pi) < 1$ (Rachelson and Lagoudakis 2010), requiring the Lipschitz constants of the transition model $L_p$ and of the policy $L_\pi$ to be very small.
Hölder Continuity
The notion of Lipschitz continuity is generalized by Hölder continuity. Let $(\mathcal{X}, d_\mathcal{X})$ and $(\mathcal{Y}, d_\mathcal{Y})$ be two metric spaces and $L, \alpha > 0$. A function $f : \mathcal{X} \to \mathcal{Y}$ is said to be $(\alpha, L)$-Hölder continuous ($(\alpha, L)$-HC) if:
$$d_\mathcal{Y}(f(x), f(x')) \le L\, d_\mathcal{X}(x, x')^\alpha, \quad \forall x, x' \in \mathcal{X}.$$
It is worth noting that: (i) Lipschitz continuity is obtained from Hölder continuity for $\alpha = 1$; (ii) only constant functions are $(\alpha, L)$-HC for $\alpha > 1$; (iii) in bounded domains, the higher the value of $\alpha$, the more restrictive the condition.
Convolution. Let $f, g : \mathbb{R}^n \to \mathbb{R}$ be two functions; their convolution is defined for all $x \in \mathbb{R}^n$ as:
$$(f * g)(x) := \int_{\mathbb{R}^n} f(x - y)\, g(y)\, \mathrm{d}y = \int_{\mathbb{R}^n} f(y)\, g(x - y)\, \mathrm{d}y.$$
We introduce the following regularity assumption regarding probability measures.

Definition 1. A probability measure $\mathcal{L} \in \mathcal{P}(\mathbb{R}^n)$ is $L$-TV-Lipschitz continuous ($L$-TV-LC) if:
$$\mathrm{TV}(\mathcal{L}(\cdot + h), \mathcal{L}(\cdot)) \le L \|h\|_2, \quad \forall h \in \mathbb{R}^n.$$
Under this assumption, we can prove that convolution regularizes bounded, possibly irregular functions.

Proposition 1. Let $f : \mathbb{R}^n \to \mathbb{R}$ be a function such that $\|f\|_\infty \le M$, and let $\mathcal{L} \in \mathcal{P}(\mathbb{R}^n)$ be an $L$-TV-LC probability measure that admits a density function $\ell : \mathbb{R}^n \to \mathbb{R}_{\ge 0}$. Then, the convolution $f * \ell$ is $2LM$-Lipschitz continuous.
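The regularizing effect of Proposition 1 can be checked numerically. In the sketch below (ours), a discontinuous sign function ($M = 1$) is convolved with a Gaussian density; by Example 3 further below, a Gaussian with standard deviation $\sigma$ is TV-LC with $L = 1/(2\sigma)$, so the predicted Lipschitz bound is $2LM = 1/\sigma$.

```python
# Numerical check of Proposition 1: convolution smooths a bounded, discontinuous f.
import numpy as np
from scipy.stats import norm

dx = 1e-3
x = np.arange(-5, 5, dx)
f = np.sign(x)                          # bounded (M = 1) but not continuous
sigma = 0.5
kernel = norm(0, sigma).pdf(x)          # density of the smoothing law
smooth = np.convolve(f, kernel, mode="same") * dx   # discrete approximation of f * ell

emp_lip = np.max(np.abs(np.diff(smooth))) / dx      # empirical Lipschitz constant
print(emp_lip, 1 / sigma)               # empirical slope vs. theoretical bound 2LM
```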
Markov Decision Processes
A discrete-time discounted Markov Decision Process (MDP, Puterman 2014) is a 6-tuple $\mathcal{M} = (\mathcal{S}, \mathcal{A}, p, r, \gamma, \mu)$, where $\mathcal{S}$ and $\mathcal{A}$ are the measurable sets of states and actions, $p : \mathcal{S} \times \mathcal{A} \to \mathcal{P}(\mathcal{S})$ is the transition model that defines the probability measure $p(\cdot|s, a)$ of the next state when playing action $a \in \mathcal{A}$ in state $s \in \mathcal{S}$, $r : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ is the reward function defining the reward $r(s, a)$ obtained upon playing action $a \in \mathcal{A}$ in state $s \in \mathcal{S}$, $\gamma \in [0, 1)$ is the discount factor, and $\mu \in \mathcal{P}(\mathcal{S})$ is the initial-state distribution. The agent's behavior is modeled by a (Markovian stationary) policy $\pi : \mathcal{S} \to \mathcal{P}(\mathcal{A})$, which assigns a probability measure $\pi(\cdot|s)$ to the action to be taken in state $s \in \mathcal{S}$. With little abuse of notation, when the policy is deterministic, we denote with $\pi(s)$ the action played in state $s \in \mathcal{S}$. The execution of a policy determines a $\gamma$-discounted visitation distribution, defined for all $s \in \mathcal{S}$ as:
$$d^\pi(s) := (1 - \gamma) \sum_{t=0}^{+\infty} \gamma^t\, \mathbb{P}(s_t = s \,|\, \pi, \mu).$$
Value Functions
The state-action value function (or Q-function) quantifies the expected discounted sum of rewards obtained under a policy $\pi$, starting from a state $s \in \mathcal{S}$ and fixing the first action $a \in \mathcal{A}$:
$$Q^\pi(s, a) := \mathbb{E}_\pi \left[ \sum_{t=0}^{+\infty} \gamma^t\, r(s_t, a_t) \,\Big|\, s_0 = s,\, a_0 = a \right], \tag{1}$$
where $\mathbb{E}_\pi$ denotes the expectation w.r.t. the stochastic process $a_t \sim \pi(\cdot|s_t)$ and $s_{t+1} \sim p(\cdot|s_t, a_t)$ for all $t \in \mathbb{N}$.

Figure 1: Comparison between TV and Wasserstein distances for two Gaussian distributions $\mu$ and $\nu$. Left: $\mathrm{TV}(\mu, \nu) \approx 0.38$, $\mathcal{W}(\mu, \nu) = 1$; Center: $\mathrm{TV}(\mu, \nu) \approx 1$, $\mathcal{W}(\mu, \nu) = 1$; Right: $\mathrm{TV}(\mu, \nu) \approx 1$, $\mathcal{W}(\mu, \nu) = 0.4$.

The state value function (or V-function) is defined as $V^\pi(s) := \mathbb{E}_{a \sim \pi(\cdot|s)}[Q^\pi(s, a)]$, for all $s \in \mathcal{S}$. Given an initial-state distribution $\mu$, the expected return is defined as:
$$J^\pi := \mathbb{E}_{s \sim \mu}[V^\pi(s)] = \frac{1}{1 - \gamma}\, \mathbb{E}_{s \sim d^\pi,\, a \sim \pi(\cdot|s)}[r(s, a)].$$
Lipschitz MDPs
We now introduce notions that will allow us to characterize the smoothness of an MDP (Rachelson and Lagoudakis 2010). To this end, we assume that the state space $\mathcal{S}$ and the action space $\mathcal{A}$ are metric spaces endowed with the corresponding distance functions $d_\mathcal{S}$ and $d_\mathcal{A}$.

Assumption 1 (Lipschitz MDP). An MDP $\mathcal{M}$ is $(L_p, L_r)$-LC if, for all $(s, a), (s', a') \in \mathcal{S} \times \mathcal{A}$, it holds that:
$$\mathcal{W}(p(\cdot|s, a), p(\cdot|s', a')) \le L_p \left( d_\mathcal{S}(s, s') + d_\mathcal{A}(a, a') \right),$$
$$|r(s, a) - r(s', a')| \le L_r \left( d_\mathcal{S}(s, s') + d_\mathcal{A}(a, a') \right).$$

Assumption 2 (Lipschitz Policy). A (Markovian stationary) policy $\pi$ is $L_\pi$-LC if, for all $s, s' \in \mathcal{S}$, it holds that:
$$\mathcal{W}(\pi(\cdot|s), \pi(\cdot|s')) \le L_\pi\, d_\mathcal{S}(s, s').$$

Note that, if instead of the Wasserstein metric we had used the TV, these assumptions would be way more restrictive, not holding for deterministic environments/policies with continuous state-action spaces (Munos and Szepesvári 2008). Under Assumptions 1 and 2, provided that $\gamma L_p(1 + L_\pi) < 1$, the Q-function $Q^\pi$ is $L_Q$-LC with $L_Q \le \frac{L_r}{1 - \gamma L_p(1 + L_\pi)}$ (Rachelson and Lagoudakis 2010, Theorem 1).
Bound for Imitating Policies based on Wasserstein Distance
The high-level goal of this work is to find a theoretical guarantee for the imitator policies learned with BC. Specifically, we want to bound the difference in expected return $J^{\pi_E} - J^{\pi_I}$ between the imitator policy $\pi_I$ learned with BC and the expert policy $\pi_E$ in terms of a distributional divergence between the corresponding action distributions. The best-known result for this kind of analysis, in the case of discrete action spaces, is proved in (Xu, Li, and Yu 2020), and we report it below for completeness.²

Theorem 2 (Xu, Li, and Yu (2020), Theorem 1). Let $\pi_E$ be the expert policy and $\pi_I$ be the imitator policy. If $|r(s, a)| \le R_{\max}$ for all $(s, a) \in \mathcal{S} \times \mathcal{A}$, it holds that:
$$J^{\pi_E} - J^{\pi_I} \le \frac{2 R_{\max}}{(1 - \gamma)^2}\, \mathbb{E}_{s \sim d^{\pi_E}}\left[\mathrm{TV}(\pi_E(\cdot|s), \pi_I(\cdot|s))\right].$$
As anticipated, this result is not suitable for continuous action spaces, since the TV between different policies would take its maximum value 1 whenever one of the two policies is deterministic. The following example clarifies the issue.

Example 1. Suppose that the action space is a real space $\mathcal{A} \subseteq \mathbb{R}^n$ and that both the expert $\pi_E$ and the imitator $\pi_I$ policies are deterministic. A common way to perform BC is to minimize the mean squared error (MSE) between the expert's action and the imitator one. Suppose we are able to provide the following guarantee on the MSE, for some $\varepsilon > 0$:
$$\mathbb{E}_{s \sim d^{\pi_E}}\left[ \|\pi_E(s) - \pi_I(s)\|_2^2 \right] \le \varepsilon^2. \tag{2}$$
However, this condition provides no guarantee in TV. Indeed, by taking $\pi_I(s) = \pi_E(s) + \frac{\varepsilon}{\sqrt{n}} \mathbf{1}_n$, with $\mathbf{1}_n$ the vector of all 1s, Equation (2) is fulfilled, but we obtain:
$$\mathbb{E}_{s \sim d^{\pi_E}}\left[\mathrm{TV}(\pi_E(\cdot|s), \pi_I(\cdot|s))\right] = \mathbb{E}_{s \sim d^{\pi_E}}\left[\mathbb{1}\{\pi_E(s) \neq \pi_I(s)\}\right] = 1,$$
where $\mathbb{1}$ is the indicator function.
A Bound based on Wasserstein Distance
Even if the existing analysis of Xu, Li, and Yu (2020) cannot be applied in continuous action spaces, as shown in Example 1, it is not hard to leverage the regularity of the MDP to effectively bound the performance difference $J^{\pi_E} - J^{\pi_I}$.

Theorem 3. Let $\pi_E$ be the expert policy and $\pi_I$ be the imitator policy. If the state-action value function $Q^{\pi_I}$ of the imitator policy $\pi_I$ is $L_{Q^{\pi_I}}$-LC, then it holds that:
$$J^{\pi_E} - J^{\pi_I} \le \frac{L_{Q^{\pi_I}}}{1 - \gamma}\, \mathbb{E}_{s \sim d^{\pi_E}}\left[\mathcal{W}(\pi_I(\cdot|s), \pi_E(\cdot|s))\right].$$

Proof. Using the performance difference lemma (Kakade and Langford 2002), we have:
$$J^{\pi_E} - J^{\pi_I} = \frac{1}{1 - \gamma}\, \mathbb{E}_{s \sim d^{\pi_E}}\left[ \mathbb{E}_{a \sim \pi_E(\cdot|s)}\left[A^{\pi_I}(s, a)\right] \right],$$
where $A^{\pi_I}(s, a) = Q^{\pi_I}(s, a) - V^{\pi_I}(s)$ is the advantage function. The inner expectation can be written as:
$$\mathbb{E}_{a \sim \pi_E(\cdot|s)}\left[A^{\pi_I}(s, a)\right] = \int_\mathcal{A} Q^{\pi_I}(s, a)\, \left(\pi_E(\mathrm{d}a|s) - \pi_I(\mathrm{d}a|s)\right) \le \sup_{s \in \mathcal{S}} \|Q^{\pi_I}(s, \cdot)\|_L\, \mathcal{W}(\pi_E(\cdot|s), \pi_I(\cdot|s)),$$
where the inequality follows from the definition of the Wasserstein metric. The result is obtained by observing that $\sup_{s \in \mathcal{S}} \|Q^{\pi_I}(s, \cdot)\|_L \le \|Q^{\pi_I}\|_L \le L_{Q^{\pi_I}}$.
A similar bound was previously derived by (Pirotta, Restelli, and Bascetta 2015, Theorem 1) and (Asadi, Misra, and Littman 2018, Theorem 2). However, (Pirotta, Restelli, and Bascetta 2015) assume that the policy is LC w.r.t. a policy parametrization. Instead, the result of (Asadi, Misra, and Littman 2018) involves the transition model instead of the policy and requires a bound, uniform over $\mathcal{S} \times \mathcal{A}$, on the Wasserstein distance between the true and the estimated models. Let us now revisit Example 1 in light of Theorem 3.

Example 1 (continued). Under Equation (2), we can provide an effective guarantee on the Wasserstein distance:
$$\mathbb{E}_{s \sim d^{\pi_E}}\left[\mathcal{W}(\pi_E(\cdot|s), \pi_I(\cdot|s))\right] = \mathbb{E}_{s \sim d^{\pi_E}}\left[\|\pi_E(s) - \pi_I(s)\|_2\right] \le \mathbb{E}_{s \sim d^{\pi_E}}\left[\|\pi_E(s) - \pi_I(s)\|_2^2\right]^{1/2} \le \varepsilon,$$
where in the first inequality we used Jensen's inequality.
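The contrast between the two metrics in Example 1 can be verified in a few lines of code (ours); the expert and imitator below are hypothetical deterministic policies constructed exactly as in the adversarial example.

```python
# Small MSE gives no TV guarantee, but it does bound the Wasserstein distance,
# which for Dirac (deterministic) policies is just the Euclidean action gap.
import numpy as np

rng = np.random.default_rng(0)
n, eps = 4, 0.1
states = rng.normal(size=(1000, 3))
expert = lambda s: np.tanh(s.sum()) * np.ones(n)      # hypothetical expert policy
imitator = lambda s: expert(s) + eps / np.sqrt(n)     # adversarial imitator of Example 1

gaps = np.array([np.linalg.norm(expert(s) - imitator(s)) for s in states])
print("sqrt(MSE):", np.sqrt((gaps**2).mean()))   # = eps
print("E[TV]    :", float((gaps > 0).mean()))    # = 1 (policies never agree exactly)
print("E[W]     :", gaps.mean())                 # = eps
```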
Comparing Theorem 3 with Theorem 2, we no longer require the uniform bound $R_{\max}$ on the reward function, but we introduce an additional assumption on the regularity of the imitator Q-function $Q^{\pi_I}$. Clearly, we should find suitable assumptions under which $L_{Q^{\pi_I}}$ is finite. As we anticipated in Section 2.2, the only known result that provides such an estimate under the assumption of a Lipschitz MDP and a Lipschitz policy with the Wasserstein metric is (Rachelson and Lagoudakis 2010), where the authors proved that, if $\gamma L_p(1 + L_\pi) < 1$ is satisfied, $L_{Q^\pi}$ can be chosen as:
$$L_{Q^\pi} := \frac{L_r}{1 - \gamma L_p(1 + L_\pi)}. \tag{3}$$
However, we argue that the condition $\gamma L_p(1 + L_\pi) < 1$ is very demanding and often unrealistic. Indeed, to fulfill it, we need at least one of the following conditions to be satisfied: (i) $\gamma \ll 1$: in practice, this is almost always false, since the discount factor is often chosen to be close to 1; (ii) $L_p < 1$: this is a very unrealistic assumption, since it would make all the states shrink exponentially when the same actions are performed; (iii) $L_\pi \approx 0$: the action depends very little on the state, so that there is a very limited possibility of controlling the environment (this condition alone is not even sufficient).
The Tightness of the Value Function Lipschitz Constant
It is legitimate to question whether the value $L_{Q^\pi}$ of Equation (3), widely employed in the literature (e.g., Rachelson and Lagoudakis 2010; Pirotta, Restelli, and Bascetta 2015; Asadi, Misra, and Littman 2018), is a tight approximation of the Lipschitz semi-norm $\|Q^\pi\|_L$. In this section, we prove that the result cannot be improved, at least when requiring the Lipschitz continuity of the value function. Example 2 shows that the value function $Q^\pi$ can be made non-LC even when the MDP and the policy are LC, while Theorem 4 proves that a bound like the one of Theorem 3 cannot be obtained for a generic Lipschitz MDP and Lipschitz policies.

Example 2. Let $\mathcal{M}$ be an MDP and $\pi$ be a policy defined as follows, given the constants $L_p, L_r > 0$:
• $\mathcal{S} = [-1, 1]$;
• $\mathcal{A} = \{0\}$;
• The dynamics is deterministic. From every state $s \in \mathcal{S}$, performing action 0, the only possible one, the environment moves to the state $s' = \mathrm{clip}(L_p s, -1, 1)$.³ This means that $p(\mathrm{d}s'|s, a) = \delta_{\mathrm{clip}(L_p s, -1, 1)}(\mathrm{d}s')$;
• $r(s, a) = L_r s$;
• The initial-state distribution is $\mu = \mathrm{Uni}([0, 1])$ (not influential for the derivation that follows).
This MDP is $(L_p, L_r)$-LC and the policy has Lipschitz constant $L_\pi = 0$, since there is only one action. Equation (3) ensures that the state value function $V^\pi$ (which equals the state-action value function $Q^\pi$, since there is only one action) is LC with constant:
$$L_{V^\pi} = \frac{L_r}{1 - \gamma L_p}.$$
Since the state space is one-dimensional, we can compute the state value function $V^\pi$ exactly:
$$V^\pi(s) = L_r \sum_{k=0}^{+\infty} \gamma^k\, \mathrm{clip}\left(L_p^k s, -1, 1\right), \quad \forall s \in \mathcal{S}.$$
As shown in Figure 2 (left), the point of maximal slope is $s = 0$. Even if we have employed the specific values $L_p = 1.15$, $L_r = 1$, and $\gamma = 0.75$, it is simple to see that this property holds in general. Moreover, we have plotted in orange the line which passes through the origin, having slope equal to:
$$L_{V^\pi} = \frac{L_r}{1 - \gamma L_p} = \frac{1}{1 - 0.75 \cdot 1.15} \approx 7.27,$$
which is the tangent line to the state value function at $s = 0$, as can also be found analytically:
$$\frac{\partial V^\pi}{\partial s}(0) = L_r \sum_{k=0}^{+\infty} \gamma^k L_p^k = \frac{L_r}{1 - \gamma L_p}.$$
This means that, in this case, the choice of the Lipschitz constant provided by the theory (Equation 3) is actually tight. What happens if we cross the hard edge $\gamma L_p(1 + L_\pi) = \gamma L_p > 1$, where Equation (3) does not guarantee any property? For instance, by taking $L_p = 1.15$, $L_r = 1$, and $\gamma = 0.9$, we lose any Lipschitz property, finding a derivative which is unbounded, as shown in Figure 2 (right).
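The computation in this example is easy to reproduce; the sketch below (ours) evaluates the truncated series for $V^\pi$ and its incremental ratio at the origin for both choices of $\gamma$.

```python
# Exact value function of Example 2: V(s) = L_r * sum_k gamma^k clip(L_p^k s, -1, 1).
import numpy as np

def V(s, L_p, L_r, gamma, horizon=2000):
    k = np.arange(horizon)
    return L_r * np.sum(gamma**k * np.clip(L_p**k * s, -1.0, 1.0))

L_p, L_r = 1.15, 1.0
for gamma in (0.75, 0.9):
    h = 1e-6
    slope0 = (V(h, L_p, L_r, gamma) - V(0.0, L_p, L_r, gamma)) / h
    bound = L_r / (1 - gamma * L_p) if gamma * L_p < 1 else float("inf")
    print(gamma, slope0, bound)
# gamma = 0.75: the slope at 0 matches L_r / (1 - gamma * L_p) ~ 7.27;
# gamma = 0.9 : gamma * L_p > 1, the incremental ratio diverges as h -> 0,
#               so V is not Lipschitz continuous.
```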
Note that, in this example, we are able to find a non-LC state value function even in the apparently simple case of $\mathcal{A} = \{0\}$, where $L_\pi = 0$. Therefore, this example also shows that the dynamics of the system alone is enough to make the state value function irregular. Furthermore, the same example can be adapted to prove that, for a generic Lipschitz MDP and a pair of Lipschitz policies, a bound like the one of Theorem 3 cannot be obtained in general.

Theorem 4. There exist an $(L_p, L_r)$-LC MDP and an $L_\pi$-LC policy $\pi$ such that, for every finite constant $C > 0$ (even depending on $L_p$, $L_\pi$, and $L_r$), there exists an $L_\pi$-LC policy $\pi'$ such that:
$$J^\pi - J^{\pi'} \ge C\, \mathbb{E}_{s \sim d^\pi}\left[\mathcal{W}(\pi(\cdot|s), \pi'(\cdot|s))\right].$$

The proof is reported in Appendix B. If we set $\pi = \pi_E$ as the expert policy and $\pi' = \pi_I$ as an imitator policy, Theorem 4 shows that, even if the MDP and the policies are LC, we cannot, in general, upper bound the performance difference $J^{\pi_E} - J^{\pi_I}$ with the expected Wasserstein distance $\mathbb{E}_{s \sim d^{\pi_E}}[\mathcal{W}(\pi_E(\cdot|s), \pi_I(\cdot|s))]$. This is in line with the fact that, without additional assumptions, e.g., when $\gamma L_p(L_\pi + 1) < 1$ does not hold, Theorem 3 is vacuous. Therefore, these bounds cannot be improved within the framework of Lipschitz continuity; however, a weaker notion of regularity can be used to generalize the previous theorems.
Hölder Continuity is All We Need
In this section, we propose an approach to overcome the limitations of Lipschitz continuity discussed in the previous section. In Section 4.1, we show that the state-action value function $Q^\pi$ is always Hölder continuous, provided that the MDP and the policy are LC. Then, in Section 4.2, we apply these findings to BC, deriving a bound on the performance difference $J^{\pi_E} - J^{\pi_I}$ in terms of the Wasserstein distance that holds for every LC MDP and policy.
The Hölder Continuity of the Value Function
The first step to improve the result of (Rachelson and Lagoudakis 2010) is to observe that, as in Example 2, even when the value function is not Lipschitz continuous, it remains continuous. This observation is not, in principle, accounted for by the previous analysis, which provides no result when $\gamma L_p(1 + L_\pi) > 1$. This suggests that employing a notion of regularity that is stronger than continuity but weaker than Lipschitz continuity, such as Hölder continuity, might lead to an improvement of the analysis. Indeed, we are able to prove the following generalization.

Theorem 5 (Hölder continuity of the Q-function). Let $\mathcal{M}$ be an $(L_p, L_r)$-LC MDP, let $\pi$ be an $L_\pi$-LC policy, and let
$$0 < \alpha < \overline{\alpha} := \min\left\{ 1, \frac{-\log \gamma}{\log(L_p(1 + L_\pi))} \right\}.$$
If the state space $\mathcal{S}$ and the action space $\mathcal{A}$ admit finite diameters⁴ $\mathrm{diam}(\mathcal{S})$ and $\mathrm{diam}(\mathcal{A})$, respectively, then the state-action value function $Q^\pi$ is $(\alpha, L_{Q^\pi,\alpha})$-HC with a Hölder constant bounded by:
$$L_{Q^\pi,\alpha} := \frac{L_r\, (\mathrm{diam}(\mathcal{S}) + \mathrm{diam}(\mathcal{A}))^{1-\alpha}}{1 - \gamma\, (L_p(1 + L_\pi))^\alpha}.$$

⁴The diameter of a metric space $(\mathcal{X}, d_\mathcal{X})$ is defined as $\mathrm{diam}(\mathcal{X}) = \sup_{x, x' \in \mathcal{X}} d_\mathcal{X}(x, x')$.
The proof is reported in Appendix C. The requirement on the finiteness of the diameters of the state and action spaces is a mild condition. Indeed, it is common to assume that these spaces are bounded (or even compact), which implies a finite diameter. Furthermore, as commonly done in practice, one can easily re-scale the states and actions, modifying the diameter without altering the nature of the problem.⁵ Furthermore, we can easily obtain the Hölder constant of the state value function $V^\pi$.

Proposition 6 (Hölder continuity of the V-function). Let $\pi$ be an $L_\pi$-LC policy. If the state-action value function $Q^\pi$ is $(\alpha, L_{Q^\pi,\alpha})$-HC, then the corresponding state value function $V^\pi$ is $(\alpha, L_{V^\pi,\alpha})$-HC with:
$$L_{V^\pi,\alpha} := L_{Q^\pi,\alpha}\, (L_\pi + 1)^\alpha.$$
These results represent a generalization of those of (Rachelson and Lagoudakis 2010), which are obtained by setting $\alpha = 1$.
Moreover, Theorem 5 implies that the value function of an LC MDP and policy is always continuous, since any HC function is also continuous, regardless of its constants, as suggested by the previous example. Coming back to Example 2, we can perform further analyses.

Example 2 (continued). We can use Theorem 5 to provide an upper bound on the value function even when Lipschitz continuity does not hold. The critical exponent is given by:
$$\overline{\alpha} = \frac{-\log \gamma}{\log(L_p(1 + L_\pi))} \approx 0.72.$$
For every value of $\alpha < \overline{\alpha}$, the state value function $V^\pi$ is $(\alpha, L_{V^\pi,\alpha})$-HC. As we can see in Figure 2 (right), for small $\alpha$, the bound provided by $L_{V^\pi,\alpha} |s|^\alpha$ is tight for $s \to 1$.
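As a numerical sanity check (ours), the snippet below computes the critical exponent for the constants of this example and verifies empirically that $|V^\pi(s) - V^\pi(0)| / |s|^\alpha$ remains bounded for an exponent below the critical one.

```python
# Critical Hoelder exponent and an empirical check of Hoelder continuity.
import numpy as np

L_p, L_r, gamma, L_pi = 1.15, 1.0, 0.9, 0.0
alpha_bar = -np.log(gamma) / np.log(L_p * (1 + L_pi))
print("critical exponent:", alpha_bar)

def V(s, horizon=2000):
    k = np.arange(horizon)
    return L_r * np.sum(gamma**k * np.clip(L_p**k * s, -1.0, 1.0))

alpha = 0.7  # any exponent strictly below alpha_bar
ratios = [abs(V(s) - V(0.0)) / s**alpha for s in np.logspace(-8, 0, 9)]
print("max Hoelder ratio:", max(ratios))  # stays bounded, as Theorem 5 predicts
```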
A More General Bound based on Wasserstein Distance
Similarly to what we have done in Section 3, to a regularity result we can associate a result about the loss of BC, bounding the difference in performance between two policies with their Wasserstein distance. Indeed, thanks to Theorem 5, we can prove the following bound.

Theorem 7 (Optimal Error Rate for BC). Let $\pi_E$ be the expert policy and $\pi_I$ be the imitator policy. If the state-action value function $Q^{\pi_I}$ of the imitator policy $\pi_I$ is $(\alpha, L_{Q^{\pi_I},\alpha})$-HC, then it holds that:
$$J^{\pi_E} - J^{\pi_I} \le \frac{L_{Q^{\pi_I},\alpha}}{1 - \gamma}\, \mathbb{E}_{s \sim d^{\pi_E}}\left[\mathcal{W}(\pi_E(\cdot|s), \pi_I(\cdot|s))^\alpha\right].$$
Furthermore, if the MDP $\mathcal{M}$ is $(L_p, L_r)$-LC and the imitator policy $\pi_I$ is $L_{\pi_I}$-LC, the bound is tight as far as the exponent $\alpha$ is concerned, which cannot be improved above the critical value $\overline{\alpha}$ of Theorem 5.
The proof is reported in Appendix C. This is a remarkable result, since it is valid for every LC MDP and policy. As expected, a low value of $\alpha$ leads to a looser bound. Unfortunately, this bound, despite being tight in the exponent, is difficult to manage in practice. Indeed, in order to minimize the right-hand side, $\mathbb{E}_{s \sim d^{\pi_E}}[\mathcal{W}(\pi_E(\cdot|s), \pi_I(\cdot|s))^\alpha]$, one should know the value of $\alpha$ in advance. However, $\alpha \le \overline{\alpha}$ depends on the Lipschitz constants of the environment and of the policy, which are usually unknown. Therefore, no imitation learning algorithm can be trained to minimize this error explicitly. Fortunately, by weakening this result, we can obtain a more practical guarantee. Since $0 < \alpha < 1$, we can apply Jensen's inequality to obtain:
$$J^{\pi_E} - J^{\pi_I} \le \frac{L_{Q^{\pi_I},\alpha}}{1 - \gamma}\, \mathbb{E}_{s \sim d^{\pi_E}}\left[\mathcal{W}(\pi_E(\cdot|s), \pi_I(\cdot|s))\right]^\alpha. \tag{4}$$
In this formulation, we minimize the expected Wasserstein distance only, and the knowledge of $\alpha$ is not needed, but its value impacts the kind of guarantee we can provide.

Remark 1. If we perform BC in an $(L_p, L_r)$-LC MDP and with an $L_{\pi_I}$-LC imitator policy $\pi_I$, the best possible performance guarantee (from Equation 4) is given by:
$$J^{\pi_E} - J^{\pi_I} \le O(\varepsilon^\alpha),$$
where $\varepsilon$ is the square root of the imitation MSE, i.e., $\varepsilon^2 = \mathbb{E}_{s \sim d^{\pi_E}}[\|\pi_E(s) - \pi_I(s)\|_2^2]$ as defined in Example 1, and $\alpha < \overline{\alpha} = \frac{-\log \gamma}{\log(L_p(1 + L_{\pi_I}))}$ is the critical exponent. Therefore, a very low value of $\alpha$, corresponding to a lack of regularity, can badly affect the possibility of learning a good imitator policy.
Noise Injection
BC may struggle when the regularity assumptions are lacking. However, in practice, using a noisy expert policy may significantly help the learning process (Laskey et al. 2017b). This empirical benefit is justified by the intuition that noise helps in exploring the neighborhood of the expert trajectories. In this section, we formulate this empirical evidence in a mathematically rigorous way. Indeed, we show how to break the barrier enforced by Theorem 7, whose result is obtained with a deterministic expert. Clearly, these advantages come at the price that a noisy expert might experience a loss in expected return compared to the deterministic one.
Noise Injection: a Mathematical Formulation
The simplest form of noise injection is realized by adding to the expert's action $a_{t,E}$ a noise component $\eta_t$. In particular, assuming that the action space is real, i.e., $\mathcal{A} \subseteq \mathbb{R}^n$, we have, for all $t \in \mathbb{N}$:
$$a_{t,E} \sim \pi_E(\cdot|s_t), \qquad \eta_t \overset{\text{i.i.d.}}{\sim} \mathcal{L}, \qquad a_t = a_{t,E} + \eta_t, \tag{5}$$
where $\{\eta_t\}_{t \in \mathbb{N}}$ is a noise sequence whose components are independent of each other and of the sequences of states and actions, and identically distributed according to the law $\mathcal{L} \in \mathcal{P}(\mathcal{A})$. If $\mathcal{L}$ admits a density function, we can express the density function of the played action $a_t$ as the convolution of the expert policy density $\pi_E$ and the density function of the noise law $\mathcal{L}$. Note that the formalization in Equation (5) encompasses distributions that do not correspond to the intuitive idea of noise (e.g., when $\mathcal{L}$ is a discrete law). To obtain a meaningful result, we enforce the following assumption.

Assumption 3. The law of the noise $\mathcal{L}$ admits a density function $\ell : \mathbb{R}^n \to \mathbb{R}_{\ge 0}$ w.r.t. a reference measure and is TV-LC (see Definition 1) with constant $L_\ell$. Under this assumption, denoting with $\pi_{E,\ell}$ the policy with noise injection, i.e., $a_t \sim \pi_{E,\ell}(\cdot|s_t)$, we have that:
$$\pi_{E,\ell}(a|s) = \int_{\mathbb{R}^n} \pi_E(a'|s)\, \ell(a - a')\, \mathrm{d}a', \quad \forall (s, a) \in \mathcal{S} \times \mathcal{A}.$$
This represents the convolution of the policy density function π E and the noise density function . In other words, this shows that the action taken by the expert policy a E,t is averaged over the noise probability distribution.
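As a small self-contained illustration (ours, not from the paper), the convolution identity can be checked by Monte Carlo: sampling the expert action and the noise independently and summing them yields samples from π_{E,ϱ}. Here a scalar Gaussian stands in for π_E(·|s), so the convolution is again Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)
a_expert = rng.normal(0.3, 0.1, size=200_000)  # stand-in for pi_E(.|s)
eta = rng.normal(0.0, 0.2, size=200_000)       # noise law L = N(0, 0.2^2)
a_noisy = a_expert + eta                       # samples from pi_{E,rho}(.|s)

# Gaussian convolved with Gaussian is Gaussian: N(0.3, 0.1^2 + 0.2^2).
assert abs(a_noisy.mean() - 0.3) < 5e-3
assert abs(a_noisy.std() - np.hypot(0.1, 0.2)) < 5e-3
```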
Assumption 3 covers the most common types of noise, like the Gaussian or the uniform ones. In fact, we can prove that every univariate unimodal distribution satisfies Definition 1 (see Proposition 9 in Appendix A). Considering multivariate Gaussian noise, we can derive the constant L_ϱ directly.
Example 3. Suppose the noise is sampled from a zero-mean Gaussian distribution N(0, Σ) with covariance matrix Σ; the previous integral writes, for all (s,a) ∈ S × A:
$$\pi_{E,\varrho}(a|s) = \int_{\mathbb{R}^n} \pi_E(a'|s)\, \underbrace{\frac{e^{-\frac{1}{2}(a-a')^{\top}\Sigma^{-1}(a-a')}}{(2\pi)^{n/2}\det(\Sigma)^{1/2}}}_{\varrho(a-a')}\, da',$$
where we recognize the n-variate Gaussian density ϱ. Assumption 3 is verified since, for h ∈ R^n:
$$TV\big(\mathcal{N}(h,\Sigma), \mathcal{N}(0,\Sigma)\big) \le \sqrt{\tfrac{1}{2}\, KL\big(\mathcal{N}(h,\Sigma), \mathcal{N}(0,\Sigma)\big)} = \tfrac{1}{2}\,\|h\|_{\Sigma^{-1}} \le \frac{1}{2\sqrt{s_{\min}(\Sigma)}}\,\|h\|_2,$$
where we used Pinsker's inequality, and s_min(·) denotes the minimum singular value of a matrix. In particular, if Σ is diagonal, equal to σ²I, we have L_ϱ = 1/(2σ). It is worth noting that, in the diagonal covariance case, L_ϱ is proportional to σ^{-1}. This suggests that the smaller the impact of the noise L, i.e., the smaller the standard deviation σ, the larger the constant L_ϱ. Indeed, as σ decreases, the regularization effect of the noise becomes less relevant (in the limit σ → 0, noise injection vanishes).
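As a numerical sanity check of the constant L_ϱ = 1/(2σ) in the scalar case (our illustration, not part of the paper), one can use the closed form TV(N(h,σ²), N(0,σ²)) = 2Φ(|h|/(2σ)) − 1 for univariate Gaussians and verify the bound TV ≤ |h|/(2σ):

```python
import numpy as np
from scipy.stats import norm

def tv_shifted_gaussians(h, sigma):
    """TV(N(h, sigma^2), N(0, sigma^2)) in one dimension (closed form)."""
    return 2.0 * norm.cdf(abs(h) / (2.0 * sigma)) - 1.0

for sigma in (0.1, 0.5, 2.0):
    for h in (1e-3, 0.1, 1.0, 10.0):
        # TV-LC with constant L_rho = 1/(2*sigma), as derived above
        assert tv_shifted_gaussians(h, sigma) <= h / (2.0 * sigma) + 1e-12
```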
A Bound based on Wasserstein Distance for Noise Injection
We are now able to prove a performance guarantee for BC with noise injection. The idea is based on a simple yet interesting fact: we can use the noise to smooth a bounded function, as in Proposition 1. Applying this approach to the state-action value function leads to the following result.
Figure 3: The performance of the expert J^π as a function of the standard deviation of the noise σ. The performance is measured on 40 episodes in each environment, repeated for 20 different random seeds (the shading represents the 95% non-parametric c.i.).
Theorem 8. Let π_E be the expert policy and π_I be the imitator policy. Let us suppose that we have injected a noise with density function ϱ, satisfying Assumption 3, to obtain a noisy expert π_{E,ϱ} and a noisy imitator π_{I,ϱ}. If |Q^{π_I}(s,a)| ≤ Q_max for all (s,a) ∈ S × A, it holds that:
$$J^{\pi_{E,\varrho}} - J^{\pi_{I,\varrho}} \le \frac{2 L_\varrho Q_{\max}}{1-\gamma}\, \mathbb{E}_{s\sim d^{\pi_{E,\varrho}}}\big[W\big(\pi_E(\cdot|s), \pi_I(\cdot|s)\big)\big].$$
The proof is reported in Appendix D. Some observations are in order. First, note the similarity with Theorem 3, with the only difference being the substitution of L_{Q^π} with 2L_ϱ Q_max. Second, we require no smoothness assumption (e.g., Lipschitz continuity) on the environment or on the policy. Yet, if in the previous result of Theorem 3 the constant L_{Q^π} could easily become infinite, now the constant 2L_ϱ Q_max/(1−γ) can be easily bounded by 2L_ϱ R_max/(1−γ)², since Q_max ≤ R_max/(1−γ). From an intuitive perspective, the need for smoothness in the environment is replaced by an assumption on the density function of the noise. Lastly, we can see that, in the right-hand side of the formula, the error is measured by the Wasserstein distance between the non-noisy policies. This is advisable, since it implies that the intrinsic error due to the noise does not affect the bound except through the γ-discounted visitation distribution. We show in the Appendix that this quantity is always smaller than its counterpart involving the noisy policies.
Remark 2. If we perform BC injecting a noise η_t with density function ϱ satisfying Assumption 3, we have the following performance guarantee:
$$J^{\pi_{E,\varrho}} - J^{\pi_{I,\varrho}} \le O(\varepsilon),$$
where ε is the square root of the imitation MSE, as in Remark 1.
In comparison with Remark 1 for standard BC, we can appreciate that, here, the exponent α has disappeared: we have a performance bound that decreases linearly in ε. In many cases, when the environment is not intrinsically very smooth, or the expert policy is irregular, the α parameter can be very small, slowing down the convergence significantly. A linear decay is therefore a relevant improvement of the convergence speed. Furthermore, as already noted, no regularity assumption is required in Theorem 8, so the last result has a much wider range of applications.
Practical Considerations
In the previous sections, we have seen that the use of noise injection allows for a much better performance guarantee than standard BC (see Remarks 1 and 2). Still, in practice, what matters is to have an imitator policy that is good in itself, rather than an imitator that is simply good at mimicking a given policy. Therefore, if with noise injection we negatively affect the performance of the expert, i.e., if J^{π_{E,ϱ}} ≪ J^{π_E}, the results given about noise injection could become useless. On the contrary, we argue that adding noise to the expert's action, to a certain extent, does not particularly affect performance. In Figure 3, we show the results of testing this statement on some of the most common continuous-action environments of the OpenAI gym (Brockman et al. 2016) library. In this simulation, we first train an expert policy with the well-known DDPG (Lillicrap et al. 2015), TD3 (Fujimoto, Hoof, and Meger 2018) and PPO (Schulman et al. 2017) algorithms in the following OpenAI gym environments:
• Pendulum-v0: this environment has a continuous action space [−2, 2]. The objective is to apply torque on a pendulum to swing it into an upright position. The whole system is very regular, as it is governed by simple differential equations, and is also deterministic, except for the initial position of the pendulum, which is random.
• LunarLanderContinuous-v2: this environment has a continuous action space [−1, 1]². Here, we have to make a rocket land safely on a landing pad. The dynamics is quite complex, and stochasticity is present to simulate the effect of the wind.
• BipedalWalker-v3: this environment has a continuous action space [−1, 1]⁴. Here we have to make a bipedal robot walk. The dynamics is even more complex, but the whole system is deterministic.
Then, we evaluated the performance of these experts with injected Gaussian noise of different standard deviations. As we can see in Figure 3, even when the noise increases until it is close to the radius of the action space, in at least seven cases out of nine the performance does not suffer significant drops. Intuitively, this can be explained by the fact that we applied an i.i.d. zero-mean noise sequence that is independent of the state and the action. Thus, its effect does not accumulate over the horizon.
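To make the evaluation protocol concrete, here is a minimal sketch (ours, not the authors' code) of how the noisy return J^{π_{E,ϱ}} can be estimated; `expert_act` is a hypothetical placeholder for any trained deterministic policy (e.g., from DDPG, TD3 or PPO), and the snippet assumes the classic gym step API:

```python
import numpy as np
import gym

def noisy_return(env_name, expert_act, sigma, episodes=40, seed=0):
    """Average return of a deterministic expert whose actions are perturbed
    by i.i.d. zero-mean Gaussian noise with standard deviation sigma (Eq. 5)."""
    env = gym.make(env_name)
    env.seed(seed)
    low, high = env.action_space.low, env.action_space.high
    rng = np.random.default_rng(seed)
    totals = []
    for _ in range(episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            a = expert_act(obs) + rng.normal(0.0, sigma, size=env.action_space.shape)
            obs, r, done, _ = env.step(np.clip(a, low, high))  # clip to the action box
            total += r
        totals.append(total)
    return float(np.mean(totals))
```

Sweeping sigma over a grid and averaging over several seeds would reproduce curves like those in Figure 3.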
Conclusions
In this paper, we have addressed BC in the case of continuous-action environments from a theoretical perspective. We have shown that the existing theoretical guarantees on BC are not suitable when dealing with continuous actions. Thus, we have derived a first bound for the performance guarantees, under the assumption that the imitator value function is Lipschitz continuous. Since this latter assumption is demanding (i.e., it is not guaranteed even when the underlying MDP and policy are LC), we have relaxed it by studying the continuity properties of the value function. As a result of independent interest, we have proved that the value function is always Hölder continuous, under the milder assumption that the underlying MDP and policy are LC. Then, we have applied these findings to obtain a general bound for the performance gap of BC, which we have proved to be tight. Finally, we have formalized noise injection and we have shown the advantages of this practice when applied to BC.
A General Math
Proposition 1. Let f: R^n → R be a function such that ‖f‖_∞ ≤ M, and let L ∈ P(R^n) be an L-TV-LC probability measure that admits a density function ϱ: R^n → R_{≥0}. Then, the convolution f ∗ ϱ is 2LM-Lipschitz continuous.
Proof. For every x, y ∈ X, we have:
$$\big|(f * \varrho)(x) - (f * \varrho)(y)\big| = \left|\int_{\mathbb{R}^n} f(z)\,\varrho(x-z)\, dz - \int_{\mathbb{R}^n} f(z)\,\varrho(y-z)\, dz\right| = \left|\int_{\mathbb{R}^n} f(z)\big(\varrho(x-z) - \varrho(y-z)\big)\, dz\right|;$$
since ‖f‖_∞ ≤ M by assumption, the result is bounded by:
$$M \int_{\mathbb{R}^n} \big|\varrho(x-z) - \varrho(y-z)\big|\, dz = 2M\, TV\big(\mathcal{L}(x-\cdot), \mathcal{L}(y-\cdot)\big) \le 2LM\, \|x-y\|_2,$$
where the last passage follows from the definition of L-TV-LC.
Proposition 9. Let f be any univariate density function. Then, if f is unimodal (i.e., there is x₀ such that f(x) is nondecreasing on (−∞, x₀] and nonincreasing on [x₀, +∞)), f is TV-LC with L = 2 sup_{x∈R} f(x).
Proof. We first prove the result for functions f that are Lipschitz with constant L_f:
$$TV\big(f(\cdot+h), f(\cdot)\big) = \int_{\mathbb{R}} |f(x+h) - f(x)|\, dx = \int_{-\infty}^{x_0-h} |f(x+h) - f(x)|\, dx + \int_{x_0-h}^{x_0} |f(x) - f(x+h)|\, dx + \int_{x_0}^{\infty} |f(x) - f(x+h)|\, dx.$$
At this point, we know that, f being continuous, there is a point x̄ ∈ [x₀ − h, x₀) such that f(x̄) = f(x̄ + h). Therefore, by Lipschitzness, in the whole interval [x₀ − h, x₀) we have |f(x) − f(x+h)| ≤ 2L_f h. Substituting in the previous formula, we have:
$$\le \int_{-\infty}^{x_0-h} |f(x+h) - f(x)|\, dx + 2L_f h^2 + \int_{x_0}^{\infty} |f(x) - f(x+h)|\, dx.$$
Now, note that:
• in the interval (−∞, x₀ − h] the function f(x+h) − f(x) is nonnegative;
• in the interval [x₀, +∞) the function f(x+h) − f(x) is nonpositive.
Therefore, the previous integral writes:
$$\int_{-\infty}^{x_0-h} \big(f(x+h) - f(x)\big)\, dx + 2L_f h^2 + \int_{x_0}^{\infty} \big(f(x) - f(x+h)\big)\, dx \le 2L_f h^2 + \int_{x_0-h}^{x_0} f(x)\, dx + \int_{x_0}^{x_0+h} f(x)\, dx \le 2h \sup f + 2L_f h^2.$$
This being valid for every h, we can also write, by the triangle inequality, for every K ∈ N:
$$TV\big(f(\cdot+h), f(\cdot)\big) \le \sum_{k=0}^{K-1} TV\!\left(f\!\left(\cdot + h\tfrac{k}{K}\right), f\!\left(\cdot + h\tfrac{k+1}{K}\right)\right) \le \sum_{k=0}^{K-1}\left(2\sup f\, \frac{h}{K} + 2L_f \frac{h^2}{K^2}\right) = 2h\sup f + \frac{2L_f h^2}{K}.$$
This entails, taking K → ∞, that TV(f(·+h), f(·)) ≤ 2h sup f.
Let us come to the general case. Let N ∈ N. As Lipschitz functions are dense in L¹(R), taking ε = h/N, we know that there is a Lipschitz function f_ε satisfying unimodality, with sup f_ε = sup f, and such that TV(f, f_ε) < ε. Therefore, TV(f(·+h), f(·)) ≤ 2ε + TV(f_ε(·+h), f_ε(·)) ≤ 2ε + 2h sup f = 2h(sup f + 1/N). Taking N → ∞, we get the result.
B Negative Result
Theorem 4. There exist an (L_p, L_r)-LC MDP and an L_π-LC policy π such that, for every finite constant C > 0 (even depending on L_p, L_π, and L_r), there exists an L_π-LC policy π′ such that:
$$J^{\pi} - J^{\pi'} \ge C\, \mathbb{E}_{s\sim d^{\pi}}\big[W\big(\pi(\cdot|s), \pi'(\cdot|s)\big)\big].$$
Proof. In order to show the result, we will use a similar MDP to the one defined in Example 2 of the main paper. Then, we will take π = π*, the optimal policy, which is constant equal to zero (so L_π = 0), and, as π′, a sequence of policies π_n such that:
$$\lim_{n\to+\infty} \frac{J^* - J^{\pi_n}}{\mathbb{E}_{s\sim d^{\pi_n}}\big[W\big(\pi^*(\cdot|s), \pi_n(\cdot|s)\big)\big]} = +\infty.$$
In this way, for any finite constant C > 0, there will be an n > 0 such that, choosing π′ = π_n, we have the result. Let M be an MDP defined as follows:
• S = [0, 1];
• A = [0, 1];
• The dynamics is deterministic: from every state s ∈ S, performing action a ∈ A, we go to the state s′ = clip(L_p(s + a), 0, 1). This means that p(ds′|s,a) = δ_{clip(L_p(s+a),0,1)}(ds′);
• r(s, a) = −L_r s with L_r > 0;
• The initial state is 0 (i.e., µ = δ₀);
• The discount factor is γ.
Since the reward is always negative, the optimal policy π* is identically zero, since it is the only one allowing to remain at the origin. Consider instead the sequence of policies, for n ∈ N, defined as π_n(·|s) = δ_{1/n}(·). Both π* and π_n are Lipschitz with L_π = 0, and we have W(π*(·|s), π_n(·|s)) = 1/n in every state s ∈ S. Nonetheless, we can see that:
$$V^*(0) = 0, \qquad V^{\pi_n}(0) \le -L_r \sum_{k=0}^{\infty} \gamma^k\, \mathrm{clip}\!\left(\frac{L_p^k}{n},\, 0,\, 1\right).$$
When n → ∞, substituting δ := 1/n, we find:
$$\lim_{n\to+\infty} \frac{J^* - J^{\pi_n}}{\mathbb{E}_{s\sim d^{\pi_n}}\big[W\big(\pi^*(\cdot|s), \pi_n(\cdot|s)\big)\big]} \ge \lim_{\delta\to 0^+} \frac{L_r \sum_{k=0}^{\infty} \gamma^k\, \mathrm{clip}\big(\delta L_p^k, 0, 1\big)}{\delta} = L_r \sum_{k=0}^{\infty} \gamma^k L_p^k = \frac{L_r}{1-\gamma L_p},$$
which gives +∞ for L_p ≥ 1/γ.
C Proofs with Hölder continuity
Before proving the main result, we need two technical lemmas.
Lemma 1. Let (X, d_X), (Y, d_Y) be two metric spaces, and let {f_n}_{n∈N} be a sequence of functions X → Y that are (α, L)-HC and such that lim_{n→+∞} f_n = f. Then, f is (α, L)-HC.
Proof. Let x, x′ ∈ X. Fix ε > 0. Let:
$$n_0 := \inf\Big\{n : \|f_n - f\|_\infty := \sup_{x''\in\mathcal{X}} d_Y\big(f_n(x''), f(x'')\big) \le \varepsilon L\, d_X(x,x')^{\alpha}\Big\}.$$
Note that n₀ exists by definition. Then, we have:
$$d_Y\big(f(x), f(x')\big) \le d_Y\big(f(x), f_{n_0}(x)\big) + d_Y\big(f_{n_0}(x), f_{n_0}(x')\big) + d_Y\big(f(x'), f_{n_0}(x')\big) \le d_Y\big(f_{n_0}(x), f_{n_0}(x')\big) + 2\varepsilon L\, d_X(x,x')^{\alpha} \le (2\varepsilon + 1)L\, d_X(x,x')^{\alpha}.$$
Since this is valid for every ε > 0, we also have d_Y(f(x), f(x′)) ≤ L d_X(x,x′)^α, which is the thesis.
Proposition 10. Let µ, ν ∈ P(X) be two probability measures on a metric space (X, d_X). Then, for any (α, L)-HC function f, it holds that:
$$\int_{\mathcal{X}} \big(\mu(dx) - \nu(dx)\big) f(x) \le L\, W(\mu,\nu)^{\alpha}.$$
For this proof, we thank an interesting Stack Exchange discussion (Yuval 2022).
Proof. Let K(µ, ν) denote the space of couplings of µ and ν, i.e., Borel probability measures on X × X that project to µ in the first coordinate and to ν in the second coordinate. Recall that:
$$W(\mu,\nu) = \inf_{\lambda\in K(\mu,\nu)} \left\{\int_{\mathcal{X}\times\mathcal{X}} d_X(x,y)\, \lambda(dx,dy)\right\}.$$
Suppose that, ∀x, y ∈ X, we have |f(x) − f(y)| ≤ L d_X(x,y)^α, where 0 < α < 1. Then, for any λ ∈ K(µ,ν), we have:
$$\left|\int_{\mathcal{X}} \big(\mu(dx) - \nu(dx)\big) f(x)\right| \le \int_{\mathcal{X}\times\mathcal{X}} |f(x) - f(y)|\, \lambda(dx,dy) \le L \int_{\mathcal{X}\times\mathcal{X}} d_X(x,y)^{\alpha}\, \lambda(dx,dy) \le L\left(\int_{\mathcal{X}\times\mathcal{X}} d_X(x,y)\, \lambda(dx,dy)\right)^{\alpha},$$
where the last inequality is an application of Hölder's inequality to the functions (x,y) ↦ d_X(x,y)^α and the constant 1, with exponents p = 1/α and q = 1/(1−α). Alternatively, the last inequality can be obtained from an application of Jensen's inequality to the convex function t ↦ t^{1/α} on [0, ∞). Taking the infimum over λ ∈ K(µ,ν) gives:
$$\left|\int_{\mathcal{X}} \big(\mu(dx) - \nu(dx)\big) f(x)\right| \le L\, W(\mu,\nu)^{\alpha},$$
as required.
Proposition 6 (Hölder continuity of the V-function). Let π be an L_π-LC policy. If the state-action value function Q^π is (α, L_{Q^π,α})-HC, then the corresponding state value function V^π is (α, L_{V^π,α})-HC with L_{V^π,α} := L_{Q^π,α}(L_π + 1)^α.
Proof. By definition, for every s ∈ S:
$$V^\pi(s) = \int_{\mathcal{A}} Q^\pi(s,a)\,\pi(da|s).$$
Therefore, by introducing suitable Dirac deltas, we have, for s, s′ ∈ S:
$$|V^\pi(s) - V^\pi(s')| = \left|\int_{\mathcal{A}} Q^\pi(s,a)\pi(da|s) - \int_{\mathcal{A}} Q^\pi(s',a)\pi(da|s')\right| = \left|\int_{\mathcal{A}}\int_{\mathcal{S}} \big(\delta_s(dz)\pi(da|s) - \delta_{s'}(dz)\pi(da|s')\big)\, Q^\pi(z,a)\right|.$$
Now, thanks to Proposition 10, we have, for s, s′ ∈ S:
$$|V^\pi(s) - V^\pi(s')| \le L_{Q^\pi,\alpha}\, W\big(\delta_s(\cdot)\pi(\cdot|s),\, \delta_{s'}(\cdot)\pi(\cdot|s')\big)^{\alpha} \le L_{Q^\pi,\alpha}\,(1+L_\pi)^{\alpha}\, d_S(s,s')^{\alpha},$$
where the last passage follows from the following manipulation of the Wasserstein distance:
$$W\big(\delta_s(\cdot)\pi(\cdot|s), \delta_{s'}(\cdot)\pi(\cdot|s')\big) \le W\big(\delta_s(\cdot)\pi(\cdot|s), \delta_s(\cdot)\pi(\cdot|s')\big) + W\big(\delta_s(\cdot)\pi(\cdot|s'), \delta_{s'}(\cdot)\pi(\cdot|s')\big).$$
Concerning the first term, we have:
$$W\big(\delta_s(\cdot)\pi(\cdot|s), \delta_s(\cdot)\pi(\cdot|s')\big) = \sup_{\|f\|_L\le 1}\left|\int_{\mathcal{S}\times\mathcal{A}} \big(\delta_s(dz)\pi(da|s) - \delta_s(dz)\pi(da|s')\big) f(z,a)\right| = \sup_{\|f\|_L\le 1}\left|\int_{\mathcal{A}} \big(\pi(da|s) - \pi(da|s')\big) f(s,a)\right| \le \sup_{\|f\|_L\le 1} \|f(s,\cdot)\|_L \cdot W\big(\pi(\cdot|s), \pi(\cdot|s')\big) \le W\big(\pi(\cdot|s), \pi(\cdot|s')\big) \le L_\pi\, d_S(s,s'),$$
having observed that ‖f(s,·)‖_L ≤ ‖f‖_L ≤ 1. Concerning the second term, we have:
$$W\big(\delta_s(\cdot)\pi(\cdot|s'), \delta_{s'}(\cdot)\pi(\cdot|s')\big) = \sup_{\|f\|_L\le 1}\left|\int_{\mathcal{S}\times\mathcal{A}} \big(\delta_s(dz) - \delta_{s'}(dz)\big)\pi(da|s')\, f(z,a)\right| = \sup_{\|f\|_L\le 1}\left|\int_{\mathcal{S}} \big(\delta_s(dz) - \delta_{s'}(dz)\big) \int_{\mathcal{A}} \pi(da|s')\, f(z,a)\right| \le W\big(\delta_s(\cdot), \delta_{s'}(\cdot)\big) \le d_S(s,s'),$$
where we observed that the Lipschitz semi-norm ‖∫_A π(da|s′) f(·,a)‖_L is bounded by 1 since, for z, z′ ∈ S:
$$\left|\int_{\mathcal{A}} \pi(da|s')\, f(z,a) - \int_{\mathcal{A}} \pi(da|s')\, f(z',a)\right| \le \int_{\mathcal{A}} \pi(da|s')\, \big|f(z,a) - f(z',a)\big| \le d_S(z,z').$$
Theorem 5 (Hölder continuity of the Q-function). Let M be an (L_p, L_r)-LC MDP, let π be an L_π-LC policy, and let 0 < α < ᾱ := min{1, −log γ / log(L_p(1 + L_π))}. If the state space S and the action space A admit finite diameters⁷ diam(S) and diam(A), respectively, then the state-action value function Q^π is (α, L_{Q^π,α})-HC with a Hölder constant bounded by:
$$L_{Q^\pi,\alpha} := \frac{L_r\big(\mathrm{diam}(\mathcal{S}) + \mathrm{diam}(\mathcal{A})\big)^{1-\alpha}}{1 - \gamma\big(L_p(1 + L_\pi)\big)^{\alpha}}.$$
⁷ The diameter of a metric space (X, d_X) is defined as diam(X) = sup_{x,x′∈X} d_X(x, x′).
Proof. Consider the following sequences of functions, for n ∈ N:
$$\begin{cases} Q_0(s,a) = 0 \\ Q_{n+1}(s,a) = r(s,a) + \gamma \int_{\mathcal{S}} V_n(s')\, p(ds'|s,a) \end{cases} \qquad \begin{cases} V_0(s) = 0 \\ V_{n+1}(s) = \int_{\mathcal{A}} Q_{n+1}(s,a)\, \pi(da|s) \end{cases}$$
We want to prove, by induction, that Q_n is always (α, L_{Q^π_n,α})-HC. The base case n = 0 is trivial, since a constant function is Hölder continuous for every couple of parameters. Now, let us suppose that Q_n is (α, L_{Q^π_n,α})-HC. Then, for s₁, s₂ ∈ S and a₁, a₂ ∈ A, we have:
$$|Q_{n+1}(s_1,a_1) - Q_{n+1}(s_2,a_2)| = \left|r(s_1,a_1) - r(s_2,a_2) + \gamma \int_{\mathcal{S}} V_n(s')\big(p(ds'|s_1,a_1) - p(ds'|s_2,a_2)\big)\right| \le |r(s_1,a_1) - r(s_2,a_2)| + \gamma\left|\int_{\mathcal{S}} V_n(s')\big(p(ds'|s_1,a_1) - p(ds'|s_2,a_2)\big)\right|.$$
Now, we consider one term at a time. Concerning the first term, recalling that 0 < α ≤ 1 and that the reward function is L_r-LC, we have:
$$|r(s_1,a_1) - r(s_2,a_2)| \le L_r\big(d_S(s_1,s_2) + d_A(a_1,a_2)\big) = L_r\big(d_S(s_1,s_2) + d_A(a_1,a_2)\big)^{\alpha}\big(d_S(s_1,s_2) + d_A(a_1,a_2)\big)^{1-\alpha} \le L_r\big(\mathrm{diam}(\mathcal{S}) + \mathrm{diam}(\mathcal{A})\big)^{1-\alpha}\big(d_S(s_1,s_2) + d_A(a_1,a_2)\big)^{\alpha},$$
having observed that the distance between any pair of points is smaller than the diameter of the corresponding set. Concerning the second term, we make use of Proposition 6 (which entails that V_n is (α, L_{Q^π_n,α}(L_π+1)^α)-HC), of Proposition 10, and of the fact that the transition model is L_p-LC:
$$\left|\int_{\mathcal{S}} V_n(s')\big(p(ds'|s_1,a_1) - p(ds'|s_2,a_2)\big)\right| \le L_{V^\pi_n,\alpha}\, W\big(p(\cdot|s_1,a_1),\, p(\cdot|s_2,a_2)\big)^{\alpha} \qquad (7)$$
$$\le L_{Q^\pi_n,\alpha}\,\big(L_p(L_\pi+1)\big)^{\alpha}\,\big(d_S(s_1,s_2) + d_A(a_1,a_2)\big)^{\alpha}.$$
Thus, we have the recurrence involving the Hölder constants:
$$L_{Q^\pi_0,\alpha} = 0, \qquad L_{Q^\pi_{n+1},\alpha} = L_r\big(\mathrm{diam}(\mathcal{S}) + \mathrm{diam}(\mathcal{A})\big)^{1-\alpha} + \gamma\, L_{Q^\pi_n,\alpha}\,\big(L_p(L_\pi+1)\big)^{\alpha}.$$
The sequence is convergent for γ(L_p(L_π+1))^α < 1, which leads to the condition:
$$\alpha < \frac{-\log\gamma}{\log\big(L_p(L_\pi+1)\big)}.$$
Under such a condition, the limit L_{Q^π,α} can be easily found as:
$$L_{Q^\pi,\alpha} = L_r\big(\mathrm{diam}(\mathcal{S}) + \mathrm{diam}(\mathcal{A})\big)^{1-\alpha} + \gamma\, L_{Q^\pi,\alpha}\big(L_p(L_\pi+1)\big)^{\alpha} \implies L_{Q^\pi,\alpha} = \frac{L_r\big(\mathrm{diam}(\mathcal{S}) + \mathrm{diam}(\mathcal{A})\big)^{1-\alpha}}{1 - \gamma\big(L_p(L_\pi+1)\big)^{\alpha}}.$$
An application of Lemma 1 then extends the Hölder continuity to the limit Q^π.
Theorem 7 (Optimal Error Rate for BC). Let π_E be the expert policy and π_I be the imitator policy. If the state-action value function Q^{π_I} of the imitator policy π_I is (α, L_{Q^{π_I},α})-HC, then it holds that:
$$J^{\pi_E} - J^{\pi_I} \le \frac{L_{Q^{\pi_I},\alpha}}{1-\gamma}\, \mathbb{E}_{s\sim d^{\pi_E}}\big[W\big(\pi_E(\cdot|s), \pi_I(\cdot|s)\big)^{\alpha}\big].$$
Figure 2: State value functions of Example 2. Left: the bound of (Rachelson and Lagoudakis 2010) holds and is tight. Right: the bound of (Rachelson and Lagoudakis 2010) does not hold, but our bound based on Hölder continuity holds, for different values of α ∈ (0, 1).
Footnote: It is well-known that the value function is Lipschitz continuous.
Footnote: The result reported in (Xu, Li, and Yu 2020) involves the KL-divergence and is obtained, via Pinsker's inequality, from the one we report, which is tighter (Appendix A.2 of Xu, Li, and Yu (2020)).
Footnote: clip(x, a, b) is the clipping function, i.e., max{min{x, b}, a}.
Footnote: As an alternative, one could assume that the reward function r(s, a) is (α, L_r)-HC, removing the need for the diameters.
Footnote: Details can be found in the Appendix.
E a"π E p¨|sq rA π I ps, aqs " ż A Q π I ps, aqpπ E pda|sq´π I pda|sqq ď sup sPS L Q π I ps,¨q,α¨W pπ E p¨|sq, π I p¨|sqq α , where the inequality follows from Proposition 10. The result is obtained by observing that sup sPS L Q π I ps,¨q,α ď L Q π I ,α .For what concerns the second part of the theorem, we build a counterexample. Moreover, we will not focus on the constant, since it is very difficult to estimate it correctly, and we will limit to verify that the bound on the exponent α is tight.As in the previous results, in order to show that the lower bound is tight, we rely on the process defined in the examples of the main paper. Let M an MDP defined as follows:• S " r0, 1s; • A " r0, 1s; • The dynamic is deterministic: from every state s P S, performing action a P A, we go to the state s 1 " clippL p ps`aq, 0, 1q.This means that ppds 1 |s, aq " δ clippLpps`aq,0,1q pds 1 q; • rps, aq "´L r s with L r ą 0; • The initial state is 0 (i.e., µ " δ 0 ); • The discount factor is γ.As before, the optimal policy π˚is identically zero, since is the only one allowing to stay in the origin. Consider instead the sequence of policies π n p¨|sq " δ 1{n p¨q. Both π˚and π n are Lipschitz with L π " 0, and we have Wpπ˚p¨|sq, π n p¨|sqq " 1{n in every state s P r0, 1s. Nonetheless, we can see that:At this point, we want to evaluate the difference J˚´J πn as n increases:At this point, we can see that:Therefore, the previous sum can be rewritten as:Here, using the formula for geometric sums, we have:.Since γ´llog Lp . This means that, as we have Wpπ˚p¨|sq, π n p¨|sqq " 1{n " δ in every state, δ also corresponds to the Wasserstein error over the trajectory. Therefore, Theorem 7 provides J˚´J πn ď Cδ α with α ď α :"´l og γ logpL p p1`L π qq "´l og γ logpL p q , while we have just found J˚´J πn ě c δ´l og γ log Lp , which makes the value of α tight.D Results with Noise InjectionTheorem 8. Let π E be the expert policy and π I be the imitator policy. Let us suppose that we have injected a noise of density function , satisfying Assumption 3 to obtain a noisy expert π E, and a noisy imitator π I, . If |Q π I ps, aq| ď Q max for all ps, aq P SˆA, it holds that:
References
Arora, S.; and Doshi, P. 2021. A survey of inverse reinforcement learning: Challenges, methods and progress. Artif. Intell., 297: 103500.
Asadi, K.; Misra, D.; and Littman, M. L. 2018. Lipschitz Continuity in Model-based Reinforcement Learning. In Proceedings of the 35th International Conference on Machine Learning (ICML), 264-273.
Asfour, T.; Azad, P.; Gyarfas, F.; and Dillmann, R. 2008. Imitation Learning of Dual-Arm Manipulation Tasks in Humanoid Robots. Int. J. Humanoid Robotics, 5(2): 183-202.
Bain, M.; and Sammut, C. 1995. A Framework for Behavioural Cloning. In Machine Intelligence 15, Intelligent Agents, 103-129.
Brockman, G.; Cheung, V.; Pettersson, L.; Schneider, J.; Schulman, J.; Tang, J.; and Zaremba, W. 2016. OpenAI Gym. arXiv:1606.01540.
Fujimoto, S.; Hoof, H.; and Meger, D. 2018. Addressing function approximation error in actor-critic methods. In International Conference on Machine Learning, 1587-1596. PMLR.
Geng, T.; Lee, M.; and Hülse, M. 2011. Transferring human grasping synergies to a robot. Mechatronics, 21(1): 272-284.
Ho, J.; and Ermon, S. 2016. Generative Adversarial Imitation Learning. In Advances in Neural Information Processing Systems 29 (NIPS), 4565-4573.
Jeon, H. J.; Milli, S.; and Dragan, A. D. 2020. Reward-rational (implicit) choice: A unifying formalism for reward learning. In Advances in Neural Information Processing Systems 33 (NeurIPS).
Kakade, S. M.; and Langford, J. 2002. Approximately Optimal Approximate Reinforcement Learning. In Proceedings of the Nineteenth International Conference on Machine Learning (ICML), 267-274.
Laskey, M.; Lee, J.; Fox, R.; Dragan, A. D.; and Goldberg, K. 2017a. DART: Noise Injection for Robust Imitation Learning. In 1st Annual Conference on Robot Learning (CoRL), 143-156.
Laskey, M.; Lee, J.; Fox, R.; Dragan, A. D.; and Goldberg, K. 2017b. DART: Noise Injection for Robust Imitation Learning. In 1st Annual Conference on Robot Learning (CoRL), 143-156.
Likmeta, A.; Metelli, A. M.; Ramponi, G.; Tirinzoni, A.; Giuliani, M.; and Restelli, M. 2021. Dealing with multiple experts and non-stationarity in inverse reinforcement learning: an application to real-life problems. Mach. Learn., 110(9): 2541-2576.
Lillicrap, T. P.; Hunt, J. J.; Pritzel, A.; Heess, N.; Erez, T.; Tassa, Y.; Silver, D.; and Wierstra, D. 2015. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971.
Munos, R.; and Szepesvári, C. 2008. Finite-Time Bounds for Fitted Value Iteration. Journal of Machine Learning Research, 9(5).
Osa, T.; Pajarinen, J.; Neumann, G.; Bagnell, J. A.; Abbeel, P.; and Peters, J. 2018. An Algorithmic Perspective on Imitation Learning. Found. Trends Robotics, 7(1-2): 1-179.
Pirotta, M.; Restelli, M.; and Bascetta, L. 2015. Policy gradient in Lipschitz Markov Decision Processes. Mach. Learn., 100(2-3): 255-283.
Puterman, M. L. 2014. Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons.
Rachelson, E.; and Lagoudakis, M. G. 2010. On the locality of action domination in sequential decision making.
Rényi, A. 1961. On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, volume 1. Berkeley, California, USA.
Rozo, L. D.; Jiménez, P.; and Torras, C. 2013. A robot learning from demonstration framework to perform force-based manipulation tasks. Intell. Serv. Robotics, 6(1): 33-51.
Schulman, J.; Wolski, F.; Dhariwal, P.; Radford, A.; and Klimov, O. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.
Villani, C. 2009. Optimal transport: old and new, volume 338. Springer.
Xu, T.; Li, Z.; and Yu, Y. 2020. Error Bounds of Imitating Policies and Environments.
Yuval, P. 2022. Wasserstein metric vs Holder continuity. Mathematics Stack Exchange. URL: https://math.stackexchange.com/q/4448627 (version: 2022-05-12).
Zanzotto, F. M. 2019. Viewpoint: Human-in-the-loop Artificial Intelligence. J. Artif. Intell. Res., 64: 243-252.
| []
|
[
"Asynchronous decentralized successive convex approximation",
"Asynchronous decentralized successive convex approximation"
]
| [
"Ye Tian \nSchool of Industrial Engineering\nPurdue University\n\n",
"Ying Sun \nSchool of Industrial Engineering\nPurdue University\n\n",
"Gesualdo Scutari \nSchool of Industrial Engineering\nPurdue University\n\n"
]
| [
"School of Industrial Engineering\nPurdue University\n",
"School of Industrial Engineering\nPurdue University\n",
"School of Industrial Engineering\nPurdue University\n"
]
| []
| We study decentralized asynchronous multiagent optimization over networks, modeled as directed graphs. The optimization problem consists of minimizing a (nonconvex) smooth function-the sum of the agents' local costs-plus a convex (nonsmooth) regularizer, subject to convex constraints. Agents can perform their local computations as well as communicate with their immediate neighbors at any time, without any form of coordination or centralized scheduling; furthermore, when solving their local subproblems, they can use outdated information from their neighbors. We propose the first distributed algorithm, termed ASY-DSCA, working in such a general asynchronous scenario and applicable to constrained, composite optimization. When the objective function is nonconvex, ASY-DSCA is proved to converge to a stationary solution of the problem at a sublinear rate. When the problem is convex and satisfies the Luo-Tseng error bound condition, ASY-DSCA converges at an R-linear rate to the optimal solution. Luo-Tseng (LT) condition is weaker than strong convexity of the objective function, and it is satisfied by several nonstrongly convex functions arising from machine learning applications; examples include LASSO and logistic regression problems. ASY-DSCA is the first distributed algorithm provably achieving linear rate for such a class of problems. | null | [
"https://export.arxiv.org/pdf/1909.10144v2.pdf"
]
| 202,718,909 | 1909.10144 | 9f3be05fe2152499cb10ce287a7f2b99e7c3a310 |
Asynchronous decentralized successive convex approximation
30 Jan 2020
Ye Tian
School of Industrial Engineering
Purdue University
Ying Sun
School of Industrial Engineering
Purdue University
Gesualdo Scutari
School of Industrial Engineering
Purdue University
We study decentralized asynchronous multiagent optimization over networks, modeled as directed graphs. The optimization problem consists of minimizing a (nonconvex) smooth function-the sum of the agents' local costs-plus a convex (nonsmooth) regularizer, subject to convex constraints. Agents can perform their local computations as well as communicate with their immediate neighbors at any time, without any form of coordination or centralized scheduling; furthermore, when solving their local subproblems, they can use outdated information from their neighbors. We propose the first distributed algorithm, termed ASY-DSCA, working in such a general asynchronous scenario and applicable to constrained, composite optimization. When the objective function is nonconvex, ASY-DSCA is proved to converge to a stationary solution of the problem at a sublinear rate. When the problem is convex and satisfies the Luo-Tseng error bound condition, ASY-DSCA converges at an R-linear rate to the optimal solution. Luo-Tseng (LT) condition is weaker than strong convexity of the objective function, and it is satisfied by several nonstrongly convex functions arising from machine learning applications; examples include LASSO and logistic regression problems. ASY-DSCA is the first distributed algorithm provably achieving linear rate for such a class of problems.
Introduction
We consider the following general class of (possibly nonconvex) multiagent composite optimization:
$$\min_{x\in K}\; U(x) \triangleq \sum_{i\in[I]} f_i(x) + G(x), \qquad \text{(P)}$$
where [I] ≜ {1, ..., I} is the set of agents in the system, f_i: R^n → R is the cost function of agent i, assumed to be smooth but possibly nonconvex; G: R^n → R is convex and possibly nonsmooth; and K ⊆ R^n is a closed convex set. Each agent has access only to its own objective f_i, but not to the sum Σ_{i=1}^I f_i, while G and K are common to all the agents. Problem (P) has found a wide range of applications in machine learning, particularly in supervised learning; examples include logistic regression, SVM, LASSO, and deep learning. In these problems, each f_i is the empirical risk that measures the mismatch between the model (parameterized by x) to be learnt and the data set owned only by agent i. G and K play the role of regularization, restricting the solution space to promote some favorable structure, such as sparsity.
Classic distributed learning typically subsumes a master-slave computational architecture wherein the master nodes run the optimization algorithm, gathering the needed information from the workers (cf. Fig. 1, left panel). In contrast, in this paper, we consider a decentralized computational architecture, modeled as a general directed graph that lacks a central controller/master node (see Fig. 1, right panel). Each node can only communicate with its immediate neighbors. This setting arises naturally when data are acquired and/or stored at the node side. Examples include resource allocation, swarm robotic control, and multi-agent reinforcement learning [20, 50]. Furthermore, in scenarios where both architectures are available, decentralized learning has the advantage of being robust to single-point failures and being communication efficient. For instance, [18] compared the performance of stochastic gradient descent on both architectures; they show that the two implementations have similar total computational complexity, while the maximal communication cost per node of the algorithm running on the decentralized architecture is O(degree of network), significantly smaller than the O(I) of the same scheme running on a master-slave system. As the problem and network size scale, synchronizing the entire multiagent system becomes inefficient or infeasible. Synchronous schedules require a global clock, which is against the gist of removing the central controller as in decentralized optimization. This calls for the development of asynchronous decentralized learning algorithms. In addition, the asynchronous modus operandi brings benefits such as mitigating communication and/or memory-access congestion, saving resources (e.g., energy, computation, bandwidth), and making algorithms more fault-tolerant. Therefore, asynchronous decentralized algorithms have the potential to prevail in large-scale learning problems. In this paper, we consider the following general decentralized asynchronous setting:
(i) Agents can perform their local computations as well as communicate (possibly in parallel) with their immediate neighbors at any time, without any form of coordination or centralized scheduling; and
(ii) when solving their local subproblems, they can use outdated information from their neighbors, subject to arbitrary but bounded delays.
We are not aware of any provably convergent scheme applicable to the envisioned decentralized asynchronous setting and Problem (P), specifically in the presence of constraints or the nonsmooth term G; see Sec. 1.2 for a discussion of related works. This paper fills exactly this gap.
Main contributions
Our major contributions are summarized next.
• Algorithmic design: We introduce ASY-DSCA, the first distributed asynchronous algorithm [in the sense (i) and (ii) above] applicable to the composite, constrained optimization (P). ASY-DSCA builds on successive convex approximation (SCA) techniques [10, 34-36], whereby agents solve strongly convex approximations of (P), coupled with a suitably defined perturbed push-sum mechanism that is robust against asynchrony, whose goal is to track locally and asynchronously the average of the agents' gradients. No specific activation mechanism for the agents' updates, coordination, or communication protocol is assumed, but only some mild conditions ensuring that the information used in the updates does not become infinitely old. We remark that SCA offers a unified umbrella to deal efficiently with convex and nonconvex problems [10, 34-36]: for several problems (P) of practical interest (cf. Sec. 2.1), a proper choice of the agents' surrogate functions leads to subproblems that admit a closed-form solution (e.g., soft-thresholding and/or projection onto the Euclidean ball). ASY-DSCA generalizes ASY-SONATA, proposed in the companion paper [40], by i) enabling SCA models in the agents' local updates; and ii) enlarging the class of optimization problems to include constraints and nonsmooth (convex) objectives.
• Convergence rate: Our convergence results are the following: i) for general nonconvex F in (P), a sublinear rate is established for a suitably defined merit function measuring both the distance of the (average) iterates from stationary solutions and the consensus disagreement; ii) when (P) satisfies the Luo-Tseng (LT) error bound condition [22], we establish R-linear convergence of the sequence generated by ASY-DSCA to an optimal solution. Notice that the LT condition is weaker than strong convexity, which is the common assumption used in the literature to establish linear convergence of distributed (even synchronous) algorithms. Our interest in the LT condition is motivated by the fact that several popular objective functions arising from machine learning applications are not strongly convex but satisfy the LT error bound; examples include popular empirical losses in high-dimensional statistics, such as quadratic and logistic losses; see Sec. 2.2 for more details. ASY-DSCA is the first asynchronous distributed algorithm with provably linear rate for such a class of problems over networks; this result is new even in the synchronous distributed setting.
• New line of analysis: We put forth novel convergence proofs, whose main novelties are highlighted next.
New Lyapunov function for descent
Our convergence analysis consists in carefully analyzing the interaction among the consensus, the gradient tracking, and the nonconvex-nonsmooth-constrained optimization processes in the asynchronous environment. This interaction can be seen as a perturbation that each of these processes induces on the dynamics of the others. The challenge is proving that the perturbation generated by one system on the others is of a sufficiently small order (with respect to suitably defined metrics), so that convergence can be established and a convergence rate of suitably defined quantities can be derived. Current techniques from centralized (nonsmooth) SCA optimization methods [10, 34-36], error-bound analysis [22], and (asynchronous) consensus algorithms, alone or naively combined, do not provide a satisfactory answer: they would generate perturbation errors that are too large and do not exploit the interactions among the different processes. On the other hand, existing approaches proposed for distributed algorithms are not applicable either (see Sec. 1.2 for a detailed review of the state of the art): they can neither deal with asynchrony (e.g., [39]) nor be applied to optimization problems with a nonsmooth function in the objective and/or constraints.
To cope with the above challenges, our analysis builds on two new Lyapunov functions, one for nonconvex instances of (P) and one for convex ones. These functions are carefully crafted to combine the objective value dynamics with consensus and gradient errors, while accounting for asynchrony and outdated information in the agents' updates. Apart from the specific expression of these functions, a major novelty here is the use, in the Lyapunov functions, of weighting vectors that vary endogenously based upon the asynchrony trajectory of the algorithm; see Sec. 6 (Step 2) and Sec. 7 (Remark 17) for technical details. The descent property of the Lyapunov functions is the key step to prove that consensus and tracking errors vanish, and further to establish the desired convergence rate of valid optimality/stationarity measures.
Linear rate under the LT condition
The proof of linear convergence of ASY-DSCA under the LT condition is a new contribution of this work. Existing proofs establishing linear rate of distributed synchronous and asynchronous algorithms [1,25,31,38,39] (including our companion paper [40]) are not applicable here, as they all leverage strong convexity of F , a property that we do not assume. On the other hand, existing techniques showing linear rate of centralized first-order methods under the LT condition [22,42] do not customize to our distributed, asynchronous setting. Roughly speaking, this is mainly due to the fact that use of the LT condition in [22,42] is subject to proving descent on the objective function along the algorithm iterates, a property that can no longer be guaranteed in the distributed setting, due to the perturbations generated by the consensus and the gradient tracking errors. Asynchrony complicates further the analysis, as it induces unbalanced updating frequency of agents and the presence of the outdated information in agents' local computation. Our proof of linear convergence leverages the descent property of the proposed Lyapunov function to be able to invoke the LT condition in our distributed, asynchronous setting (see Sec.6 for a technical discussion on this matter).
Related works
On the asynchronous model: The literature on asynchronous methods is vast; based upon agents' activation rules and assumptions on delays, existing algorithms can be roughly grouped in three categories. 1) Algorithms in [7, 17, 19, 43-45] tolerate delayed information but require synchronization among agents, thus failing to meet the asynchronous requirement (i) above. 2) On the other hand, schemes in [3, 12-14, 26, 46, 48] account for agents' random (thus uncoordinated) activation; however, upon activation, they must use the most updated information from their neighbors, i.e., no delays are allowed; hence, they fail to meet requirement (ii). 3) Asynchronous activations and delays are considered in [16, 23, 29, 47, 52] and [2, 4, 9, 27, 40], with the former (resp. latter) schemes employing random (resp. deterministic) activations. Some restrictions on the form of delays are imposed. Specifically, [4, 16, 23, 52] can only tolerate packet losses (either the information gets lost or it is received with no delay); [2] handles only communication delays (eventually all the transmitted information is received by the intended agent); and [29, 47] assume the agents' activations and delays to be independent random variables, which is not realistic and hard to enforce in practice [6].
The only schemes we are aware of that are compliant with the asynchronous model (i) and (ii) are those in [27, 40]; however, they are applicable only to smooth unconstrained problems. Furthermore, all the aforementioned algorithms but [16, 40] are designed only for convex objectives U. On the convergence rate: Referring to convergence rate guarantees, none of the aforementioned methods is proved to converge linearly in the asynchronous setting when applied to nonsmooth constrained problems in the form (P). Furthermore, even restricting the focus to synchronous distributed methods or smooth unconstrained instances of (P), we are not aware of any distributed scheme that provably achieves linear rate without requiring U to be strongly convex; we refer to [39] for a recent literature review of synchronous distributed schemes belonging to this class. In the centralized setting, linear rate can be proved for first-order methods under the assumption that U satisfies some error bound conditions, which are weaker than strong convexity; see, e.g., [5, 15, 22, 49]. A natural question is whether such results can be extended to (asynchronous) decentralized methods. This paper provides a positive answer to this open question.
Notation
The i-th vector of the standard basis in R^n is denoted by e_i; x_i is the i-th entry of a vector x; and A_i denotes the i-th row of a matrix A. Given two matrices (vectors) A and B of the same size, by A ⪯ B we mean that B − A is a nonnegative matrix (vector). We do not differentiate between a vector and its transpose when it is the argument of a function/mapping. The vector of all ones is denoted by 1 (its dimension will be clear from the context). We use ‖·‖ to denote the Frobenius norm when the argument is a matrix and the Euclidean norm when applied to a vector; ‖·‖₂ denotes the spectral norm of a matrix. Given G: R^n → R, the proximal mapping is defined as prox_G(x) ≜ argmin_{y∈K} G(y) + ½‖y − x‖₂². Let K* denote the set of stationary solutions of (P), and dist(x, K*) ≜ min_{y∈K*} ‖x − y‖.
Problem setup
We study Problem (P) under the following assumptions.
Assumption 1. (i) K ⊆ R^n is nonempty, closed, and convex; (ii) each f_i is continuously differentiable on (an open set containing) K, with Lipschitz continuous gradient; (iii) G : K → R is convex but possibly nonsmooth; and (iv) U is lower bounded on K.
Note that each f_i need not be convex, and each agent i knows only its own f_i but not Σ_{j≠i} f_j. The regularizer G and the constraint set K are common knowledge to all agents.
To solve Problem (P), agents need to leverage message exchanging over the network. The communication network of the agents is modeled as a fixed, directed graph G ≜ (V, E), where V ≜ [I] is the set of nodes (agents) and E ⊆ V × V is the set of edges (communication links). If (i,j) ∈ E, agent i can send information to agent j. We assume that the digraph does not have self-loops. We denote by N_i^in ≜ {j ∈ V | (j,i) ∈ E} the set of in-neighbors of agent i, while N_i^out ≜ {j ∈ V | (i,j) ∈ E} is the set of its out-neighbors. The following assumption on the graph connectivity is standard.
Assumption 2.
The graph G is strongly connected.
Case study: Collaborative supervised learning
A timely application of the described decentralized setting and optimization Problem (P) is collaborative supervised learning. Consider a training data set {(u_s, y_s)}_{s∈D}, where u_s is the input feature vector and y_s is the outcome associated to item s. In the envisioned decentralized setting, the data D are partitioned into I subsets {D_i}_{i∈[I]}, each of which belongs to an agent i ∈ [I]. The goal is to learn a mapping p(·; x), parameterized by x ∈ R^n, using all samples in D, by solving min_{x∈K} (1/|D|) Σ_{s∈D} ℓ(p(u_s; x), y_s) + G(x), wherein ℓ is a loss function that measures the mismatch between p(u_s; x) and y_s; and G and K play the role of regularizing the solution. This problem is an instance of (P) with f_i(x) ≜ (1/|D|) Σ_{s∈D_i} ℓ(p(u_s; x), y_s). Specific examples of loss functions and regularizers are given next.
1) Elastic net regularization for log-linear models: ℓ(p(u_s; x), y_s) ≜ Φ(u_s^⊤x) − y_s·(u_s^⊤x), with Φ convex, u_s ∈ R^n, and y_s ∈ R; G(x) ≜ λ₁‖x‖₁ + λ₂‖x‖₂² is the elastic net regularizer, which reduces to the LASSO regularizer when (λ₁, λ₂) = (λ, 0) or to the ridge regression regularizer when (λ₁, λ₂) = (0, λ);
2) Sparse group LASSO [11]: The loss function is the same as that in example 1), with Φ(t) = t²/2; G(x) = Σ_{S∈J} w_S‖x_S‖₂ + λ‖x‖₁, where J is a partition of [n];
3) Logistic regression: ℓ(p(u_s; x), y_s) ≜ ln(1 + e^{−y_s·u_s^⊤x}); popular choices of G(x) are G(x) ≜ λ‖x‖₁ or G(x) ≜ λ‖x‖₂². The constraint set K is generally assumed to be bounded.
For large scale data sets, solving such learning problems is computationally challenging even if F is convex. When the problem dimension n is larger than the sample size |D|, the Hessian of the empirical risk F is typically rank deficient and hence F is not strongly convex. Since linear convergence of decentralized methods is established in the literature only under strong convexity, it is unclear whether such a fast rate can be achieved under less restrictive conditions, e.g., embracing popular high-dimensional learning problems such as those mentioned above. We show next that a positive answer to this question can be obtained by leveraging the renowned LT error bound, a condition that has been widely explored in the literature on centralized optimization methods.

The Luo-Tseng error bound

Assumption 3. Suppose:
(i) F is convex;
(ii) For any η > inf_{x∈K} U(x), there exist ǫ, κ > 0 such that:
$$U(x) \le \eta \ \text{ and } \ \|x - \mathrm{prox}_G(x - \nabla F(x))\| \le \epsilon \qquad (1)$$
$$\Downarrow$$
$$\mathrm{dist}(x, K^*) \le \kappa\, \|x - \mathrm{prox}_G(x - \nabla F(x))\|. \qquad (2)$$
Assumption 3(ii) is a local growth condition on U around K * , crucial to prove linear rate. Note that for convex F , condition 3(ii) is equivalent to other renowned error bound conditions, such as the Polyak-Lojasiewicz [21,30], the quadratic growth [8], and the Kurdyka-Lojasiewicz [5] conditions. A broad class of functions satisfying Assumption 3 is in the form U (x) = F (x) + G(x), with F and G such that (cf. [41,Theorem 4], [49, Theorem 1]):
(i) F (x) = h(Ax) is L-smooth,
where h is strongly convex and A is any linear operator;
(ii) G is either a polyhedral convex function (i.e., its epigraph is a polyhedral set) or has a specific separable form as
G(x) = Σ_{S∈J} w_S‖x_S‖₂ + λ‖x‖₁, where J is a partition of the set [n], and λ and the w_S's are nonnegative weights (we use x_S to denote the vector whose component i equals x_i if i ∈ S, and 0 otherwise);
(iii) U (x) is coercive.
It follows that all examples listed in Section 2.1 satisfy Assumption 3. Hence, the proposed decentralized asynchronous algorithm, to be introduced, will provably achieve linear rate for such general classes of problems.
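For concreteness, the following minimal sketch (ours) computes the stationarity residual appearing in (1)-(2) for a LASSO instance, i.e., F(x) = ½‖Ax − b‖², G = λ‖·‖₁, and K = R^n, for which prox_G is coordinatewise soft-thresholding:

```python
import numpy as np

def soft_threshold(u, kappa):
    """Proximal operator of kappa * ||.||_1 (soft-thresholding)."""
    return np.sign(u) * np.maximum(np.abs(u) - kappa, 0.0)

def lt_residual(x, A, b, lam):
    """||x - prox_G(x - grad F(x))|| for F(x) = 0.5*||Ax - b||^2, G = lam*||.||_1.
    Under Assumption 3, dist(x, K*) <= kappa * lt_residual(...) near optimality."""
    grad = A.T @ (A @ x - b)
    return float(np.linalg.norm(x - soft_threshold(x - grad, lam)))
```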
Algorithmic development
Solving Problem (P) over G poses the following challenges: i) U is nonconvex/nonsmooth; ii) each agent i only knows its local loss f_i but not the global F; and iii) agents perform updates in an asynchronous fashion. Furthermore, it is well established that, when the f_i are nonconvex (or convex only in some variables), using convex surrogates for the f_i in the agents' subproblems, rather than just their linearization (as in gradient algorithms), provides more flexibility in the algorithmic design and can enhance practical convergence [10, 34-36]. This motivated us to equip our distributed asynchronous design with SCA models.
To address these challenges, we develop our algorithm building on SONATA [37, 39], as to our knowledge it is the only synchronous decentralized algorithm for (P) capable of handling challenges i) and ii) while incorporating SCA techniques. Moreover, when employing a constant step size, it converges linearly to the optimal solution of (P) when F is strongly convex, and sublinearly to the set of stationary points of (P) when F is nonconvex. We begin by briefly reviewing SONATA.
3.1 Preliminaries: the SONATA algorithm [37, 39]
Each agent i maintains a local estimate x_i of the common optimization vector x, to be updated at each iteration; the k-th iterate is denoted by x_i^k. The specific procedure put forth by SONATA is given in Algorithm 1 and briefly described next.
Algorithm 1: The SONATA Algorithm
Data: for all agents i and ∀j ∈ N_i^in: x_i^0 ∈ R^n, z_i^0 = y_i^0 = ∇f_i(x_i^0), φ_i^0 = 1. Set k = 0.
while a termination criterion is not met, each agent i ∈ [I] do
(S.1) Local optimization:
$$\tilde{x}_i^k = \operatorname*{argmin}_{x\in K}\; \widetilde{U}_i\big(x;\, x_i^k,\, I y_i^k - \nabla f_i(x_i^k)\big) \triangleq \tilde{f}_i(x; x_i^k) + \big(I y_i^k - \nabla f_i(x_i^k)\big)^{\top}\big(x - x_i^k\big) + G(x), \qquad \text{(3a)}$$
$$v_i^{k+1} = x_i^k + \gamma\big(\tilde{x}_i^k - x_i^k\big). \qquad \text{(3b)}$$
(S.3) Gradient tracking:
$$z_i^{k+1} = \sum_{j=1}^{I} a_{ij}\Big(z_j^k + \nabla f_j(x_j^{k+1}) - \nabla f_j(x_j^k)\Big), \qquad \phi_i^{k+1} = \sum_{j=1}^{I} a_{ij}\,\phi_j^k, \qquad y_i^{k+1} = \frac{z_i^{k+1}}{\phi_i^{k+1}}. \qquad (5)$$
k ← k + 1
end while

(S.1): Local optimization. At each iteration k, every agent i locally solves a strongly convex approximation of Problem (P) at x_i^k, as given in (3a), where f̃_i : K × K → R is a so-called SCA surrogate of f_i, that is, it satisfies Assumption 4 below. The second term in (3a), (I y_i^k − ∇f_i(x_i^k))^⊤(x − x_i^k), serves as a first-order approximation of Σ_{j≠i} f_j(x), which is unknown to agent i, wherein I y_i^k tracks the sum gradient Σ_{j=1}^I ∇f_j(x_j^k) (see step (S.3)). We then employ a relaxation step (3b) with step size γ.
. We then employ a relaxation step (3b) with step size γ.
Assumption 4. f i : K × K → R satisfies:
(i) ∇ f i (x; x) = ∇f i (x) for all x ∈ K;
(ii) f i (·; y) is uniformly strongly convex on K with constant µ > 0;
(iii) ∇ f i (x; ·) is uniformly Lipschitz continuous on K with constant l.
The choice of f̃_i is quite flexible. For example, one can construct a proximal-gradient-type update (3a) by linearizing f_i and adding a proximal term; if f_i is a DC function, f̃_i can retain the convex part of f_i while linearizing the nonconvex part. We refer to [10, 34-36] for more details on the choices of f̃_i, and to Sec. 5 for specific examples used in our experiments.
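As an illustration (ours, under the assumptions just stated), take the proximal-linear surrogate f̃_i(x; x_i^k) = ∇f_i(x_i^k)^⊤(x − x_i^k) + (τ/2)‖x − x_i^k‖² with τ > 0, G = λ‖·‖₁, and K = R^n. Then the linearization of f_i and the correction term in (3a) combine into the tracked full gradient I y_i^k, and (3a)-(3b) admit the closed form below:

```python
import numpy as np

def soft_threshold(u, kappa):
    """Proximal operator of kappa * ||.||_1."""
    return np.sign(u) * np.maximum(np.abs(u) - kappa, 0.0)

def sonata_local_step(x_i, y_i, I, tau, lam, gamma):
    """Steps (3a)-(3b) for the proximal-linear surrogate: (3a) reduces to a
    soft-thresholding step driven by the tracked full gradient I*y_i."""
    x_tilde = soft_threshold(x_i - (I * y_i) / tau, lam / tau)  # argmin of (3a)
    return x_i + gamma * (x_tilde - x_i)                        # relaxation (3b)
```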
(S.2): Consensus. This step aims at enforcing consensus on the local variables x_i via gossiping. Specifically, after the local optimization step, each agent i performs the consensus update (4) with mixing matrix W = (w_ij)_{i,j=1}^I satisfying the following assumption.
Assumption 5. The weight matrices W ≜ (w_ij)_{i,j=1}^I and A ≜ (a_ij)_{i,j=1}^I satisfy (we write M ≜ (m_ij)_{i,j=1}^I to denote either A or W, and 1 ∈ R^I is the vector of all ones):
(i) ∃ m̄ > 0 such that: m_ii ≥ m̄, ∀i ∈ V; m_ij ≥ m̄, for all (j,i) ∈ E; and m_ij = 0, otherwise;
(ii) W is row-stochastic, that is, W1 = 1; and
(iii) A is column-stochastic, that is, A^⊤1 = 1.
Several choices for W and A are available; see, e.g., [33]. Note that SONATA uses a row-stochastic matrix W for the consensus update and a column-stochastic matrix A for the gradient tracking. In fact, for a general digraph, a doubly stochastic matrix compliant with the graph might not exist, while one can always build compliant row- or column-stochastic matrices. These weights can be determined locally by the agents, e.g., once their in- and out-degrees are known.
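For instance, here is a minimal construction (ours) of uniform-weight matrices satisfying Assumption 5, using only each agent's in- and out-neighborhood:

```python
import numpy as np

def mixing_matrices(adj):
    """Uniform-weight row-stochastic W and column-stochastic A for a digraph.
    Convention: adj[i, j] = 1 iff (j, i) is an edge, i.e., j is an in-neighbor
    of i; adj has no self-loops (self-weights are added below)."""
    I = adj.shape[0]
    W, A = np.zeros((I, I)), np.zeros((I, I))
    for i in range(I):
        in_nbrs = np.flatnonzero(adj[i])            # j with (j, i) in E
        W[i, i] = 1.0 / (len(in_nbrs) + 1)
        W[i, in_nbrs] = 1.0 / (len(in_nbrs) + 1)    # each row of W sums to 1
    for j in range(I):
        out_nbrs = np.flatnonzero(adj[:, j])        # i with (j, i) in E
        A[j, j] = 1.0 / (len(out_nbrs) + 1)
        A[out_nbrs, j] = 1.0 / (len(out_nbrs) + 1)  # each column of A sums to 1
    return W, A
```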
(S.3): Gradient tracking. This step updates y_i by employing a perturbed push-sum algorithm with weight matrix A satisfying Assumption 5. It aims to track the average gradient (1/I) Σ_{i=1}^I ∇f_i(x_i) via y_i.
In fact, using the column stochasticity of A and applying the telescopic cancellation, one can check that the following holds:
$$\sum_{i=1}^{I} \phi_i^k = \sum_{i=1}^{I} \phi_i^0 = I, \qquad \sum_{i=1}^{I} z_i^k = \sum_{i=1}^{I} \nabla f_i(x_i^k). \qquad (6)$$
It can be shown that, for all i ∈ [I], z_i^k and φ_i^k converge to ξ_i^k · Σ_{i=1}^I z_i^k and ξ_i^k · Σ_{i=1}^I φ_i^k, respectively, for some ξ_i^k > 0 [24]. Hence, y_i^k = z_i^k/φ_i^k converges to (1/I) Σ_{i=1}^I ∇f_i(x_i^k), achieving the desired gradient tracking.
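A minimal numerical illustration (ours) of this ratio-consensus property: freezing the local gradients makes the perturbation in (5) vanish, and every row of the output approaches the average gradient (A must be primitive, which holds under Assumptions 2 and 5):

```python
import numpy as np

def track_average(A, g, iters=200):
    """Perturbation-free push-sum: with fixed local vectors g_i, update (5)
    reduces to z <- A z, phi <- A phi, and z_i/phi_i -> (1/I) * sum_i g_i."""
    z, phi = g.copy(), np.ones(g.shape[0])
    for _ in range(iters):
        z, phi = A @ z, A @ phi
    return z / phi[:, None]   # each row approaches g.mean(axis=0)
```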
Notice that the extension of the gradient tracking to the asynchronous setting is not trivial, as the ratio-consensus property discussed above no longer holds if agents naively perform their updates using delayed information in (5). In fact, packets sent by an agent, corresponding to the summand in (5), may get lost. This breaks the equalities in (6). Consequently, the ratio y_i^k cannot correctly track the average gradient. To cope with this issue, our approach is to replace step (S.3) by the asynchronous gradient tracking mechanism developed in [40].
Asynchronous decentralized SCA (ASY-DSCA)
We now break the synchronism in SONATA and propose ASY-DSCA (cf. Algorithm 2). All agents update asynchronously and continuously without coordination, possibly using delayed information from their neighbors. More specifically, a global iteration counter k, unknown to the agents, is introduced, which increases by 1 whenever a variable of the multiagent system changes. Let i_k be the agent triggering iteration k → k+1; it executes Steps (S.1)-(S.3) (not necessarily within the same activation), as described below.
(S.1): Local optimization. Agent i_k solves the strongly convex optimization problem (7), based on the local surrogate Ũ_{i_k}. It is tacitly assumed that Ũ_{i_k} is chosen so that (7) is simple to solve (i.e., the solution can be computed in closed form or efficiently). Given the solution x̃_{i_k}^k, v_{i_k}^{k+1} is generated.
(S.2): Consensus. Agent i_k may receive delayed variables from its in-neighbors j ∈ N_{i_k}^in, whose iteration index is k − d_j^k.
To perform its update, agent $i^k$ first sorts the "age" of all the variables received from agent $j$ since $k = 0$, and then picks the most recently generated one. This is implemented by maintaining a local counter $\tau_{i^k j}$, updated recursively as $\tau^k_{i^k j} = \max\big(\tau^{k-1}_{i^k j},\, k - d^k_j\big)$. Thus, the variable agent $i^k$ uses from $j$ has iteration index $\tau^k_{i^k j}$. Since the consensus algorithm is robust against asynchrony [40], we simply adopt the update of SONATA [cf. (4)] and replace $v^k_j$ by its delayed version $v_j^{\tau^k_{i^k j}}$; see the sketch below.
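The counter update is a one-liner in practice. The following sketch (ours, with a hypothetical `inbox` structure mapping each in-neighbor to the delay of its freshest packet) illustrates the bookkeeping.

```python
# Sketch of the local bookkeeping for delayed packets in (S.2). Agent i
# keeps, for each in-neighbor j, the iteration index tau[j] of the newest
# variable of j it has ever received; on activation it refreshes tau[j]
# with the ages of the packets in its inbox and then reads v_j^{tau[j]}.
def refresh_counters(tau, inbox, k):
    # inbox: hypothetical dict j -> delay d_j^k of the freshest packet from j
    for j, d in inbox.items():
        tau[j] = max(tau[j], k - d)   # tau_{ij}^k = max(tau_{ij}^{k-1}, k - d_j^k)
    return tau
```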
(S.3): Robust gradient tracking. As anticipated in Sec. 3.1, the packet loss caused by asynchrony breaks the sum preservation property (6) in SONATA. If treated in the same way as the $x$-variable in (8), $y_i$ would fail to track $(1/I)\sum_{i=1}^I \nabla f_i(x_i)$.
To cope with this issue, we leverage the asynchronous sum-push scheme (P-ASY-SUM-PUSH) introduced in our companion paper [40] and update the $y$-variable as in (9).
Algorithm 2: The ASY-DSCA Algorithm
Data: For all agents $i$ and $\forall j \in N^{\rm in}_i$: $x^0_i \in \mathbb R^n$, $z^0_i = y^0_i = \nabla f_i(x^0_i)$, $\phi^0_i = 1$, $\widetilde\rho^{\,0}_{ij} = 0$, $\widetilde\sigma^{\,0}_{ij} = 0$, $\tau^{-1}_{ij} = -D$; and for $t = -D, -D+1, \ldots, 0$: $\rho^t_{ij} = 0$, $\sigma^t_{ij} = 0$, $v^t_i = 0$. Set $k = 0$.
while a termination criterion is not met do
  Pick $(i^k, d^k)$; set $\tau^k_{i^k j} = \max\big(\tau^{k-1}_{i^k j},\, k - d^k_j\big)$, $\forall j \in N^{\rm in}_{i^k}$;
  (S.1) Local optimization:
$$\widetilde x^k_{i^k} = \operatorname*{argmin}_{x \in K}\; \widetilde U_{i^k}\Big(x;\, x^k_{i^k},\, I\, y^k_{i^k} - \nabla f_{i^k}(x^k_{i^k})\Big), \qquad v^{k+1}_{i^k} = x^k_{i^k} + \gamma\big(\widetilde x^k_{i^k} - x^k_{i^k}\big); \tag{7}$$
  (S.2) Consensus step (using delayed information):
$$x^{k+1}_{i^k} = w_{i^k i^k}\, v^{k+1}_{i^k} + \sum_{j \in N^{\rm in}_{i^k}} w_{i^k j}\, v_j^{\tau^k_{i^k j}}; \tag{8}$$
  (S.3) Robust gradient tracking:
$$y^{k+1}_{i^k} = F\Big(i^k,\, k,\, \big(\rho_{i^k j}^{\tau^k_{i^k j}}\big)_{j \in N^{\rm in}_{i^k}},\, \big(\sigma_{i^k j}^{\tau^k_{i^k j}}\big)_{j \in N^{\rm in}_{i^k}},\, \nabla f_{i^k}(x^{k+1}_{i^k}) - \nabla f_{i^k}(x^k_{i^k})\Big); \tag{9}$$
  Untouched state variables shift to state $k+1$ while keeping the same value;
  $k \leftarrow k + 1$.
end while

procedure $F\big(i, k, (\rho_{ij})_{j\in N^{\rm in}_i}, (\sigma_{ij})_{j\in N^{\rm in}_i}, \epsilon\big)$
  Sum step:
$$z^{k+\frac12}_i = z^k_i + \sum_{j \in N^{\rm in}_i}\big(\rho_{ij} - \widetilde\rho^{\,k}_{ij}\big) + \epsilon, \qquad \phi^{k+\frac12}_i = \phi^k_i + \sum_{j \in N^{\rm in}_i}\big(\sigma_{ij} - \widetilde\sigma^{\,k}_{ij}\big); \tag{10}$$
  Push step:
$$z^{k+1}_i = a_{ii}\, z^{k+\frac12}_i, \quad \phi^{k+1}_i = a_{ii}\, \phi^{k+\frac12}_i; \qquad \forall j \in N^{\rm out}_i:\;\; \rho^{k+1}_{ji} = \rho^k_{ji} + a_{ji}\, z^{k+\frac12}_i, \quad \sigma^{k+1}_{ji} = \sigma^k_{ji} + a_{ji}\, \phi^{k+\frac12}_i; \tag{11}$$
  Mass-buffer update:
$$\widetilde\rho^{\,k+1}_{ij} = \rho_{ij}, \quad \widetilde\sigma^{\,k+1}_{ij} = \sigma_{ij}, \quad \forall j \in N^{\rm in}_i; \tag{12}$$
  return $z^{k+1}_i / \phi^{k+1}_i$.
end procedure
Each agent $i$ maintains mass counters $(\rho_{ji}, \sigma_{ji})$ associated with $(z_i, \phi_i)$ that record the cumulative mass generated by $i$ for $j \in N^{\rm out}_i$ since $k = 0$, and transmits $(\rho_{ji}, \sigma_{ji})$. In addition, agent $i$ also maintains buffer variables $(\widetilde\rho_{ij}, \widetilde\sigma_{ij})$ to track the latest mass counter $(\rho_{ij}, \sigma_{ij})$ from $j \in N^{\rm in}_i$ that has been used in its update. We describe now the update of $z$ and $\rho$; $\phi$ and $\sigma$ follow similar steps. For notational simplicity, let $i = i^k$ update. It first performs the sum step (10) using a possibly delayed mass counter $\rho_{ij}^{\tau^k_{ij}}$ received from $j$. By computing the difference $\rho_{ij}^{\tau^k_{ij}} - \widetilde\rho^{\,k}_{ij}$, it collects the sum of the $a_{ij}z_j$'s generated by $j$ that it has not yet added. Agent $i$ then sums them, together with a gradient correction term (perturbation) $\epsilon = \nabla f_i(x^{k+1}_i) - \nabla f_i(x^k_i)$, to its current state variable $z^k_i$ to form the intermediate mass $z^{k+\frac12}_i$. In the push step (11), agent $i$ keeps the fraction $a_{ii}\,z^{k+\frac12}_i$ for itself and adds $a_{ji}\,z^{k+\frac12}_i$ to its local mass counter $\rho^k_{ji}$, to be transmitted to $j \in N^{\rm out}_i$. Since the last mass counter agent $i$ processed is $\rho_{ij}^{\tau^k_{ij}}$, it sets $\widetilde\rho_{ij} = \rho_{ij}^{\tau^k_{ij}}$ [cf. (12)]. Finally, it outputs $y^{k+1}_i = z^{k+1}_i/\phi^{k+1}_i$.
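The following sketch (our own rendering of the $z/\rho$ bookkeeping in (10)-(12), not the paper's code) shows one activation of the procedure for a single agent, and makes explicit why cumulative counters are robust to losses: a lost packet only delays the mass transfer, since the next counter that does arrive already contains it.

```python
# Minimal sketch of the sum-and-push bookkeeping (10)-(12) for the
# z-variable of a single agent i (phi/sigma are handled identically).
# rho_in[j]  : cumulative mass a_ij * z_j produced by in-neighbor j so far
# rho_buf[j] : the last value of rho_in[j] that agent i consumed
# rho_out[j] : cumulative mass agent i has produced for out-neighbor j
# `a` is assumed to be a nested list/dict of weights, a[j][i] = a_ji.
def sum_push_z(z_i, a, i, rho_in, rho_buf, rho_out, eps, out_neighbors):
    # (10) sum step: absorb the not-yet-counted mass plus the perturbation
    z_half = z_i + sum(rho_in[j] - rho_buf[j] for j in rho_in) + eps
    # (11) push step: keep the a_ii fraction, append the rest to counters
    z_new = a[i][i] * z_half
    for j in out_neighbors:
        rho_out[j] += a[j][i] * z_half   # delivered to j whenever a link works
    # (12) mass-buffer update: remember what has been consumed
    for j in rho_in:
        rho_buf[j] = rho_in[j]
    return z_new, rho_out, rho_buf
```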
Convergence of ASY-DSCA
We study ASY-DSCA under the asynchronous model below.
Assumption 6 (Asynchronous model). Suppose:
(i) $\exists\, 0 < T < \infty$ such that $\bigcup_{t=k}^{k+T-1}\{i^t\} = \mathcal V$, for all $k \in \mathbb N_+$; (ii) $\exists\, 0 < D < \infty$ such that $0 \le d^k_j \le D$, for all $j \in N^{\rm in}_{i^k}$ and $k \in \mathbb N_+$.
Assumption 6(i) is an essentially cyclic rule stating that within $T$ iterations all agents will have updated at least once, which guarantees that all of them participate "sufficiently often". Assumption 6(ii) requires bounded delays: old information must eventually be purged from the system. This asynchronous model is general and imposes no coordination among agents or specific communication/activation protocol; an extensive discussion on specific implementations and communication protocols satisfying Assumption 6 can be found for ASY-SONATA in the companion paper [40] and applies also to ASY-DSCA; we thus omit further details here.
The convergence of ASY-DSCA is established under two settings, namely: i) convex F and error bound Assumption 3 (cf. Theorem 1); and ii) general nonconvex F (cf. Theorem 2).
Theorem 1 (Linear convergence). Consider (P) under Assumptions 1 and 3, and let $U^\star$ denote the optimal function value. Let $\{(x^k_i)_{i=1}^I\}_{k\in\mathbb N}$ be the sequence generated by Algorithm 2, under Assumptions 2 and 6, and with weight matrices $W$ and $A$ satisfying Assumption 5. Then, there exist a constant $\bar\gamma_{\rm cvx} > 0$ and a solution $x^\star$ of (P) such that, if $\gamma \le \bar\gamma_{\rm cvx}$, it holds
$$U(x^k_i) - U(x^\star) = O(\lambda^k), \qquad \|x^k_i - x^\star\| = O\big((\sqrt\lambda)^k\big),$$
for all $i \in \mathcal V$ and some $\lambda \in (0,1)$.
Theorem 1 establishes the first linear convergence result of a distributed (synchronous or asynchronous) algorithm over networks without requiring strong convexity, but only the weaker LT condition. Linear convergence is achieved both on function values and on the sequence of iterates.
We consider now the nonconvex setting. To measure the progress of ASY-DSCA towards stationarity, we introduce the merit function
$$M_F(\mathbf x^k) \triangleq \max\Big\{\big\|\bar x^k - \mathrm{prox}_G\big(\bar x^k - \nabla F(\bar x^k)\big)\big\|^2,\;\; \sum_{i=1}^I\big\|x^k_i - \bar x^k\big\|^2\Big\}, \tag{13}$$
where $\bar x^k \triangleq (1/I)\cdot\sum_{i=1}^I x^k_i$ and $\mathrm{prox}_G$ is the prox operator (cf. Sec. 2.2). $M_F$ is a valid merit function since it is continuous and $M_F(\mathbf x^k) = 0$ if and only if all the $x_i$'s are consensual and stationary. The following theorem shows that $M_F(\mathbf x^k)$ vanishes at a sublinear rate.
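To make (13) concrete, the sketch below evaluates $M_F$ given user-supplied callables `gradF` and `prox_G` (hypothetical names; `prox_G` is the proximal operator of $G$ with unit step, as in (13)).

```python
import numpy as np

def merit_MF(X, gradF, prox_G):
    """Merit function (13): stationarity of the average iterate vs. consensus.

    X: (I, n) array stacking the local copies x_i^k; gradF and prox_G are
    user-supplied callables acting on a single n-vector.
    """
    xbar = X.mean(axis=0)
    stat = np.linalg.norm(xbar - prox_G(xbar - gradF(xbar))) ** 2
    cons = np.sum(np.linalg.norm(X - xbar, axis=1) ** 2)
    return max(stat, cons)

# Example: for G = lam * ||.||_1, prox_G is soft-thresholding.
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
```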
Theorem 2 (Sublinear convergence). Consider (P) under Assumption 1 (thus possibly nonconvex). Let $\{(x^k_i)_{i=1}^I\}_{k\in\mathbb N_0}$ be the sequence generated by Algorithm 2, in the same setting of Theorem 1. Given $\delta > 0$, let $T_\delta$ be the first iteration $k \in \mathbb N$ such that $M_F(\mathbf x^k) \le \delta$. Then, there exists a $\bar\gamma_{\rm ncvx} > 0$ such that, if $\gamma \le \bar\gamma_{\rm ncvx}$, $T_\delta = O(1/\delta)$.
The expression of the step-size can be found in (55).
Numerical Results
We test ASY-DSCA on LASSO and sparse logistic regression problems (convex instances of (P)) and on an M-estimation problem (a constrained nonconvex formulation), over both directed and undirected graphs. The experiments were performed using MATLAB R2018b on a cluster computer with two 22-core Intel E5-2699Av4 processors (44 cores in total) and 512GB of RAM each. The setting of our simulations is the following.
(i) Network graph. We simulated both undirected and directed graphs, generated according to the following procedures. Undirected graph: an undirected graph is generated according to the Erdos-Renyi model with parameter $p = 0.3$ (which represents the probability of having an edge between any two nodes); doubly stochastic weight matrices are used, with weights generated according to the Metropolis-Hastings rule. Directed graph: we first generate a directed cycle graph to guarantee strong connectivity; then we randomly add a fixed number of out-neighbors for each node. The row-stochastic weight matrix $W$ and the column-stochastic weight matrix $A$ are generated using uniform weights.
(ii) Surrogate functions of ASY-DSCA and SONATA. We consider two surrogate functions:
$$\widetilde f^{\,1}_i(x; x^k_i) = \nabla f_i(x^k_i)^\top(x - x^k_i) + \frac{\bar\mu}{2}\|x - x^k_i\|^2$$
and
$$\widetilde f^{\,2}_i(x; x^k_i) = \nabla f_i(x^k_i)^\top(x - x^k_i) + \frac12(x - x^k_i)^\top H(x - x^k_i) + \frac{\bar\mu}{2}\|x - x^k_i\|^2,$$
where $H$ is a diagonal matrix having the same diagonal entries as $\nabla^2 f_i(x^k_i)$. We suffix SONATA and ASY-DSCA with "-L" if the former surrogate is employed and with "-DH" if the latter is adopted.
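For the linearized surrogate $\widetilde f^{\,1}_i$ with $G = \lambda\|\cdot\|_1$ and $K = \mathbb R^n$, subproblem (7) admits a closed-form solution via soft-thresholding. The sketch below (an illustration under these assumptions, not the general case) implements one local step.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the prox of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def local_step_linearized(x_k, y_k, I, mu, lam, gamma):
    """One (S.1) step with the linearized surrogate and G = lam * ||.||_1,
    K = R^n: minimizing (I*y)^T x + (mu/2)||x - x_k||^2 + lam*||x||_1
    gives x_tilde by soft-thresholding (for the diagonal-Hessian surrogate
    the threshold simply becomes componentwise)."""
    x_tilde = soft(x_k - (I * y_k) / mu, lam / mu)   # argmin of subproblem (7)
    return x_k + gamma * (x_tilde - x_k)             # v_i^{k+1}
```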
(iii) Asynchronous model. Each agent sends its updated information to its out-neighbors and starts a new computation round immediately after it finishes one. The length of each computation round is sampled from a uniform distribution over the interval $[p_{\min}, p_{\max}]$. The communication/traveling time of each packet follows an exponential distribution $\exp(1/D_{\rm tv})$. Each agent uses the most recent information among the arrived packets from its in-neighbors, which in general is subject to delays. In all our simulations, we set $p_{\min} = 5$, $p_{\max} = 15$, and $D_{\rm tv} = 30$ (ms is the default time unit).
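This timing model can be reproduced with a small event-driven simulation. The sketch below (ours, not the authors' script) generates the activation order and packet-arrival times that induce the global counter $k$ and the delays $d^k_j$.

```python
import heapq, random

# Event-driven sketch of the timing model in (iii): each agent computes for
# Uniform[p_min, p_max] ms, then multicasts; packets travel Exp(D_tv) ms.
def simulate_schedule(I, horizon, p_min=5.0, p_max=15.0, D_tv=30.0, seed=0):
    rng = random.Random(seed)
    events = [(rng.uniform(p_min, p_max), i) for i in range(I)]  # first wake-ups
    heapq.heapify(events)
    order = []
    while events and events[0][0] < horizon:
        t, i = heapq.heappop(events)
        arrival = t + rng.expovariate(1.0 / D_tv)   # when i's packet lands
        order.append((t, i, arrival))
        heapq.heappush(events, (t + rng.uniform(p_min, p_max), i))
    return order   # (activation time, agent, packet-arrival time) triples
```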
(iv) Comparison with state-of-the-art schemes. We compare the convergence rate of ASY-DSCA, AsyPrimalDual [47], and synchronous SONATA in terms of time. The parameters are manually tuned to yield the best empirical performance for each; the settings used are reported in the captions of the associated figures. Note that AsyPrimalDual is the only asynchronous decentralized algorithm able to handle constraints and nonsmooth additive functions in the objective and constraints, but only over undirected graphs and under restricted assumptions of asynchrony; also, AsyPrimalDual is provably convergent only when applied to convex problems.
LASSO
The decentralized LASSO problem reads
$$\min_{x\in\mathbb R^n}\; U(x) \triangleq \sum_{i\in[I]}\|M_i x - b_i\|^2 + \lambda\|x\|_1. \tag{14}$$
Data $(M_i, b_i)_{i\in[I]}$ are generated as follows. We choose $x_0 \in \mathbb R^n$ as a ground-truth sparse vector, with $\text{density}\cdot n$ nonzero entries drawn i.i.d. from $\mathcal N(0,1)$. Each row of $M_i \in \mathbb R^{r\times n}$ is drawn i.i.d. from $\mathcal N(0, \Sigma)$, with $\Sigma$ a diagonal matrix such that $\Sigma_{i,i} = i^{-\omega}$. We use $\omega$ to control the condition number of $\Sigma$. Then we generate $b_i = M_i x_0 + \delta_i$, with each entry of $\delta_i$ drawn i.i.d. from $\mathcal N(0, 0.01)$. We set $r = 10$, $n = 300$, $I = 20$, $\lambda = 2$, $\omega = 1.1$, and $\text{density} = 0.3$. Since the problem satisfies the LT condition, we use $\frac1I\sum_{i\in[I]}\big(U(x^k_i) - U^\star\big)$ as the optimality measure. The results are reported in Fig. 2.
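The data-generation recipe above translates directly into code; the following sketch reproduces it (a faithful re-implementation of the stated recipe, not the authors' script).

```python
import numpy as np

def lasso_data(I=20, r=10, n=300, density=0.3, omega=1.1, seed=0):
    """Synthetic data for (14), mirroring the setup described in the text."""
    rng = np.random.default_rng(seed)
    x0 = np.zeros(n)
    nz = rng.choice(n, int(density * n), replace=False)
    x0[nz] = rng.standard_normal(nz.size)                  # sparse ground truth
    sigma = np.arange(1, n + 1, dtype=float) ** (-omega)   # Sigma_ii = i^{-omega}
    data = []
    for _ in range(I):
        M = rng.standard_normal((r, n)) * np.sqrt(sigma)   # rows ~ N(0, Sigma)
        b = M @ x0 + 0.1 * rng.standard_normal(r)          # noise N(0, 0.01)
        data.append((M, b))
    return data, x0
```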
Sparse logistic regression
We consider the decentralized sparse logistic regression problem in the following form:
$$\min_{x\in\mathbb R^n}\;\sum_{i\in[I]}\sum_{s\in\mathcal D_i}\log\big(1 + \exp(-y_s\,u_s^\top x)\big) + \lambda\|x\|_1.$$
Data $(u_s, y_s)$, $s \in \cup_{i\in[I]}\mathcal D_i$, are generated as follows. We first choose $x_0 \in \mathbb R^n$ as a ground-truth sparse vector with $\text{density}\cdot n$ nonzero entries drawn i.i.d. from $\mathcal N(0,1)$. We generate each sample feature $u_s$ independently, with each entry drawn i.i.d. from $\mathcal N(0,1)$; then we set $y_s = 1$ with probability $1/(1 + \exp(-u_s^\top x_0))$, and $y_s = -1$ otherwise. We set $|\mathcal D_i| = 3$, $\forall i \in [I]$, $n = 100$, $I = 20$, $\lambda = 0.01$, and $\text{density} = 0.3$. We use the same optimality measure as that for the LASSO problem. The results and the tuning of the parameters are reported in Fig. 3.
M-estimator
As a nonconvex (constrained, nonsmooth) instance of problem (P), we consider the following M-estimation task [51, (17)]:
$$\min_{\|x\|_2 \le r}\;\frac{1}{|\mathcal D|}\sum_{i\in[I]}\sum_{s\in\mathcal D_i}\rho_\alpha\big(u_s^\top x - y_s\big) + \lambda\|x\|_1, \tag{15}$$
where $\rho_\alpha(t) = (1 - e^{-\alpha t^2/2})/\alpha$ is the nonconvex Welsch's exponential squared loss and $\mathcal D \triangleq \cup_{i\in[I]}\mathcal D_i$. We generate $x_0 \in \mathbb R^n$ as a unit-norm sparse vector with $\text{density}\cdot n$ nonzero entries drawn i.i.d. from $\mathcal N(0,1)$. Each entry of $u_s \in \mathbb R^n$ is drawn i.i.d. from $\mathcal N(0,1)$; we generate $y_s = u_s^\top x_0 + 0.1\cdot\epsilon_s$, with $\epsilon_s$ i.i.d. $\sim \mathcal N(0,1)$. We set $|\mathcal D_i| = 10$ for all $i \in [I]$, $n = 100$, $I = 30$, $\alpha = 0.1$, $r = 2$, $\lambda = 0.01$, and $\text{density} = 0.1$. Since (15) is nonconvex, progress towards stationarity and consensus is measured using the merit function $M_F(\cdot)$ in (13). The results and the tuning of the parameters are reported in Fig. 4.
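For reference, the Welsch loss and its derivative are given below; the bounded derivative is what makes (15) robust to outliers (and nonconvex).

```python
import numpy as np

def welsch(t, alpha):
    """Welsch's exponential squared loss rho_alpha(t) = (1 - exp(-alpha t^2/2))/alpha."""
    return (1.0 - np.exp(-alpha * t**2 / 2.0)) / alpha

def welsch_grad(t, alpha):
    """Derivative rho_alpha'(t) = t * exp(-alpha t^2 / 2): a bounded,
    redescending influence function."""
    return t * np.exp(-alpha * t**2 / 2.0)
```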
Discussion
All the experiments clearly show that ASY-DSCA achieves a linear rate on LASSO and logistic regression, with non-strongly-convex objectives, both over undirected and directed graphs; this supports our theoretical findings (Theorem 1). The flexibility in choosing the surrogate functions gives us the chance to exploit the curvature of the objective function better than plain linearization-based choices. For example, in the LASSO experiment, ASY-DSCA-DH outperforms all the other schemes due to its advantage of better exploiting second-order information. Also, ASY-DSCA compares favorably with AsyPrimalDual. ASY-DSCA exhibits good performance also in the nonconvex setting (recall that no convergence proof is available for AsyPrimalDual applied to nonconvex problems). In our experiments, asynchronous algorithms turned out to be faster than synchronous ones. The reason is that, at each iteration, agents in synchronous algorithms must wait for the slowest agent to receive the information and finish its computation (no delays are allowed) before proceeding to the next iteration. This is not the case for asynchronous algorithms, wherein agents communicate and update continuously with no coordination.
6 Proof of Theorem 1

6.1 Roadmap of the proof
We begin by introducing the roadmap of the proof. Define $x^k \triangleq [x^k_1, \cdots, x^k_I]^\top$, $v^k \triangleq [v^k_1, \cdots, v^k_I]^\top \in \mathbb R^{I\times n}$; and let $S \triangleq (D+2)I$. Construct the two $S\times n$ matrices:
$$\Delta H^k \triangleq e_{i^k}\big(\Delta x^k\big)^\top, \;\text{ with }\; \Delta x^k \triangleq \widetilde x^k_{i^k} - x^k_{i^k}, \qquad H^k \triangleq \big[(x^k)^\top, (v^k)^\top, (v^{k-1})^\top, \cdots, (v^{k-D})^\top\big]^\top,$$
with $v^t = 0$ for $t \le 0$. Our proof builds on the following quantities that monitor the progress of the algorithm.
• Optimality gaps:
$$\Delta^k \triangleq \big\|\widetilde x^k_{i^k} - x^k_{i^k}\big\|, \qquad E_o^k \triangleq \max_{i\in[S]} U(H^k_i) - U^*; \tag{16a}$$
• Consensus errors ($x^k_\psi$ is some weighted average of the row vectors of $H^k$ and will be defined in Sec. 6.2):
$$E_x^k \triangleq \big\|H^k - \mathbf 1\cdot x^k_\psi\big\|, \qquad E_y^k \triangleq \big\|y^k_{i^k} - \bar g^k\big\|; \tag{16b}$$
• Tracking error:
$$E_t^k \triangleq \big\|I\,y^k_{i^k} - \nabla F(x^k_{i^k})\big\|^2. \tag{16c}$$
Specifically, $\Delta^k$ and $E_o^k$ measure the distance of the $x^k_i$'s from optimality in terms of step-length and objective value. $E_x^k$ and $E_y^k$ represent the consensus errors of the $x_i$'s and $y_i$'s, respectively, while $E_t^k$ is the tracking error of $y^k_i$. Our goal is to show that the above quantities vanish at a linear rate, implying convergence (at the same rate) of the iterates generated by the algorithm to a solution of Problem (P). Since each of them affects the dynamics of the others, our proof begins by establishing the following set of inequalities linking these quantities (the explicit expressions of the constants below will be given in the forthcoming sections):
$$E_y^{k+1} \le 3C_1 l\sum_{l=0}^k\rho^{k-l}\big(E_x^l + \gamma\Delta^l\big) + C_1\rho^k\|g^0\|, \tag{17a}$$
$$E_x^{k+1} \le C_2\rho^k E_x^0 + C_2\sum_{l=0}^k\rho^{k-l}\gamma\Delta^l, \tag{17b}$$
$$E_t^k \le 8Il^2(E_x^k)^2 + 2I^2(E_y^k)^2, \tag{17c}$$
$$E_o^{k+1} \le C_4(\gamma)\,\zeta(\gamma)^k E_o^0 + \frac{C_3(\gamma)C_4(\gamma)}{\zeta(\gamma)}\sum_{\ell=0}^k\zeta(\gamma)^{k-\ell}E_t^\ell, \tag{17d}$$
$$(\Delta^k)^2 \le \frac{1}{\gamma\big(\bar\mu - \frac\epsilon2 - \frac{\gamma L}2\big)}E_o^k + \frac{1}{2\epsilon\big(\bar\mu - \frac\epsilon2 - \frac{\gamma L}2\big)}E_t^k. \tag{17e}$$
We then show that $\Delta^k$, $E_o^k$, $E_x^k$, $E_y^k$, and $E_t^k$ vanish at a linear rate by chaining the above inequalities by means of the generalized small gain theorem [40].
The main steps of the proof are summarized next.
• Step 1: Proof of (17a)-(17c) via P-ASY-SUM-PUSH. We rewrite (S.2) and (S.3) in ASY-DSCA (Algorithm 2) as instances of the perturbed asynchronous consensus scheme and the perturbed asynchronous sum-push scheme (the P-ASY-SUM-PUSH) introduced in the companion paper [40]. By doing so, we can bound the consensus errors $E_x^k$ and $E_y^k$ in terms of $\Delta^k$ and then prove (17a)-(17b); see Lemma 4 and Lemma 5. Eq. (17c) follows readily from (17a)-(17b); see Lemma 6.
• Step 2: Proof of (17d)-(17e) under the LT condition. Proving (17d), contraction of the optimality measure $E_o^k$ up to the tracking error, poses several challenges. To prove contraction of some form of optimization errors, existing techniques developed in the literature of distributed algorithms [1,25,31,38,39] (including our companion paper [40]) leverage strong convexity of $F$, a property that is replaced here by the weaker local growing condition (2) in the LT error bound. Hence, they are not applicable to our setting. On the other hand, existing proofs showing linear rate of centralized first-order methods under the LT condition [22] do not readily extend to our distributed, asynchronous setting, for the reasons elaborated next. To invoke the local growing condition (2), one needs first to show that the sequences generated by the algorithm enter (and stay in) the region where (1) holds, namely: a) the function value remains bounded; and b) the proximal-operator residual is sufficiently small. A standard path to prove a) and b) in the centralized setting is showing that the objective function sufficiently descends along the trajectory of the algorithm. Asynchrony apart, in the distributed setting, function values at the agents' iterates are not guaranteed to decrease monotonically, due to consensus and gradient-tracking errors. To cope with these issues, in this Step 2 we put forth a new analysis. Specifically: i) Sec. 6.3.1: we build a novel Lyapunov function [cf. (28)] that linearly combines objective values of current and past (up to $D$) iterates (all the elements of $H^k$); notice that the choice of the weights (cf. $\psi^k$ in Lemma 3) is very peculiar and represents a major departure from existing approaches (including our companion paper [40]); the $\psi^k$ endogenously vary according to the asynchrony trajectory of the algorithm. The Lyapunov function is proved to "sufficiently" descend over the asynchronous iterates of ASY-DSCA (cf. Proposition 8); ii) Sec. 6.3.2: building on such descent properties, we manage to prove that $x^k_{i^k}$ will eventually satisfy the aforementioned conditions (1) (cf. Lemma 10 & Corollary 9), so that the LT growing property (2) can be invoked at $x^k_{i^k}$ (cf. Corollary 11); iii) Sec. 6.3.3: finally, leveraging this local growth, we uncover relations between $E_o^k$ and $E_t^k$ and prove (17d) (cf. Proposition 12). Eq. (17e) is proved in Sec. 6.3.4 as a by-product of the derivations above.
• Step 3: R-linear convergence via the generalized small gain theorem. We complete the proof of linear convergence by applying [40, Th. 23] to the inequality system (17), and conclude that all the local variables $\{x_i\}_{i\in[I]}$ converge to the set of optimal solutions $K^*$ R-linearly.
6.2 Step 1: Proof of (17a)-(17c)
We interpret the consensus step (S.2) in Algorithm 2 as an instance of the perturbed asynchronous consensus scheme [40]: (8) can be rewritten as
$$H^{k+1} = \widehat W^k\big(H^k + \gamma\,\Delta H^k\big), \tag{18}$$
where $\widehat W^k$ is a time-varying augmented matrix induced by the update order of the agents and the delay profile. The specific expression of $\widehat W^k$ can be found in [40] and is omitted here, as it is not relevant to the convergence proof. We only need to recall the following properties of $\widehat W^k$.

Lemma 3 ([40, Lemma 17]). Let $\{\widehat W^k\}_{k\in\mathbb N_+}$ be the sequence of matrices in the dynamical system (18), generated under Assumption 6, and with $W$ satisfying Assumption 5(i)-(ii). Define $K_1 \triangleq (2I-1)\cdot T + I\cdot D$. Then, for any $k \ge 0$:
(i) $\widehat W^k$ is row-stochastic;
(ii) all the entries in the first $I$ columns of $\widehat W^{k+K_1-1:k}$ are uniformly bounded below by $\eta$;
(iii) there exists a sequence of stochastic vectors $\{\psi^k\}_{k\ge0}$ such that: i) for any $\ell \ge t \ge 0$, $\big\|\widehat W^{\ell:t} - \mathbf 1\,(\psi^t)^\top\big\|_2 \le C_2\,\rho^{\ell-t}$; ii) $\psi^k_i \ge \eta$ for all $i \in \mathcal V$.
Note that Lemma 3 implies
$$\mathbf 1\,(\psi^t)^\top = \lim_{n\to\infty}\widehat W^{n:t} = \Big(\lim_{n\to\infty}\widehat W^{n:t+1}\Big)\widehat W^t = \mathbf 1\,(\psi^{t+1})^\top\widehat W^t, \tag{19}$$
and thus $(\psi^{t+1})^\top\widehat W^t = (\psi^t)^\top$, for all $t \ge 0$. Then we define
$$x^k_\psi = (\psi^k)^\top H^k; \tag{20}$$
$x^k_\psi$ evolves according to the following dynamics:
$$x^{k+1}_\psi = (\psi^0)^\top H^0 + \sum_{l=0}^k\gamma\,(\psi^l)^\top\Delta H^l. \tag{21}$$
This can be shown by applying (18) recursively, so that
$$H^{k+1} = \widehat W^{k:0}H^0 + \sum_{l=0}^k\widehat W^{k:l}\,\gamma\,\Delta H^l, \tag{22}$$
and multiplying (22) from the left by $(\psi^{k+1})^\top$ and using (19). Taking the difference between (21) and (22) and applying Lemma 3, the consensus error $E_x^k$ can be bounded as follows.

Lemma 4. Under the conditions of Lemma 3, $\{E_x^k\}$ satisfies
$$E_x^{k+1} \le C_2\,\rho^k E_x^0 + C_2\sum_{l=0}^k\rho^{k-l}\gamma\,\Delta^l, \quad \forall k \ge 0. \tag{23}$$
To establish similar bounds for $E_y^k$, we build on the fact that the gradient tracking update (9) is an instance of the P-ASY-SUM-PUSH in [40], as shown next. Define
$$g^k \triangleq \big[\nabla f_1(x^k_1), \nabla f_2(x^k_2), \cdots, \nabla f_I(x^k_I)\big]^\top, \qquad \bar g^k \triangleq (1/I)\cdot(g^k)^\top\mathbf 1, \qquad E_y^k \triangleq \big\|y^k_{i^k} - \bar g^k\big\|.$$
We can prove the following bound for $E_y^k$.
Lemma 5. Let $\{x^k, y^k_{i^k}\}_{k=0}^\infty$ be the sequence generated by Algorithm 2 under Assumptions 2, 5, and 6. Then, there exists a constant $C_1 = \frac{4\sqrt{2S}\,(1+\bar m^{-K_1})\,I}{\eta\,\rho\,(1-\bar m^{K_1})}$ such that
$$E_y^{k+1} \le 3C_1 l\sum_{l=0}^k\rho^{k-l}\big(E_x^l + \gamma\,\Delta^l\big) + C_1\,\rho^k\,\|g^0\|. \tag{24}$$
Proof. See Appendix A.
Finally, using Lemma 4 and Lemma 5, we can bound $\sum_{t=0}^k(E_x^t)^2$ and $\sum_{t=0}^k(E_y^t)^2$ in terms of $\sum_{t=0}^k\gamma^2(\Delta^t)^2$, and $E_t^k$ in terms of $E_x^k$ and $E_y^k$, as given below.

Lemma 6. Under the setting of Lemma 4 and Lemma 5, we have: for any $k \ge 1$,
$$\sum_{t=0}^k(E_x^t)^2 \le c_x + \varrho_x\sum_{t=0}^k\gamma^2(\Delta^t)^2, \qquad \sum_{t=0}^k(E_y^t)^2 \le c_y + \varrho_y\sum_{t=0}^k\gamma^2(\Delta^t)^2, \qquad E_t^k \le 2I^2(E_y^k)^2 + 8Il^2(E_x^k)^2, \tag{25}$$
with $\varrho_x \triangleq \frac{2C_2^2}{(1-\rho)^2}$ and $\varrho_y \triangleq \frac{36(C_1L)^2\big(2C_2^2 + (1-\rho)^2\big)}{(1-\rho)^4}$. (The expressions of the constants $c_x$ and $c_y$ are omitted as they are not relevant.)
Proof. The proof of the first two results follows similar steps as that of [40, Lemma 26] and is thus omitted. We prove only the last inequality, as follows:
$$E_t^k = \big\|I y^k_{i^k} \pm I\bar g^k - \nabla F(x^k_{i^k})\big\|^2 \le 2I^2(E_y^k)^2 + 2\Big\|\sum_{j=1}^I\nabla f_j(x^k_j) \pm \nabla F(x^k_\psi) - \nabla F(x^k_{i^k})\Big\|^2 \le 2I^2(E_y^k)^2 + 8Il^2(E_x^k)^2.$$
6.3 Step 2: Proof of (17d)-(17e) under the LT condition
6.3.1 A new Lyapunov function and its descent
We begin studying descent of the objective function U along the trajectory of the algorithm; we have the following result.
Lemma 7. Let $\{(x^k, y^k)\}$ be the sequence generated by Algorithm 2 under Assumptions 1 and 4. It holds that
$$U(v^{k+1}_{i^k}) \le U(x^k_{i^k}) - \gamma\Big(\bar\mu - \frac{\gamma L}{2}\Big)\|\Delta x^k\|^2 + \gamma\,\big(\nabla F(x^k_{i^k}) - I y^k_{i^k}\big)^\top\Delta x^k. \tag{26}$$
Proof. Applying the first-order optimality condition to (7) and invoking the strong convexity of $\widetilde f_{i^k}$ (Assumption 4), we have
$$-(\Delta x^k)^\top I y^k_{i^k} + G(x^k_{i^k}) - G(\widetilde x^k_{i^k}) \ge -(\Delta x^k)^\top\big(\nabla f_{i^k}(x^k_{i^k}) - \nabla\widetilde f_{i^k}(\widetilde x^k_{i^k}; x^k_{i^k})\big) = (\Delta x^k)^\top\big(\nabla\widetilde f_{i^k}(\widetilde x^k_{i^k}; x^k_{i^k}) - \nabla\widetilde f_{i^k}(x^k_{i^k}; x^k_{i^k})\big) \ge \bar\mu\,\|\Delta x^k\|^2. \tag{27}$$
As $F$ is $L$-smooth, applying the descent lemma gives
$$\begin{aligned}
F(v^{k+1}_{i^k}) &\le F(x^k_{i^k}) + \gamma\,\nabla F(x^k_{i^k})^\top\Delta x^k + \frac L2\gamma^2\|\Delta x^k\|^2\\
&= F(x^k_{i^k}) + \gamma\,(I y^k_{i^k})^\top\Delta x^k + \gamma\,\big(\nabla F(x^k_{i^k}) - I y^k_{i^k}\big)^\top\Delta x^k + \frac L2\gamma^2\|\Delta x^k\|^2\\
&\overset{(27)}{\le} F(x^k_{i^k}) + \gamma\Big(G(x^k_{i^k}) - G(\widetilde x^k_{i^k}) - \bar\mu\|\Delta x^k\|^2\Big) + \frac L2\gamma^2\|\Delta x^k\|^2 + \gamma\,\big(\nabla F(x^k_{i^k}) - I y^k_{i^k}\big)^\top\Delta x^k.
\end{aligned}$$
By the convexity of $G$, we have
$$\gamma\Big(G(x^k_{i^k}) - G(\widetilde x^k_{i^k})\Big) \le G(x^k_{i^k}) - G(v^{k+1}_{i^k}).$$
Combining the above two results proves (26).
We now build on (26) and establish descent on a suitably defined Lyapunov function. Define the mapping $\mathbf U : \mathbb R^{S\times n}\to\mathbb R^S$ as $\mathbf U(H) = [U(h_1), \cdots, U(h_S)]^\top$ for $H = [h_1, \cdots, h_S]^\top \in \mathbb R^{S\times n}$. That is, $\mathbf U(H)$ is a vector constructed by stacking the values of the objective function $U$ evaluated at each local variable $h_i$. Recalling the definition of the weights $\psi^k$ (cf. Lemma 3), we introduce the Lyapunov function
$$\mathcal L^k \triangleq (\psi^k)^\top\mathbf U(H^k), \tag{28}$$
and study next its descent properties.
Proposition 8. Let $\{(x^k, v^k, y^k)\}$ be the sequence generated by Algorithm 2 under Assumptions 1, 2, 4, and 5. Then,
$$\mathcal L^{k+1} \le \mathcal L^0 - \sum_{t=0}^k(\Delta^t)^2\,\gamma\Big(\eta\bar\mu - \gamma\Big(\frac L2 + l\,I^{\frac32}\sqrt{\varrho_x} + I\sqrt{\varrho_y}\Big)\Big) + C, \tag{29}$$
for all $k \ge 0$, where $C$ is some constant independent of $\gamma$ and $k$; and $\varrho_x$ and $\varrho_y$ are defined in Lemma 6.
Proof. By the row-stochasticity of $\widehat W^k$ and the convexity of $U$:
$$\mathbf U(H^{k+1}) = \mathbf U\big(\widehat W^k(H^k + \gamma\Delta H^k)\big) \preceq \widehat W^k\,\mathbf U\big(H^k + \gamma\Delta H^k\big) \preceq \widehat W^k\Big(\mathbf U(H^k) - \Big[\gamma\big(\bar\mu - \tfrac{\gamma L}{2}\big)\|\Delta x^k\|^2 - \gamma\big(\nabla F(x^k_{i^k}) - I y^k_{i^k}\big)^\top\Delta x^k\Big]e_{i^k}\Big),$$
where in the last inequality we applied Lemma 7. Using now Lemma 3, we have
$$\begin{aligned}
\mathcal L^{k+1} &\le \mathcal L^k - \psi^k_{i^k}\Big[\gamma\big(\bar\mu - \tfrac{\gamma L}{2}\big)\|\Delta x^k\|^2 - \gamma\big(\nabla F(x^k_{i^k}) - I y^k_{i^k}\big)^\top\Delta x^k\Big]\\
&\le \mathcal L^k - \gamma\big(\eta\bar\mu - \tfrac{\gamma L}{2}\big)\|\Delta x^k\|^2 + \psi^k_{i^k}\,\gamma\,\big(\nabla F(x^k_{i^k}) \pm I\bar g^k - I y^k_{i^k}\big)^\top\Delta x^k\\
&\le \mathcal L^k - \gamma\big(\eta\bar\mu - \tfrac{\gamma L}{2}\big)\|\Delta x^k\|^2 + \gamma\,l\sum_{j=1}^I\big\|x^k_\psi - x^k_j\big\|\,\|\Delta x^k\| + \gamma\,I E_y^k\,\|\Delta x^k\|\\
&\le \mathcal L^k - \gamma\big(\eta\bar\mu - \tfrac{\gamma L}{2}\big)\|\Delta x^k\|^2 + \gamma\,l\,I^{\frac32}E_x^k\,\|\Delta x^k\| + \gamma\,I E_y^k\,\|\Delta x^k\|\\
&\overset{(*)}{\le} \mathcal L^k - \gamma\Big(\eta\bar\mu - \gamma\Big(\tfrac L2 + \tfrac{1}{2\epsilon_1} + \tfrac{1}{2\epsilon_2}\Big)\Big)\|\Delta x^k\|^2 + \tfrac{\epsilon_1}{2}l^2I^3(E_x^k)^2 + \tfrac{\epsilon_2}{2}I^2(E_y^k)^2\\
&\le \mathcal L^0 - \gamma\Big(\eta\bar\mu - \gamma\Big(\tfrac L2 + \tfrac{1}{2\epsilon_1} + \tfrac{1}{2\epsilon_2}\Big)\Big)\sum_{t=0}^k\|\Delta x^t\|^2 + \tfrac{\epsilon_1}{2}l^2I^3\sum_{t=0}^k(E_x^t)^2 + \tfrac{\epsilon_2}{2}I^2\sum_{t=0}^k(E_y^t)^2,
\end{aligned}$$
where $(*)$ follows from Young's inequality with $\epsilon_1, \epsilon_2 > 0$. Invoking Lemma 6 and setting $\gamma^l \equiv \gamma$ gives (29), where the free parameters $\epsilon_1, \epsilon_2$ are chosen as $\epsilon_1 = 1/(l\,I^{\frac32}\sqrt{\varrho_x})$ and $\epsilon_2 = 1/(I\sqrt{\varrho_y})$, respectively.
6.3.2 Leveraging the LT condition
We now build on Proposition 8 and show that the two conditions in (1) hold at $x^k_i$, for sufficiently large $k$; this will permit us to invoke the LT growing property (2).
The first condition, $U(x^k_i)$ bounded for large $k$, is a direct consequence of Proposition 8 and the facts that $U$ is bounded from below (Assumption 1) and $\psi^k_i \ge \eta$, for all $i \in [I]$ and $k \ge 0$. Formally, we have the following.

Corollary 9. Under the setting of Proposition 8 and with step-size $0 < \gamma < \eta\bar\mu\big(\frac L2 + l\,I^{\frac32}\sqrt{\varrho_x} + I\sqrt{\varrho_y}\big)^{-1}$, the following hold:
(i) $U(x^k_i)$ is uniformly upper bounded, for all $i \in \mathcal V$ and $k \ge 0$;
(ii) $\sum_{t=0}^\infty(\Delta^t)^2 < \infty$, $\sum_{t=0}^\infty(E_x^t)^2 < \infty$, and $\sum_{t=0}^\infty(E_y^t)^2 < \infty$.
We now prove that the second condition in (1) holds for large $k$: the residual of the proximal operator at $x^k_{i^k}$, that is, $\|x^k_{i^k} - \mathrm{prox}_G(x^k_{i^k} - \nabla F(x^k_{i^k}))\|$, is sufficiently small. Since $\Delta^k$ and the gradient-tracking error $E_t^k$ are vanishing [as a consequence of Corollary 9(ii) and Lemma 6], it is sufficient to bound the aforementioned residual by $\Delta^k$ and $E_t^k$. This is done in the lemma below.
Lemma 10. The proximal-operator residual at $x^k_{i^k}$ satisfies
$$\big\|x^k_{i^k} - \mathrm{prox}_G\big(x^k_{i^k} - \nabla F(x^k_{i^k})\big)\big\|^2 \le 4\big(1 + (l + \bar l)^2\big)(\Delta^k)^2 + 5E_t^k.$$
Proof. For simplicity, we denote $\breve x^k = \mathrm{prox}_G\big(x^k_{i^k} - \nabla F(x^k_{i^k})\big)$.
According to the variational characterization of the proximal operator, we have, for all $w \in K$,
$$\Big(\breve x^k - \big(x^k_{i^k} - \nabla F(x^k_{i^k})\big)\Big)^\top\big(\breve x^k - w\big) + G(\breve x^k) - G(w) \le 0.$$
The first-order optimality condition of $\widetilde x^k_{i^k}$ implies
$$\Big(\nabla\widetilde f_{i^k}(\widetilde x^k_{i^k}; x^k_{i^k}) + I y^k_{i^k} - \nabla f_{i^k}(x^k_{i^k})\Big)^\top\big(\widetilde x^k_{i^k} - z\big) + G(\widetilde x^k_{i^k}) - G(z) \le 0, \quad \forall z \in K. \tag{30}$$
Setting $z = \breve x^k$ and $w = \widetilde x^k_{i^k}$ and adding the above two inequalities yields
$$\begin{aligned}
0 &\ge \Big(\nabla\widetilde f_{i^k}(\widetilde x^k_{i^k}; x^k_{i^k}) + I y^k_{i^k} - \nabla f_{i^k}(x^k_{i^k}) - \breve x^k + x^k_{i^k} - \nabla F(x^k_{i^k})\Big)^\top\big(\widetilde x^k_{i^k} - \breve x^k\big)\\
&= \Big(I y^k_{i^k} - \breve x^k + x^k_{i^k} - \nabla F(x^k_{i^k})\Big)^\top\big(\widetilde x^k_{i^k} - x^k_{i^k}\big) + \Big(\nabla\widetilde f_{i^k}(\widetilde x^k_{i^k}; x^k_{i^k}) - \nabla f_{i^k}(x^k_{i^k})\Big)^\top\big(\widetilde x^k_{i^k} - x^k_{i^k}\big)\\
&\qquad + \|\breve x^k - x^k_{i^k}\|^2 + \Big(\nabla\widetilde f_{i^k}(\widetilde x^k_{i^k}; x^k_{i^k}) + I y^k_{i^k} - \nabla f_{i^k}(x^k_{i^k}) - \nabla F(x^k_{i^k})\Big)^\top\big(x^k_{i^k} - \breve x^k\big)\\
&\ge -\tfrac12\|I y^k_{i^k} - \nabla F(x^k_{i^k})\|^2 - \tfrac12\|\Delta x^k\|^2 - \tfrac14\|\breve x^k - x^k_{i^k}\|^2 - \|\Delta x^k\|^2 + \bar\mu\|\Delta x^k\|^2 + \|\breve x^k - x^k_{i^k}\|^2\\
&\qquad - \tfrac14\|\breve x^k - x^k_{i^k}\|^2 - 2\Big((l + \bar l)^2\|\Delta x^k\|^2 + \|I y^k_{i^k} - \nabla F(x^k_{i^k})\|^2\Big).
\end{aligned}$$
Rearranging terms proves the desired result.
Corollary 9, in conjunction with Lemma 10 and Lemma 6, shows that both conditions in (1) hold at $\{x^k\}$, for large $k$. We can then invoke the growing condition (2).
Corollary 11. Let $\{x^k\}$ be the sequence generated by Algorithm 2 under the setting of Corollary 9. Then, there exist a constant $\kappa > 0$ and a sufficiently large $\bar k$ such that, for $k \ge \bar k$,
$$\mathrm{dist}(x^k_{i^k}, K^*) \le \kappa\,\big\|x^k_{i^k} - \mathrm{prox}_G\big(x^k_{i^k} - \nabla F(x^k_{i^k})\big)\big\|. \tag{31}$$
Proof. It is sufficient to show that (1) holds at $x^k_{i^k}$. By Corollary 9(i), $U(x^k_{i^k}) \le B$, for all $k \ge 0$ and some $B < +\infty$. Lemma 10 in conjunction with Corollary 9(ii) and Lemma 6 yields $\lim_{k\to\infty}\big\|x^k_{i^k} - \mathrm{prox}_G\big(x^k_{i^k} - \nabla F(x^k_{i^k})\big)\big\| = 0$.

6.3.3 Proof of (17d)

Define
$$C_3(\gamma) \triangleq \gamma\,\frac{c_6\big(\bar\mu - \frac\epsilon2 - \frac{\gamma L}2\big) + \frac{c_7}{2\epsilon}}{c_7 + \bar\mu - \frac\epsilon2 - \frac{\gamma L}2}, \tag{32}$$
$$C_4(\gamma) \triangleq \Big(1 - \big(1 - \sigma(\gamma)\big)\eta\Big)^{-1}, \tag{33}$$
$$\zeta(\gamma) \triangleq \Big(1 - \big(1 - \sigma(\gamma)\big)\eta\Big)^{\frac{1}{K_1}}, \tag{34}$$
$$\sigma(\gamma) \triangleq \frac{c_7 + \big(\bar\mu - \frac\epsilon2 - \frac{\gamma L}2\big)(1 - \gamma)}{c_7 + \bar\mu - \frac\epsilon2 - \frac{\gamma L}2}, \tag{35}$$
where $K_1 = (2I-1)\cdot T + I\cdot D$, $c_6$ and $c_7$ are polynomials in $(1, l, \bar l, L, \kappa)$ whose expressions are given in (60) and (42), and $\epsilon \in (0, 2\bar\mu)$ is a free parameter (to be chosen).
In this section, we prove (17d), which is formally stated in the proposition below.
Proposition 12. Let $\{(x^k, y^k)\}$ be the sequence generated by Algorithm 2 under Assumptions 1, 2, 3, 4, and 5. Then, for $k \ge \bar k$, it holds that
$$E_o^{k+1} \le C_4(\gamma)\,\zeta(\gamma)^k E_o^0 + \frac{C_3(\gamma)C_4(\gamma)}{\zeta(\gamma)}\sum_{\ell=0}^k\zeta(\gamma)^{k-\ell}E_t^\ell. \tag{36}$$
Since $\sigma(\gamma) < 1$ for $0 < \gamma < \sup_{\epsilon\in(0,2\bar\mu)}\frac{2\bar\mu - \epsilon}{L} = \frac{2\bar\mu}{L}$ and $\eta \in (0,1]$, Proposition 12 shows that, for sufficiently small $\gamma > 0$, the optimality gap $E_o^k$ converges to zero R-linearly if $E_t^k$ does so. The proof of Proposition 12 follows from Proposition 13 and Lemma 15 below.

Proposition 13. Let $\{(x^k, y^k)\}$ be the sequence generated by Algorithm 2 in the setting of Proposition 12. Let $p^k \triangleq \mathbf U(H^k) - U(x^*)\mathbf 1$; let $\Sigma^k$ be the diagonal matrix with all diagonal entries equal to $1$ except $\Sigma^k_{i^k i^k} = \sigma(\gamma)$; and let $(\widehat W\Sigma)^{k:\ell} \triangleq \widehat W^k\Sigma^k\cdots\widehat W^\ell\Sigma^\ell$. Then, for $k \ge \bar k$,
$$p^{k+1} \preceq (\widehat W\Sigma)^{k:0}\,p^0 + C_3(\gamma)\sum_{\ell=1}^{k}(\widehat W\Sigma)^{k:\ell}\,\widehat W^{\ell-1}e_{i^{\ell-1}}E_t^{\ell-1} + C_3(\gamma)\,\widehat W^k e_{i^k}E_t^k, \tag{37}$$
where C 3 (γ) is defined in (32).
Proof. By convexity of $U$ and (18), we have
$$p^{k+1} = \mathbf U(H^{k+1}) - U(x^*)\mathbf 1 \preceq \widehat W^k\Big(\mathbf U\big(H^k + \gamma\Delta H^k\big) - U(x^*)\mathbf 1\Big). \tag{38}$$
Since $\mathbf U\big(H^k + \gamma\Delta H^k\big)$ differs from $\mathbf U(H^k)$ only by its $i^k$-th row, we study the descent occurring at this row, which is $(v^{k+1}_{i^k})^\top = \big(x^k_{i^k} + \gamma(\widetilde x^k_{i^k} - x^k_{i^k})\big)^\top$.
Recall that, by applying the descent lemma on $F$ and using the convexity of $G$, we proved
$$U(v^{k+1}_{i^k}) - U(x^k_{i^k}) \le \frac L2\gamma^2\|\Delta x^k\|^2 + \gamma\underbrace{\Big(\nabla F(x^k_{i^k})^\top\big(\widetilde x^k_{i^k} - x^k_{i^k}\big) + G(\widetilde x^k_{i^k}) - G(x^k_{i^k})\Big)}_{\textstyle T_1}. \tag{39}$$
The above inequality establishes a connection between $U(v^{k+1}_{i^k})$ and $U(x^k_{i^k})$. However, it is not clear whether there is any contraction (up to some error) going from the optimality gap $U(x^k_{i^k}) - U^*$ to $U(v^{k+1}_{i^k}) - U^*$. To investigate this, we derive in the lemma below two upper bounds of $T_1$ in (39), in terms of $U(v^{k+1}_{i^k}) - U(x^*)$ and $\|\Delta x^k\|$ (up to the tracking error). Building on these bounds and (39), we can finally prove the desired contraction, as stated in (43).
Lemma 14. $T_1$ in (39) can be bounded in the following two alternative ways: for $k \ge \bar k$,
$$T_1 \le -\Big(\bar\mu - \frac\epsilon2\Big)\|\Delta x^k\|^2 + \frac{1}{2\epsilon}E_t^k, \tag{40}$$
$$T_1 \le -\frac{1}{1-\gamma}\big(U(v^{k+1}_{i^k}) - U(x^*)\big) + \frac{1}{1-\gamma}c_5\|\Delta x^k\|^2 + c_6E_t^k, \tag{41}$$
where $c_5$ and $c_6$ are polynomials in $(1, l, \bar l, L, \kappa)$ whose expressions are given in (60).
Proof. See Appendix B.
Using Lemma 14 in (39) yields
$$\begin{aligned}
U(v^{k+1}_{i^k}) - U^* &\le (1-\gamma)\big(U(x^k_{i^k}) - U^*\big) + \Big(\frac L2\gamma(1-\gamma) + c_5\Big)\gamma\,\|\Delta x^k\|^2 + c_6\,\gamma E_t^k\\
&\le (1-\gamma)\Big(U(x^k_{i^k}) - U\big(x^*(x^k_{i^k})\big)\Big) + \underbrace{\big(c_5 + L/8\big)}_{\textstyle \triangleq\, c_7}\,\gamma\,\|\Delta x^k\|^2 + c_6\,\gamma E_t^k,
\end{aligned}\tag{42a}$$
and
$$U(v^{k+1}_{i^k}) - U^* \le U(x^k_{i^k}) - U^* - \Big(\bar\mu - \frac{\gamma L}2 - \frac\epsilon2\Big)\gamma\,\|\Delta x^k\|^2 + \frac{\gamma}{2\epsilon}E_t^k. \tag{42b}$$
Canceling out $\|\Delta x^k\|^2$ in (42a)-(42b) yields: for $k \ge \bar k$,
$$U(v^{k+1}_{i^k}) - U(x^*) \le \sigma(\gamma)\big(U(x^k_{i^k}) - U(x^*)\big) + C_3(\gamma)\,E_t^k, \tag{43}$$
where $\sigma(\gamma)$ and $C_3(\gamma)$ are defined in (35) and (32). Thus we have observed a contraction from $U(x^k_{i^k}) - U(x^*)$ to $U(v^{k+1}_{i^k}) - U(x^*)$. Continuing from (38), we have
$$p^{k+1} \preceq \widehat W^k\big(\Sigma^k p^k + C_3(\gamma)E_t^k\, e_{i^k}\big) \preceq (\widehat W\Sigma)^{k:0}p^0 + C_3(\gamma)\sum_{\ell=1}^k(\widehat W\Sigma)^{k:\ell}\widehat W^{\ell-1}e_{i^{\ell-1}}E_t^{\ell-1} + C_3(\gamma)\widehat W^k e_{i^k}E_t^k.$$
The lemma below shows that the operator norm of $(\widehat W\Sigma)^{k:\ell}$ induced by the $\ell_\infty$ norm decays at a linear rate.

Lemma 15. For any $k \ge \ell \ge 0$, $\big\|(\widehat W\Sigma)^{k:\ell}\big\|_\infty \le C_4(\gamma)\,\zeta(\gamma)^{k-\ell}$, where the expressions of $\zeta(\gamma)$, $C_4(\gamma)$, and $K_1$ are given in (32)-(35).
Proof. See Appendix C.
6.3.4 Proof of (17e)

Eq. (17e) follows directly from the second inequality of (42) and the fact that $U(x^k_{i^k}) - U(v^{k+1}_{i^k}) \le E_o^k$.
This completes the proof of the inequality system (17).
6.4 Step 3: R-linear convergence via the generalized small gain theorem

The last step is to show that all the error quantities in (17) vanish at a linear rate. To do so, we leverage the generalized small gain theorem [40, Th. 17]. We use the following definition.

Definition 16 ([25]). Given the sequence $\{u^k\}_{k=0}^\infty$, a constant $\lambda \in (0,1)$, and $N \in \mathbb N$, let us define
$$|u|_{\lambda,N} = \max_{k=0,\ldots,N}\frac{\|u^k\|}{\lambda^k}, \qquad |u|_\lambda = \sup_{k\in\mathbb N_0}\frac{\|u^k\|}{\lambda^k}.$$
If $|u|_\lambda$ is upper bounded, then $\|u^k\| = O(\lambda^k)$, for all $k \in \mathbb N_0$.

Invoking [40, Lemma 20 & Lemma 21], if we choose $\lambda$ such that $\max\{\rho^2, \zeta(\gamma)\} < \lambda < 1$, by (17) we get
$$|E_y|_{\sqrt\lambda,N} \le \frac{3C_1 l}{\sqrt\lambda - \rho}\big(|E_x|_{\sqrt\lambda,N} + \gamma\,|\Delta|_{\sqrt\lambda,N}\big) + E_y^0 + \frac{C_1\|g^0\|}{\sqrt\lambda}, \tag{44}$$
$$|E_x|_{\sqrt\lambda,N} \le \frac{C_2\,\gamma}{\sqrt\lambda - \rho}\,|\Delta|_{\sqrt\lambda,N} + E_x^0 + \frac{C_2 E_x^0}{\sqrt\lambda}, \tag{45}$$
$$|E_o|_{\lambda,N} \le \frac{C_3(\gamma)C_4(\gamma)}{\zeta(\gamma)\,(\lambda - \zeta(\gamma))}\,|E_t|_{\lambda,N} + E_o^0 + \frac{C_4(\gamma)E_o^0}{\lambda}, \tag{46}$$
$$|E_t|_{\lambda,N} \le 8Il^2\,\big|(E_x)^2\big|_{\lambda,N} + 2I^2\,\big|(E_y)^2\big|_{\lambda,N}, \tag{47}$$
$$\big|(\Delta)^2\big|_{\lambda,N} \le \frac{1}{2\epsilon\big(\bar\mu - \frac\epsilon2 - \frac{\gamma L}2\big)}\,|E_t|_{\lambda,N} + \frac{1}{\gamma\big(\bar\mu - \frac\epsilon2 - \frac{\gamma L}2\big)}\,|E_o|_{\lambda,N}. \tag{48}$$
Taking the square on both sides of (44) and (45), while using $\big(|u|_{q,N}\big)^2 = |u^2|_{q^2,N}$, and writing the result in matrix form, we obtain
$$\begin{bmatrix} \big|(E_y)^2\big|_{\lambda,N}\\ \big|(E_x)^2\big|_{\lambda,N}\\ |E_o|_{\lambda,N}\\ |E_t|_{\lambda,N}\\ \big|(\Delta)^2\big|_{\lambda,N}\end{bmatrix} \preceq G \begin{bmatrix} \big|(E_y)^2\big|_{\lambda,N}\\ \big|(E_x)^2\big|_{\lambda,N}\\ |E_o|_{\lambda,N}\\ |E_t|_{\lambda,N}\\ \big|(\Delta)^2\big|_{\lambda,N}\end{bmatrix} + \epsilon_N, \qquad G \triangleq \begin{bmatrix} 0 & \frac{36C_1^2l^2}{(\sqrt\lambda-\rho)^2} & 0 & 0 & \frac{36C_1^2l^2\gamma^2}{(\sqrt\lambda-\rho)^2}\\ 0 & 0 & 0 & 0 & \frac{3C_2^2\gamma^2}{(\sqrt\lambda-\rho)^2}\\ 0 & 0 & 0 & \frac{C_3(\gamma)C_4(\gamma)}{\zeta(\gamma)(\lambda-\zeta(\gamma))} & 0\\ 2I^2 & 8Il^2 & 0 & 0 & 0\\ 0 & 0 & \frac{1}{\gamma(\bar\mu-\frac\epsilon2-\frac{\gamma L}2)} & \frac{1}{2\epsilon(\bar\mu-\frac\epsilon2-\frac{\gamma L}2)} & 0 \end{bmatrix}. \tag{49}$$
We are now ready to apply [40, Th. 17]: a sufficient condition for $E_y$, $E_x$, $E_o$, $E_t$, and $\Delta^2$ to vanish at an R-linear rate is $\rho(G) < 1$. By [40, Lemma 23], this is equivalent to requiring $p_G(1) > 0$, where $p_G(z)$ is the characteristic polynomial of $G$. This leads to the following condition:
$$B(\lambda;\gamma) = \bigg(\frac{72\,I^2C_1^2l^2\gamma^2}{(\sqrt\lambda-\rho)^2} + \frac{24\,Il^2C_2^2\gamma^2}{(\sqrt\lambda-\rho)^2} + \frac{216\,I^2C_1^2C_2^2l^2\gamma^2}{(\sqrt\lambda-\rho)^4}\bigg)\cdot\frac{1}{2\epsilon\big(\bar\mu-\frac\epsilon2-\frac{\gamma L}2\big)} + \frac{C_3(\gamma)C_4(\gamma)}{\zeta(\gamma)(\lambda-\zeta(\gamma))}\cdot\frac{1}{\gamma\big(\bar\mu-\frac\epsilon2-\frac{\gamma L}2\big)} < 1.$$
It is not hard to see that $B(\lambda;\gamma)$ is continuous at $\lambda = 1$, for any $\gamma \in \big(0, \frac{2\bar\mu-\epsilon}{L}\big)$. Therefore, as long as
$$B(1;\gamma) = \bigg(\frac{72\,I^2C_1^2l^2}{(1-\rho)^2} + \frac{24\,Il^2C_2^2}{(1-\rho)^2} + \frac{216\,I^2C_1^2C_2^2l^2}{(1-\rho)^4}\bigg)\cdot\frac{\gamma^2}{2\epsilon\big(\bar\mu-\frac\epsilon2-\frac{\gamma L}2\big)} + \frac{C_3(\gamma)C_4(\gamma)}{\zeta(\gamma)(1-\zeta(\gamma))}\cdot\frac{1}{\gamma\big(\bar\mu-\frac\epsilon2-\frac{\gamma L}2\big)} < 1, \tag{50}$$
there will exist some $\lambda \in (0,1)$ such that $B(\lambda;\gamma) < 1$.
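The small-gain condition above is easy to check numerically once the constants are available. The sketch below (our own illustration; the constants $C_1, \ldots, C_4$, $\zeta$, $\rho$, etc., are user-supplied values) assembles $G$ from (49) and tests $\rho(G) < 1$ directly via its spectrum.

```python
import numpy as np

def gain_matrix(lam, gamma, rho, C1, C2, C3, C4, zeta, I, l, L, mu_bar, eps):
    """Assemble the 5x5 gain matrix G of (49); all arguments are the
    problem/algorithm constants (values supplied by the user)."""
    s = np.sqrt(lam)
    d = mu_bar - eps / 2.0 - gamma * L / 2.0   # common factor in rows 3 and 5
    G = np.zeros((5, 5))
    G[0, 1] = 36 * C1**2 * l**2 / (s - rho)**2
    G[0, 4] = 36 * C1**2 * l**2 * gamma**2 / (s - rho)**2
    G[1, 4] = 3 * C2**2 * gamma**2 / (s - rho)**2
    G[2, 3] = C3 * C4 / (zeta * (lam - zeta))
    G[3, 0] = 2 * I**2
    G[3, 1] = 8 * I * l**2
    G[4, 2] = 1.0 / (gamma * d)
    G[4, 3] = 1.0 / (2 * eps * d)
    return G

def vanishes_linearly(G):
    # rho(G) < 1 is the small-gain condition; for a nonnegative matrix this
    # is equivalent to p_G(1) > 0 [40, Lemma 23], checked here spectrally.
    return np.max(np.abs(np.linalg.eigvals(G))) < 1.0
```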
We now show that $B(1;\gamma) < 1$ for sufficiently small $\gamma$. We only need to prove boundedness of the following quantity as $\gamma \downarrow 0$:
$$\frac{C_3(\gamma)C_4(\gamma)}{\zeta(\gamma)\big(1-\zeta(\gamma)\big)} = \underbrace{\frac{c_6\big(\bar\mu-\frac\epsilon2-\frac{\gamma L}2\big)+\frac{c_7}{2\epsilon}}{\Big(c_7+\bar\mu-\frac\epsilon2-\frac{\gamma L}2\Big)\,\zeta(\gamma)^{K_1+1}}}_{\textstyle \triangleq\, h(\gamma)}\cdot\frac{\gamma}{1-\zeta(\gamma)}.
$$
It is clear that $h(\gamma)$ is right-continuous at $0$ and thus $\lim_{\gamma\downarrow0}h(\gamma) < \infty$. Hence, it is left to check that $\frac{\gamma}{1-\zeta(\gamma)}$ is bounded as $\gamma \downarrow 0$. According to L'Hôpital's rule,
$$\lim_{\gamma\downarrow0}\frac{\gamma}{1-\zeta(\gamma)} = -\,\frac{K_1\big(1-(1-\sigma(\gamma))\eta\big)^{1-\frac{1}{K_1}}}{\eta\,\sigma'(\gamma)}\bigg|_{\gamma=0} = \frac{K_1\big(c_7+\bar\mu-\frac\epsilon2\big)}{\eta\big(\bar\mu-\frac\epsilon2\big)} < \infty.$$
Finally, we prove that all $(x^k_i)_{k\ge\bar k}$ converge linearly to some $x^\star$. By the definition of the augmented matrix $H$ and the update (18), we have: for $k \ge \bar k$,
$$\big\|H^{k+1} - H^k\big\| = \big\|\big(\widehat W^k - I\big)H^k + \gamma\,\widehat W^k\Delta H^k\big\| \le \big\|\big(\widehat W^k - I\big)\big(H^k - \mathbf 1\,(x^k_\psi)^\top\big)\big\| + \gamma\,\|\Delta H^k\| \le 3E_x^k + \gamma\Delta^k.$$
Since both $E_x^k$ and $\Delta^k$ are $O\big((\sqrt\lambda)^k\big)$, $\sum_{k=0}^\infty\|H^{k+1} - H^k\| < +\infty$; thus $\{H^k\}_{k\in\mathbb N}$ is Cauchy and converges to some $\mathbf 1\,(x^\star)^\top$, implying that all $x^k_i$ converge to $x^\star$. We prove next that $x^k_i$ converges to $x^\star$ R-linearly. For any $k' > k \ge \bar k$, we have
$$\big\|H^k - H^{k'}\big\| \le \sum_{t=k}^{k'-1}\big\|H^t - H^{t+1}\big\| \le \sum_{t=k}^{k'-1}\big(3E_x^t + \gamma\Delta^t\big) = O\big((\sqrt\lambda)^k\big).$$
Taking $k' \to \infty$ completes the proof.
7 Proof of Theorem 2
In this section we prove the sublinear convergence of ASY-DSCA. We organize the proof in two steps. Step 1: we prove $\sum_{k=0}^\infty(\Delta^k)^2 < +\infty$ by showing the descent of a properly constructed Lyapunov function; this function represents a major novelty of our analysis, see Remark 17. Step 2: we connect the decay rate of $\Delta^k$ with that of the merit function $M_F(\mathbf x^k)$.
7.1 Step 1: $\Delta^k$ is square summable

In Sec. 6.2 we have shown that the weighted average of the local variables, $x_\psi$, evolves according to the dynamics in (21). Using $x^0_\psi = (\psi^0)^\top H^0$, (21) can be rewritten recursively as
$$x^{k+1}_\psi = x^k_\psi + \gamma\,(\psi^k)^\top\Delta H^k = x^k_\psi + \gamma\,\psi^k_{i^k}\,\Delta x^k. \tag{51}$$
Invoking the descent lemma while recalling $\Delta^k = \|\Delta x^k\|$ yields
$$\begin{aligned}
F(x^{k+1}_\psi) &\le F(x^k_\psi) + \gamma\,\psi^k_{i^k}\,\nabla F(x^k_\psi)^\top\Delta x^k + \frac{L\,(\gamma\psi^k_{i^k})^2}{2}(\Delta^k)^2\\
&\overset{(27)}{\le} F(x^k_\psi) + \frac{L\gamma^2}{2}(\Delta^k)^2 - \gamma\,\psi^k_{i^k}\Big(\bar\mu(\Delta^k)^2 + G(\widetilde x^k_{i^k}) - G(x^k_{i^k})\Big)\\
&\qquad + \gamma\,\psi^k_{i^k}\big(\nabla F(x^k_\psi) - I\bar g^k\big)^\top\Delta x^k + \gamma\,\psi^k_{i^k}\big(I\bar g^k - I y^k_{i^k}\big)^\top\Delta x^k\\
&\le F(x^k_\psi) + \frac{L\gamma^2}{2}(\Delta^k)^2 - \gamma\,\psi^k_{i^k}\Big(\bar\mu(\Delta^k)^2 + G(\widetilde x^k_{i^k}) - G(x^k_{i^k})\Big) + \gamma\,l\sqrt I\,E_x^k\,\Delta^k + \gamma\,I E_y^k\,\Delta^k.
\end{aligned}\tag{52}$$
Introduce the Lyapunov function
$$\mathcal L^k \triangleq F(x^k_\psi) + (\psi^k)^\top\mathbf G(H^k), \tag{53}$$
where $\mathbf G : \mathbb R^{S\times n} \to \mathbb R^S$ is defined as $\mathbf G(H) \triangleq [G(h_1), \cdots, G(h_S)]^\top$, for $H = [h_1, \cdots, h_S]^\top \in \mathbb R^{S\times n}$.
Remark 17. Note that $\mathcal L^k$ contrasts with the functions used in the literature of distributed algorithms to study convergence in the nonconvex setting. Existing choices either cannot deal with asynchrony [37,39] (e.g., the unbalance in the update frequencies of the agents and the use of outdated information) or cannot handle nonsmooth functions in the objective and constraints [40]. A key feature of $\mathcal L^k$ is to combine current and past information through suitable dynamics, $\{x^k_\psi\}$, and weight averaging via $\{\psi^k\}$.
Using the dynamics of $H^k$ as in (18), we get
$$\mathbf G(H^{k+1}) \preceq \widehat W^k\Big((1-\gamma)\,\mathbf G(H^k) + \gamma\,\mathbf G\big(H^k + \Delta H^k\big)\Big),$$
where we used the convexity of $G$ and the row-stochasticity of $\widehat W^k$. Thus
$$(\psi^{k+1})^\top\mathbf G(H^{k+1}) \le (\psi^{k+1})^\top\widehat W^k\Big((1-\gamma)\,\mathbf G(H^k) + \gamma\,\mathbf G(H^k + \Delta H^k)\Big) = (\psi^k)^\top\Big((1-\gamma)\,\mathbf G(H^k) + \gamma\,\mathbf G(H^k + \Delta H^k)\Big),$$
where in the last equality we used $(\psi^{t+1})^\top\widehat W^t = (\psi^t)^\top$ [cf. (19)]. Therefore,
$$\gamma\,\psi^k_{i^k}\Big(G(x^k_{i^k}) - G(\widetilde x^k_{i^k})\Big) = \gamma\Big((\psi^k)^\top\mathbf G(H^k) - (\psi^k)^\top\mathbf G(H^k + \Delta H^k)\Big) \le (\psi^k)^\top\mathbf G(H^k) - (\psi^{k+1})^\top\mathbf G(H^{k+1}).$$
Combining the above inequality with (52), we get
$$\begin{aligned}
\mathcal L^{k+1} &\le \mathcal L^k - \eta\bar\mu(\Delta^k)^2\gamma + \frac L2(\Delta^k)^2\gamma^2 + \frac{\epsilon_1}{2}l^2I(E_x^k)^2 + \frac{1}{2\epsilon_1}\gamma^2(\Delta^k)^2 + \frac{\epsilon_2}{2}I^2(E_y^k)^2 + \frac{1}{2\epsilon_2}\gamma^2(\Delta^k)^2\\
&= \mathcal L^k - (\Delta^k)^2\,\gamma\Big(\eta\bar\mu - \gamma\Big(\frac L2 + \frac{1}{2\epsilon_1} + \frac{1}{2\epsilon_2}\Big)\Big) + \frac{\epsilon_1}{2}l^2I(E_x^k)^2 + \frac{\epsilon_2}{2}I^2(E_y^k)^2\\
&\le \mathcal L^0 - \sum_{t=0}^k(\Delta^t)^2\,\gamma\Big(\eta\bar\mu - \gamma\Big(\frac L2 + \frac{1}{2\epsilon_1} + \frac{1}{2\epsilon_2}\Big)\Big) + \frac{\epsilon_1}{2}l^2I\sum_{t=0}^k(E_x^t)^2 + \frac{\epsilon_2}{2}I^2\sum_{t=0}^k(E_y^t)^2.
\end{aligned}\tag{54}$$
To bound the last two terms in (54), we apply Lemma 6:
$$\begin{aligned}
\mathcal L^{k+1} &\le \mathcal L^0 - \sum_{t=0}^k(\Delta^t)^2\,\gamma\Big(\eta\bar\mu - \gamma\Big(\frac L2 + \frac{1}{2\epsilon_1} + \frac{1}{2\epsilon_2} + \frac{\epsilon_1}{2}l^2I\varrho_x + \frac{\epsilon_2}{2}I^2\varrho_y\Big)\Big) + \frac{\epsilon_1}{2}l^2Ic_x + \frac{\epsilon_2}{2}I^2c_y\\
&= \mathcal L^0 - \sum_{t=0}^k(\Delta^t)^2\,\gamma\Big(\eta\bar\mu - \gamma\Big(\frac L2 + \sqrt{l^2I\varrho_x} + \sqrt{I^2\varrho_y}\Big)\Big) + \frac{l^2Ic_x}{2\sqrt{l^2I\varrho_x}} + \frac{I^2c_y}{2\sqrt{I^2\varrho_y}},
\end{aligned}$$
where in the last equality we set $\epsilon_1 = 1/\sqrt{l^2I\varrho_x}$ and $\epsilon_2 = 1/\sqrt{I^2\varrho_y}$. Note that
$$\mathcal L^k = F(x^k_\psi) + (\psi^k)^\top\mathbf G(H^k) \ge F(x^k_\psi) + G\big((\psi^k)^\top H^k\big) = U(x^k_\psi) \ge U^*,$$
for all $k \in \mathbb N_+$. Thus, for sufficiently small $\gamma$ such that
$$\gamma \le \bar\gamma_{\rm ncvx} \triangleq \eta\bar\mu\Big(L + 2\sqrt{l^2I\varrho_x} + 2\sqrt{I^2\varrho_y}\Big)^{-1}, \tag{55}$$
we can obtain the following bound:
$$\sum_{t=0}^\infty(\Delta^t)^2 \le \frac{2\mathcal L^0 - 2U^* + \frac{l^2Ic_x}{\sqrt{l^2I\varrho_x}} + \frac{I^2c_y}{\sqrt{I^2\varrho_y}}}{\gamma\,\eta\bar\mu} < \infty.$$

7.2 Step 2: $M_F(\mathbf x^k)$ vanishes at sublinear rate

In this section we establish the connection between $M_F(\mathbf x^k)$ and $\Delta^k$, $E_x^k$, and $E_y^k$. Invoking Lemma 10, we can bound $\big\|\bar x^k - \mathrm{prox}_G\big(\bar x^k - \nabla F(\bar x^k)\big)\big\|$ as
$$\begin{aligned}
\big\|\bar x^k - \mathrm{prox}_G\big(\bar x^k - \nabla F(\bar x^k)\big)\big\|^2 &\le 3\,\|\bar x^k - x^k_{i^k}\|^2 + 3\,\big\|x^k_{i^k} - \mathrm{prox}_G\big(x^k_{i^k} - \nabla F(x^k_{i^k})\big)\big\|^2\\
&\qquad + 3\,\big\|\mathrm{prox}_G\big(x^k_{i^k} - \nabla F(x^k_{i^k})\big) - \mathrm{prox}_G\big(\bar x^k - \nabla F(\bar x^k)\big)\big\|^2\\
&\overset{(*)}{\le} 3\,\|\bar x^k - x^k_{i^k}\|^2 + 3\,\big\|x^k_{i^k} - \mathrm{prox}_G\big(x^k_{i^k} - \nabla F(x^k_{i^k})\big)\big\|^2\\
&\qquad + \big\|x^k_{i^k} - \nabla F(x^k_{i^k}) - \big(\bar x^k - \nabla F(\bar x^k)\big)\big\|^2\\
&\le (5 + 2L^2)\,\|\bar x^k - x^k_{i^k}\|^2 + 3\,\big\|x^k_{i^k} - \mathrm{prox}_G\big(x^k_{i^k} - \nabla F(x^k_{i^k})\big)\big\|^2\\
&\le 4(5 + 2L^2)\,(E_x^k)^2 + 3\,\big\|x^k_{i^k} - \mathrm{prox}_G\big(x^k_{i^k} - \nabla F(x^k_{i^k})\big)\big\|^2\\
&\le 4(5 + 2L^2)\,(E_x^k)^2 + 3\Big(4\big(1 + (l + \bar l)^2\big)(\Delta^k)^2 + 5E_t^k\Big),
\end{aligned}$$
where $(*)$ follows from the nonexpansiveness of the proximal operator.
Further applying Lemma 6 and (17c) yields
$$\begin{aligned}
\sum_{t=0}^k M_F(\mathbf x^t) &\le \sum_{t=0}^k\big\|\bar x^t - \mathrm{prox}_G\big(\bar x^t - \nabla F(\bar x^t)\big)\big\|^2 + \sum_{t=0}^k(E_x^t)^2\\
&\le \sum_{t=0}^k\Big((21 + 8L^2)(E_x^t)^2 + 3\Big(4\big(1 + (l + \bar l)^2\big)(\Delta^t)^2 + 5E_t^t\Big)\Big)\\
&\le \sum_{t=0}^k\Big((21 + 8L^2)(E_x^t)^2 + 15\big(8Il^2(E_x^t)^2 + 2I^2(E_y^t)^2\big)\Big) + 12\big(1 + (l + \bar l)^2\big)\sum_{t=0}^k(\Delta^t)^2\\
&\le \big(21 + 8L^2 + 120Il^2\big)\Big(c_x + \varrho_x\sum_{t=0}^k\gamma^2(\Delta^t)^2\Big) + 30I^2\Big(c_y + \varrho_y\sum_{t=0}^k\gamma^2(\Delta^t)^2\Big) + 12\big(1 + (l + \bar l)^2\big)\sum_{t=0}^k(\Delta^t)^2\\
&= \Big(\big(21 + 8L^2 + 120Il^2\big)\varrho_x\gamma^2 + 30I^2\varrho_y\gamma^2 + 12\big(1 + (l + \bar l)^2\big)\Big)\sum_{t=0}^k(\Delta^t)^2 + \big(21 + 8L^2 + 120Il^2\big)c_x + 30I^2c_y \qquad (56)\\
&\le \Big(\big(21 + 8L^2 + 120Il^2\big)\varrho_x\gamma^2 + 30I^2\varrho_y\gamma^2 + 12\big(1 + (l + \bar l)^2\big)\Big)\cdot\frac{2\mathcal L^0 - 2U^* + \frac{l^2Ic_x}{\sqrt{l^2I\varrho_x}} + \frac{I^2c_y}{\sqrt{I^2\varrho_y}}}{\gamma\,\eta\bar\mu}\\
&\qquad + \big(21 + 8L^2 + 120Il^2\big)c_x + 30I^2c_y \;\triangleq\; B_{\rm opt},
\end{aligned}$$
where $\varrho_x$ and $\varrho_y$ are defined in Lemma 6. Let $T_\delta = \inf\{k \in \mathbb N \mid M_F(\mathbf x^k) \le \delta\}$. Then it holds that $T_\delta\cdot\delta < \sum_{k=0}^{T_\delta - 1}M_F(\mathbf x^k) \le B_{\rm opt}$, and thus $T_\delta = O(B_{\rm opt}/\delta)$.
Conclusion
We proposed ASY-DSCA, an asynchronous decentralized method for multiagent convex/nonconvex composite minimization problems over (di)graphs. The algorithm employs SCA techniques and is robust against agents' uncoordinated activations and the use of outdated information (subject to arbitrary but bounded delays). For convex (not strongly convex) objectives satisfying the LT error-bound condition, ASY-DSCA achieves an R-linear convergence rate, while sublinear convergence is established for nonconvex objectives.
A Proof of Lemma 5
Applying [40, Th. 6] with the identifications $\epsilon^t = \nabla f_{i^t}(x^{t+1}_{i^t}) - \nabla f_{i^t}(x^t_{i^t})$ and
$$m^k_z = \sum_{i=1}^I z^0_i + \sum_{t=0}^{k-1}\epsilon^t = \sum_{i=1}^I\nabla f_i(x^0_i) + \sum_{t=0}^{k-1}\Big(\nabla f_{i^t}(x^{t+1}_{i^t}) - \nabla f_{i^t}(x^t_{i^t})\Big) \overset{(*)}{=} I\cdot\underbrace{\frac1I\sum_{i=1}^I\nabla f_i(x^k_i)}_{\textstyle \bar g^k},$$
we arrive at $E_y^{k+1} \le C_1\big(\rho^k\|g^0\| + \sum_{l=0}^k\rho^{k-l}\|\epsilon^l\|\big)$, where in $(*)$ we have used $x^{t+1}_j = x^t_j$ for $j \ne i^t$. The rest of the proof follows the same argument as in [40, Prop. 18].
B Proof of Lemma 14
Using (27), we have: for any $\epsilon > 0$,
$$T_1 = \big(\nabla F(x^k_{i^k}) \pm I y^k_{i^k}\big)^\top\cdots$$
Figure 1: Master-slave (left panel) vs. decentralized (right panel) architectures.

Assumption 1 (On Problem (P)). The following hold: (i) the set $K \subset \mathbb R^n$ is nonempty, closed, and convex; (ii) each $f_i : \mathcal O \to \mathbb R$ is proper, closed and $l$-smooth, where $\mathcal O \supset K$ is open; $F$ is $L$-smooth with $L \triangleq I\cdot l$;

Assumption 3 (Error-bound conditions [22, 28, 32]).
Figure 2: LASSO. Left: undirected graph. We set $\bar\mu = 8$ and $\gamma = 0.008$ in ASY-DSCA-L; $\bar\mu = 1$ and $\gamma = 0.008$ in ASY-DSCA-DH; $\alpha = 0.06$ and $\eta = 0.6$ in AsyPrimalDual; $\bar\mu = 1$ and $\gamma = 0.002$ in SONATA-L; and $\bar\mu = 1$ and $\gamma = 0.005$ in SONATA-DH. Right: directed graph (each agent has 10 out-neighbors). We set $\bar\mu = 10$ and $\gamma = 0.01$ in ASY-DSCA-L; $\bar\mu = 10$ and $\gamma = 0.03$ in ASY-DSCA-DH; $\bar\mu = 10$ and $\gamma = 0.03$ in SONATA-L; and $\bar\mu = 10$ and $\gamma = 0.05$ in SONATA-DH.

Figure 3: Logistic regression. Left: undirected graph. We set $\bar\mu = 10$ and $\gamma = 0.06$ in ASY-DSCA-L; $\alpha = 0.1$ and $\eta = 0.7$ in AsyPrimalDual; and $\bar\mu = 10$ and $\gamma = 0.08$ in SONATA-L. Right: directed graph (each agent has 10 out-neighbors). We set $\bar\mu = 10$ and $\gamma = 0.05$ in ASY-DSCA-L; and $\bar\mu = 10$ and $\gamma = 0.1$ in SONATA-L.

Figure 4: M-estimator. Left: undirected graph. We set $\bar\mu = 300$ and $\gamma = 0.1$ in ASY-DSCA-L; $\alpha = 0.01$ and $\eta = 0.6$ in AsyPrimalDual; and $\bar\mu = 100$ and $\gamma = 0.1$ in SONATA-L. Right: directed graph (each agent has 7 out-neighbors). We set $\bar\mu = 1000$ and $\gamma = 0.08$ in ASY-DSCA-L; and $\bar\mu = 1000$ and $\gamma = 0.2$ in SONATA-L.
Decentralized proximal gradient algorithms with linear convergence rates. Ernest K Sulaiman A Alghunaim, Kun Ryu, Ali H Yuan, Sayed, arXiv:1909.06479arXiv preprintSulaiman A Alghunaim, Ernest K Ryu, Kun Yuan, and Ali H Sayed. Decentralized proximal gradient algorithms with linear convergence rates. arXiv preprint arXiv:1909.06479, 2019.
. M Assran, M Rabbat, arXiv:1803.08950Asynchronous subgradient-push. arXiv preprintM. Assran and M. Rabbat. Asynchronous subgradient-push. arXiv preprint arXiv:1803.08950, 2018.
A coordinate descent primal-dual algorithm and application to distributed asynchronous optimization. P Bianchi, W Hachem, F Iutzeler, IEEE Trans. Automat. Contr. 6110P. Bianchi, W. Hachem, and F. Iutzeler. A coordinate descent primal-dual algorithm and application to distributed asynchronous optimization. IEEE Trans. Automat. Contr., 61(10):2947-2957, 2016.
Newton-raphson consensus under asynchronous and lossy communications for peer-to-peer networks. N Bof, R Carli, G Notarstefano, L Schenato, D Varagnolo, arXiv:1707.09178N. Bof, R. Carli, G. Notarstefano, L. Schenato, and D. Varagnolo. Newton-raphson consensus under asynchronous and lossy communications for peer-to-peer networks. arXiv:1707.09178, 2017.
From error bounds to the complexity of first-order descent methods for convex functions. J Bolte, T P Nguyen, J Peypouquet, B W Suter, Math. Prog. 1652J. Bolte, T. P. Nguyen, J. Peypouquet, and B. W. Suter. From error bounds to the complexity of first-order descent methods for convex functions. Math. Prog., 165(2):471-507, 2017.
Asynchronous parallel algorithms for nonconvex big-data optimization-Part I & Part II: Model and convergence & Complexity and numerical results. L Cannelli, F Facchinei, V Kungurtsev, G Scutari, arXiv:1607.04818&arXiv:1701.04900L. Cannelli, F. Facchinei, V. Kungurtsev, and G. Scutari. Asynchronous parallel algorithms for non- convex big-data optimization-Part I & Part II: Model and convergence & Complexity and numerical results. arXiv:1607.04818 & arXiv:1701.04900, 2016.
T T Doan, C L Beck, R Srikant, arXiv:1708.03277Impact of communication delays on the convergence rate of distributed optimization algorithms. T. T. Doan, C. L. Beck, and R. Srikant. Impact of communication delays on the convergence rate of distributed optimization algorithms. arXiv:1708.03277, 2017.
Error bounds, quadratic growth, and linear convergence of proximal methods. D Drusvyatskiy, A S Lewis, Math. Oper. Res. 433D. Drusvyatskiy and A. S. Lewis. Error bounds, quadratic growth, and linear convergence of proximal methods. Math. Oper. Res., 43(3):919-948, 2018.
Decentralized quasi-newton methods. M Eisen, A Mokhtari, A Ribeiro, IEEE Trans. Signal Process. 6510M. Eisen, A. Mokhtari, and A. Ribeiro. Decentralized quasi-newton methods. IEEE Trans. Signal Process., 65(10):2613-2628, 2017.
Parallel selective algorithms for nonconvex big data optimization. F Facchinei, G Scutari, S Sagratella, IEEE Trans. on Signal Process. 637F. Facchinei, G. Scutari, and S. Sagratella. Parallel selective algorithms for nonconvex big data opti- mization. IEEE Trans. on Signal Process., 63(7):1874-1889, 2015.
. J Friedman, T Hastie, R Tibshirani, arXiv:1001.0736arXiv preprintJ. Friedman, T. Hastie, and R. Tibshirani. A note on the group lasso and a sparse group lasso. arXiv preprint arXiv:1001.0736, 2010.
Asynchronous accelerated proximal stochastic gradient for strongly convex distributed finite sums. H Hendrikx, F Bach, L Massoulié, arXiv:1901.09865arXiv preprintH. Hendrikx, F. Bach, and L. Massoulié. Asynchronous accelerated proximal stochastic gradient for strongly convex distributed finite sums. arXiv preprint arXiv:1901.09865, 2019.
Accelerated decentralized optimization with local updates for smooth and strongly convex objectives. H Hendrikx, L Massoulié, F Bach, arXiv:1810.02660arXiv preprintH. Hendrikx, L. Massoulié, and F. Bach. Accelerated decentralized optimization with local updates for smooth and strongly convex objectives. arXiv preprint arXiv:1810.02660, 2018.
Asynchronous distributed optimization using a randomized alternating direction method of multipliers. F Iutzeler, P Bianchi, P Ciblat, W Hachem, Proc. of CDC 2013. of CDC 2013F. Iutzeler, P. Bianchi, P. Ciblat, and W. Hachem. Asynchronous distributed optimization using a randomized alternating direction method of multipliers. In Proc. of CDC 2013, pages 3671-3676.
Linear convergence of gradient and proximal-gradient methods under the polyak-lojasiewicz condition. H Karimi, J Nutini, M Schmidt, Joint European Conference on Machine Learning and Knowledge Discovery in Databases. SpringerH. Karimi, J. Nutini, and M. Schmidt. Linear convergence of gradient and proximal-gradient meth- ods under the polyak-lojasiewicz condition. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 795-811. Springer, 2016.
Asynchronous optimization over heterogeneous networks via consensus admm. S Kumar, R Jain, K Rajawat, IEEE Trans. Signal Inf. Process. Netw. 31S. Kumar, R. Jain, and K. Rajawat. Asynchronous optimization over heterogeneous networks via consensus admm. IEEE Trans. Signal Inf. Process. Netw., 3(1):114-129, 2017.
Distributed mirror descent method for multi-agent optimization with delay. J Li, G Chen, Z Dong, Z Wu, Neurocomputing. 177J. Li, G. Chen, Z. Dong, and Z. Wu. Distributed mirror descent method for multi-agent optimization with delay. Neurocomputing, 177:643-650, 2016.
Can decentralized algorithms outperform centralized algorithms? a case study for decentralized parallel stochastic gradient descent. X Lian, C Zhang, H Zhang, C.-J Hsieh, W Zhang, J Liu, Advances in Neural Information Processing Systems. X. Lian, C. Zhang, H. Zhang, C.-J. Hsieh, W. Zhang, and J. Liu. Can decentralized algorithms out- perform centralized algorithms? a case study for decentralized parallel stochastic gradient descent. In Advances in Neural Information Processing Systems, pages 5330-5340, 2017.
Distributed multi-agent optimization subject to nonidentical constraints and communication delays. P Lin, W Ren, Y Song, Automatica. 65P. Lin, W. Ren, and Y. Song. Distributed multi-agent optimization subject to nonidentical constraints and communication delays. Automatica, 65:120-131, 2016.
Markov games as a framework for multi-agent reinforcement learning. M L Littman, Machine learning proceedings. ElsevierM. L. Littman. Markov games as a framework for multi-agent reinforcement learning. In Machine learning proceedings 1994, pages 157-163. Elsevier, 1994.
A topological property of real analytic subsets. S Lojasiewicz, Leséquations aux dérivées partielles. 117S. Lojasiewicz. A topological property of real analytic subsets. Coll. du CNRS, Leséquations aux dérivées partielles, 117:87-89, 1963.
Error bounds and convergence analysis of feasible descent methods: a general approach. Z.-Q Luo, P Tseng, Ann. of Oper. Res. 461Z.-Q. Luo and P. Tseng. Error bounds and convergence analysis of feasible descent methods: a general approach. Ann. of Oper. Res., 46(1):157-178, 1993.
Asynchronous broadcast-based convex optimization over a network. A Nedić, IEEE Trans. Automat. Contr. 566A. Nedić. Asynchronous broadcast-based convex optimization over a network. IEEE Trans. Automat. Contr., 56(6):1337-1351, 2011.
Distributed optimization over time-varying directed graphs. A Nedić, A Olshevsky, IEEE Trans. on Automat. Contr. 603A. Nedić and A. Olshevsky. Distributed optimization over time-varying directed graphs. IEEE Trans. on Automat. Contr., 60(3):601-615, 2014.
Achieving geometric convergence for distributed optimization over time-varying graphs. A Nedić, A Olshevsky, W Shi, SIAM J. Optim. 274A. Nedić, A. Olshevsky, and W. Shi. Achieving geometric convergence for distributed optimization over time-varying graphs. SIAM J. Optim., 27(4):2597-2633, 2017.
Asynchronous distributed optimization via randomized dual proximal gradient. I Notarnicola, G Notarstefano, IEEE Trans. Automat. Contr. 625I. Notarnicola and G. Notarstefano. Asynchronous distributed optimization via randomized dual proxi- mal gradient. IEEE Trans. Automat. Contr., 62(5):2095-2106, 2017.
Robust asynchronous stochastic gradient-push: asymptotically optimal and network-independent performance for strongly convex functions. A Olshevsky, I C Paschalidis, A Spiridonoff, arXiv:1811.03982arXiv preprintA. Olshevsky, I. C. Paschalidis, and A. Spiridonoff. Robust asynchronous stochastic gradient-push: asymptotically optimal and network-independent performance for strongly convex functions. arXiv preprint arXiv:1811.03982, 2018.
Inexact newton methods for the nonlinear complementarity problem. J.-S Pang, Math. Prog. 361J.-S. Pang. Inexact newton methods for the nonlinear complementarity problem. Math. Prog., 36(1):54- 71, 1986.
Arock: an algorithmic framework for asynchronous parallel coordinate updates. Z Peng, Y Xu, M Yan, W Yin, SIAM J. Sci. Comput. 385Z. Peng, Y. Xu, M. Yan, and W. Yin. Arock: an algorithmic framework for asynchronous parallel coordinate updates. SIAM J. Sci. Comput., 38(5):A2851-A2879, 2016.
B T Polyak, Gradient methods for solving equations and inequalities. USSR Computational Mathematics and Mathematical Physics. 4B. T. Polyak. Gradient methods for solving equations and inequalities. USSR Computational Mathe- matics and Mathematical Physics, 4(6):17-32, 1964.
Harnessing smoothness to accelerate distributed optimization. Guannan Qu, Na Li, IEEE Transactions on Control of Network Systems. 53Guannan Qu and Na Li. Harnessing smoothness to accelerate distributed optimization. IEEE Transac- tions on Control of Network Systems, 5(3):1245-1260, 2017.
Local convergence analysis for successive inexact quadratic programming methods. Working Paper, School of Organization and Management. U Tulowizki, R S Dembo, New Haven, CTYale UniversityU. Tulowizki R.S. Dembo. Local convergence analysis for successive inexact quadratic programming methods. Working Paper, School of Organization and Management, Yale University, New Haven, CT, 1984.
Adaptation, learning, and optimization over networks. A Sayed, Foundations and Trends R in Machine Learning. 7A. Sayed. Adaptation, learning, and optimization over networks. Foundations and Trends R in Machine Learning, 7(4-5):311-801, 2014.
Parallel and distributed methods for constrained nonconvex optimization-Part I: Theory. G Scutari, F Facchinei, L Lampariello, IEEE Trans. Signal Process. 658G. Scutari, F. Facchinei, and L. Lampariello. Parallel and distributed methods for constrained nonconvex optimization-Part I: Theory. IEEE Trans. Signal Process., 65(8):1929-1944, Apr. 2017.
Parallel and distributed methods for constrained nonconvex optimization-Part II: Applications in communications and machine learning. G Scutari, F Facchinei, L Lampariello, S Sardellitti, P Song, IEEE Trans. Signal Process. 658G. Scutari, F. Facchinei, L. Lampariello, S. Sardellitti, and P. Song. Parallel and distributed methods for constrained nonconvex optimization-Part II: Applications in communications and machine learning. IEEE Trans. Signal Process., 65(8):1945-1960, Apr. 2017.
G. Scutari and Y. Sun. Parallel and distributed successive convex approximation methods for big-data optimization. In F. Facchinei and J.-S. Pang, editors, Multi-Agent Optimization, pages 141-308. Springer, C.I.M.E. Foundation Subseries (Lecture Notes in Mathematics), 2018.
Distributed nonconvex constrained optimization over time-varying digraphs. G Scutari, Y Sun, Math. Prog. G. Scutari and Y. Sun. Distributed nonconvex constrained optimization over time-varying digraphs. Math. Prog., Feb 2019.
EXTRA: An exact first-order algorithm for decentralized consensus optimization. Wei Shi, Qing Ling, Gang Wu, Wotao Yin, SIAM Journal on Optimization. 252Wei Shi, Qing Ling, Gang Wu, and Wotao Yin. EXTRA: An exact first-order algorithm for decentralized consensus optimization. SIAM Journal on Optimization, 25(2):944-966, 2015.
| []
|
[
"Coincidence-based reconstruction for reactor antineutrino detection in gadolinium-doped Cherenkov detectors",
"Coincidence-based reconstruction for reactor antineutrino detection in gadolinium-doped Cherenkov detectors"
]
| [
"L Kneale \nDepartment of Physics & Astronomy\nUniversity of Sheffield\nHicks BuildingS3 7RHBroomhall, SheffieldUnited Kingdom\n",
"M Smy \nDepartment of Physics & Astronomy\nUniversity of California Irvine\nFrederick Reines Hall\n92697-4575IrvineCaliforniaUSA\n",
"M Malek \nDepartment of Physics & Astronomy\nUniversity of Sheffield\nHicks BuildingS3 7RHBroomhall, SheffieldUnited Kingdom\n"
]
| [
"Department of Physics & Astronomy\nUniversity of Sheffield\nHicks BuildingS3 7RHBroomhall, SheffieldUnited Kingdom",
"Department of Physics & Astronomy\nUniversity of California Irvine\nFrederick Reines Hall\n92697-4575IrvineCaliforniaUSA",
"Department of Physics & Astronomy\nUniversity of Sheffield\nHicks BuildingS3 7RHBroomhall, SheffieldUnited Kingdom"
]
| []
| A reconstruction algorithm has been developed to capitalize on advances in Cherenkov technology for reactor antineutrino detection. Large gadolinium-doped water (Gd-H 2 O) Cherenkov detectors are a developing technology which use Gd loading to increase the visibility of the neutrons produced in inverse beta decay (IBD) interactions, which produce positron-neutron pairs coincident in time and space. In this paper, we describe the reconstruction, which uses the combined light from both events in an IBD pair to accurately reconstruct the interaction vertex. The algorithm has been applied to the reconstruction of reactor antineutrinos in Gd-H 2 O and in Gd-doped water-based liquid scintillator (Gd-WbLS), an advanced detector medium which is also currently in development. Compared to a single-event reconstruction, the combined reconstruction improves vertex resolution for reactor IBD positrons by up to a factor of 4.5 at the lowest energies. IBD-neutron vertex resolution was found to improve by more than 30% in most instances. Powerful background rejection with the coincidence reconstruction can be achieved by requiring a minimum quality of fit. This was found to reject up to 94% of accidental coincidences of uncorrelated background events, while retaining at least 97.5% of the IBD signal pairs. | 10.1016/j.nima.2023.168375 | [
"https://export.arxiv.org/pdf/2210.10576v1.pdf"
]
| 252,992,479 | 2210.10576 | 004db0adb09275f871c0d75bbfc3c6696fc53567 |
Coincidence-based reconstruction for reactor antineutrino detection in gadolinium-doped Cherenkov detectors
19 Oct 2022
L Kneale
Department of Physics & Astronomy
University of Sheffield
Hicks BuildingS3 7RHBroomhall, SheffieldUnited Kingdom
M Smy
Department of Physics & Astronomy
University of California Irvine
Frederick Reines Hall
92697-4575IrvineCaliforniaUSA
M Malek
Department of Physics & Astronomy
University of Sheffield
Hicks BuildingS3 7RHBroomhall, SheffieldUnited Kingdom
Coincidence-based reconstruction for reactor antineutrino detection in gadolinium-doped Cherenkov detectors
19 Oct 2022. Preprint submitted to Nuclear Instruments & Methods in Physics Research A, October 20, 2022. Keywords: reactor antineutrinos, inverse beta decay, vertex reconstruction
A reconstruction algorithm has been developed to capitalize on advances in Cherenkov technology for reactor antineutrino detection. Large gadolinium-doped water (Gd-H 2 O) Cherenkov detectors are a developing technology which use Gd loading to increase the visibility of the neutrons produced in inverse beta decay (IBD) interactions, which produce positron-neutron pairs coincident in time and space. In this paper, we describe the reconstruction, which uses the combined light from both events in an IBD pair to accurately reconstruct the interaction vertex. The algorithm has been applied to the reconstruction of reactor antineutrinos in Gd-H 2 O and in Gd-doped water-based liquid scintillator (Gd-WbLS), an advanced detector medium which is also currently in development. Compared to a single-event reconstruction, the combined reconstruction improves vertex resolution for reactor IBD positrons by up to a factor of 4.5 at the lowest energies. IBD-neutron vertex resolution was found to improve by more than 30% in most instances. Powerful background rejection with the coincidence reconstruction can be achieved by requiring a minimum quality of fit. This was found to reject up to 94% of accidental coincidences of uncorrelated background events, while retaining at least 97.5% of the IBD signal pairs.
gadolinium, Cherenkov, water-based liquid scintillator
Introduction
As Reines and Cowan showed [1], the antineutrino emission from a reactor can be detected via the inverse β decay (IBD) weak interaction of antineutrinos with free protons in water or a hydrocarbon liquid: ν̄_e + p → e⁺ + n. This is the principal interaction of the antineutrino at the low energies of reactor antineutrinos.
A nascent water Cherenkov technology -gadolinium (Gd) doping -opens up the possibility of detecting reactor antineutrinos in a water or water-based Cherenkov detector. Large Gd-doped water (Gd-H 2 O) Cherenkov detectors use Gd loading to tag the neutrons produced in the IBD interaction. The Gd-H 2 O technology was first demonstrated in [2] and other detectors have more recently followed suit [3,4].
In pure water, the IBD neutron captures on hydrogen and the low light yield makes this difficult to observe. In Gd-H 2 O, the neutron captures preferentially on gadolinium at concentrations greater than 0.01%, and this increases the light yield from the capture of the IBD neutron by a factor of 3 to 4. In addition, the coincidence of the neutron capture with the positron signal is closer in time than in pure water, which enhances background rejection.
Liquid scintillator detectors are a proven technology for reactor antineutrino detection [5] and Gd-doped scintillator detectors benefit from the increased light yield and the coincidence of the neutron capture close in distance and time to the positron vertex [6,7,8].
Water-based liquid scintillator (WbLS) [9] is an emerging medium, which has the potential to combine the higher light yield and lower-energy sensitivity of scintillation detectors with the directional information and large scale of water Cherenkov detectors with benefits for reactor antineutrino detection [10,11].
The accuracy of vertex reconstruction is important for reducing systematic error on the definition of the fiducial volume, for background rejection and for reconstruction of the antineutrino energy. Improvements at lower energies in particular can improve overall sensitivity to reactor antineutrinos and help to lower the energy threshold of a detector.
The displacement of the neutron capture from the primary IBD interaction vertex is small compared to the vertex resolution in Gd-doped detectors -with a mean distance of ∼ 6 cm, over 90% of neutrons capture within 10 cm and the remaining capture within 35 cm. A novel reconstruction algorithm which capitalizes on this spatial coincidence of the signal pair in the emerging Gd-doping technology has been developed and applied to interactions of antineutrinos in the reactor spectral range.
This paper describes the coincidence reconstruction that has been implemented specifically to reconstruct the position of events in a Gd-doped detector medium, by reconstructing pairs of events together. Section 2 describes the fundamentals of reactor antineutrino detection in Gd-doped media. Section 4 describes the established maximum likelihood fitter for single-event reconstruction which forms the basis of the coincidence reconstruction. Section 5 details the extension of this fitter to a coincidence reconstruction and its implementation for reactor antineutrinos. Improvements to vertex resolution and event selection/rejection in the reactor antineutrino energy range are presented and discussed in Sections 6 and 7 for two Gd-doped Cherenkov detection media in two different-sized detectors. Conclusions are drawn in Section 8. Some of the material included in this paper has been taken from [12].
Reactor antineutrinos in gadolinium-doped Cherenkov detectors
In a Cherenkov detector, positrons from the IBD interaction with a total energy above the Cherenkov threshold of ∼0.8 MeV emit a prompt signal. The detectable spectrum of the IBD positrons resulting from reactor antineutrino interactions is in the range ∼0.8 MeV to ∼8 MeV total energy, with a peak at ∼2.4 MeV, given a peak reactor antineutrino energy for IBD of ∼3.7 MeV [13]. The neutrons from the IBD thermalize and are captured on nuclei in the medium, emitting a delayed signal. In pure water, the IBD neutrons capture on hydrogen, resulting in the delayed emission of a single 2.2 MeV gamma as the resulting deuteron decays to ground state. This occurs within a mean time of ∼200 µs of the prompt signal.
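These figures are consistent with the standard zeroth-order IBD kinematics, quoted here for orientation (textbook relations, not taken from this paper):

$$E_{\bar\nu}^{\mathrm{thr}} = \frac{(m_n + m_e)^2 - m_p^2}{2 m_p} \approx 1.806~\mathrm{MeV}, \qquad E_{e^+} \approx E_{\bar\nu} - (m_n - m_p) \approx E_{\bar\nu} - 1.293~\mathrm{MeV},$$

so the ∼3.7 MeV peak antineutrino energy maps to a positron total energy of roughly 2.4 MeV, matching the detectable spectrum described above.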
The principle of using Gd-H 2 O for low-energy reactor antineutrino detection was first suggested by [14]. The neutron captures preferentially onto the Gd due to the very high neutron-capture cross section of Gd (∼49,000 b) compared to hydrogen (∼0.3 b). At a concentration of 0.1% Gd ions, which can be achieved with the addition of 0.2% gadolinium sulfate, 90% of the neutrons may capture onto Gd [15]. The remaining neutrons capture onto the hydrogen or sulfate. The subsequent decay of the Gd to ground state releases a cascade of gammas totaling ∼8 MeV in energy. These further interact in the water to produce Cherenkov light and the neutron-capture signal can be detected with a peak visible energy of around 4.5 MeV in a Gd-doped water Cherenkov detector, which is generally higher in energy than the positron signal. In Gd-H 2 O, the delayed neutron-capture signal occurs within a shorter mean time of ∼30 µs.
This combination of the prompt positron and higher-energy delayed neutron-capture signal within a short space and time results in a more easily detectable correlated signal in Gd-H 2 O compared to in pure water. This results in lower-energy sensitivity and makes the prospect of reactor antineutrino detection in a water Cherenkov detector feasible.
The addition of a scintillating component to Gd-H 2 O, in the form of a water-based liquid scintillator, could combine the benefits of Gd-H 2 O, including the coincident signal pair from the Gd doping and the directional information and particle identification capabilities of Cherenkov light [16], with the higher light yield of scintillator detectors, for detection of reactor antineutrinos at the lowest end of the energy range.
WbLS cocktails have been developed using PPO (2,5-diphenyl-oxazole) as the wavelength-shifting scintillator component in a linear alkylbenzene (LAB) solvent [9]. The oily scintillator component is then combined with pure water using a surfactant which creates micelles which have both hydrophilic (polar) and hydrophobic (non-polar) surfaces. Gd-doped WbLS (Gd-WbLS) is under development.
IBD positrons are emitted largely isotropically and the prompt signal comes from the single positron. Neutrons from the IBD interaction are generally emitted in the forward direction compared to that of the incoming antineutrino, although this directional information is lost within a couple of scatters as the neutron thermalizes in the medium. The light from the neutron capture on gadolinium is composed of multiple gammas in multiple directions, which results in a more isotropic light distribution compared to that of the single positron in Gd-H 2 O. In Gd-WbLS, there is an additional contribution of isotropic scintillation light in both the prompt positron and delayed neutron signal.
Detector Simulations
In this paper, the coincidence reconstruction is applied to interactions in two different detector sizes, each with a Gd-H 2 O and a Gd-WbLS fill. More precisely, the two fill media are:
• Gd-H 2 O with 0.2% Gd 2 (SO 4 ) 3 doping (for 0.1% Gd concentration) and
• Gd-WbLS with 0.2% Gd 2 (SO 4 ) 3 doping and ∼100 photons per MeV WbLS (approximately 1% of the light yield of pure LAB-based scintillator with 2 g/L of the fluor, PPO, typically used in large neutrino experiments such as Daya Bay and SNO+ [17,18]).
The two detectors are upright cylinders, with an inner PMT support structure which creates an instrumented inner detector volume within the tank. The detector parameters are summarized in Table 1. Full Monte Carlo (MC) detector simulations were carried out with an adaptation of RAT-PAC (Reactor Analysis Tool -Plus Additional Codes) [19], which is based on the physics simulation framework GEANT4 [20,21], the CLHEP physics library [22], the GLG4sim (Generic Liquid-scintillator Anti-Neutrino Detector or GenericLAND) Geant4 simulation for neutrino physics [23] and the data analysis framework ROOT [24].
The MC model for WbLS is detailed in [25]. The time profile of scintillation light is based on measurements of WbLS [26,27], and the light yield and scattering were taken from measurements of Gd-WbLS [28].
Low-energy single-event reconstruction
The single-event reconstruction - BONSAI (Branch Optimization Navigating Several Annealing Iterations) - was originally written to reconstruct low-energy events from Cherenkov light in water Cherenkov detectors and has been used for many years in Super-Kamiokande for reconstruction of events up to 100 MeV [29]. It is a maximum likelihood fitter to the PMT hit timing. The likelihood is based on the hit time residuals of the Cherenkov signal in Gd-H 2 O (or Cherenkov + scintillation signal in Gd-WbLS) and dark noise background.
It is calculated for a selection of test vertices and is given by:
$$\ln L(x, t_0) = \ln\left(\prod_{i=1}^{N} P(\Delta t_i(x))\right). \qquad (1)$$
The hit time residual ∆t i (x) is:
$$\Delta t_i(x) = t_i - \mathrm{tof}_i(x) - t_0 \qquad (2)$$
where x is the test vertex, t_i is the hit time at the i-th PMT, t_0 is the emission time and tof_i(x) = |x_i − x|/c_water is the time of flight from the test vertex to the position x_i of the PMT for hit i. The dark noise component is calculated by taking the rate of hits outside the signal window and scaling it to the size of the signal window. The signal window is defined by the residuals t_i − tof_i(x) for a given test vertex x. P(Δt_i) is a probability density function (PDF) which is defined using hit time residuals from true vertices in calibration data or Monte Carlo (MC) simulation. The timing residual PDFs fold in the effects of PMT timing features, photocoverage and scattering and reflection in the detector medium, but do not depend directly on the light's angle of incidence, distance traveled or on the location of the PMTs. In Figure 1, showing the timing residuals used for the detector configurations under analysis, the shape is dominated by the PMT timing features discussed in [30], with the prompt peak at zero, the double-pulsing peak at ∼50 ns and the late-pulsing peak at ∼70 ns. Scattering in increasing tank sizes results in increasing tails out to longer times. The addition of liquid scintillator in Gd-WbLS results in a wider prompt peak due to absorption and re-emission by the scintillator and consequently less well-defined prompt and double-pulsing peaks. The PDFs of time residuals are derived from simulation for this paper.
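To make the likelihood concrete, the sketch below evaluates Eqs. (1)-(2) for one test vertex. It is a minimal illustration, not BONSAI's actual interface: the tabulated `residual_pdf`, the flat `dark_pdf` floor and the nominal group index are all assumed placeholders.

```python
import numpy as np

C_WATER_CM_NS = 29.98 / 1.38  # ~21.7 cm/ns: vacuum light speed over a nominal group index

def log_likelihood(x, t0, pmt_pos, hit_times, residual_pdf, dark_pdf=1e-4):
    """Eq. (1) evaluated with the hit-time residuals of Eq. (2).

    x            -- (3,) test vertex [cm]
    t0           -- emission time [ns]
    pmt_pos      -- (N, 3) positions of the hit PMTs [cm]
    hit_times    -- (N,) hit times [ns]
    residual_pdf -- callable P(dt), tabulated from calibration data or MC
    dark_pdf     -- flat probability density modelling the dark-noise floor
    """
    tof = np.linalg.norm(pmt_pos - x, axis=1) / C_WATER_CM_NS  # times of flight
    dt = hit_times - tof - t0                                  # hit-time residuals
    return float(np.sum(np.log(residual_pdf(dt) + dark_pdf)))

# toy stand-in for the residual PDFs of Figure 1 (a 3 ns Gaussian prompt peak):
toy_pdf = lambda dt: np.exp(-0.5 * (dt / 3.0) ** 2) / (3.0 * np.sqrt(2.0 * np.pi))
```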
For a given triggered event, hits are first passed through a hit selection criterion, which creates a list of hits that can be used to generate a sample of vertices which form the starting point for the likelihood maximization. This is done by removing isolated hits and then requiring that for any one pair of PMT hits separated by time ∆t, the distance that could be traveled by direct light in the time between the hits ((c/n)∆t) is less than the distance between the two hit PMTs. This ensures that in principle the light is unscattered and could have come from the same interaction.
A minimum of four hits is required to reconstruct a vertex in 3D space. Sets of four hits are selected from the list of direct hits and used to define a test vertex by solving all four equations for the hit arrival times simultaneously, exactly and analytically for x, y, z and t 0 . In this way, each quadruple of hits defines a point in the detector and a list of potential initial test vertices for the maximum likelihood vertex search is generated. Having more than one starting point helps to avoid mis-reconstruction due to local maxima.
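The quadruple-to-vertex step can be illustrated as follows. BONSAI solves the four time-of-flight equations exactly and analytically; for brevity this sketch solves the same system numerically with SciPy, so the function name and the numerical approach are illustrative only.

```python
import numpy as np
from scipy.optimize import fsolve

C_WATER_CM_NS = 21.7  # approximate speed of light in water [cm/ns]

def four_hit_seed(pmt_pos4, hit_times4, guess=(0.0, 0.0, 0.0, 0.0)):
    """Solve |x_i - x| = c (t_i - t0) for i = 1..4, giving a seed (x, t0)."""
    def residuals(p):
        x, t0 = p[:3], p[3]
        return np.linalg.norm(pmt_pos4 - x, axis=1) - C_WATER_CM_NS * (hit_times4 - t0)
    sol = fsolve(residuals, np.asarray(guess, dtype=float))
    return sol[:3], sol[3]  # seed vertex [cm] and emission time [ns]
```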
Since the number of quadruples grows as the fourth power of the number of hits (∝ N⁴), some limits are applied to increase speed. The number of quadruples is reduced by giving preference to four-hit combinations with less spread in absolute time. This is done by selecting a time window containing a predetermined number of combinations and maximizing over all combinations in the window. Additional quadruples are formed by combining each hit in the time window with the three hits that immediately follow it. When test vertices for all combinations in that window have been evaluated, the number of starting points is further reduced by averaging over nearby points in steps of 60 cm and 150 cm.
From the final, reduced list of starting points, likelihood maximization, with free parameters for emission time and dark noise rate, is carried out for successive iterations of searches of test vertices.
At each iteration, the simulated annealing process selects the test vertices with the best log likelihoods in a range to take forward to the next iteration. The range is a fraction of the total range of log likelihoods for all test vertices in that iteration and the fraction (and thus range) is reduced in each step.
Each selected test vertex then becomes a center point for a dodecahedron and the vertices of each dodecahedron become new additional test vertices. The radius of the dodecahedrons are reduced with each iteration. The dodecahedron grid shape was selected to give the optimal coverage of the space to balance accuracy and speed.
In the final iteration, the vertex corresponding to the highest log likelihood is chosen as the reconstructed vertex.
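The search loop can be sketched as below. The 20 dodecahedron vertices are exact; the radius schedule, the selection band and the iteration count are illustrative choices standing in for BONSAI's tuned values, and `loglike` is any callable such as the one sketched earlier.

```python
import numpy as np

PHI = (1 + 5 ** 0.5) / 2
_cube = [(sx, sy, sz) for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]
_rect = [(0, s1 / PHI, s2 * PHI) for s1 in (-1, 1) for s2 in (-1, 1)]
_rect += [(s1 / PHI, s2 * PHI, 0) for s1 in (-1, 1) for s2 in (-1, 1)]
_rect += [(s1 * PHI, 0, s2 / PHI) for s1 in (-1, 1) for s2 in (-1, 1)]
DODECA = np.array(_cube + _rect) / np.sqrt(3.0)  # 20 unit vectors to dodecahedron vertices

def annealing_search(seeds, loglike, r0=150.0, n_iter=5, shrink=0.6, band0=0.5):
    pts = np.atleast_2d(np.asarray(seeds, dtype=float))
    radius, band = r0, band0
    for _ in range(n_iter):
        ll = np.array([loglike(p) for p in pts])
        cut = ll.max() - band * (ll.max() - ll.min() + 1e-12)
        best = pts[ll >= cut]                                  # keep the best band
        pts = np.vstack([best] + [best + radius * d for d in DODECA])
        radius *= shrink                                       # shrink the dodecahedra
        band *= shrink                                         # narrow the selection band
    ll = np.array([loglike(p) for p in pts])
    return pts[int(np.argmax(ll))]                             # final reconstructed vertex
```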
Coincidence Reconstruction
The coincidence reconstruction for event pairs in BONSAI uses the same building blocks as the standard BONSAI single-event fit. Light from both the positron and neutron events is used and a single, combined, reconstructed vertex is output for any event pair passed to the reconstruction.
The hits from each event go through the hit selection process independently. Once the selected hits have been through the four-hit selection, a list of test vertices is output for each event. At this point, the lists of test vertices from the two events are combined into a single, larger list of test vertices which are used as starting points for the vertex search.
The hit information for each event is retained and the likelihood for each event is calculated simultaneously for each test vertex. The coincidence reconstruction is achieved via the maximization of the sum of log likelihoods for the prompt (positron or positron-like) and delayed (neutron or neutronlike) events with free parameters for prompt and delayed emission times and the dark noise rate:
$$\ln L(x, t_{0,p}, t_{0,d}) = \ln L_p(x, t_{0,p}) + \ln L_d(x, t_{0,d}) \qquad (3)$$
where the log likelihoods for the prompt and delayed events are as given in Equation (1). Emission times t_{0,p} and t_{0,d} are the fitted prompt and delayed emission times respectively. Combining the starting solutions for each individual event into a larger list of starting solutions improves rejection of local maxima in the likelihood maximization. The addition of data points (PMT hits) is particularly helpful where light yields from one or both individual events are low. This is often the case for the positron event in the reactor antineutrino range, particularly in Gd-H 2 O. For example, the BONSAI single-event reconstruction tends to be unstable if there are fewer than 10 inner-PMT hits, which equates to between 1 and 1.5 MeV in the Gd-H 2 O detectors. In the 16 m (22 m) Gd-H 2 O detector simulations used in this paper, the light yield (without dark noise) from 23% (21%) of the positrons produced fewer than 10 PMT hits and the average light yield was 16.72 (17.14) hits.
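The coincidence objective of Eq. (3) then amounts to summing two single-event likelihoods that share a vertex but not an emission time. In this sketch each likelihood is profiled over its t0 on a coarse grid; the grid and the dark-noise floor are assumptions, not BONSAI's internal treatment.

```python
import numpy as np

def profiled_loglike(x, pmt_pos, hit_times, residual_pdf, c=21.7, dark=1e-4):
    """Eq. (1) maximised over the emission time t0 on a coarse grid."""
    tof = np.linalg.norm(pmt_pos - x, axis=1) / c
    t0_grid = np.median(hit_times - tof) + np.linspace(-20.0, 20.0, 81)
    return max(np.sum(np.log(residual_pdf(hit_times - tof - t0) + dark))
               for t0 in t0_grid)

def core_loglike(x, prompt, delayed, residual_pdf):
    """Eq. (3): one shared vertex, two independently fitted emission times."""
    return (profiled_loglike(x, *prompt, residual_pdf=residual_pdf) +
            profiled_loglike(x, *delayed, residual_pdf=residual_pdf))

# prompt and delayed are (pmt_pos, hit_times) tuples for the positron-like
# and neutron-like events respectively.
```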
CoRe Implementation of the Coincidence Reconstruction for Reactor Antineutrinos
The distance of the neutron-capture vertex from the IBD interaction and positron vertex is within the expected vertex resolution. The mean distance is ∼6 cm, over 90% of neutron captures occur within 10 cm of the primary vertex, and all occur within 35 cm. The additional light from the neutron event can therefore be used to improve the reconstruction of the positron event.
The CoRe implementation was first developed to reconstruct pairs of events using Cherenkov light in Gd-H 2 O and optimized for the best possible vertex resolution for IBD positron-neutron pairs in this medium.
In BONSAI, preference is given to vertices which, when combined with the hit pattern, reflect a Cherenkov light distribution. This is achieved by correcting the log likelihood as follows:
$$\ln L'(x, t_0) = \ln L(x, t_0) - \frac{1}{2}\left(\frac{\theta_c - \theta_{\mathrm{fit}}}{\sigma_\theta}\right)^2 \qquad (4)$$
where θ_c (44.75°, close to the maximum Cherenkov cone opening angle for positrons in water) is the constraining angle and θ_fit is the fitted cone opening angle. The cone opening angle is the angle between the direction of the particle and the direction of the photons. The value of σ_θ depends on whether the cone opening angle is less than the constraining angle (a good fit to Cherenkov light) or greater than the constraining angle (a poor fit to Cherenkov light). The prompt light in an IBD interaction originates from Cherenkov cones along the positron's track. For electrons and positrons, significant numbers of Cherenkov photons are detected only when the particles are highly relativistic and the Cherenkov cone opening angle is therefore maximal. In water, the maximum Cherenkov cone opening angle for positrons is 41.2°. Since the Cherenkov light from IBD neutron events comes from multiple gammas emitted isotropically, the delayed signal is generally more isotropic than the light from the positron. For this reason, the angular constraint was relaxed for the CoRe implementation, and it was found that a wider angular constraint resulted in better vertex resolution for neutrons and, additionally, for positrons. The optimal constraint was found to increase with the size of the detector. A 90° angle constraint gave the optimal vertex resolution for both positrons and neutrons in Gd-H 2 O in the 22 m detector. The optimal vertex resolution in the 16 m detector was found with an 80° constraint.
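The penalty of Eq. (4) can be sketched as below. The text specifies only that σ_θ differs on either side of the constraining angle; the particular widths and the crude mean-direction estimator used here for θ_fit are invented for illustration, not BONSAI's internal fit.

```python
import numpy as np

def angle_penalty(x, pmt_pos, theta_c=44.75, sigma_below=20.0, sigma_above=7.0):
    """Angular correction of Eq. (4) for one candidate vertex (degrees)."""
    u = pmt_pos - x
    u = u / np.linalg.norm(u, axis=1, keepdims=True)   # photon directions from x
    d = u.mean(axis=0)
    d = d / np.linalg.norm(d)                          # crude fitted track direction
    theta_fit = np.degrees(np.arccos(np.clip(u @ d, -1.0, 1.0))).mean()
    sigma = sigma_below if theta_fit < theta_c else sigma_above
    return -0.5 * ((theta_c - theta_fit) / sigma) ** 2
```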
In order to extend the CoRe implementation to reconstruct pairs of events in Gd-WbLS, the constraint on the angle was turned off completely to allow for the fully isotropic scintillation light, which does not arrive with sufficient separation from the Cherenkov light to require a separate treatment. If, however, it were possible to separate the Cherenkov and scintillation light, e.g., by using a slower scintillator [31] or a lower concentration of PPO [27] in combination with novel light collection such as fast photosensors [32] or wavelength-based photon sorting [33], then separate treatment of the Cherenkov and scintillation light may improve results for Gd-WbLS fills.
The results in this paper use a reconstruction threshold requiring a minimum total light yield from an event of 5 hits on the PMTs instrumenting the inner volume. Ten hits is considered to be the minimum light yield required for a reliable reconstruction in BONSAI to help reject events which reconstruct poorly. Since CoRe was expected to improve results at very low light levels, the threshold in BONSAI and CoRe was set to 5 hits for this study.
CoRe iterates over all triggers and attempts to reconstruct all pairs occurring within a specified time of each other -200 µs in Gd-H 2 O and 300 µs in Gd-WbLS. These loose limits on the time difference were designed to ensure that all true pairs were reconstructed while at the same time reducing computation time. The longer time limit in Gd-WbLS takes into account the wider timing distribution of the prompt and delayed triggers compared to Gd-H 2 O. For successfully reconstructed pairs, the data output include the time between events, as well as the total charge and total number of PMT hits for each event. Additional information for each event includes a measure of the fit quality -called the timing goodness.
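The pairing pass itself is a simple sliding-window loop over time-ordered triggers, sketched below with illustrative names; each candidate (prompt, delayed) pair would then be handed to the coincidence fit.

```python
def build_candidate_pairs(trigger_times_us, window_us=200.0):
    """All (prompt, delayed) index pairs closer in time than window_us.

    Assumes trigger_times_us is sorted, so the inner loop can stop at the
    first trigger outside the window (200 us for Gd-H2O, 300 us for Gd-WbLS).
    """
    pairs = []
    for i, t_prompt in enumerate(trigger_times_us):
        for j in range(i + 1, len(trigger_times_us)):
            if trigger_times_us[j] - t_prompt > window_us:
                break
            pairs.append((i, j))
    return pairs
```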
Timing Goodness -Fit Quality
Where events are poorly reconstructed, the coincidence of the hit time residuals as calculated from the reconstructed vertex is also poor. BONSAI outputs the coincidence of the time residuals as a measure of the vertex fit quality -the timing goodness. The time residuals are calculated using the reconstructed time of emission, which is extracted from a fit to the peak of the time-of-flight-subtracted PMT hit times at the reconstructed vertex x.
More specifically, the timing goodness is given by a Gaussian distribution for the Cherenkov timing resolution, weighted by a second, wider Gaussian:
$$g(x) = \frac{\sum_{\mathrm{hits}} w_i\, e^{-\frac{1}{2}\left(\frac{\Delta t_i(x)}{\sigma}\right)^2}}{\sum_{\mathrm{hits}} w_i} \qquad (5)$$

Here, σ is the timing resolution expected for Cherenkov events and the w_i are weights based on the hit time residuals using a wider effective time resolution. The hit weights are given by a Gaussian of width ω:

$$w_i = e^{-\frac{1}{2}\left(\frac{\Delta t_i(x)}{\omega}\right)^2} \qquad (6)$$
The results presented in this paper for both media have timing goodness values calculated using Gaussian distributions with widths of σ = 4 ns and ω = 50 ns. An ideal reconstruction would result in timing goodness g(x) = 1.
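Eqs. (5)-(6) translate directly into code; the sketch below uses the widths quoted in the text (σ = 4 ns, ω = 50 ns) and takes the hit-time residuals at the reconstructed vertex as input.

```python
import numpy as np

def timing_goodness(dt, sigma=4.0, omega=50.0):
    """Timing goodness of Eqs. (5)-(6); returns 1 for a perfect fit (all dt = 0)."""
    dt = np.asarray(dt, dtype=float)
    w = np.exp(-0.5 * (dt / omega) ** 2)                                    # Eq. (6) weights
    return float(np.sum(w * np.exp(-0.5 * (dt / sigma) ** 2)) / np.sum(w))  # Eq. (5)
```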
This measure of fit quality is less well-adapted to Gd-WbLS because of the wider prompt peak in the pdf of the time residuals and the convolution of the Cherenkov and scintillation light but, for the purposes of this paper, it was found to be sufficient as a relative, rather than absolute, measure. It should be adapted to provide an accurate measure of the fit quality for reconstruction in Gd-WbLS in the future.
Improved Vertex Resolution with CoRe
CoRe improved the IBD vertex resolution compared to the BONSAI single-event fit for all detector configurations studied. Figures 2 and 3 show the results for Gd-H 2 O and Gd-WbLS respectively for both the 16 m and 22 m detectors. The vertex resolution is expressed in terms of the distance from the true vertex within which 68% of the events reconstruct.
The improvement achieved using the coincidence reconstruction is particularly beneficial for positrons at lower energies in Gd-H 2 O, where the low light yield from such events makes reconstruction without the additional light from the neutrons difficult.
The vertex resolution for positrons is shown in Figures 2 and 3 as a function of positron energy. The vertex resolution for IBD neutrons is shown in Figures 2 and 3 as a function of the distance of the neutron capture from the primary vertex. It is expected that the resolution would increase (deteriorate) as the distance from the primary vertex increases. However, since the distances are within the vertex resolution achieved, and statistical errors are large, this effect is not significant. The vertex resolution for neutrons improved with CoRe by greater than 30% at most distances in all configurations.
The saturation of the positron vertex resolution below 1.5 MeV, which is observed in the single-event BONSAI reconstruction in the 16m Gd-H 2 O detector, is due to the reconstruction threshold of 5 inner-PMT hits. This has the effect of improving the resolution at low energies as the threshold is increased (Figure 4), since the hit threshold removes the events with the lowest light yields and is therefore more likely to remove events at the lowest energies. Events with a lower light yield are less likely to reconstruct well.
The effect may be accentuated by the combination of the hit threshold with the greater isotropy of the light from the lowest-energy events ( Figure 5). This helps to constrain the fit in the Cherenkov direction and to mitigate, to a degree, the difficulties presented by a relatively low light yield. Overall, these effects are diluted with CoRe and in Gd-WbLS due to the additional, isotropic light from the neutron capture and scintillation respectively.
With the addition of WbLS, the BONSAI results are significantly improved since more of the lower-energy positrons have sufficient light to achieve a reasonable fit. Although the benefit of CoRe over BONSAI is less marked in Gd-WbLS for this reason, the improvement in vertex resolution achieved with the coincidence reconstruction remains significant. It is slightly more significant in the larger tank at 2.5 MeV, which is where the BONSAI fit is least robust, despite the addition of the scintillation light.
The vertex reconstruction worsens with increasing tank size with BONSAI for both detector fills. This is due in particular to the difficulty reconstructing vertices as the distance from the PMTs increases, making reconstruction towards the center of the detector more difficult as the tank size increases. This effect is much more marked with the BONSAI reconstruction in Gd-H 2 O.
The CoRe reconstruction offers the most improvement over BONSAI in both tank sizes with the Gd-H 2 O medium. The CoRe results for the two detectors with this fill are consistent within the statistical error and may represent the limit of the reconstruction for detectors of this scale. The fit improves with the addition of WbLS in the 16 m detector and worsens with increasing tank size in Gd-WbLS, as expected. An unexpected result is the deterioration of the fit with the addition of WbLS in the 22 m detector. In fact, the addition of WbLS does not, overall, improve the vertex resolution with CoRe, compared to the vertex resolution achieved with CoRe in Gd-H 2 O. Although there is improvement over the single-event fitting with CoRe, no gain in terms of the vertex resolution with CoRe is achieved by adding WbLS, which suggests there may be room for improvement.
The fit is most difficult to constrain in the Cherenkov direction. This results in the pull of the reconstructed vertex forwards or backwards along the Cherenkov direction with respect to the true vertex, which is shown in Figure 6 as a function of mean photon travel distance. The mean photon travel distance is calculated for all of the hits in an event, excluding dark noise hits.
The pull is particularly noticeable with the BONSAI single-event reconstruction at the extremes -these are events with vertices very close to the PMTs and very far from the PMTs. Although there is a forward pull over most of the distances, the most significant pull with the BONSAI reconstruction tends to be backwards along the Cherenkov direction and is worse in Gd-H 2 O and in the larger detector.
The additional light from the neutron helps the reconstruction to converge on a point in the Cherenkov direction and flattens the pull in the CoRe reconstruction in all detector configurations. The pull is close to zero (within ∼25 cm) at most mean photon travel distances in all detectors. Although the dispersion at the extremities is minimized with the CoRe reconstruction, there remains a consistent pull in the forward direction, which is also seen in the case of the BONSAI reconstruction except at the extremities. This suggests that there are potential improvements to be made. Incorporating charge information into the fit, for example, could offer improvements by accounting for multiply-hit PMTs.
Background-Rejection Power of CoRe
The coincidence reconstruction brings the additional benefit of powerful background rejection capability by providing a means by which to differentiate true, correlated pairs of events from false pairs -accidental coincidences of uncorrelated events. While reconstruction of true pairs results in a better vertex reconstruction, false pairs tend to result in a poorer reconstruction.
Consequently, the timing goodness detailed in Section 5.1.1 can help to reject false pairs which otherwise pass coincidence cuts. Figure 7 shows the fraction of true IBD pairs and accidental coincidences remaining as a function of a timing goodness threshold applied to both the prompt and delayed event in a pair. The results for BONSAI are shown on the plots for comparison. For the BONSAI results, a requirement that the time between events is less than 200 µs has been applied to the uncorrelated pairs for consistency, but there is no cut on the distance between the prompt and delayed reconstructed vertices, which would normally be applied for background rejection with BONSAI in Gd-doped media¹.
The plots demonstrate the deterioration of the fit quality for uncorrelated events with CoRe, which is not reflected in the BONSAI results. Although the fit quality for uncorrelated events is better with BONSAI (i.e., the fit is not distorted by forcing a combined reconstruction), a threshold cut on the fit quality does not offer the potential for background rejection.
In all configurations, a threshold as low as 0.2 to 0.3 applied to the prompt and delayed event has the power to reject accidental coincidences of uncorrelated β decays from ambient radioactivity in the detector and surroundings, which are a significant source of background in the reactor antineutrino range. Table 3 shows the fraction of events remaining as a function of timing goodness thresholds between 0.4 and 0.6 for both detector sizes and both media. Conversely, the timing goodness from the coincidence reconstruction can be helpful in selecting true pairs from data which is a combination of all types of events. This is important when high sample purity is important, or where coincident events form a background to a single-event signal.
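The threshold scan behind Figure 7 and Table 3 reduces to counting pairs in which both events exceed the cut; the array names below are illustrative.

```python
import numpy as np

def surviving_fraction(g_prompt, g_delayed, threshold):
    """Fraction of pairs with both timing-goodness values above the threshold."""
    keep = (np.asarray(g_prompt) > threshold) & (np.asarray(g_delayed) > threshold)
    return float(keep.mean())

# e.g. scan signal (sig_*) and accidental (acc_*) samples over thresholds:
# for thr in np.arange(0.0, 0.85, 0.05):
#     print(thr, surviving_fraction(sig_gp, sig_gd, thr),
#           surviving_fraction(acc_gp, acc_gd, thr))
```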
Conclusions and Future Work
A new position reconstruction was implemented to take the light from two coincident events in a Cherenkov detector and reconstruct a combined vertex. This has been applied to IBD pairs in the reactor antineutrino range in four detector configurations. The four configurations include two detector sizes -16 m and 22 m -and two Gd-doped Cherenkov detection media -Gd-H 2 O and Gd-WbLS.
The new reconstruction improved the vertex resolution by a factor of more than 2 in the Gd-H 2 O detectors for 2.5 MeV positrons, close to the peak of the reactor signal. The reconstruction improved by more than 25% in the 16 m and 22 m Gd-WbLS configurations at the same energy and the vertex resolution for IBD neutrons improved by more than 30% at most distances between the neutron capture and primary interaction vertex.
With position-based energy reconstruction, improved vertex resolution brings an improvement in energy resolution and the potential to see down to lower energies. A reconstruction threshold of 5 inner-PMT hits has been used in the BONSAI and CoRe implementations for this paper. The deterioration in the reconstruction at lower energies with the single-event reconstruction places a limit on the minimum number of hits required and how well the positions and energies can be reconstructed at the lowest energies. Given the stability of the fit with the coincidence reconstruction right down to the Cherenkov threshold, the reconstruction threshold would no longer be the limiting factor in seeing the lowest energy reactor antineutrinos and their energies could be reliably reconstructed.
A measure of the fit quality output by the coincidence reconstruction has been shown to be effective as a way to identify true pairs of correlated IBD events and reject false pairs. At the low energies of reactor antineutrinos, β-decay backgrounds from ambient radioactivity in and around a detector will contribute significantly to the rate of accidental coincidences of uncorrelated events, which can mimic the IBD signal in a Gd-doped medium. The power of CoRe to reject false pairs has been shown to help to reject 70% to 90% of this source of accidental coincidences while retaining ∼98% or more of the IBD pairs. Other accidental coincidences, e.g., of a signal event with a background event or of different types of background, could be rejected in a similar way.
The coincidence reconstruction was developed to improve detection of antineutrinos from nuclear reactors for remote monitoring to deter proliferation of nuclear weapons [12], and can also be applied to antineutrino detection for fundamental physics research.
Implementation of the coincidence reconstruction for IBD in a gadoliniumdoped detector could have wider application beyond reactor antineutrinos as the emerging Gd-H 2 O and Gd-WbLS technologies are adopted. Super-K has already deployed gadolinium in its detector to three tenths of the planned final 0.1% concentration. In Super-K, searches such as the hunt for supernova relic neutrinos and the detection of pre-supernova antineutrinos rely on IBD and the addition of gadolinium is seen as vital in these searches. Improving the vertex reconstruction and, perhaps more significantly, background rejection with an implementation of the coincidence reconstruction offers potential benefits in this area. right] detectors, using the standard BONSAI reconstruction (circles) and CoRe (solid dots). Positron vertex resolution is plotted as a function of positron kinetic energy, while neutron vertex resolution is plotted as a function of distance of the neutron capture from the primary vertex. Vertex resolution is the distance from the true vertex within which 68% of the events reconstruct. Note that very small errors are obscured by the markers in places. Figure 4: Effect of the hit threshold on the vertex resolution in the 16 m Gd-H 2 O detector. Vertex resolution (distance from the true vertex within which 68% of the events reconstruct) shown for inner-PMT hit thresholds of 5 hits (solid dots), 13 hits (circles) and 15 hits (triangles). As the threshold increases, the vertex resolution below 2.5 MeVand in particular below 1.5 MeV -improves (decreases).
(a) cos θ as a function of true energy (b) Mean cos θ as a function of true energy Figure 5: Hit direction (cos θ) with respect to the positron direction as a function of true energy. The light from lower-energy particles is more isotropic (left) and the mean cos θ between the particle direction and hit directions drops off steeply below ∼2.5 MeV, to around zero at 1 MeV (right). The BONSAI fit benefits from a higher fraction of backward hits.
Figure 1 :
1PDFs of true hit-time residuals calculated from the MC vertex using simulation for the two detector sizes, each with a Gd-H 2 O and Gd-WbLS fills. These show the effects of PMT timing, increasing scattering with detector size and wider peaks with the addition of scintillator.
The results for 2.5 MeV and 5 MeV IBD positrons using BONSAI and CoRe are summarized in Table 2. The vertex resolution output by BONSAI improves with the addition of WbLS thanks to the additional scintillation light, but worsens with both BONSAI and CoRe with increasing detector size. Close to the peak of the positron signal, at 2.5 MeV, the resolution is improved by a factor of more than 2 in Gd-H 2 O: from 84.0 cm to 41.3 cm in the 16 m detector and from 99.2 cm to 40.3 cm in the 22 m detector. In Gd-WbLS the vertex resolution at the same energy improved by more than 25%: from 54.8 cm to 39.9 cm in the 16 m detector and from 63.9 cm to 45.3 cm in the 22 m detector. The two methods tend to converge at the higher end of the energy range in the reactor IBD positron spectrum for all detector configurations.
This rejection is most effective in Gd-H 2 O, where a threshold of 0.6 would remove over 90% of this type of accidental coincidence while retaining ∼98% of the signal events in the 16 m and 22 m detectors. The timing goodness is not optimized for a Gd-WbLS fill; however, there is still significant capacity for background rejection. A lower threshold of 0.4 would remove almost 70% of the accidental coincidences while still retaining 99% of the IBD pairs in the 16 m detector and 94% of the IBD pairs in the 22 m detector.
Figure 2: Comparison of vertex resolution for IBD positrons and neutrons in Gd-H 2 O. Panels: (a) positrons in the Gd-H 2 O 16 m detector; (b) positrons in the Gd-H 2 O 22 m detector; (c) neutrons in the Gd-H 2 O 16 m detector; (d) neutrons in the Gd-H 2 O 22 m detector. Results for the 16 m [top and bottom left] and 22 m [top and bottom right] detectors, using the standard BONSAI reconstruction (circles) and CoRe (solid dots). Positron vertex resolution is plotted as a function of positron kinetic energy, while neutron vertex resolution is plotted as a function of distance of the neutron capture from the primary vertex. Vertex resolution is the distance from the true vertex within which 68% of the events reconstruct. Note that very small errors are obscured by the markers in places.
Table 1: Summary of detector geometries used in this paper.

Tank diameter and height [m] | Inner volume radius [m] | Inner PMT coverage [%]
16 | 5.7 | 15
22 | 9.0 | 15
Table 2: Vertex resolution in cm with statistical error at selected energies for IBD positrons in the 16 m detector (top) and 22 m detector (bottom). A threshold timing goodness of 0.1 has been applied.

Detector | Reconstruction | 2.5 MeV | 5 MeV
16 m Gd-H 2 O | BONSAI | 84.0 ±0.84 | 52.0 ±1.04
16 m Gd-H 2 O | CoRe | 41.3 ±0.25 | 34.3 ±0.41
16 m Gd-WbLS | BONSAI | 54.8 ±1.10 | 47.2 ±1.91
16 m Gd-WbLS | CoRe | 39.9 ±0.49 | 36.5 ±0.91
22 m Gd-H 2 O | BONSAI | 99.2 ±0.98 | 53.4 ±1.07
22 m Gd-H 2 O | CoRe | 40.3 ±0.23 | 33.4 ±0.38
22 m Gd-WbLS | BONSAI | 63.9 ±1.00 | 50.4 ±1.65
22 m Gd-WbLS | CoRe | 45.3 ±0.53 | 39.0 ±0.98
Table 3: Fraction of IBD pairs and accidental coincidences remaining as a function of timing goodness (g) threshold in the four detector configurations.
¹ It is difficult to make a fair comparison of the overall background rejection capabilities of BONSAI and CoRe here, since there are a number of variables which can be used to reject background events. Cuts on these must be varied simultaneously and are dependent on the characteristics of both background and signal events.
Acknowledgments

The authors are grateful to M. Bergevin, for substantial contribution to the version of the RAT-PAC simulation used to generate the MC in this paper, to M. Askins, who wrote the generator used in the simulation of the coincident IBD events, and to Z. Bagasdarian, who incorporated the Gd-WbLS MC model into the RAT-PAC simulation.

Funding: This work was supported by the Atomic Weapons Establishment (AWE), as contracted by the Ministry of Defence, and the Science and Technology Facilities Council (STFC) in the UK, and the US Department of Energy's National Nuclear Security Administration.
References

[1] C. L. Cowan, et al., Detection of the free neutrino: a confirmation, Science 124 (1956) 103-104. doi:10.1126/science.124.3212.103.
[2] L. Marti, et al., Evaluation of Gadolinium's Action on water Cherenkov Detector Systems with EGADS, Nucl. Instrum. Methods Phys. Res. A 959 (2020) 163549. doi:10.1016/j.nima.2020.163549.
[3] K. Abe, et al., First gadolinium loading to Super-Kamiokande, Nucl. Instrum. Methods Phys. Res. A 1027 (2022) 166248. doi:10.1016/j.nima.2021.166248.
[4] A. R. Back, et al., Accelerator Neutrino Neutron Interaction Experiment (ANNIE): preliminary results and physics phase proposal, 2017. arXiv:1707.08222, doi:10.48550/arXiv.1707.08222.
[5] A. Gando, et al., Constraints on θ13 from a three-flavor oscillation analysis of reactor antineutrinos at KamLAND, Phys. Rev. D 83 (2011) 052002. doi:10.1103/PhysRevD.83.052002.
[6] Y. Abe, et al. (Double Chooz Collaboration), Reactor ν̄e disappearance in the Double Chooz experiment, Phys. Rev. D 86 (2012) 052008. doi:10.1103/PhysRevD.86.052008.
[7] F. P. An, et al., Improved measurement of electron antineutrino disappearance at Daya Bay, Chinese Phys. C 37 (2013) 011001. doi:10.1088/1674-1137/37/1/011001.
[8] J. K. Ahn, et al. (RENO Collaboration), Observation of reactor electron antineutrinos disappearance in the RENO experiment, Phys. Rev. Lett. 108 (2012) 191802. doi:10.1103/PhysRevLett.108.191802.
[9] M. Yeh, et al., A new water-based liquid scintillator and potential applications, Nucl. Instrum. Methods Phys. Res. A 660 (2011) 51-56. doi:10.1016/j.nima.2011.08.040.
[10] M. Askins, et al., THEIA: an advanced optical neutrino detector, Eur. Phys. J. C 80 (2020). doi:10.1140/epjc/s10052-020-7977-8.
[11] S. Zsoldos, et al., Antineutrino sensitivity at THEIA, 2022. arXiv:2204.12278, doi:10.48550/arXiv.2204.12278.
[12] L. Kneale, Coincidence-based reconstruction and analysis for remote reactor monitoring with antineutrinos, PhD thesis, 2021.
[13] P. Vogel, L. Wen, C. Zhang, Neutrino oscillation studies with reactors, Nat. Commun. 6 (2015). doi:10.1038/ncomms7935.
[14] A. Bernstein, T. West, V. Gupta, An assessment of antineutrino detection as a tool for monitoring nuclear explosions, Sci. Glob. Secur. 9 (2001) 235-255. doi:10.1080/08929880108426496.
[15] J. F. Beacom, M. R. Vagins, GADZOOKS! Antineutrino spectroscopy with large water Cherenkov detectors, Phys. Rev. Lett. 93 (2003) 171101. doi:10.1103/PhysRevLett.93.171101.
[16] S. Kasuga, et al., A study on the e/µ identification capability of a water Čerenkov detector and the atmospheric neutrino problem, Phys. Lett. B 374 (1996) 238-242. doi:10.1016/0370-2693(96)00138-4.
[17] W. Beriguete, et al., Production of a gadolinium-loaded liquid scintillator for the Daya Bay reactor neutrino experiment, Nucl. Instrum. Methods Phys. Res. A 763 (2014) 82-88. doi:10.1016/j.nima.2014.05.119.
[18] S. Andringa, et al., Current status and future prospects of the SNO+ experiment, Advances in High Energy Physics 2016 (2016) 1-21. doi:10.1155/2016/6194250.
[19] S. Seibert, RAT-PAC [Computer software], 2014. URL: https://github.com/rat-pac/rat-pac.
[20] J. Allison, et al., Recent developments in Geant4, Nucl. Instrum. Methods Phys. Res. A 835 (2016) 186-225. doi:10.1016/j.nima.2016.06.125.
[21] S. Agostinelli, et al., GEANT4 - a simulation toolkit, Nucl. Instrum. Methods Phys. Res. A 506 (2003) 250-303. doi:10.1016/S0168-9002(03)01368-8.
[22] L. Lönnblad, CLHEP - a project for designing a C++ class library for high energy physics, Comput. Phys. Commun. 84 (1994) 307-316. doi:10.1016/0010-4655(94)90217-8.
[23] G. Horton-Smith, GLG4sim [Computer software], 2005. URL: https://www.phys.ksu.edu/personal/gahs/GLG4sim/docs/html_latest/index.html.
[24] R. Brun, F. Rademakers, ROOT - an object oriented data analysis framework v6.18/02, Nucl. Instrum. Methods Phys. Res. A 389 (1997) 81-86. doi:10.5281/zenodo.3895860.
[25] B. J. Land, et al., MeV-scale performance of water-based and pure liquid scintillator detectors, Phys. Rev. D 103 (2021) 052004. doi:10.1103/PhysRevD.103.052004.
[26] J. Caravaca, et al., Characterization of water-based liquid scintillator for Cherenkov and scintillation separation, Eur. Phys. J. C 80 (2020) 1-10. doi:10.1140/epjc/s10052-020-8418-4.
[27] D. R. Onken, et al., Time response of water-based liquid scintillator from X-ray excitation, Mater. Adv. 1 (2020) 71-76. doi:10.1039/D0MA00055H.
[29] M. Smy, Low energy event reconstruction and selection in Super-Kamiokande-III, in: Proceedings of the International Cosmic Ray Conference, volume 5, 2008, pp. 1279-1282.
[30] J. Brack, et al., Characterization of the Hamamatsu R11780 12 inch photomultiplier tube, Nucl. Instrum. Methods Phys. Res. A 712 (2013) 162-173. doi:10.1016/j.nima.2013.02.022.
[31] S. D. Biller, E. J. Leming, J. L. Paton, Slow fluors for effective separation of Cherenkov light in liquid scintillators, Nucl. Instrum. Methods Phys. Res. A 972 (2020) 164106. doi:10.1016/j.nima.2020.164106.
[32] B. W. Adams, et al., A brief technical history of the Large-Area Picosecond Photodetector (LAPPD) Collaboration, 2016. arXiv:1603.01843, doi:10.48550/arXiv.1603.01843.
[33] T. Kaptanoglu, et al., Spectral photon sorting for large-scale Cherenkov and scintillation detectors, Phys. Rev. D 101 (2020) 072002. doi:10.1103/PhysRevD.101.072002.
Figure 3: Comparison of vertex resolution for IBD positrons and neutrons in Gd-WbLS. Panels: (a) positrons in the Gd-WbLS 16 m detector; (b) positrons in the Gd-WbLS 22 m detector; (c) neutrons in the Gd-WbLS 16 m detector; (d) neutrons in the Gd-WbLS 22 m detector. Results for the 16 m [top and bottom left] and 22 m [top and bottom right] detectors as a function of kinetic energy, using the standard BONSAI reconstruction (circles) and CoRe (solid dots). Vertex resolution is the distance from the true vertex within which 68% of the events reconstruct. Note that very small errors are obscured by the markers in places.

Figure 6: Comparison of the pull in the Cherenkov direction for IBD positrons. Panels: (a) Gd-H 2 O, 16 m detector; (b) Gd-H 2 O, 22 m detector; (c) Gd-WbLS, 16 m detector; (d) Gd-WbLS, 22 m detector. Results for Gd-H 2 O in the 16 m [top left] and 22 m [top right] detectors and in Gd-WbLS in the 16 m [bottom left] and 22 m [bottom right] detectors as a function of mean photon travel distance, using the standard BONSAI reconstruction (circles) and CoRe (solid dots). Note the different axis limits. Very small errors may be obscured by the markers in places.

Figure 7: Effectiveness of fit quality threshold for discriminating correlated and uncorrelated events. Panels include (c) Gd-WbLS, 16 m detector and (d) Gd-WbLS, 22 m detector. Fraction of correlated IBD pairs remaining as a function of a fit quality threshold, as measured by the timing goodness, applied to the prompt and delayed events in a pair, for CoRe (solid) and BONSAI (short-dashed). The fraction of uncorrelated accidental coincidences remaining is shown for CoRe (dotted), with no other cuts applied, and for BONSAI (long-dashed), with an additional requirement that the events occur within 200 µs of each other.
| []
|
[
"Spectral Variational Multi-Scale method for parabolic problems. Application to 1D transient advection-diffusion equations",
"Spectral Variational Multi-Scale method for parabolic problems. Application to 1D transient advection-diffusion equations"
]
| [
"Tomás Chacón Rebollo ",
"Soledad Fernández-García ",
"David Moreno-Lopez ",
"Isabel Sánchez Muñoz "
]
| []
| []
| In this work, we introduce a Variational Multi-Scale (VMS) method for the numerical approximation of parabolic problems, where sub-grid scales are approximated from the eigenpairs of the associated elliptic operator. The abstract method is particularized to the one-dimensional advection-diffusion equations, for which the sub-grid components are exactly calculated in terms of a spectral expansion when the advection velocity is approximated by piecewise constant velocities on the grid elements. We prove error estimates that in particular imply that when Lagrange finite element discretisations in space are used, the spectral VMS method coincides with the exact solution of the implicit Euler semi-discretisation of the advection-diffusion problem at the Lagrange interpolation nodes. We also build a feasible method to solve the evolutive advection-diffusion problems by means of an offline/online strategy with reduced computational complexity. We perform some numerical tests in good agreement with the theoretical expectations, that show an improved accuracy with respect to several stabilised methods. | 10.1007/s40314-022-02174-w | [
"https://arxiv.org/pdf/2207.10449v1.pdf"
]
| 250,920,476 | 2207.10449 | 7a0f9affcfe0ec18a77f8c7e5b094c7b9358675d |
Spectral Variational Multi-Scale method for parabolic problems. Application to 1D transient advection-diffusion equations
July 22, 2022
Tomás Chacón Rebollo
Soledad Fernández-García
David Moreno-Lopez
Isabel Sánchez Muñoz
Spectral Variational Multi-Scale method for parabolic problems. Application to 1D transient advection-diffusion equations
July 22, 2022

Keywords: Variational Multi-Scale, Parabolic Problems, Transient Advection-diffusion, Stabilized Method, Spectral Approximation
In this work, we introduce a Variational Multi-Scale (VMS) method for the numerical approximation of parabolic problems, where sub-grid scales are approximated from the eigenpairs of the associated elliptic operator. The abstract method is particularized to the one-dimensional advection-diffusion equations, for which the sub-grid components are exactly calculated in terms of a spectral expansion when the advection velocity is approximated by piecewise constant velocities on the grid elements. We prove error estimates that in particular imply that when Lagrange finite element discretisations in space are used, the spectral VMS method coincides with the exact solution of the implicit Euler semi-discretisation of the advection-diffusion problem at the Lagrange interpolation nodes. We also build a feasible method to solve the evolutive advection-diffusion problems by means of an offline/online strategy with reduced computational complexity. We perform some numerical tests in good agreement with the theoretical expectations, that show an improved accuracy with respect to several stabilised methods.
Introduction
The Variational Multi-Scale (VMS) method is a general methodology to deal with the instabilities arising in the Galerkin discretisation of PDEs (Partial Differential Equations) with terms of different derivation orders (see Hughes [18,19,20]).
The VMS formulation is based upon the splitting of the Galerkin method into two variational problems, one satisfied by the resolved scales and another satisfied by the sub-grid scales of the solution. To build a feasible VMS method, the sub-grid scales problem is approximately solved by some analytic or computational procedure. In particular, an element-wise diagonalisation of the PDE operator leads to the Adjoint Stabilised Method, as well as to the Orthogonal Sub-Scales (OSS) method, introduced by Codina in [4]. Within these methods, the effects of the sub-grid scales are modelled by means of a dissipative interaction of operator terms acting on the resolved scales. The VMS methods have been successfully applied to many flow problems, and in particular to Large Eddy Simulation (LES) models of turbulent flows (cf. [21,22,9]).
The application of the VMS method to evolution PDEs dates back to the 1990s, when the results from [18] were extended to nonsymmetric linear evolution operators, see [19]. The papers [12,13] deal with the spurious oscillations generated in the Galerkin method for parabolic problems due to very small time steps. The series of articles [14,15,16] deals with transient Galerkin and SUPG methods, transient subgrid scale (SGS) stabilized methods and transient subgrid scale/gradient subgrid scale (SGS/GSGS) methods, performing a Fourier analysis for the one-dimensional advection-diffusion-reaction equation.
A stabilised finite element method for the transient Navier-Stokes equations based on the decomposition of the unknowns into resolvable and subgrid scales is considered in [5,6]. Further, [1] compares the Rothe method with the so-called Method of Lines, which consists of first discretising in space by means of a stabilized finite element method, and then using a finite difference scheme to approximate the solution.
More recently, [7] introduced the use of spectral techniques to model the sub-grid scales for 1D steady advection-diffusion equations. The basic observation is that the eigenpairs of the advection-diffusion operator may be calculated analytically on each grid element. A feasible VMS-spectral discretization is then built by truncation of this spectral expansion to a finite number of modes. An enhanced accuracy with respect to preceding VMS methods is achieved.
In [3], the spectral VMS method is extended to 2D steady advection-diffusion problems. It is cast for low-order elements as a standard VMS method with specific stabilised coefficients, that are anisotropic in the sense that they depend on two grid Péclet numbers. To reduce the computing time, the stabilised coefficients are pre-computed at the nodes of a grid in an off-line step, and then interpolated by a fast procedure in the on-line computation.
The present paper deals with the construction of the spectral VMS numerical approximation of evolution advection-diffusion equations. We construct an abstract spectral VMS discretisation of parabolic equations, which is particularised to 1D advection-diffusion equations. The sub-grid components are exactly calculated in terms of spectral expansions when the driving velocity is approximated by piecewise constant velocities on the grid elements. We prove error estimates that in particular imply that when Lagrange finite element discretisations in space are used, the solution provided by the spectral VMS method coincides with the exact solution of the implicit Euler semi-discretisation at the Lagrange interpolation nodes. We also build a feasible method to solve the evolutive advection-diffusion problem by means of an offline/online strategy that pre-computes the action of the sub-grid scales on the resolved scales. This allows us to dramatically reduce the computing times required by the method. We further perform some numerical tests for strongly advection dominated flows. The spectral VMS method is found to satisfy the discrete maximum principle, even for very small time steps. A remarkable increase of accuracy with respect to several stabilised methods is achieved.
The outline of the paper is as follows. In Section 2, we describe the abstract spectral VMS discretisation to linear parabolic problems, which is applied to transient advection-diffusion problems in Section 3. A feasible method is built in Section 4, based upon an offline/online strategy. We present in Section 5 our numerical results, and address some conclusions in Section 6.
Spectral VMS method
In this section, we build the spectral VMS discretisation of an abstract linear parabolic equation.
Let Ω be a bounded domain in $\mathbb{R}^d$ and T > 0 a final time. Let us consider two separable Hilbert spaces on Ω, X and H, so that X ⊂ H with dense and continuous embedding. We denote by (·, ·) the scalar product in X; X′ and H′ are the topological dual spaces of X and H, respectively, and ⟨·, ·⟩ is the duality pairing between X′ and X. We identify H with its topological dual H′, so that X ⊂ H ≡ H′ ⊂ X′. Denote by L(X) the space of bilinear bounded forms on X, and consider $b \in L^1(0,T; \mathcal{L}(X))$ uniformly bounded and X-elliptic with respect to t ∈ (0, T).
Given the data $f \in L^2(0,T; X')$ and $u_0 \in H$, we consider the following variational parabolic problem:

Find $u \in L^2((0,T); X) \cap C^0([0,T]; H)$ such that
$$\frac{d}{dt}(u(t), v) + b(t; u(t), v) = \langle f(t), v\rangle \quad \forall v \in X, \text{ in } \mathcal{D}'(0,T), \qquad u(0) = u_0 \text{ in } H. \tag{1}$$
It is well known that this problem is well posed and, in particular, admits a unique solution [11]. To discretise this problem, we proceed through the so-called Horizontal Method of Lines [1,2,13]: first, we discretise in time by the Backward Euler scheme, and then we apply a steady spectral VMS method to the elliptic equations appearing at each time step. Consider a uniform partition of the interval [0, T], $\{0 = t_0 < t_1 < \dots < t_N = T\}$, with time-step size ∆t = T/N. The time discretisation of problem (1) by the Backward Euler scheme gives the following family of stationary problems: given the initialisation $u^0 = u_0$,
Find $u^{n+1} \in X$ such that
$$\left(\frac{u^{n+1} - u^n}{\Delta t}, v\right) + b^{n+1}(u^{n+1}, v) = \langle f^{n+1}, v\rangle \quad \forall v \in X,\ \forall n = 0, 1, \dots, N-1, \tag{2}$$
where $b^{n+1}$ and $f^{n+1}$ are some approximations of $b(t;\cdot,\cdot)$ and $f(t)$, respectively, at $t = t_{n+1}$.
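In matrix form, once a basis of a finite-dimensional subspace is chosen, each step of (2) amounts to solving a linear system. The following Python sketch shows the generic structure; the function name and the use of dense NumPy arrays are our own illustrative assumptions, not part of the paper:

```python
import numpy as np

def implicit_euler_step(M, B, F, u_n, dt):
    """One Backward Euler step for d/dt(u, v) + b(u, v) = <f, v>.

    M  -- mass matrix, (M)_lm = (phi_m, phi_l)
    B  -- operator matrix, (B)_lm = b(phi_m, phi_l)
    F  -- load vector, F_l = <f, phi_l>
    Solves (M + dt*B) u_{n+1} = M u_n + dt*F.
    """
    A = M + dt * B
    rhs = M @ u_n + dt * F
    return np.linalg.solve(A, rhs)
```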
To discretise problem (2) in space, we assume that Ω is polygonal (when d = 2) or polyhedric (when d = 3), and consider a family of conforming and regular triangulations of Ω, $\{T_h\}_{h>0}$, formed by simplicial elements, where the parameter h denotes the largest diameter of the elements of the triangulation $T_h$. The VMS method is based on the decomposition
$$X = X_h \oplus \tilde X,$$
where $X_h$ is a continuous finite element sub-space of X constructed on the grid $T_h$, and $\tilde X$ is a complementary, infinite-dimensional, sub-space of X. Notice that this is a multi-scale decomposition of the space X, $X_h$ being the large or resolved scale space and $\tilde X$ the small or sub-grid scale space. This decomposition defines two projection operators $P_h : X \to X_h$ and $\tilde P : X \to \tilde X$, by
$$P_h(v) = v_h, \quad \tilde P(v) = \tilde v, \quad \forall v \in X, \tag{3}$$
where $v_h$ and $\tilde v$ are the unique elements belonging to $X_h$ and $\tilde X$, respectively, such that $v = v_h + \tilde v$. Hence, one can decompose the solution of problem (2) as $u^{n+1} = u_h^{n+1} + \tilde u^{n+1}$, where $u_h^{n+1} = P_h(u^{n+1})$ and $\tilde u^{n+1} = \tilde P(u^{n+1})$ satisfy the coupled problem
$$\left(\frac{u_h^{n+1} - u_h^n}{\Delta t}, v_h\right) + \left(\frac{\tilde u^{n+1} - \tilde u^n}{\Delta t}, v_h\right) + b^{n+1}(u_h^{n+1}, v_h) + b^{n+1}(\tilde u^{n+1}, v_h) = \langle f^{n+1}, v_h\rangle, \tag{4.1}$$
$$\left(\frac{u_h^{n+1} - u_h^n}{\Delta t}, \tilde v\right) + \left(\frac{\tilde u^{n+1} - \tilde u^n}{\Delta t}, \tilde v\right) + b^{n+1}(u_h^{n+1}, \tilde v) + b^{n+1}(\tilde u^{n+1}, \tilde v) = \langle f^{n+1}, \tilde v\rangle, \tag{4.2}$$
$$\forall v_h \in X_h,\ \forall \tilde v \in \tilde X,$$
for all n = 0, 1, …, N−1. The small scales component $\tilde u^{n+1}$ thus satisfies
$$(\tilde u^{n+1}, \tilde v) + \Delta t\, b^{n+1}(\tilde u^{n+1}, \tilde v) = \langle R^{n+1}(u_h^{n+1}), \tilde v\rangle, \tag{4}$$
where $\langle R^{n+1}(u_h^{n+1}), \tilde v\rangle$ is the residual of the large scales component, defined as
$$\langle R^{n+1}(u_h^{n+1}), \tilde v\rangle := (u_h^n + \tilde u^n, \tilde v) + \Delta t\, \langle f^{n+1}, \tilde v\rangle - (u_h^{n+1}, \tilde v) - \Delta t\, b^{n+1}(u_h^{n+1}, \tilde v), \quad \forall \tilde v \in \tilde X.$$
In condensed notation, this may be written as
$$\tilde u^{n+1} = \Pi^{n+1}(R^{n+1}(u_h^{n+1})), \tag{5}$$
where
$$\Pi^{n+1} : \tilde X' \to \tilde X, \qquad g \mapsto \Pi^{n+1}(g) = \tilde G,$$
is the static condensation operator on $\tilde X$ defined by
$$(\tilde G, \tilde v) + \Delta t\, b^{n+1}(\tilde G, \tilde v) = \langle g, \tilde v\rangle \quad \forall \tilde v \in \tilde X, \text{ for any } g \in \tilde X'.$$
Inserting expression (5) into the large scales equation (4.1) leads to the condensed VMS formulation of problem (2):

Find $u_h^{n+1} \in X_h$ such that
$$(u_h^{n+1}, v_h) + \Delta t\, b^{n+1}(u_h^{n+1}, v_h) + (\Pi^{n+1}(R^{n+1}(u_h^{n+1})), v_h) + \Delta t\, b^{n+1}(\Pi^{n+1}(R^{n+1}(u_h^{n+1})), v_h)$$
$$= \Delta t\, \langle f^{n+1}, v_h\rangle + (u_h^n + \Pi^n(R^n(u_h^n)), v_h) \quad \forall v_h \in X_h,\ \forall n = 0, 1, \dots, N-1, \tag{6}$$
with $u_h^0 = P_h(u_0)$. This problem is an augmented Galerkin formulation, where the additional terms represent the effect of the small scales component of the solution ($\tilde u^{n+1}$) on the large scales component ($u_h^{n+1}$). To build an approximation of the sub-grid scales, we use a spectral decomposition of the operator associated to the variational formulation on each grid element, at each discrete time. To apply this approximation to problem (6), the small scales space $\tilde X$ is approximated by the "bubble" sub-spaces
$$\tilde X \simeq \tilde X_h = \bigoplus_{K \in T_h} \tilde X_K, \quad \text{with } \tilde X_K = \{\tilde v \in \tilde X \text{ such that } \operatorname{supp}(\tilde v) \subset K\}. \tag{7}$$
Hence, we approximate
$$\tilde u^{n+1} \simeq \tilde u_h^{n+1} = \sum_{K \in T_h} \tilde u_K^{n+1}, \quad \text{with } \tilde u_K^{n+1} \in \tilde X_K, \quad \forall n = 0, 1, \dots, N-1. \tag{8}$$
Then, problem (4) is approximated by the following family of decoupled problems:
$$(\tilde u_K^{n+1}, \tilde v_K) + \Delta t\, b^{n+1}(\tilde u_K^{n+1}, \tilde v_K) = \langle R^{n+1}(u_h^{n+1}), \tilde v_K\rangle, \quad \forall \tilde v_K \in \tilde X_K,\ \forall K \in T_h. \tag{9}$$
Let $L^{n+1} : X \to X'$ be the operator defined by
$$\langle L^{n+1} w, v\rangle = b^{n+1}(w, v), \quad \forall w, v \in X, \tag{10}$$
and let $L_K^{n+1}$ be the restriction of this operator to $\tilde X_K$. Let us also consider the weighted $L^2$ space
$$L^2_p(K) = \{w : K \to \mathbb{R} \text{ measurable such that } p\,|w|^2 \in L^1(K)\},$$
where p is some measurable real function defined on K, which is positive a.e. on K. This is a Hilbert space endowed with the inner product
$$(w, v)_p = \int_K p(x)\, w(x)\, v(x)\, dx.$$
We denote by $\|\cdot\|_p$ the norm on $L^2_p(K)$ induced by this inner product. Now we can state the following result, which allows us to compute the small scales on each grid element by means of a spectral expansion.
Theorem 2.1. Let us assume that there exists a complete sub-set $\{\tilde z_j^{n,K}\}_{j \in \mathbb{N}}$ of $\tilde X_K$ formed by eigenfunctions of the operator $L_K^n$, which is an orthonormal system in $L^2_{p^{n,K}}(K)$ for some weight function $p^{n,K} \in C^1(K)$. Then
$$\tilde u_K^n = \sum_{j=1}^{\infty} \beta_j^{n,K}\, r_j^{n,K}\, \tilde z_j^{n,K}, \quad \forall n = 1, \dots, N, \tag{11}$$
where $\beta_j^{n,K} = (\Lambda_j^{n,K})^{-1}$, with $\Lambda_j^{n,K} = 1 + \Delta t\, \lambda_j^{n,K}$, $\lambda_j^{n,K}$ being the eigenvalue of $L_K^n$ associated to $\tilde z_j^{n,K}$, and
$$r_j^{n,K} = \langle R^n(u_h^n),\, p^{n,K} \tilde z_j^{n,K}\rangle.$$
This is a rather straightforward application of Theorem 1 in [7], that we do not detail for brevity.
Once the eigenpairs $(\tilde z_j^{n+1,K}, \lambda_j^{n+1,K})$ are known, the previous procedure allows us to directly compute $u_h^{n+1}$ from problem (6), approximating the sub-grid component $\tilde u^{n+1}$ by expressions (8) and (11). This gives the spectral VMS method to fully discretise problem (1). Namely:

Find $u_h^{n+1} \in X_h$ such that
$$(u_h^{n+1}, v_h) + \Delta t\, b^{n+1}(u_h^{n+1}, v_h) + (\tilde u_h^{n+1}, v_h) + \Delta t\, b^{n+1}(\tilde u_h^{n+1}, v_h) = \Delta t\, \langle f^{n+1}, v_h\rangle + (u_h^n, v_h) + (\tilde u_h^n, v_h)$$
$$\forall v_h \in X_h,\ \forall n = 0, 1, \dots, N-1, \tag{12}$$
where
$$\tilde u_h^{n+1} = \sum_{K \in T_h} \sum_{j=1}^{\infty} \beta_j^{n+1,K}\, \langle R_h^{n+1}(u_h^{n+1}),\, p^{n+1,K} \tilde z_j^{n+1,K}\rangle\, \tilde z_j^{n+1,K}, \quad \forall n = 0, \dots, N-1, \tag{13}$$
with
$$\langle R_h^{n+1}(u_h^{n+1}), \tilde v\rangle := (u_h^n + \tilde u_h^n, \tilde v) + \Delta t\, \langle f^{n+1}, \tilde v\rangle - (u_h^{n+1}, \tilde v) - \Delta t\, b^{n+1}(u_h^{n+1}, \tilde v), \quad \forall \tilde v \in \tilde X,$$
$u_h^0 = P_h(u_0)$ and $\tilde u_h^0 \in \tilde X_h$ some approximation of $\tilde u^0$.
Application to transient advection-diffusion problems
In this section, we apply the abstract spectral VMS method introduced in the previous section to transient advection-diffusion equations, which we state with homogeneous boundary conditions:
$$\partial_t u + a \cdot \nabla u - \mu \Delta u = f \ \text{ in } \Omega \times (0,T), \qquad u = 0 \ \text{ on } \partial\Omega \times (0,T), \qquad u(0) = u_0 \ \text{ in } \Omega, \tag{14}$$
where $a \in L^\infty(0,T; W^{1,\infty}(\Omega))^d$ is the advection velocity field, µ > 0 is the diffusion coefficient, $f \in L^2((0,T); L^2(\Omega))$ is the source term and $u_0 \in L^2(\Omega)$ is the initial datum. Different boundary conditions may be treated as well, as these also fit into the general spectral VMS method introduced in the previous section. The weak formulation of problem (14) reads:

Find $u \in L^2((0,T); H^1_0(\Omega)) \cap C^0([0,T]; L^2(\Omega))$ such that
$$\frac{d}{dt}(u(t), v) + (a \cdot \nabla u(t), v) + \mu\, (\nabla u(t), \nabla v) = \langle f(t), v\rangle \quad \forall v \in H^1_0(\Omega), \qquad u(0) = u_0. \tag{15}$$
Problem (15) admits the abstract formulation (1) with $H = L^2(\Omega)$, $X = H^1_0(\Omega)$ and
$$b(w, v) = (a \cdot \nabla w, v) + \mu\, (\nabla w, \nabla v), \quad \forall w, v \in H^1_0(\Omega).$$
In practice, we replace the velocity field a by $a_h$, the piecewise constant function defined a.e. on Ω such that $a_h = a_K$ on the interior of each element $K \in T_h$. Then, we apply the spectral VMS method to the approximated problem:

Find $U^{n+1} \in H^1_0(\Omega)$ such that
$$\left(\frac{U^{n+1} - U^n}{\Delta t}, v\right) + (a_h^{n+1} \cdot \nabla U^{n+1}, v) + \mu\, (\nabla U^{n+1}, \nabla v) = \langle f^{n+1}, v\rangle, \quad \forall v \in H^1_0(\Omega),\ \forall n = 0, 1, \dots, N-1, \tag{16}$$
with $U^0 = u_0$. In this case, $L^n w = a_h^n \cdot \nabla w - \mu \Delta w$ is the advection-diffusion operator. Proposition 1 in [7] proved that the eigenpairs $(w_j^{n,K}, \lambda_j^{n,K})$ of the operator $L_K^n$ can be obtained from the eigenpairs $(\tilde W_j^K, \sigma_j^K)$ of the Laplace operator in $H^1_0(K)$ in the following way:
$$w_j^{n,K} = \psi^{n,K} \tilde W_j^K, \quad \psi^{n,K}(x) = \exp\left(\frac{1}{2\mu}\, a_K^n \cdot x\right), \quad \lambda_j^{n,K} = \mu\, \sigma_j^K + \frac{|a_K^n|^2}{4\mu}, \quad \forall j \in \mathbb{N}. \tag{17}$$
Moreover, for the weight function
$$p^{n,K}(x) = (\psi^{n,K})^{-2} = \exp\left(-\frac{1}{\mu}\, a_K^n \cdot x\right), \tag{18}$$
the sequence
$$\tilde z_j^{n,K} = \frac{w_j^{n,K}}{\|w_j^{n,K}\|_{p^{n,K}}}, \quad \forall j \in \mathbb{N}, \tag{19}$$
is a complete and orthonormal system in $L^2_{p^{n,K}}(K)$ (see Theorem 2 in [7]). Then, Theorem 2.1 holds and it is possible to apply method (12) to problem (16).
One-dimensional problems
The eigenpairs of the Laplace operator can be exactly computed for grid elements with simple geometrical forms, as is the case of parallelepipeds. In the 1D case, the elements $K \in T_h$ are closed intervals, $K = [a, b]$. The eigenpairs $(\tilde W_j^K, \sigma_j^K)$ are solutions of the problem
$$-\partial_{xx} \tilde W^K = \sigma^K \tilde W^K \ \text{ in } K, \qquad \tilde W^K(a) = \tilde W^K(b) = 0.$$
Solutions of this problem are
$$\tilde W_j^K = \sin\left(\sqrt{\sigma_j^K}\,(x - a)\right), \quad \sigma_j^K = \left(\frac{j\pi}{h_K}\right)^2, \quad \text{with } h_K = b - a, \text{ for any } j \in \mathbb{N}.$$
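For reference, these 1D Dirichlet eigenpairs can be evaluated directly. The short Python sketch below is our own illustration (the function and variable names are not from the paper):

```python
import numpy as np

def laplace_eigenpairs(a, b, n_modes):
    """Dirichlet eigenpairs of -d^2/dx^2 on the interval [a, b].

    Returns sigma_j = (j*pi/h)**2 and the eigenfunctions
    W_j(x) = sin(sqrt(sigma_j) * (x - a)) as callables.
    """
    h = b - a
    sigmas = [(j * np.pi / h)**2 for j in range(1, n_modes + 1)]
    funcs = [lambda x, s=s: np.sin(np.sqrt(s) * (x - a)) for s in sigmas]
    return sigmas, funcs
```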
As the function $p^{n,K}$ defined in (18) is unique up to a constant factor, to express the eigenpairs in terms of non-dimensional parameters we replace $p^{n,K}$ by (still denoting it in the same way)
$$p^{n,K}(x) = \exp\left(-2\, P_{n,K}\, \frac{x - a}{h_K}\right), \tag{20}$$
where
$$P_{n,K} = \frac{|a_K^n|\, h_K}{2\mu}$$
is the element Péclet number. Then, from expressions (17) and (19),
$$\tilde z_j^{n,K} = \sqrt{\frac{2}{h_K}}\, \exp\left(P_{n,K}\, \frac{x - a}{h_K}\right) \sin\left(j\pi\, \frac{x - a}{h_K}\right), \qquad \lambda_j^{n,K} = \mu \left(\frac{j\pi}{h_K}\right)^2 + \frac{|a_K^n|^2}{4\mu}. \tag{21}$$
It follows that
$$\beta_j^{n,K} = \frac{1}{1 + S_K\, (P_{n,K}^2 + \pi^2 j^2)} \quad \text{for any } j \in \mathbb{N}, \tag{22}$$
where
$$S_K = \frac{\Delta t\, \mu}{h_K^2}$$
is a non-dimensional parameter that represents the relative strength of the time derivative and diffusion terms in the discrete equations at element K.
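As an illustration, the element quantities in (22) are immediate to evaluate numerically once P and S are known. The following Python sketch is our own (names are illustrative):

```python
import numpy as np

def element_coefficients(P, S, n_modes):
    """Damping coefficients of the sub-grid spectral modes of one element.

    P -- element Peclet number, P = |a_K| h_K / (2 mu)
    S -- S_K = dt * mu / h_K**2
    Returns beta_j = 1 / (1 + S*(P**2 + (pi*j)**2)) for j = 1..n_modes,
    i.e. the coefficients (22).
    """
    j = np.arange(1, n_modes + 1)
    return 1.0 / (1.0 + S * (P**2 + (np.pi * j)**2))
```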
Error analysis
We address in this section the error analysis of the solution of the 1D evolutive convection-diffusion problem by the spectral VMS method (12).
Let $\{\alpha_i\}_{i=0}^I \subset \bar\Omega$ be the Lagrange interpolation nodes of the space $X_h$, and let $\omega_i = (\alpha_{i-1}, \alpha_i)$, $i = 1, \dots, I$. Setting $\tilde X_i = H^1_0(\omega_i)$, it holds
$$H^1_0(\Omega) = X_h \oplus \tilde X, \quad \text{with } \tilde X = \bigoplus_{i=1}^I \tilde X_i.$$
Observe that this decomposition generalises (7) with $\tilde X_h = \tilde X$. Moreover, when the operator in (10) is $L^n w = a_h^n \cdot \nabla w - \mu \Delta w$, problem (4) can be exactly decoupled into the family of problems (9). In particular, if the projection operator $P_h$ in (3) is the Lagrange interpolate on $X_h$, then $U_h^n = P_h(U^n)$, $\tilde U_h^n = U^n - U_h^n \in \tilde X$, and consequently $U_h^n \in X_h$ satisfies method (12). Notice that, thanks to the spectral expansion, the sub-grid scales contribution in method (12) is exactly computed when the advection velocity is element-wise constant; the discretisation error is then due only to the time discretisation and to the approximation of the advection velocity a, not to the space discretisation.
Therefore, to analyse the discretisation error, we compare the solution of problem (16) to the solution of the implicit Euler time semi-discretisation of problem (15):

Find $u^{n+1} \in H^1_0(\Omega)$ such that
$$\left(\frac{u^{n+1} - u^n}{\Delta t}, v\right) + (a^{n+1} \partial_x u^{n+1}, v) + \mu\, (\partial_x u^{n+1}, \partial_x v) = \langle f^{n+1}, v\rangle \quad \forall v \in H^1_0(\Omega),\ \forall n = 0, 1, \dots, N-1, \tag{23}$$
with $u^0 = u_0$.
We assume that $a_h$ restricted to each K is extended by continuity to ∂K. Given a sequence $b = \{b^n,\ n = 1, \dots, N\}$ of elements of a normed space Y, let us denote
$$\|b\|_{l^p(Y)} = \left(\Delta t \sum_{n=1}^N \|b^n\|_Y^p\right)^{1/p}, \qquad \|b\|_{l^\infty(Y)} = \max_{n=1,\dots,N} \|b^n\|_Y.$$
We shall use the following discrete Gronwall lemma, whose proof is standard and therefore omitted.

Lemma 3.1. Let $\alpha_n, \beta_n, \gamma_n$, $n = 1, 2, \dots$ be non-negative real numbers such that
$$(1 - \sigma \Delta t)\, \alpha_{n+1} + \beta_{n+1} \le (1 + \tau \Delta t)\, \alpha_n + \gamma_{n+1}$$
for some $\sigma \ge 0$, $\tau \ge 0$. Assume that $\sigma \Delta t \le 1 - \delta$ for some $\delta > 0$. Then it holds
$$\alpha_n \le e^{\rho t_n} \alpha_0 + \frac{1}{\delta} \sum_{l=1}^n e^{\rho (t_n - t_l)}\, \gamma_l,$$
and
$$\sum_{l=1}^n \beta_l \le \left(1 + \frac{\tau}{\sigma} + (\sigma + \tau)\, e^{\rho t_{n-1}}\, t_{n-1}\right) \alpha_0 + \frac{1}{\delta}\left(1 + (\sigma + \tau)\, e^{\rho t_{n-1}}\, t_{n-1}\right) \sum_{l=1}^n \gamma_l,$$
with $\rho = (\sigma + \tau)/\delta$.
Let $e = \{e^n,\ n = 0, 1, \dots, N\} \subset H^1_0(\Omega)$ be the sequence of errors $e^n = u^n - U^n \in H^1_0(\Omega)$, where we recall that $U^n$ is the solution of the discrete problem (16), and denote $\delta_t e^{n+1} = \dfrac{e^{n+1} - e^n}{\Delta t}$. The following result holds.
Proposition 3.2. Assume that $a \in L^\infty(\Omega \times (0,T))^d$, $f \in L^2(\Omega \times (0,T))$, $\Delta t \le (1 - \varepsilon)\, \mu / \|a\|^2_{L^\infty(\Omega \times (0,T))}$ for some $\varepsilon \in (0,1)$, and $\|a_h\|_{L^\infty(\Omega \times (0,T))} \le D\, \|a\|_{L^\infty(\Omega \times (0,T))}$ for some constant $D > 0$. Then
$$\|\delta_t e\|_{l^2(L^2(\Omega))} + \mu\, \|e\|_{l^\infty(H^1_0(\Omega))} \le C\, \|a_h - a\|_{l^2(L^\infty(\Omega))}, \tag{24}$$
for some constant $C > 0$ independent of h, ∆t and µ.
Proof. Let us subtract (16) from (23) with $v = v_h \in X_h$. This yields
$$\left(\frac{e^{n+1} - e^n}{\Delta t}, v_h\right) + (a_h^{n+1} \partial_x e^{n+1}, v_h) + \mu\, (\partial_x e^{n+1}, \partial_x v_h) = ((a_h^{n+1} - a^{n+1})\, \partial_x u^{n+1}, v_h).$$
Setting $v_h = \delta_t e^{n+1}$ and using the identity $2(b, b - a) = \|b\|^2_{L^2(\Omega)} - \|a\|^2_{L^2(\Omega)} + \|b - a\|^2_{L^2(\Omega)}$ for any $a, b \in L^2(\Omega)^d$ yields
$$\Delta t\, \|\delta_t e^{n+1}\|^2_{L^2(\Omega)} + \Delta t\, (a_h^{n+1} \partial_x e^{n+1}, \delta_t e^{n+1}) + \frac{\mu}{2}\left(\|\partial_x e^{n+1}\|^2_{L^2(\Omega)} - \|\partial_x e^n\|^2_{L^2(\Omega)}\right) \le \Delta t\, ((a_h^{n+1} - a^{n+1})\, \partial_x u^{n+1}, \delta_t e^{n+1}). \tag{25}$$
It holds
$$|(a_h^{n+1} \partial_x e^{n+1}, \delta_t e^{n+1})| \le \|a_h^{n+1}\|_{L^\infty(\Omega)}\, \|\partial_x e^{n+1}\|_{L^2(\Omega)}\, \|\delta_t e^{n+1}\|_{L^2(\Omega)} \le \frac{1}{2}\|\delta_t e^{n+1}\|^2_{L^2(\Omega)} + \frac{\|a\|^2_{L^\infty(\Omega \times (0,T))}}{2}\, \|\partial_x e^{n+1}\|^2_{L^2(\Omega)}. \tag{26}$$
As $a \in L^\infty(\Omega \times (0,T))^d$ and $f \in L^2(\Omega \times (0,T))$, the $u^n$ are uniformly bounded in $L^\infty(0,T; H^1_0(\Omega))$, due to the standard estimates for the implicit Euler method in strong norms. Then, for some constant $C > 0$,
$$((a_h^{n+1} - a^{n+1})\, \partial_x u^{n+1}, \delta_t e^{n+1}) \le \|a_h^{n+1} - a^{n+1}\|_{L^\infty(\Omega)}\, \|\partial_x u^{n+1}\|_{L^2(\Omega)}\, \|\delta_t e^{n+1}\|_{L^2(\Omega)} \le C\, \|a_h^{n+1} - a^{n+1}\|^2_{L^\infty(\Omega)} + \frac{1}{4}\|\delta_t e^{n+1}\|^2_{L^2(\Omega)}. \tag{27}$$
Hence, combining (26) and (27) with (25),
$$\frac{\Delta t}{4}\, \|\delta_t e^{n+1}\|^2_{L^2(\Omega)} + \frac{\mu}{2}(1 - \sigma \Delta t)\, \|\partial_x e^{n+1}\|^2_{L^2(\Omega)} \le \frac{\mu}{2}\, \|\partial_x e^n\|^2_{L^2(\Omega)} + C\, \Delta t\, \|a_h^{n+1} - a^{n+1}\|^2_{L^\infty(\Omega)},$$
with $\sigma = \|a\|^2_{l^\infty(L^\infty(\Omega))} / \mu$. Applying the discrete Gronwall lemma (Lemma 3.1), estimate (24) follows.
Corollary 3.3. Under the hypotheses of Proposition 3.2, it holds
$$\mu\, \|e^n\|_{l^\infty(L^\infty(\Omega))} \le C\, \|a_h - a\|_{l^2(L^\infty(\Omega))} \tag{28}$$
for some constant $C > 0$. Moreover, if a is constant, then the solution $U_h^n$ of the spectral VMS method (12) coincides with the solution $u^n$ of the implicit Euler time semi-discretisation (23) at the Lagrange interpolation nodes of the space $X_h$.
Proof. In one space dimension, $H^1(\Omega)$ is continuously embedded in $L^\infty(\Omega)$; estimate (28) then follows from estimate (24).

If a is constant, obviously $U^n = u^n$ for all $n = 0, 1, \dots, N$. As $U_h^n(\alpha_i) = U^n(\alpha_i)$ at the Lagrange interpolation nodes $\alpha_i$, $i = 1, \dots, I$, $U_h^n$ coincides with $u^n$ at these nodes.
Feasible method: offline/online strategy
Building the spectral VMS method using formulation (12) requires quite large computing times, due to the summation of the spectral expansions that yield the coefficients of the matrices appearing in the algebraic expression of the method. In order to reduce this time, we shall neglect the dependency of method (12) w.r.t. $\tilde u^{n-1}$. Then, our current discretisation of problem (1) is the following:

Find $u_h^{n+1} \in X_h$ such that
$$(u_h^{n+1}, v_h) + \Delta t\, b^{n+1}(u_h^{n+1}, v_h) + (\tilde u_h^{n+1}, v_h) + \Delta t\, b^{n+1}(\tilde u_h^{n+1}, v_h) = \Delta t\, \langle f^{n+1}, v_h\rangle + (u_h^n, v_h) + (\tilde u_h^n, v_h)$$
$$\forall v_h \in X_h,\ \forall n = 0, 1, \dots, N-1, \tag{29}$$
where $\tilde u_h^{n+1}$ is given by (13), but $\tilde u_h^n$ is defined from an approximated residual:
$$\tilde u_h^n = \sum_{K \in T_h} \sum_{j=1}^{\infty} \beta_j^{n,K}\, \langle R_h^n(u_h^n),\, p^{n,K} \tilde z_j^{n,K}\rangle\, \tilde z_j^{n,K}, \tag{30}$$
with $\langle R_h^n(u_h^n), \tilde v\rangle = (u_h^{n-1}, \tilde v) + \Delta t\, \langle f^n, \tilde v\rangle - (u_h^n, \tilde v) - \Delta t\, b^n(u_h^n, \tilde v)$, $\forall \tilde v \in \tilde X$. Neglecting the dependency of method (12) w.r.t. $\tilde u^{n-1}$ allows us to eliminate the recurrence in time of the sub-grid scales. Thanks to this fact, problem (29) is equivalent to a linear system (which we describe in detail in the Appendix), whose coefficients only depend on non-dimensional parameters.
Application to 1D transient advection-diffusion problems
In this case the coefficients of the linear system equivalent to problem (29) only depend on two non-dimensional parameters, as we confirm below.
As we can see in the Appendix, if $\{\varphi_m\}_{m=1}^{L+1}$ is a basis of the space $X_h$ associated to a partition $\{x_1 < x_2 < \dots < x_{L+1}\}$ of Ω, the solution $u_h^{n+1}$ of (29) can be written as
$$u_h^{n+1} = \sum_{m=1}^{L+1} u_m^{n+1}\, \varphi_m.$$
Then, the unknown vector $u^{n+1} = (u_1^{n+1}, u_2^{n+1}, \dots, u_L^{n+1}, u_{L+1}^{n+1})^t \in \mathbb{R}^{L+1}$ is the solution of the linear system
$$A^{n+1} u^{n+1} = b^{n+1}, \tag{31}$$
where the matrix and the right-hand side are defined in (43). We focus, for instance, on the coefficients of the matrix $A_1^n$:
$$(A_1^n)_{lm} = \sum_{K \in T_h} \sum_{j=1}^{\infty} \beta_j^{n,K}\, (\varphi_m, p^{n,K} \tilde z_j^{n,K})\, (\tilde z_j^{n,K}, \varphi_l).$$
Let $K = [x_{l-1}, x_l] \in T_h$. From expressions (20) and (21), $p^{n,K}$ and $\tilde z_j^{n,K}$ depend on the element non-dimensional parameters $P_{n,K}$ and $S_K$ and on the non-dimensional variable $\hat x = (x - x_{l-1})/h_K$. The change of variable $\hat x \in [0,1] \mapsto x \in K$ readily proves that these expressions (up to a factor depending on h) can be written as functions of $S_K$ and $P_{n,K}$. Further, by (22) the coefficients $\beta_j^{n,K}$ also depend on $P_{n,K}$ and $S_K$. Then, for each $K \in T_h$, the spectral expansion that determines the element contribution to the coefficient $(A_1^n)_{lm}$, that is,
$$\sum_{j=1}^{\infty} \beta_j^{n,K}\, (\varphi_m, p^{n,K} \tilde z_j^{n,K})\, (\tilde z_j^{n,K}, \varphi_l),$$
is a function of $P_{n,K}$ and $S_K$, up to a factor depending on h. This also holds for the coefficients of all the other matrices that define the linear system (31), $A_i^n$ and $B_i^n$, as these are built from the basic values $(\varphi_m, p^{n,K} \tilde z_j^{n,K})$, $(\tilde z_j^{n,K}, \varphi_l)$, $b^n(\varphi_m, p^{n,K} \tilde z_j^{n,K})$ and $b^n(\tilde z_j^{n,K}, \varphi_l)$. We take advantage of this fact to compute these matrices in a fast way, by means of an offline/online computation strategy.
Offline stage
In the offline stage we compute the element contribution to the coefficients of all matrices appearing in system (31) as a function of the two parameters P and S, which take values at the nodes of a uniform grid between minimum and maximum feasible values of these parameters. That is,
$$\{(P_i, S_j) = (\Delta i,\, \Delta j),\ \forall i, j = 1, 2, \dots, M\}, \quad \text{with } \Delta > 0. \tag{32}$$
In order to set these values, we consider the piecewise affine finite element functions associated to a uniform partition of Ω with step h. In practical applications the advection dominates and P takes values larger than 1. Also, for usual values of the diffusion coefficient, h and ∆t, S takes low positive values. Moreover, when we compute the spectral series that determine the coefficients of the system matrices as functions of P and S, we observe that these values are nearly constant as P and S approach 20. For instance, figures 1, 2 and 3 show how the spectral series for the diagonal coefficient of the $A_3$ matrix tend to a constant value as P or S increase to 20. Therefore, in the numerical tests we will consider a step ∆ = 0.02 and M = 1000 in (32). In the computations of this stage, in order to avoid roundoff problems due to large velocities, we express the eigenfunctions of the advection-diffusion operator given in (19) in terms of the midpoint of the grid elements $x_{l,l+\frac12} = \frac{x_l + x_{l+1}}{2}$. That is, we consider
$$\tilde z_j^K = \sqrt{\frac{2}{h_K}}\, \exp\left(\frac{|a_K|}{2\mu}\left(x - x_{l,l+\frac12}\right)\right) \sin\left(j\pi\, \frac{x - x_{l,l+\frac12}}{h_K}\right), \quad \text{for any } j \in \mathbb{N}.$$
We further truncate the spectral series, neglecting all the terms following the first term that reaches an absolute value less than a prescribed threshold ε; we have taken ε = 10⁻¹⁰. Figure 4 represents the number of summands needed to reach a first term with absolute value smaller than this ε for the series defining the diagonal coefficient of the $A_3$ matrix. As we can see, more terms are needed as P increases and as S decreases to 0.

Figure 4: Number of summands needed to reach a first term with absolute value lower than ε = 10⁻¹⁰ for the series defining the diagonal coefficient of matrix $A_3$, in terms of (P, S).
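A minimal Python sketch of this truncation rule is given below; it is our own illustration, with a generic term(j) callable standing in for one of the series above:

```python
def truncated_series(term, eps=1e-10, max_terms=100000):
    """Sum term(1) + term(2) + ... until the first term with |term(j)| < eps.

    term -- callable returning the j-th summand of the spectral series
    Returns the partial sum and the number of summands used.
    """
    total = 0.0
    for j in range(1, max_terms + 1):
        t = term(j)
        if abs(t) < eps:
            return total, j - 1
        total += t
    return total, max_terms
```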
Online stage
In the online stage, for each grid element K we compute the contribution of this element to the coefficients of all matrices appearing in system (31), and then sum over the grid elements to obtain these coefficients. To that end, we determine $P_K$ and $S_K$ and find the indices $i, j \in \{1, \dots, M\}$ such that $(P_K, S_K)$ belongs to $[P_i, P_{i+1}] \times [S_j, S_{j+1}]$. Otherwise, if $P_K < \Delta$ we set i = 1, and if $P_K > \Delta M$ we set i = M − 1; and similarly for j in terms of $S_K$.
As seen above, each element contribution is a function of $P_K$ and $S_K$ that we denote $C(P_K, S_K)$ in a generic way. For instance, for matrix $A_1^n$,
$$C(P_K, S_K) = \sum_{j=1}^{\infty} \beta_j^{n,K}\, (\varphi_m, p^{n,K} \tilde z_j^{n,K})\, (\tilde z_j^{n,K}, \varphi_l).$$
Then, we compute $C(P_K, S_K)$ by the following second-order interpolation formula:
$$C(P_K, S_K) \simeq \sum_{k=1}^{4} \frac{Q_k}{Q}\, C(\alpha_k),$$
where the $\alpha_k$ are the four corners of the cell $[P_i, P_{i+1}] \times [S_j, S_{j+1}]$, $Q = \Delta^2$ is its area, and the $Q_k$ are the areas of the four rectangles into which the cell is split by $(P_K, S_K)$ (see Figure 5).
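This is the standard bilinear (area-weighted) interpolation on a uniform grid, in which each corner is weighted by the area of the opposite sub-rectangle. A hedged Python sketch follows; the names and the table C_tab of pre-computed values $C(P_i, S_j)$ are our own illustrative assumptions:

```python
import numpy as np

def interp_C(C_tab, delta, P, S):
    """Bilinear interpolation of a pre-tabulated element contribution.

    C_tab -- (M, M) array with C_tab[i, j] = C(delta*(i+1), delta*(j+1))
    delta -- step of the uniform (P, S) parameter grid
    """
    M = C_tab.shape[0]
    # locate the cell [P_i, P_{i+1}] x [S_j, S_{j+1}] (0-based, clamped)
    i = min(max(int(P / delta) - 1, 0), M - 2)
    j = min(max(int(S / delta) - 1, 0), M - 2)
    # local coordinates in [0, 1] within the cell
    s = (P - delta * (i + 1)) / delta
    t = (S - delta * (j + 1)) / delta
    # each corner weighted by the area of the opposite sub-rectangle
    return ((1 - s) * (1 - t) * C_tab[i, j] + s * (1 - t) * C_tab[i + 1, j]
            + (1 - s) * t * C_tab[i, j + 1] + s * t * C_tab[i + 1, j + 1])
```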
Numerical Tests
In this section, we present the numerical results obtained with the spectral method for 1D advection-diffusion problems. Our purpose is, on the one hand, to confirm the theoretical results stated in Corollary 3.3 for the spectral VMS method and, on the other hand, to test the accuracy of the spectral VMS and feasible spectral VMS methods for problems with strong advection dominance, in particular by comparison with several stabilised methods.
Test 1: Accuracy of spectral VMS method for constant advection velocity
To test the property stated in Corollary 3.3, we consider the following advection-diffusion problem:
$$\partial_t u + a\, \partial_x u - \mu\, \partial^2_{xx} u = 0 \ \text{ in } (0,1) \times (0,T), \qquad u(0,t) = e^{(\mu - a)t}, \quad u(1,t) = e^{1 + (\mu - a)t} \ \text{ on } (0,T), \tag{33}$$
whose exact solution is given by $u(x,t) = e^{x + (\mu - a)t}$.
We set T = 0.1, a = 1 and µ = 20. We apply the spectral VMS method (12) to solve this problem with time step ∆t = 0.01 and the piecewise affine finite element space on a uniform partition of the interval (0, 1) with steps $h = 0.05/2^i$ for i = 2, 3, …, 7. We have truncated the spectral expansions that yield the small scales $\tilde u_h^n$ to 10 eigenfunctions. The errors in $l^\infty(L^2)$ and $l^2(H^1)$ norms computed at the grid nodes are represented in Figure 6. We observe that, indeed, the errors are essentially independent of the space step h.

Moreover, we have computed the convergence orders in time, obtaining very nearly order 1 in the $l^2(H^1)$ norm and order 2 in the $l^\infty(L^2)$ norm, as could be expected.
In the following numerical experiments we consider the 1D problem (14), setting Ω = (0, 1), with constant velocity field a, source term f = 0 and the hat-shaped initial condition
$$u_0 = \begin{cases} 1 & \text{if } |x - 0.45| \le 0.25, \\ 0 & \text{otherwise}. \end{cases} \tag{34}$$
We also set X h to be the piecewise affine finite element space constructed on a uniform partition of interval (0, 1) with step size h.
Test 2: Accuracy of the spectral VMS method

Very large Péclet numbers

In this test we examine the accuracy of the spectral VMS method for very high Péclet numbers. To do so, we set a = 1000 and µ = 1, and solve the problem by the spectral VMS method (12), truncating to 150 spectral basis functions the series (13) that yield the sub-grid components. The solution interacts with the boundary condition at x = 1 in times of order 1/a, that is, 10⁻³; we therefore set a time step ∆t = 10⁻³. Moreover, we set h = 0.02, which corresponds to P = 10 and S = 2.5. We present the results obtained in Figure 7, where we represent the Galerkin solution (in red) on the left panels and the spectral solution (in cyan) on the right panels, both with the exact solution (in blue): in (a) the first 4 time steps, in (b) time steps 5 to 7, and in (c) time steps 8 and 9. Since by Corollary 3.3 the discrete solution coincides at the grid nodes with the exact solution of the implicit Euler semi-discretisation, the expected errors at the grid nodes are of order ∆t = 10⁻³. We can see that the spectral solution is indeed very close to the exact solution at the grid nodes.
As the discrete solution coincides at the grid nodes with the exact solution of the implicit Euler semi-discretisation, and $u^0$ is exact, $u_h^1$ should coincide with the exact solution at the grid nodes. This can already be observed in Figure 7 (a). We also test this result with different discretisation parameters, actually setting ∆t = 10⁻⁵ and h = 0.02, which corresponds to P = 10 and S = 0.025. The solution in the first time step is represented in Figure 8 (a), and a zoom around x = 0.7 is depicted in (b). Indeed, the discrete solution coincides with the exact one at the grid nodes.
Very small time steps
We test here the appearance of spurious oscillations due to very small time steps. These spurious oscillations occur in the solutions provided by the Galerkin discretisation when $CFL < CFL_{bound} = P/(3(1-P))$ (see [13]). For that, we consider the same problem as above in this section, but with a = 20 and h = 0.01, and the time step ∆t chosen such that $CFL/CFL_{bound} = 1/2$. We obtain the results shown in Figure 9, where we have represented the first five time steps. As one can see, the spectral solution does not present any oscillation.
Test 3: Accuracy of the feasible spectral VMS method. Comparison with other stabilised methods
We next proceed to compare the results obtained with the feasible spectral VMS method (29) with those obtained by several stabilised methods. Stabilised methods add specific stabilising terms to the Galerkin discretisation, generating the following matrix scheme:
$$(M + \Delta t\, R^n + \Delta t\, a^2 \tau\, M_s)\, u^{n+1} = M u^n,$$
where M and $R^n$ are, respectively, the mass and stiffness matrices, while $M_s$ is a tridiagonal matrix defined by
$$(M_s)_{i,i} = \frac{2}{h}, \qquad (M_s)_{i+1,i} = (M_s)_{i,i+1} = -\frac{1}{h}.$$
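As an illustration, one time step of this stabilised scheme can be assembled and solved as in the following Python sketch; the function name and the dense-matrix assembly are our own illustrative assumptions:

```python
import numpy as np

def stabilised_step(M, R, a, tau, h, u_n, dt):
    """One step of the scheme (M + dt*R + dt*a^2*tau*Ms) u_{n+1} = M u_n."""
    L = M.shape[0]
    # tridiagonal stabilisation matrix: 2/h on the diagonal, -1/h off-diagonal
    Ms = (2.0 / h) * np.eye(L) - (1.0 / h) * (np.eye(L, k=1) + np.eye(L, k=-1))
    A = M + dt * R + dt * a**2 * tau * Ms
    return np.linalg.solve(A, M @ u_n)
```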
Each stabilised method is determined by the stabilisation coefficient τ. In particular, we consider the following four choices (a code sketch evaluating them follows the list):

1. The optimal stabilisation coefficient for the 1D steady advection-diffusion equation [10,23],
$$\tau_{1D} = \frac{\mu}{|a|^2}\left(P \coth(P) - 1\right).$$

2. The stabilisation coefficient based on orthogonal sub-scales proposed by Codina in [4],
$$\tau_C = \left(\left(\frac{4\mu}{h^2}\right)^2 + \left(\frac{2|a|}{h}\right)^2\right)^{-1/2}.$$

3. The stabilisation coefficient based on $L^2$ norms proposed by Hauke et al. in [17],
$$\tau_H = \min\left\{\frac{h}{\sqrt{3}\,|a|},\ \frac{h^2}{24.24\,\mu},\ \Delta t\right\}.$$

4. The stabilisation coefficient separating the diffusion-dominated from the convection-dominated regimes proposed by Franca in [8],
$$\tau_F = \frac{h}{|a|}\, \min\{P, \hat P\},$$
where $\hat P > 0$ is a threshold separating the diffusion-dominated ($P \le \hat P$) from the advection-dominated ($P > \hat P$) regimes.
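As announced, here is a short Python sketch of these four coefficients; it is our own illustration, and P_hat is an assumed user-supplied threshold:

```python
import numpy as np

def stabilisation_coefficients(a, mu, h, dt, P_hat=1.0):
    """The four stabilisation coefficients above; P = |a| h / (2 mu)."""
    P = abs(a) * h / (2.0 * mu)
    tau_1d = mu / a**2 * (P / np.tanh(P) - 1.0)       # coth(P) = 1/tanh(P)
    tau_c = ((4.0 * mu / h**2)**2 + (2.0 * abs(a) / h)**2) ** -0.5
    tau_h = min(h / (np.sqrt(3.0) * abs(a)), h**2 / (24.24 * mu), dt)
    tau_f = h / abs(a) * min(P, P_hat)
    return tau_1d, tau_c, tau_h, tau_f
```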
In figures 10, 11 and 12 we show the solutions of each method for different values of P and S, always in the advection-dominated regime P > 1. We also display the errors in $l^\infty(L^2)$ and $l^2(H^1)$ norms for the solutions of these problems in tables 1, 2 and 3. As can be observed in the three tables, the spectral method reduces the error between 10 and 100 times compared to the stabilised methods, without presenting oscillations.

Table 1: $l^\infty(L^2)$ and $l^2(H^1)$ errors for the solutions represented in Figure 10.
Next, we consider the same tests performed in Section 5.2, but applying the feasible spectral VMS method.
Firstly, we check the behaviour of the feasible spectral VMS method (29) for very large Péclet numbers. In Figure 13 we represent the solution of the same problem as in Figure 7, obtained with this method. We show the solutions at time steps 1 to 4 in (a), time steps 5 to 7 in (b) and time steps 8 and 9 in (c). As we can observe, the spectral method is the closest to the reference solution, without presenting any spurious oscillations.
Secondly, Figure 14 is analogous to Figure 8, but compares the feasible spectral VMS with different stabilised methods. Although Hauke's solution is closer to the exact solution than the spectral method, we can see in the right figure that this approximation does not satisfy the Maximum Principle. Finally, we illustrate the fact that the feasible spectral VMS method is the only method among those studied that does not present oscillations for small time steps, when $CFL < CFL_{bound}$. In Figure 15 we can see the solutions at the first five time steps obtained with each method, using a time step that verifies $CFL/CFL_{bound} = 1/2$.

Table 2: $l^\infty(L^2)$ and $l^2(H^1)$ errors for the solutions represented in Figure 11.

Table 3: $l^\infty(L^2)$ and $l^2(H^1)$ errors for the solutions represented in Figure 12.

Regarding computing times, by means of the offline/online strategy the feasible spectral VMS method requires somewhat larger computing times than the remaining stabilised methods, due to the interpolation step needed to build the system matrices.
Conclusions
In this paper we have extended to parabolic problems the spectral VMS method developed in [7] for elliptic problems. We have constructed a feasible method to solve the evolutive advection-diffusion problem by means of an offline/online strategy that pre-computes the effect of the sub-grid scales on the resolved scales.
We have proved that when Lagrange finite element discretisations in space are used, the solution obtained by the fully spectral VMS method (12) coincides with the exact solution of the implicit Euler semi-discretisation of the advection-diffusion problem at the Lagrange interpolation nodes.
We have performed some numerical tests that confirm this property of the fully spectral VMS method for very large Péclet numbers and very small time steps. Additional tests show an improved accuracy of the feasible spectral VMS method (29) with respect to several stabilised methods, with moderate increases in computing times.
The methodology introduced here may be extended to multi-dimensional advection-diffusion equations by parameterising the sub-grid scales in an offline step. This research is currently in progress.
Appendix: Matrix formulation of the scheme

Problem (29) is equivalent to a linear system with a particular structure, which we describe next. If $\{\varphi_m\}_{m=1}^{L+1}$ is a basis of the space $X_h$, the solution $u_h^{n+1}$ is obtained as
$$u_h^{n+1} = \sum_{m=1}^{L+1} u_m^{n+1}\, \varphi_m,$$
where $u^{n+1} = (u_1^{n+1}, \dots, u_{L+1}^{n+1})^t \in \mathbb{R}^{L+1}$ is the unknown vector. Taking $v_h = \varphi_l$, with l = 1, …, L, each term in (29) can be written in the following way:
$$(u_h^{n+1}, \varphi_l) = \sum_{m=1}^{L} (\varphi_m, \varphi_l)\, u_m^{n+1} = \left[M u^{n+1}\right]_l, \qquad b^{n+1}(u_h^{n+1}, \varphi_l) = \sum_{m=1}^{L} b^{n+1}(\varphi_m, \varphi_l)\, u_m^{n+1} = \left[R^{n+1} u^{n+1}\right]_l, \qquad \langle f^{n+1}, \varphi_l\rangle = F_l^{n+1},$$
where $(M)_{lm} = (\varphi_m, \varphi_l)$ and $(R^{n+1})_{lm} = b^{n+1}(\varphi_m, \varphi_l)$. For the sub-grid terms,
$$(\tilde u_h^{n+1}, \varphi_l) = \sum_{m=1}^{L} \sum_{K \in T_h} \sum_{j=1}^{\infty} \beta_j^{n+1,K}\, (\varphi_m, p^{n+1,K} \tilde z_j^{n+1,K})\, (\tilde z_j^{n+1,K}, \varphi_l)\, u_m^n + \sum_{K \in T_h} \sum_{j=1}^{\infty} \beta_j^{n+1,K}\, (\tilde u_h^n, p^{n+1,K} \tilde z_j^{n+1,K})\, (\tilde z_j^{n+1,K}, \varphi_l)$$
$$+ \Delta t \sum_{K \in T_h} \sum_{j=1}^{\infty} \beta_j^{n+1,K}\, \langle f^{n+1}, p^{n+1,K} \tilde z_j^{n+1,K}\rangle\, (\tilde z_j^{n+1,K}, \varphi_l) - \sum_{m=1}^{L} \sum_{K \in T_h} \sum_{j=1}^{\infty} \beta_j^{n+1,K}\, (\varphi_m, p^{n+1,K} \tilde z_j^{n+1,K})\, (\tilde z_j^{n+1,K}, \varphi_l)\, u_m^{n+1}$$
$$- \Delta t \sum_{m=1}^{L} \sum_{K \in T_h} \sum_{j=1}^{\infty} \beta_j^{n+1,K}\, b^{n+1}(\varphi_m, p^{n+1,K} \tilde z_j^{n+1,K})\, (\tilde z_j^{n+1,K}, \varphi_l)\, u_m^{n+1} = \left[A_1^{n+1} u^n + G_1^{n+1} + \Delta t\, F_1^{n+1} - A_1^{n+1} u^{n+1} - \Delta t\, A_2^{n+1} u^{n+1}\right]_l,$$
with
$$(A_1^{n+1})_{lm} = \sum_{K \in T_h} \sum_{j=1}^{\infty} \beta_j^{n+1,K}\, (\varphi_m, p^{n+1,K} \tilde z_j^{n+1,K})\, (\tilde z_j^{n+1,K}, \varphi_l), \tag{35}$$
$$(A_2^{n+1})_{lm} = \sum_{K \in T_h} \sum_{j=1}^{\infty} \beta_j^{n+1,K}\, b^{n+1}(\varphi_m, p^{n+1,K} \tilde z_j^{n+1,K})\, (\tilde z_j^{n+1,K}, \varphi_l). \tag{36}$$
Similarly, replacing the test factor $(\tilde z_j^{n+1,K}, \varphi_l)$ by $b^{n+1}(\tilde z_j^{n+1,K}, \varphi_l)$,
$$b^{n+1}(\tilde u_h^{n+1}, \varphi_l) = \left[A_3^{n+1} u^n + G_2^{n+1} + \Delta t\, F_2^{n+1} - A_3^{n+1} u^{n+1} - \Delta t\, A_4^{n+1} u^{n+1}\right]_l,$$
where
$$(A_3^{n+1})_{lm} = \sum_{K \in T_h} \sum_{j=1}^{\infty} \beta_j^{n+1,K}\, (\varphi_m, p^{n+1,K} \tilde z_j^{n+1,K})\, b^{n+1}(\tilde z_j^{n+1,K}, \varphi_l), \tag{37}$$
and $(A_4^{n+1})_{lm}$ is defined analogously with $b^{n+1}(\varphi_m, p^{n+1,K} \tilde z_j^{n+1,K})$ in place of $(\varphi_m, p^{n+1,K} \tilde z_j^{n+1,K})$. (Here we are neglecting the interaction between different eigenfunctions in two consecutive time steps; obviously, this is exact when the operator is time independent.)
Thus, problem (29) is equivalent to the linear system
$$A^{n+1} u^{n+1} = b^{n+1},$$
where $A^{n+1} \in \mathbb{R}^{(L+1)\times(L+1)}$ and $b^{n+1} \in \mathbb{R}^{L+1}$ are given by
$$A^{n+1} = M + \Delta t\, R^{n+1} - \mathcal{A}^{n+1},$$
$$b^{n+1} = \left[M - (A_1^{n+1} + \Delta t\, A_3^{n+1}) - (A_1^n + \Delta t\, A_2^n) - \mathcal{B}^{n+1}\right] u^n + \left[A_1^n - B_1^{n+1} - \Delta t\, B_3^{n+1}\right] u^{n-1}$$
$$+ \Delta t\, F^{n+1} - \Delta t\, F_1^{n+1} - \Delta t^2\, F_2^{n+1} + \Delta t\, F_1^n - \Delta t\, F_3^{n+1} - \Delta t^2\, F_4^{n+1},$$
with $\mathcal{A}^{n+1} = A_1^{n+1} + \Delta t\, A_2^{n+1} + \Delta t\, A_3^{n+1} + \Delta t^2\, A_4^{n+1}$ and $\mathcal{B}^{n+1} = B_1^{n+1} - \Delta t\, B_2^{n+1} - \Delta t\, B_3^{n+1} - \Delta t^2\, B_4^{n+1}$.
Here, M and R n+1 are, respectively, the mass and stiffness matrices from the Galerkin formulation and A n+1 i and B n+1 i are the matrices that represent the effect of the small scales component of the solution on the large scales component.
Figure 1: Values of the spectral series to compute the diagonal coefficient of matrix $A_3$ for each pair (P, S).
Figure 2: Values of the spectral series to compute the diagonal coefficient of matrix $A_3$ for (P, S) ∈ (0, 20) × (0, 1).
Figure 3: Values of the spectral series to compute the diagonal coefficient of matrix $A_3$ for (P, S) ∈ (0, 1) × (0, 1).
Figure 5: Splitting of the interpolation cell for the online computation of the matrix coefficients.
Figure 6: Test 1. $l^\infty(L^2)$ and $l^2(H^1)$ errors for the spectral VMS solution of problem (33).
Figure 7: Solution of problem (14) for a = 1000, µ = 1, f = 0 and $u_0$ given by (34), with ∆t = 10⁻³ and h = 0.02 (P = 10, S = 2.5). The spectral VMS solution is compared to the exact solution and the Galerkin solution. The results for time steps 1 to 4, 5 to 7, and 8 to 9 are respectively represented in figures (a), (b) and (c).
Figure 8: Solution of problem (14) for a = 1000, µ = 1, f = 0 and $u_0$ given by (34), with ∆t = 10⁻⁵ and h = 0.02 (P = 10, S = 0.025). The spectral VMS solution is compared to the exact solution and the Galerkin solution at the first time step. Figures (a) and (b) respectively show these solutions in the whole domain and a zoom around x = 0.7.

Figure 9: Solution of problem (14) for a = 20, µ = 1, f = 0 and $u_0$ given by (34), with h = 0.01 and ∆t such that $CFL/CFL_{bound} = 1/2$ (P = 0.1, S = 0.0926). Red lines represent the Galerkin solution and cyan lines the spectral VMS solution at each time step.
Figure 10: Comparison of different stabilised methods to solve problem (14) when P = 3, S = 25, with ∆t = 10⁻² and h = 0.02. Solutions in the three first time steps.
Figure 11: Comparison of different stabilised methods to solve problem (14) when P = 1 and S = 5, with ∆t = 10⁻³ and h = 10⁻². Solutions in the three first time steps. Right: zoom around x = 0.7.
Figure 12: Comparison of different stabilised methods to solve problem (14) when P = 3.5 and S = 100, with ∆t = 10⁻² and h = 10⁻². Solutions in the three first time steps. Right: zoom around x = 0.99.
Figure 13: Solution of problem (14) for a = 1000, µ = 1, f = 0 and $u_0$ given by (34), with ∆t = 10⁻³ and h = 0.02 (P = 10, S = 2.5). The feasible spectral VMS is compared with different stabilised methods. The results for time steps 1 to 4, 5 to 7, and 8 to 9 are respectively represented in figures (a), (b) and (c).
Figure 14: First time-step solution of problem (14) for a = 1000, µ = 1, f = 0 and $u_0$ given by (34), with ∆t = 10⁻⁵ and h = 0.02 (P = 10, S = 0.025). The feasible spectral VMS is compared with different stabilised methods in the whole domain Ω = (0, 1) in (a) and in a zoom around x = 0.7 in (b).

Figure 15: Solution of problem (14) for a = 20, µ = 1, f = 0 and $u_0$ given by (34), with h = 0.01 and ∆t such that $CFL/CFL_{bound} = 1/2$ (P = 0.1, S = 0.0926). The feasible spectral VMS is compared with different stabilised methods.
Dpto. EDAN & IMUS, University of Seville, Campus de Reina Mercedes, 41012 Sevilla (Spain), e-mail: [email protected], [email protected], [email protected] 2 Dpto. Matemática Aplicada I, University of Seville, Ctra de Utrera s/n, 41013 Sevilla (Spain), email: [email protected]
Acknowledgements

The research of T. Chacón and I. Sánchez has been partially funded, and that of D. Moreno fully funded, by Programa Operativo FEDER Andalucía 2014-2020 grant US-1254587. The research of S. Fernández has been partially funded by AEI - Feder Fund Grant RTI2018-093521-B-C31.
M. I. Asensio, B. Ayuso, G. Sangalli, Coupling stabilized finite element methods with finite difference time integration for advection-diffusion-reaction problems, Comput. Methods Appl. Mech. Engrg. 196 (2007) 3475-3491.
C. Bernardi, Y. Maday, F. Rapetti, Discrétisations variationnelles de problèmes aux limites elliptiques, Mathématiques et Applications 45, Springer, 2004.
T. Chacón Rebollo, S. Fernández-García, M. Gómez-Mármol, Anisotropic VMS solution of advection-diffusion problems by spectral approximation of sub-grid scales, J. Comput. Appl. Math. 380 (2020) 112959.
R. Codina, Stabilization of incompressibility and advection through orthogonal sub-scales in finite element methods, Comput. Methods Appl. Mech. Engrg. 190 (2000) 1579-1599.
R. Codina, Stabilized finite element approximation of transient incompressible flows using orthogonal subscales, Comput. Methods Appl. Mech. Engrg. 191 (2002) 4295-4321.
R. Codina, J. Principe, O. Guasch, S. Badia, Time dependent subscales in the stabilized finite element approximation of incompressible flow problems, Comput. Methods Appl. Mech. Engrg. 196 (2007) 2413-2430.
T. Chacón Rebollo, B. M. Dia, A variational multi-scale method with spectral approximation of the sub-scales: Application to the 1D advection-diffusion equations, Comput. Methods Appl. Mech. Engrg. 285 (2015) 406-426.
T. Chacón Rebollo, M. Gómez-Mármol, M. Restelli, Numerical analysis of penalty stabilized finite element discretizations of evolution Navier-Stokes equations, J. Sci. Comput. 63 (2015) 885-912.
T. Chacón Rebollo, R. Lewandowski, Mathematical and Numerical Foundations of Turbulence Models and Applications, Modeling and Simulation in Science, Engineering and Technology, Springer Science+Business Media, New York, 2014.
I. Christie, D. F. Griffiths, A. R. Mitchell, O. C. Zienkiewicz, Finite element methods for second order differential equations with significant first derivatives, Int. J. Numer. Methods Eng. 10 (1976) 1389-1396.
R. Dautray, J. L. Lions, Mathematical Analysis and Numerical Methods for Science and Technology, Vol. 5, Springer, Berlin, 1992.
I. Harari, Stability of semidiscrete formulations for parabolic problems at small time steps, Comput. Methods Appl. Mech. Engrg. 193 (2004) 1491-1516.
I. Harari, G. Hauke, Semidiscrete formulations for transient transport at small time steps, Int. J. Numer. Methods Fluids 54 (2007) 731-743.
I. Harari, M. H. Doweidar, Fourier analysis of semi-discrete and space-time stabilized methods for the advective-diffusive-reactive equation: I. SUPG, Comput. Methods Appl. Mech. Engrg. 194 (2005) 45-81.
I. Harari, M. H. Doweidar, Fourier analysis of semi-discrete and space-time stabilized methods for the advective-diffusive-reactive equation: II. SGS, Comput. Methods Appl. Mech. Engrg. 194 (2005) 691-725.
I. Harari, M. H. Doweidar, Fourier analysis of semi-discrete and space-time stabilized methods for the advective-diffusive-reactive equation: III. SGS/GSGS, Comput. Methods Appl. Mech. Engrg. 195 (2006) 6158-6176.
G. Hauke, D. Fuster, M. H. Doweidar, Variational multiscale a-posteriori error estimation for multi-dimensional transport problems, Comput. Methods Appl. Mech. Engrg. 197 (2008) 2701-2718.
T. J. R. Hughes, Multiscale phenomena: Green's function, the Dirichlet-to-Neumann map, subgrid scale models, bubbles and the origins of stabilized methods, Comput. Methods Appl. Mech. Engrg. 127 (1995) 387-401.
T. J. R. Hughes, J. R. Stewart, A space-time formulation for multiscale phenomena, Comput. Methods Appl. Mech. Engrg. 74 (1995) 217-229.
T. J. R. Hughes, G. R. Feijoo, L. Mazzei, J. B. Quincy, The variational multiscale method: a paradigm for computational mechanics, Comput. Methods Appl. Mech. Engrg. 166 (1998) 3-24.
T. J. R. Hughes, L. Mazzei, K. E. Jansen, Large eddy simulation and the variational multiscale method, Comput. Vis. Sci. 3 (2000) 47-59.
V. John, On large eddy simulation and variational multiscale methods in the numerical simulation of turbulent incompressible flows, Applications of Mathematics 51 (2006) 321-353.
V. John, J. Novo, Error analysis of the SUPG finite element discretization of evolutionary convection-diffusion-reaction equations, SIAM J. Numer. Anal. 49 (2011) 1149-1176.
| []
|
[
"Mapping Out the HPC Dependency Chaos",
"Mapping Out the HPC Dependency Chaos"
]
| [
"Farid Zakaria [email protected] ",
"Thomas R W Scogland [email protected] \nLawrence Livermore National Laboratory\nLivermoreCAUSA\n",
"Todd Gamblin [email protected] \nLawrence Livermore National Laboratory\nLivermoreCAUSA\n",
"Carlos Maltzahn [email protected] ",
"\nUniversity of California Santa Cruz\nSanta CruzCAUSA\n"
]
| [
"Lawrence Livermore National Laboratory\nLivermoreCAUSA",
"Lawrence Livermore National Laboratory\nLivermoreCAUSA",
"University of California Santa Cruz\nSanta CruzCAUSA"
]
| []
| High Performance Computing (HPC) software stacks have become complex, with the dependencies of some applications numbering in the hundreds. Packaging, distributing, and administering software stacks of that scale is a complex undertaking anywhere. HPC systems deal with esoteric compilers, hardware, and a panoply of uncommon combinations.In this paper, we explore the mechanisms available for packaging software to find its own dependencies in the context of a taxonomy of software distribution, and discuss their benefits and pitfalls. We discuss workarounds for some common problems caused by using these composed stacks and introduce Shrinkwrap: A solution to producing binaries that directly load their dependencies from precise locations and in a precise order. Beyond simplifying the use of the binaries, this approach also speeds up loading as much as 7× for a large dynamically-linked MPI application in our evaluation. | 10.1109/sc41404.2022.00039 | [
"https://export.arxiv.org/pdf/2211.05118v2.pdf"
]
| 251,497,470 | 2211.05118 | 54a89795a0ac670df9f122abed32df19be373628 |
Mapping Out the HPC Dependency Chaos
Farid Zakaria [email protected]
Thomas R W Scogland [email protected]
Lawrence Livermore National Laboratory
LivermoreCAUSA
Todd Gamblin [email protected]
Lawrence Livermore National Laboratory
LivermoreCAUSA
Carlos Maltzahn [email protected]
University of California Santa Cruz
Santa CruzCAUSA
Mapping Out the HPC Dependency Chaos
Index Terms-toolchains, package management, operating systems, filesystem hierarchy
High Performance Computing (HPC) software stacks have become complex, with the dependencies of some applications numbering in the hundreds. Packaging, distributing, and administering software stacks of that scale is a complex undertaking anywhere. HPC systems deal with esoteric compilers, hardware, and a panoply of uncommon combinations.In this paper, we explore the mechanisms available for packaging software to find its own dependencies in the context of a taxonomy of software distribution, and discuss their benefits and pitfalls. We discuss workarounds for some common problems caused by using these composed stacks and introduce Shrinkwrap: A solution to producing binaries that directly load their dependencies from precise locations and in a precise order. Beyond simplifying the use of the binaries, this approach also speeds up loading as much as 7× for a large dynamically-linked MPI application in our evaluation.
I. INTRODUCTION
The Livermore Computing (LC) facility at Lawrence Livermore National Laboratory (LLNL) supports thousands of users on dozens of different clusters; it hosts Sierra, ranked third on the Top500 [1], and is preparing for the upcoming deployment of El Capitan [2]. The simulation software written for these machines is complex and requires a large chain of dependencies that is continually growing. In 2015, it was significant to say that some applications required 70 dependencies, counted as packages which may contain one library or dozens, depending on the packager. Today the Axom [3] library, a common support library for Livermore codes, can require more than 200 total dependencies. Each application and dependency can require specific compilers with specific runtime libraries and Message Passing Interface (MPI) implementations to achieve the best performance, or to work at all. Users may run several different codes, or several different builds of the same code, in the same environment as part of larger scientific workflows. Immense effort has gone into building complex software stacks reliably, through tools such as Spack [4] and EasyBuild [5] in HPC as well as in mainstream distributions, but far less work has gone into ensuring that the software will run consistently once it is built.
Part of the complexity in these systems comes from the fact that there is no single group or model in control of the package ecosystem on an HPC system. They are instead composed of layers managed by different domains of responsibility. As an example, a machine at LLNL will usually be based on RedHat Enterprise Linux (RHEL) using the Tri-Lab Operating System Stack (TOSS), an extension of RHEL, and on capability systems extended further by vendor extensions. Most users will still not use compilers or much software from TOSS or the vendors directly; they use software from a separate site-specific development environment exposed by modules called TCE. If it were only these four layers, the problem might be reasonably tractable, but even TCE only provides the bare essentials. Each group then manages their own software stack, or stacks, built on top of some combination of the lower layers and possibly other groups' manually managed stacks. These are managed without any common infrastructure or planning, either manually or with one of the HPC package managers or some combination thereof.
The challenges faced at Livermore with regards to managing software complexity are a microcosm of what exists in the wider software ecosystem. Anywhere that multiple maintainers and versions of software stacks coexist these problems do as well. There has been an explosion in available software, packaging methodologies, and Linux distributions that compose them in an attempt to tame the chaos. The pursuit of creating software with increasingly complex dependencies reproducibly and portably is becoming a defining problem at large across all segments of the packaging and distribution community.
The primary differentiator between how packages find their dependencies in different packaging methodologies is whether they rely on established conventions or on search paths enforced on each binary. The most common and well-known method is to depend on a filesystem hierarchy similar to the FHS (Filesystem Hierarchy Standard) to determine the components linked into a binary at runtime. Some more recent systems use other mechanisms to locate binaries, libraries, or other necessary portions of an application. The first of these is the common mechanism used by nearly every Unix-like operating system and others throughout history. Everything from Multics, Darwin, Haiku, and BeOS, to modern hermetic root systems like CoreOS, searches for binaries and libraries at well-known paths in the filesystem. The other main group we refer to as explicitly linked, or store-based. The major differentiator is that the libraries loaded, and directories searched, by a given application are determined explicitly as part of build and distribution in the system, rather than being implicitly defined by the conventions of the distribution. This is accomplished in a number of ways and shows up in quite a few different places across the ecosystem. The trend toward containers like Podman or Singularity for distribution is one way to accomplish explicitly linking an application with all of its resources in a redistributable way. It can also mean directly referencing target libraries in a binary using RPATH or RUNPATH, as package managers like Spack and most manual HPC installation trees do. It can even mean explicitly controlling or patching the loader, as is done by systems such as Nix [6,7] and Guix [8].
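To make the explicitly linked approach concrete, the search paths embedded in a binary can be inspected with standard tools. The following minimal sketch (assuming binutils' readelf is on PATH) prints any RPATH or RUNPATH entries from an ELF file's dynamic section:

import subprocess
import sys

def show_search_paths(binary_path: str) -> None:
    """Print the RPATH/RUNPATH entries embedded in an ELF binary.
    readelf -d emits dynamic section lines such as:
      0x...1d (RUNPATH)  Library runpath: [/opt/app/lib:$ORIGIN/../lib]"""
    out = subprocess.run(["readelf", "-d", binary_path],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if "(RPATH)" in line or "(RUNPATH)" in line:
            print(line.strip())

if __name__ == "__main__":
    show_search_paths(sys.argv[1])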
All of these non-traditional distribution mechanisms, package managers, and techniques are working on mitigating an underlying issue: The management of binary loading and interfaces is under-specified and managed all too often by pervasive arbitrary conventions. Our goal in this paper is to discuss the various approaches used today and in the past, explore some of the issues and workarounds commonly found in HPC systems today, and to present Shrinkwrap, our tool implementing a workaround that gives us the ability to run a program with little need to adhere to these arbitrary conventions. We present the following contributions:
• A survey of the state of practice in software distribution;
• A methodology, using existing loader mechanisms, to ensure binaries find their dependencies regardless of the user environment;
• Shrinkwrap: a novel solution to caching dependency resolution for binaries, implementing the methodology;
• An evaluation of Shrinkwrap and use cases detailing its use at LLNL and problems it has resolved.
II. COMMON PRACTICE OF SOFTWARE DISTRIBUTION
The taxonomy of packaging is synonymous with software deployment models. The nuances of and variations across deployment models are often familiar only to a small cohort of maintainers across Linux distributions. Software complexity and the proliferation of software have made software deployment, specifically reproducibility, more relevant to the traditional user.
This section explores several common and rising software deployment models and discusses the tradeoffs of each. It ends with a discussion of how these models are composed to form the complete software ecosystem on an HPC system.
A. The Traditional Model: Filesystem Hierarchy Standard
Established in 1994, and in continuous refinement since, is the Filesystem Hierarchy Standard (FHS) [9], further extended with the XDG Base Directory specification (XDG) [10] in 2003 and the systemd file-hierarchy [11] specification. This is by far the most common layout used in modern Linux distributions, and is so pervasive that it is frequently replicated as a component of the other models. These are the familiar base directories such as /bin, /etc, /lib, and others that we have all come to know. Using a consistent standard like this has huge advantages. Anyone can look at a system that is arranged this way and determine where to look for things. Package managers that target this model include some of the most venerable and well-tested in existence, such as Debian's apt and the rpm ecosystems. These package managers work by inserting software components into well-known locations on the system such as /lib or /bin.
The goal of a single unified directory structure like this is curating a system as a unified whole. It forms a single coherent set of packages that work together seamlessly, and only rarely allows more than one version of any given package. Since all software components must reside within a few key well-known directories, there is no easy way to support alternate versions of a component beyond appending suffixes. This aids in updating software components for security vulnerabilities, since there is only a single file that needs to be updated, and the need for rebuilds and expensive updates is minimized. Installation of a package is equivalent to writing files to this single root one at a time, potentially overwriting existing files of the same name. This multi-step approach to software delivery can leave the system in an inconsistent state if the process is interrupted, especially during distribution upgrades that replace critical base components like the C standard library. It is often difficult to undo a software deployment unless specific care is taken to create backups beforehand.
The lack of provenance of the files on disk is made worse by the fact that most packages declare their dependencies without any explicit version information. Figure 1 shows an analysis of the Debian package repository as of November 2021. Out of a total of roughly 209,000 packages, nearly 3/4 of them use completely unversioned dependency specifications. These packages work because, and only because, the maintainers of Debian diligently and manually ensure that the full graph of packages in a given distribution build, link, and work together. That is an extremely impressive feat, but it means that an immense amount of knowledge about the needs of all of these packages is implicitly encoded and unenforceable in software. This cost is also paid repeatedly by different distributions such as Fedora [12]. While some may benefit as downstream consumers of packages from a parent distribution, it is a fraction of the whole, leaving many distributions and package ecosystems duplicating the effort required to test, patch, and compose packages.
B. Self-Referential (Bundled) Model
In an effort to make software applications more self-contained and easier to manage, software components can be bundled along with their dependencies. These are typically vendored within the same directory or in a specially-marked directory tree for the purpose. Bundled applications either rely on a modified search path that prioritizes local libraries (normal on Windows or Darwin, and done by AppImage and AppDir formatted packages on Linux), or are wrapped in a script that provides the appropriate options. With a wrapper script, a variable like LD_LIBRARY_PATH is set to include the current working directory or a subdirectory. The ELF header is also able to imitate this setup through the use of the $ORIGIN expansion variable in RPATH and RUNPATH entries to refer to the location of the binary. The tradeoff to this approach is that there is a significant loss in the potential for deduplication across binaries employing the technique. Further, applying a security update for these embedded libraries will entail upgrading all of the bundles individually rather than updating a central location.
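As a sketch of how a packager might apply the $ORIGIN mechanism (assuming an AppDir-style layout of <bundle>/bin/app with libraries in <bundle>/lib, and that the patchelf tool is installed):

import subprocess

def point_at_vendored_libs(binary_path: str) -> None:
    """Make a bundled binary prefer the libraries vendored next to it.
    $ORIGIN expands at load time to the directory containing the binary,
    so the bundle keeps working wherever the user places it. Note that
    modern patchelf writes a RUNPATH by default; adding --force-rpath
    would write an RPATH instead."""
    subprocess.run(
        ["patchelf", "--set-rpath", "$ORIGIN/../lib", binary_path],
        check=True,
    )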
The benefit is a far simpler experience for users. This is one of the main reasons that Darwin applications can commonly be distributed in a click-and-drag-to-install manner for example. It further provides a relatively simple linking model that allows an application author to guarantee the libraries loaded are the ones they intended rather than system libraries chosen by a downstream maintainer or a user. The software package can reside anywhere on the filesystem and is not subject to the limited key space dilemma of FHS. Allowing multiple versions or minor changes of the same software package to exist on the system at the same time allows for the atomic installation and removal of the package as well. As a result, it is alluring for distributions of stand-alone scientific packages, especially graphical tools or others that are difficult to build and meant for novice end-users to install.
The challenge to this model is that it relies heavily on the software developer. Catching build dependencies is challenging unless the builds are always done in a sandbox environment. Because such a package uses its own vendored environment, it is very hard to extend, which poses significant challenges for packages that allow loading of user code or extensions. For example, including an embedded Python interpreter in an application is straightforward, but should it support installing packages into that vendored python? If so, where should they go? How should the user interact with them? It can also pose a security issue because the user can choose where to place the bundle. If the library path includes a writable directory, an attacker can leverage it to load unintended code.
In terms of compatibility, this is not the most familiar model to Unix devotees, but it is one of the best known to desktop users. Beyond that, this model is what allows almost all packages for Windows and MacOS to be installed without extra dependencies outside of themselves. The lack of external dependencies, aside from the OS itself, is in fact so prevalent on Windows that the Microsoft WinGet package manager does not support dependency specification or resolution at all as of December 2021 [13]. From a user perspective, this model simplifies management and use, but it increases file size and the chances of libraries going unpatched, because they are bundled with otherwise stable software that does not ship updates for its dependencies.
C. Hermetic Root Model
Hermetic Root systems are those whose goal it is to leverage many of the same implicit assumptions as the FHS model while improving upon the atomicity and security of the system. Given the pervasiveness of the FHS model for software packaging and administration, these systems benefit by providing a familiar and easy-to-target model. The key insight they provide is the creation of layers in constructing the filesystem, similar to those of overlayfs, with the added ability to deploy layers via a commit model that resembles git. The ability to commit a new layer or roll back to prior ones allows for the atomic delivery or rollback of installation or upgrade operations. The model does not seek to impose any restriction on how the data is laid out, and adopts any benefits or shortcomings of layouts used in addition to it. Although the creation of filesystem layers and the curation of working packages still represent a hurdle, once achieved the result can be distributed and reused, providing a reproducible environment with which to run desired workloads. This model also easily supports making the entire operating system, and the vast majority of package components, read-only with little effort, making it resistant to many common classes of vulnerabilities.
The main challenge for this model in HPC is the slow move to support user namespaces in HPC centers. As restrictions relax and more centers deploy bubblewrap with Flatpak and Singularity or Podman-based solutions, this may become more practical, but for now the lack of capabilities to create images inside of normal compute resources, and inside of security domains, makes this difficult to use on many HPC systems.
D. Store Model
The Store Model refers to systems that install software components each in an individual directory under a specific filesystem root, usually with each individual package directory following the FHS. For example, Nix packages are installed under /nix/store and spack packages under <spack-repo>/opt/spack. This is a complete departure from the FHS model at the system level, moving nearly all directories from a single root-level location to one per package. References to dynamic libraries or shared code should only be done from other store locations, and only explicit references in the package descriptions should be respected. The explicit dependency linking between store paths creates a directed acyclic graph of software components and their dependencies. These systems often employ a consistent hash-naming scheme to avoid conflicts, and allow arbitrary versions of the code to reside congruently, providing the ability to perform upgrades or rollbacks atomically by installing the whole new graph without invalidating the old one. In order to exert control over the linking process, shared objects are resolved by setting RPATH/RUNPATH during compilation, or through post-build actions that modify binaries using patchelf or similar tools.
Several new package managers and distributions employ this model, including Nix, Guix, and Spack. These originated from concepts first introduced through Nix [14]. The model requires package authors to canonicalize the build steps into the system's model so that the graph of all dependencies is explicit and complete. The consistent hashing scheme is often referred to as pessimistic, because it takes into account the package's full source, build steps, and the same for its complete transitive closure. Any minor change from source to compiler flags for any package in the build graph will cause a domino effect of rebuilds.
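The flavor of the pessimistic scheme can be illustrated with a toy hash. This is not Nix's or Spack's actual algorithm (real systems hash full build recipes, sources, and the complete transitive closure), but it shows why any change below cascades upward:

import hashlib

def store_hash(name: str, version: str, recipe: str, dep_hashes: list) -> str:
    """Toy pessimistic store hash: because the hash covers the package's
    own recipe plus the hashes of all of its dependencies, any change
    anywhere lower in the graph changes this hash too, cascading
    rebuilds upward exactly as described above."""
    h = hashlib.sha256()
    h.update(f"{name}-{version}\n{recipe}\n".encode())
    for dep in sorted(dep_hashes):
        h.update(dep.encode())
    return h.hexdigest()[:32]

# e.g. a /store/<hash>-curl-7.79.1 path that changes if zlib's recipe does
zlib_hash = store_hash("zlib", "1.2.11", "configure && make install", [])
curl_hash = store_hash("curl", "7.79.1", "configure --with-zlib && make install", [zlib_hash])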
This model fares well from an atomicity and reproducibility standpoint, but security requires updates to propagate to all dependent packages, rewriting potentially large segments of a system's packages when a popular library like libcurl is patched. Even so, it is inherently no less secure than the others since those updates are provided in a similarly timely manner; they just happen to be larger. Where this model runs into problems is compatibility, familiarity, and implementation given existing tools. A NixOS system cannot natively run a dynamic executable built on any other distribution even if the system has every single dependency used by that executable. There are projects that help deal with this such as Nix-LD [15], but the fact that everything in a Nix system is placed under the store means that even fundamental building blocks like the loader (ld.so) are not where an FHS system would expect them to be (in fact, Nix patches away the linker's ability to refer to default system locations or ld.so.conf). This is done for good reason; this way a Nix system can use two different loaders with two C libraries side-by-side without issue, but the compatibility is poor by default.
While we normally don't conceive of it this way, the development tools, distributions and module directories of HPC systems tend to be a manually curated version of a Store Model system as well. This is hardly surprising, since that is a large part of the reason Spack builds the way it does, but when treated that way certain properties become clear. For consistency and usability, it is desirable to make each package in such a module directory work like a store model by ensuring that each package encodes all of its dependencies, rather than requiring environment variables loaded from a module.
Another issue these models face is that the model for loading shared objects was developed in and around the transition from SunOS to Solaris in the mid-1990s. Limited to using RPATH, RUNPATH, or environment variables, they tend to use a combination of RPATH or RUNPATH and wrapper scripts to patch up the graph as best they can to produce a working system. However, these methods were never meant to be used for this purpose, and create their own problems. We'll discuss correctness issues further in Section III, but loading performance can be significantly impacted by using a store model even when the setup is correct. Since RPATH and RUNPATH only allow specifying paths to be searched, and they are all searched in order for each needed entry, applications with many dependencies end up searching many directories to find each library. Figure 2 depicts the dependency graph of the Ruby package in Nix with all 453 dependencies. The graph is so dense, with so many components, that it is nigh illegible, yet Ruby itself is a minor dependency for many other packages. On a local filesystem, the overhead is usually small enough to be ignored, but when the dependencies are on a network filesystem it can be a significant performance issue.
E. HPC and the Module Model
We have alluded to the common setup of HPC systems in other parts of this section, but any given HPC system is usually composed of layered instances of the FHS model and some form of the store model. Often the store model portion is less strictly structured and presented in the form of software modules handled by a module manager like lmod. As an example, consider the software stack on Lassen (the open compute version of Sierra): the base system is an FHS formed from a combination of RedHat, TOSS, and IBM base packages. On top of that, there is a large set of developer environment packages available through modules in /usr/tce. As of this writing, 338 separate directories are managed by application teams, many of which provide built versions of their software or tools for downstream consumers to use in whatever way they have decided in their own tree.
Within any one component of this system, packages tend to be self-consistent, usable, and stable. Difficulties arise from combining elements of multiple components, and from the strategies used to compose them. Common issues include: one layer using RPATH to ensure all dependencies can be found while another uses RUNPATH, which causes the RPATH to be ignored; runtime libraries injected by compilers without RPATH entries added, relying on the environment; and difficulty identifying which packages are ABI compatible with one another or which compilers use which runtime library versions. Applications are composed from some combination of these components, and frequently more are pulled from package managers like Spack, vcpkg, pip, and conda, as well as other sources. The chaos of these deployment strategies, tools, and requirements can confound even sophisticated application developers and dedicated HPC support teams, resulting in fragile software deployments.
III. DISCUSSION
The survey of packaging methodologies demonstrates that different means of bundling software within the Linux environment are possible and can achieve varied levels of atomicity, reproducibility, and security. In each classification, the system relies on a shared set of simple primitives controlling the loader to distinguish itself. These differences are subtle to all except the most well-versed in the space. For instance: understanding the link order of a binary is difficult in practice. Are needed libraries traversed breadth first or depth first? Using what mechanism are the libraries ultimately resolved? These questions may seem innocuous, but they are critical to what components load in a given environment. Issues can cause the link order, and thus the loaded dependencies, to subtly change long after installation and testing.
A. Issues with RPATH and RUNPATH

The RPATH specified within the ELF header has precedence over all dynamic loading search locations unless RUNPATH is set, in which case it is ignored. Additionally complicating one's understanding of how libraries are resolved is that RPATH entries in each ancestor are searched, whereas RUNPATH entries are not. Table I summarizes these properties.

To answer some of these questions, mostly in the context of the glibc loader: shared objects are only loaded into memory a single time during traversal, usually based on their soname. If a shared object has already been visited and is needed by another dependency, it will be provided without a lookup, and will not raise a warning or error if that library would not have been found otherwise. That is a useful performance optimization, but it also allows missing path entries to hide in working binaries and surface later when the binary is run with a different set of flags or a new version of a library in the tree. Listing 1 shows an example of a library trace from a program called dbwrap_tool, where the application and many of its libraries use RUNPATH to find what they need, but one library four levels down the tree has no RUNPATH. The libsamba-modules-samba4 library finds three of its dependencies through default search paths, but the fourth wouldn't be found at all if it hadn't been loaded earlier in the tree by another library with a correct RUNPATH.

The use of RUNPATH to instruct the linker, such as in the Store Model, is potentially problematic given that the granularity of the search path is at the directory level. Without a direct mapping of needed shared objects to their respective locations, it is possible to find an incorrect object. The usual solution to this is to ensure that the order of items in the search path gives the correct result, but even simple conflicts can produce cases where this is no longer possible. Consider a system with libraries arranged as in Figure 3, in which liba.so is needed from dirA and libb.so is needed from dirB. In any ordering of any of the available search path options, there is no way to get the correct intended behavior without creating a new directory with the correct versions.

These issues have inspired a great deal of debate on the use of any of these mechanisms. The Debian community has publicly argued over policies in this space for many years; some of the resulting outcomes are now documented on their wiki [16]. The main concern centers around an application loading different versions of a library depending on loader order, with the argument that the dynamic linker should solve this rather than RPATH or RUNPATH entries. That seems to be arguing for use of ld.so.conf or similar to resolve these issues, which makes sense from the perspective of a distribution maintainer who can control that configuration easily and desires a single coherent FHS system. Some of the caution also comes from early use of RPATH by tools like libtool, which once included directories that also existed in ld.so.conf in the RPATH, locking system paths into the search out of order. The general desire to avoid RPATH is not new. In fact, the first version of the manpage for ld.so on Linux [17] included a statement that RPATH was deprecated and should be avoided.
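The masking effect seen in Listing 1 can be reproduced with a small model of this behavior. The sketch below uses hypothetical library names and deliberately simplified semantics (breadth-first traversal with a by-soname cache):

from collections import deque

def load(needed, deps, findable):
    """Model glibc-style loading: breadth-first over needed entries,
    each soname loaded at most once. `deps` maps an object to the
    sonames it needs; `findable` maps an object to the sonames it can
    resolve through its own search paths. Simplified sketch only."""
    loaded, queue = set(), deque(("<exe>", so) for so in needed)
    while queue:
        parent, so = queue.popleft()
        if so in loaded:
            continue  # cache hit: no search and no error, even if parent couldn't find it
        if so not in findable.get(parent, set()):
            print(f"{so}: needed by {parent} but not on its search paths")
        loaded.add(so)
        queue.extend((so, child) for child in deps.get(so, ()))
    return loaded

# libmod cannot find libdebug itself, but libother (loaded first) can,
# so the run is silent and the missing RUNPATH entry stays hidden:
load(["libother", "libmod"],
     {"libother": ["libdebug"], "libmod": ["libdebug"]},
     {"<exe>": {"libother", "libmod"}, "libother": {"libdebug"}, "libmod": set()})
# Remove libother and the latent problem surfaces:
load(["libmod"], {"libmod": ["libdebug"]}, {"<exe>": {"libmod"}, "libmod": set()})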
On the other side, the Qt project published a recommendation [18] that authors of Qt applications should use RPATH and ensure RUNPATH isn't set. Given the distaste for RPATH, that may seem surprising, but Qt loads component libraries from code inside other libraries. As a result, an application that uses QtGui with an RPATH will provide its RPATH entries to the load issued in QtGui, allowing the system to find the correct version of the dependency. If the application has any RUNPATH, the search will only include those paths set on QtGui itself. Normally this is even treated as a feature rather than a bug, and the QtGui library should have a RUNPATH to find the dependency. That works as long as the dependency tree is strictly a tree, but if the application provides a plugin or a library that gets loaded, say by dlopen, in a different shared object, it has no way to provide search paths to that dlopen with RUNPATH. The only recourse at that point is LD_LIBRARY_PATH, which can cause even more damage in a RUNPATH-based system where more of the paths can be overridden in sub-applications, or adding an explicit API to load the library by absolute path.
From an administrator perspective, working with either RUNPATH or RPATH in an executable or library can cause pain points. If a binary is locked to point to a library at /opt/rocm-4.3.0, and that version is found to be buggy but binary compatible with 4.3.1, for example, then replacing 4.3.0 with 4.3.1 is more costly than it would be without those paths set. Administrators are forced to either recompile the library, symlink the new one into an inappropriate location, or override at the application level with RPATH or LD_LIBRARY_PATH. If the library uses RUNPATH, the RPATH solution on the application is insufficient as well, and can cause situations where parts of one version and parts of another are loaded due to two different search path orders with respect to LD_LIBRARY_PATH.
In the end, traditional search path management is fragile and poorly suited to store-based or module style package distribution, or separating packages by directory in general.
B. Questioning Dynamic Linking
The necessity for package curation in distributions is a point of contention, as many language package managers trend towards vendoring dependencies to help improve the reproducibility of a package [19] [20]. What was at first a solution to managing dependency hell soon became a holy grail: managing a single unified graph of all software packages. The need for shared object dependency resolution arose from a time when storage and bandwidth were expensive and in limited quantities or availability. The ability to upgrade a buggy or vulnerable package with as small a change to the dependency graph as necessary is a requirement rooted in that past. There has been ongoing public discourse [21] [22] demonstrating that the total cost to re-download all binaries affected by CVEs in 2019 would be under 10 GiB (significantly smaller if you discount glibc). A survey of a local machine with 3,287 binaries shows that the majority of libraries are used by relatively few binaries on the system, as demonstrated in Figure 4. Why does dynamic linking persist given the lack of shared object reuse? Much of this paper has been focused on the pitfalls and shortcomings of dynamic linking, many of which are nonexistent for a statically compiled executable. That said, in an HPC context, we have seen leadership-class systems with only static linking that deduplicated statically linked binaries in memory, and a transition back to supporting dynamically linked binaries. The memory reuse benefits can be more noticeable when running the same application as one process per core with the same set of libraries loaded. In the wider community, Linux distributions have surfaced to explore the emerging pro-static philosophy [23], but have yet to gain meaningful traction.
Dynamic loading also provides one significant benefit we have not otherwise discussed. Many tools, especially prevalent in HPC, rely on dynamic linking to override or wrap symbols. For example, tools that use the PMPI interface are usually preloaded with LD_PRELOAD; the same goes for performance or memory tools like gperf. Changing to fully static linking breaks all of these tools, rendering them unusable. That may still be worthwhile for final production release binaries in some cases, but it means that a great deal of work during development may need to use dynamic binaries.
C. Questioning the Loader Interface
In the mid-to-late 1990s, the tools provided by the linker allowed augmenting the search space for shared objects in well-intentioned ways, but these techniques have not aged well. RPATH was sufficiently misused and thus reviled on SunOS, Solaris, and later on Linux that it has been deprecated for 20 years. Yet, it is still supported and in active use in newly written systems to this day. Its replacement, RUNPATH, was meant to solve developers' issues by ensuring that LD_LIBRARY_PATH could override search paths in binaries, and that executables would not pollute their dependencies with their embedded search paths. While it succeeded in these goals, by not disallowing RPATH and RUNPATH from being used together and by eschewing propagation, it lost the ability to propagate search paths to dependencies when necessary.
The constraints we want to express are a combination of options to inject new paths into the library search path: prepend, append, and whether to inherit. All but one of the problems listed in Section III-A can be solved by offering prepend/append and a boolean propagation flag on each path added to the search space.
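A sketch of what such an interface could look like follows. This is purely hypothetical (no existing loader implements it): each injected path carries its position relative to LD_LIBRARY_PATH and a propagation flag, so one mechanism can express both RPATH-like and RUNPATH-like behavior per path.

from dataclasses import dataclass

@dataclass
class SearchPath:
    """One entry in a hypothetical per-object search space."""
    path: str
    prepend: bool = True     # True: before LD_LIBRARY_PATH (RPATH-like); False: after (RUNPATH-like)
    propagate: bool = False  # True: visible to dependencies (RPATH-like); False: private (RUNPATH-like)

def effective_search(own, inherited, env_paths):
    """Compose one object's search order under these hypothetical
    semantics: prepend entries first, then the environment, then append
    entries. `inherited` holds the ancestors' propagate=True entries."""
    entries = list(own) + [e for e in inherited if e.propagate]
    pre = [e.path for e in entries if e.prepend]
    post = [e.path for e in entries if not e.prepend]
    return pre + list(env_paths) + post

# An app can pin its plugin directory for all descendants while letting
# the user's environment override only a vendored fallback:
app = [SearchPath("/opt/app/plugins", prepend=True, propagate=True),
       SearchPath("/opt/app/fallback", prepend=False)]
print(effective_search(app, [], ["/home/user/lib"]))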
The desire to find all libraries within a few well-known directories from the FHS model has bled into the ability to specify the search space as a single list for all libraries. Allowing the ability to dictate the search space per shared object would give fine-grained control over the search semantics. This would also solve the final issue: the ability to load libraries with conflicting filenames from paths deterministically.
Having considered this direction, the other question that must be asked is why we must rely on a bare soname and a linear set of paths to search to do this job. The Fuchsia kernel and Zircon system loader implement a service to request dynamic libraries at load time, allowing load configurations to be changed between libraries during loading [24]. In the end, though, there is a standard ELF loader on top of it. The option to change the way dependencies are encoded in binaries could allow a system like Nix or Spack to store the hash of the library being requested, store the specification used to build it, or store enough information to be able to not just load it but determine with far greater detail which version is expected if it is not available. One can envision a system that would allow a user to take a binary set up that way and ask a tool to provide all of the dependencies it needs, in place of distributing a static binary or a container.
D. Workarounds
While it's a pleasant thought experiment to imagine a world where we do not require backwards compatibility with established loaders, the state of the practice is that we must work within the limitations of ELF and the System V ABI model. Therefore, we propose a number of workarounds to allow packaging models to avoid some of these issues given the current capabilities of loaders. Additionally, we discuss what seem to be the core features desired in a future loader to allow the package author, the package maintainer, and the end user alike to request the behavior they want without requiring massively long search paths and ambiguous lookups.
1) Dependency Views: This method is based on the concept of environment views from the Spack package manager, but could be used by any system that wants to do per-package resolution of dependencies. Rather than setting RPATH or RUNPATH entries on the executable and every library to all dependencies, each gains a single RPATH or RUNPATH to a package-local directory containing an FHS-styled filesystem populated with symlinks to the package's dependencies. Rather than a long list of RPATHs, there is now only one, and resolution should necessarily be much faster, especially on network filesystems where stating a file can be slow. An extra benefit to this is that it may work for resources other than dynamic libraries such as data or font packages, by providing the combined view of those dependencies a package may expect from a traditional FHS system. The downside is that this method requires both a tremendous number of symlinks, and thus filesystem inode resources, to represent a full system. It is also constrained to only allowing a package to depend on a single version of any dependency, since they cannot link on top of each other.
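A sketch of constructing such a view follows (hypothetical layout; Spack's actual view machinery handles far more cases): every dependency's lib directory is merged into a single package-local directory of symlinks, and the package then carries one RPATH entry pointing at it.

import os

def build_dependency_view(view_dir: str, dep_prefixes: list) -> None:
    """Merge each dependency's lib/ into one symlink farm so the
    consuming package needs a single RPATH entry such as
    <prefix>/.view/lib. Sketch only: a real implementation also merges
    bin/, share/, etc., and needs a policy for filename conflicts."""
    lib_dir = os.path.join(view_dir, "lib")
    os.makedirs(lib_dir, exist_ok=True)
    for prefix in dep_prefixes:
        src = os.path.join(prefix, "lib")
        if not os.path.isdir(src):
            continue
        for name in os.listdir(src):
            link = os.path.join(lib_dir, name)
            if os.path.lexists(link):
                raise RuntimeError(f"view conflict on {name}")
            os.symlink(os.path.join(src, name), link)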
2) Needy Executables: A less costly workaround in terms of filesystem resources is to use the issue shown in Listing 1 to our advantage. Since libraries are cached by soname, and libraries are loaded in breadth-first-search order starting from those needed by the executable, we can fix the load order in the executable. We do this by directly linking all libraries required by the full transitive closure of dependencies into the executable. For example, consider an executable that depends on liba which depends on libb. Rather than having a normal needed list of just a, the binary would instead have a needed list of a,b and RPATH or RUNPATH entries to find both. The RUNPATH issue with propagating search paths to dependencies is partially mitigated here as well, since libraries are pulled to the top for resolution.
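The rewrite itself can be sketched with patchelf (assuming the transitive closure of sonames and the directories holding them were computed beforehand, for example from a trace like Listing 1):

import subprocess

def make_needy(executable: str, closure: list, search_dirs: list) -> None:
    """Lift an executable's full transitive dependency closure into its
    own needed list so load order is fixed at the top. Sketch assuming
    patchelf is installed; `closure` is every soname in the closure and
    `search_dirs` are the directories where they live."""
    for soname in closure:
        subprocess.run(["patchelf", "--add-needed", soname, executable],
                       check=True)
    subprocess.run(["patchelf", "--set-rpath", ":".join(search_dirs),
                    executable], check=True)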
Despite working around some of the main issues of the traditional approach, this method still has flaws. If any pair of libraries in the set define the same strong symbol, the link will fail. Additionally, load paths for dlopen calls without a path are not directly resolved by this method. Since these are loaded programmatically, they are not part of any needed entry. We could envision a system that traces all such calls and adds the libraries to the needed list, but that could cause breakage due to initialization order or load parameters such as the local or global nature of symbols in the library.
IV. SHRINKWRAP
The introduction of Spack and similar store-like systems has added a much-needed level of reproducibility and organization to managing the common combinatorial stacks required by the HPC community. The ability for a binary or shared object file to explicitly tell the dynamic linker, through the use of RPATH, to search specific content-addressable named locations is pivotal to this approach, as it is to the manual approaches described earlier.
Unfortunately, the ability to modify a dynamic linker's search algorithm is very coarse. All of RPATH, RUNPATH, and LD_LIBRARY_PATH are simply lists of directories, and apply to all dependencies needed by the binary. As the number of dependencies for a shared object grows, so does the length of the list that must be searched, penalizing the startup time for the process. This unnecessary work has real consequences. Frings et al. have written about how, with sufficiently large dependency graphs, one can flood the filesystem with requests and see process startup times on the order of hours [25].
Based on our experiences, we have developed Shrinkwrap, an open-source implementation of the Needy Executables option presented in Section III-D2. Shrinkwrap provides the following features:
• Encodes dynamic dependencies in the binary by their absolute path;
• Lifts all transitive dependencies to the top shared object to simplify auditing and prevent RPATH/RUNPATH interference in transitive dependencies;
• Offers virtual resolution strategies to handle cross-platform binaries or alternative dynamic linkers;
• Available as open-source MIT licensed software.

When faced with a recurring problem, often the solution is to cache the previous answer to avoid unnecessary work. Shrinkwrap adopts this approach by freezing the required dependencies directly into the DT_NEEDED section of the binary. Rather than listing the soname, each entry is an absolute path. Furthermore, the transitive dependency list is lifted to the top-level binary to simplify auditing the required dependencies. All of the needed dependencies, including transitive dependencies, are now listed by absolute paths on the top-level binary. Shrinkwrap is written in Python, leveraging the lief [26] library for parsing and writing ELF binaries. Lief was chosen for its clean interface, ability to work stand-alone, support for symbol analysis, and an option to support MachO and PE binaries to offer similar benefits on MacOS and Windows in the future.
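The core of the rewrite can be sketched in a few lines of lief-based Python. This is a minimal sketch assuming a lief 0.12-era API; Shrinkwrap itself additionally lifts the transitive closure into the top-level binary and handles the loader corner cases discussed below.

import lief  # assuming a lief 0.12-era API

def freeze_needed(binary_path: str, resolved: dict, out_path: str) -> None:
    """Replace each DT_NEEDED soname with the absolute path it resolves
    to, so the loader performs no search for it. `resolved` maps
    soname -> absolute path, e.g. captured from `ld.so --list`."""
    binary = lief.parse(binary_path)
    for entry in binary.dynamic_entries:
        if (entry.tag == lief.ELF.DYNAMIC_TAGS.NEEDED
                and entry.name in resolved):
            entry.name = resolved[entry.name]
    binary.write(out_path)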
The solution is conceptually simple, but applying the modifications to the binary in a general fashion is challenging. Our desire is to support all reasonably compliant linkers and loaders on Linux. In practice, Shrinkwrap currently supports glibc binaries and others that use the same loader behavior, such as BSD libcs, but not musl [27]. Shrinkwrap relies on the dynamic linker deduplicating libraries with a common file basename or whose soname (an ELF header value) is the same. Consider the example in Figure 5: Shrinkwrap elevated libac.so to a direct absolute dependency of the binary, but relies on the dynamic linker deduplicating the resolution for libxyz.so, which does not refer to it absolutely. Referencing dependencies by their absolute path makes it impossible to swap out dependencies for alternative libraries using traditional methods like LD_LIBRARY_PATH. The use of LD_PRELOAD remains viable, so in cases where specific functionality would still be preferred to be overridden, a backdoor into dynamic linking remains. This also means that traditional preloaded tools continue to work as normal.

When implementing Shrinkwrap against glibc, the deduplication is performed and the necessary libraries are resolved correctly. Unfortunately, other dynamic linkers such as musl do not exhibit the same behavior, causing the solution to not be compatible across other environments. The musl loader caches libraries loaded by their full path not by soname, but by inode number, causing some load order issues with our scheme. It also does not implement the standard behavior of either RPATH or RUNPATH, but a meld of the two where paths are inherited by dependencies yet are searched after LD_LIBRARY_PATH. This behavior would actually solve a number of problems with RUNPATH, but since it is nonstandard it makes supporting musl more difficult for a tool like Shrinkwrap. This incompatibility was raised to the musl developers on their mailing list [28]; however, the primary challenge is that while the System V ABI specification requires dynamic libraries to be deduplicated, or at least not to be loaded redundantly, it does not specify how they are to be matched to determine if they are duplicates.
Aside from dealing with divergent loader behaviors, Shrinkwrap must also identify which library on the filesystem each needed entry resolves to. In the simple case, running Shrinkwrap on the target system, we can use ldd or run the interpreter extracted from the binary with an option to list, as in ld.so --list, to get the actual behavior the loader would use given current conditions. When that works, it gives Shrinkwrap exactly what we need and ensures consistent behavior. To handle cases where binaries are not executable on the current system, or where the loader is either not usable in this way or not executable itself, Shrinkwrap also offers a native strategy that traverses the filesystem the way the loader would to find libraries. This is a useful option, but the number of corner cases is large. The main issue is that the System V standard says libraries that do not match the architecture of the loading binary should be silently ignored, so we must detect these and avoid them, since they are very common on systems with multiple native ABIs (x86 and x86_64, for example). Additionally, glibc supports loading more specialized versions based on the target architecture from subdirectories of each directory in the search path, along with other expansions which must be faithfully replicated.
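The loader-based strategy amounts to asking the system loader for its current answer and recording it; a rough sketch (output parsing is approximate, and ldd formats differ slightly across libcs):

import re
import subprocess

def resolve_sonames(binary_path: str) -> dict:
    """Ask the loader how each needed soname resolves right now, via
    ldd, returning {soname: absolute_path}. Lines look like
    'libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x...)'. Sketch: the
    native strategy instead walks the filesystem the way the loader would."""
    out = subprocess.run(["ldd", binary_path], capture_output=True,
                         text=True, check=True).stdout
    resolved = {}
    for line in out.splitlines():
        m = re.match(r"\s*(\S+)\s*=>\s*(/\S+)", line)
        if m:
            resolved[m.group(1)] = m.group(2)
    return resolved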
Shrinkwrap works strictly by replacing the DT_NEEDED libraries of the binary; however, dependencies may still be resolved via additional means such as dlopen. Binaries whose runtime dependencies outnumber their static dependencies may not see optimal improvements. An area of future work, as outlined in Section III-D2, would be to allow Shrinkwrap to audit all dlopen calls and lift them as DT_NEEDED so they can be easily referenced by absolute path. For cases where the user or packager knows which libraries will be dlopened, and the semantics allow, adding the names of these libraries to the needed section before using Shrinkwrap allows Shrinkwrap to resolve them as well. This works well for Python modules, for example, since they load cleanly and don't initialize until called. When the libraries are unknown, however, perhaps plugins that aren't even installed in the same package, we will consider other mechanisms as future work.
All of that said, our tests across a varied array of binaries from Nix, Spack and hand-built HPC codes on a variety of architectures have given us confidence that Shrinkwrap is capable of handling most binaries found in the wild. Some example applications are discussed as part of Section V.
V. EVALUATION OF SHRINKWRAP
In order to evaluate Shrinkwrap as an approach to resolving practical issues with dynamic loading, we present evaluations of both the performance characteristics of a shrinkwrapped binary and case studies of applying Shrinkwrap to difficult library resolution problems. The performance of executing Shrinkwrap itself is bounded mostly by the time to traverse the filesystem and, if necessary, rewrite portions of the binary. Wrapping a binary with 900 needed entries, an RPATH 900 entries long, and a 213 MiB main executable took either four seconds on a Xeon E5-2695 system with the filesystem cache warm, or over a minute with a cold NFS cache. Since the operation is intended to be done only rarely, and usually on much smaller applications, its performance is sufficient. More important is the loading performance of binaries that have been shrinkwrapped, and the improvements in ergonomics of creating and using binaries on complex systems.
A. Loading Performance
The number of library dependencies needed by a particular binary and the number of entries in the RUNPATH can vary greatly. Consider a highly dynamic but common binary: the emacs editor, as built by Nix, lists 36 directories in its RUNPATH and requires 103 dependencies to be resolved. The result is that the dynamic linker could attempt nearly 3,600 filesystem operations to resolve the needed dependencies in the worst case, every time the process is started. This exorbitant cost can be made worse if the store itself resides on a shared filesystem such as NFS. This problem is not unique to Nix, and is present in other store-like systems. Courtès has written about this problem for the Guix system [29]. Table II shows the reduction in stat and openat syscalls during process startup, captured using strace. The reduction in syscalls equates to a 36× speedup.
While the total time is not long when considered as part of a single invocation, the effects magnify when applied in the context of even a modestly sized MPI application. To evaluate larger-scale applications we use the Pynamic [30] dynamic application benchmark to measure the cost to launch and load a large MPI application at a modest scale. For our experiments, the benchmark is configured to match the general characteristics of a real LLNL application with approximately 900 shared libraries, using the "bigexe" configuration. All modules produced are listed as needed entries on the executable, modified slightly to place each of them in its own rpath directory. Figure 6 shows the results of running our Pynamic configuration on a system with two Xeon E5-2695 processors, loading the application and its libraries from NFS. Each test was run with a cold cache, and negative caching (caching the non-existence of a file) is disabled, as it is by default on LLNL systems. At the smallest size, 512 processes on four nodes, the normal executable took 169 seconds to launch, while the wrapped executable took 30.5 seconds, for a speedup of 5.5×. At 2048 processes, the gap widens to 7.2×, for a total time-to-launch of 344.6 seconds for the normal executable. While this result is on the high end of what can be expected, the costs grow as the scale of the job increases, and the startup time benefits only grow. Shrinkwrap applies because even though the libraries and Python modules are loaded dynamically by the application, they are known at build time and included in the needed list. If there were more that were not known, it could be worthwhile to explore combining Shrinkwrap with an approach like Spindle to improve the load performance of those as well.
B. Use Cases
In the process of preparing for a new AMD-based supercomputer, several software integration issues have arisen that resist traditional workarounds. The first of these is caused by a combination of three factors: RPATH entries in the main executable that point to all of the appropriate libraries, LD_LIBRARY_PATH set in modules to help with internal library search issues in ROCM packages, and those same ROCM packages using RUNPATH in place of RPATH. Any one of these would not be a problem by itself, even any two of them, but all three combined produce unfortunate effects. Specifically, an application built with ROCM version 4.5 will segfault if run when the module for a different ROCM version is loaded. This happens because after the first ROCM library is loaded, having been found by RPATH, the presence of a RUNPATH inside the library causes the loader to ignore the RPATH entries. The loader then prioritizes the now incorrect LD_LIBRARY_PATH, causing incorrect versions of the internal libraries used in ROCM to be loaded. Applying Shrinkwrap and linking all dependencies directly to the binary fixes this issue, given a binary built inside a consistent environment.
The second issue comes from a workaround used inside a vendor library. When using the system compiler on an El Capitan Early Access system, compiling with OpenMP links in libomp.so; compiling without OpenMP links libompstubs.so instead. This is perfectly reasonable: it means OpenMP runtime calls are always available. The downsides are twofold: the application is now dependent on load order to work correctly, and the linking approach to the Needy Executables workaround does not work. The load order is important because if libompstubs.so loads first, the application will run with no threading or offload support. The workaround breaks because the stub library and the main OpenMP library are drop-in replacements, and define the same symbols. When both are loaded at runtime this is fine; whichever loads first wins. When both are specified on a link line, the link fails due to the duplicates. Since Shrinkwrap does not depend on manipulating the link line, it can encode the required libraries without duplicate symbol conflicts. Once rewritten to absolute paths, the initial load for all needed libraries is no longer environment dependent, and can be inspected in the build environment and relied upon in the user's run environment. Though Shrinkwrap does not explicitly check symbol shadowing or load orders, it preserves the order the user set. This prevents a common class of errors in production codes on HPC systems, especially where multiple compiler stacks must be used and the results linked together.
VI. RELATED WORK
The labor, time, and art that goes into producing package distributions for Linux and the software packages that comprise them is not often a subject written about, at least publicly. This research seeks to begin to provide a common set of language and a survey of the current landscape to begin to foster continued discussion.
Recently, there has been a renewed interest in emerging platforms as the landscape of specialized hardware has grown. The quest for portability of software to multiple platforms is part of the same effort to better understand the taxonomy of packages, their dependencies, and the ability to reproduce a software artifact. Khazem et al.'s research [31] into how portable software can be, given changes to the toolchain (C/C++ compiler) or standard C library, demonstrates the challenge of producing reproducible platform-independent software. In their effort to survey the portability of a large corpus of packages, the distributions' dependency factor was a constant hurdle. Many popular Linux distributions' packages (e.g., Debian, Fedora) have a large transitive closure, demonstrating how connected the package graph is. Research into the Redirected Execution Daemon was necessary to silently bypass failures, allowing builds to progress to completion and avoid the domino effect of failing packages on their dependents.
Our discussion of package taxonomies was focused at the lowest runnable unit within a Linux environment, an ELF file (executable or shared object), which often assumes that the language used was C/C++. This focus helps narrow the differences amongst package management solutions as they differ across distributions. The cornucopia of languages, however, is much larger than this narrower view, and many come with their own package management solution. Many of these languages reside atop a virtual machine and typically have their own concept of modules and resolution strategies. An interesting field of research is the ability of this diaspora of language package management tooling to communicate with one another, and potentially with that of the operating system, to avoid duplicating efforts [32].
VII. CONCLUSIONS
Package dependencies in HPC and across the ecosystem have increased drastically in recent years. As the dependencies between software have become increasingly interconnected, package management solutions have emerged to address new deployment models. In this paper, we surveyed the most well-known deployment models in use today in Unix-like environments and identified challenges each face as they make certain trade-offs for desired guarantees.
Distributions and package managers have begun to make leaps into greater atomicity, security, and reproducibility, but these goals are coming into conflict with the semantics of the ELF loader and incurring non-trivial costs as a result. We have presented some solutions that work around many of the common problems in this space within the options available today, and presented our tool for mapping out the chaos of dependencies: Shrinkwrap. Shrinkwrap is our solution to some of the common issues found in composing packages across models and from different sources into a reliable executable. In doing so, we have shown we can improve load times for some highly dynamic applications by a factor of seven, while making the application easier to launch for users.
Fig. 1: Debian package dependencies by type
Fig. 4: Shared object reuse on a typical Debian installation with 3,287 binaries.
Fig. 5: Deduplication based on soname
Fig. 6: Time-to-launch instances of Pynamic as built (Normal) and shrinkwrapped.
Fig. 2: A graph, or snarl, of the build and runtime package dependencies needed by Ruby in Nix.
the RPATH specified within the ELF header takes precedence over all dynamic loading search locations unless RUNPATH is set, in which case RPATH is ignored. A further complication in understanding how libraries are resolved is that RPATH entries in each ancestor of the loading object are searched, whereas RUNPATH entries are not.
A. Issues with RPATH and RUNPATH
TABLE I: Properties of RPATH and RUNPATH

Property                          RPATH   RUNPATH
Searched before LD_LIBRARY_PATH   Yes     No
Searched after LD_LIBRARY_PATH    No      Yes
Propagates to dependencies        Yes     No
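To make the precedence rules in Table I concrete, the sketch below models how the loader picks a directory for one dependency. The three-tier ordering (RPATH, then LD_LIBRARY_PATH, then RUNPATH, with RUNPATH disabling RPATH) follows Table I and the ld.so(8) description; this is an illustrative Python simplification, not the actual glibc algorithm, and the directories it creates are hypothetical stand-ins for the setup in Fig. 3 below.

import os
import tempfile
from typing import Optional

def resolve(soname: str, rpaths: list, runpaths: list,
            ld_library_path: list, default_paths: list) -> Optional[str]:
    """Return the first directory containing soname, per Table I:
    RPATH (skipped entirely when RUNPATH is set), then LD_LIBRARY_PATH,
    then RUNPATH, then the default paths (/lib, /usr/lib, ld.so.conf)."""
    search = []
    if not runpaths:              # RUNPATH, when present, disables RPATH
        search += rpaths          # RPATH: before LD_LIBRARY_PATH; propagates
    search += ld_library_path     # environment override
    search += runpaths            # RUNPATH: after LD_LIBRARY_PATH; no propagation
    search += default_paths
    for d in search:
        if os.path.exists(os.path.join(d, soname)):
            return d
    return None

# Recreate the paradox of Fig. 3: dirA and dirB each ship liba.so and
# libb.so, but the desired pair is dirA/liba.so together with dirB/libb.so.
root = tempfile.mkdtemp()
for sub in ("dirA", "dirB"):
    os.mkdir(os.path.join(root, sub))
    for so in ("liba.so", "libb.so"):
        open(os.path.join(root, sub, so), "w").close()
dirA, dirB = (os.path.join(root, s) for s in ("dirA", "dirB"))
print(resolve("liba.so", [], [dirA, dirB], [], []))  # dirA, as desired
print(resolve("libb.so", [], [dirA, dirB], [], []))  # also dirA, not dirB

No single RUNPATH ordering can satisfy both dependencies at once, which is the paradox the figure illustrates.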
Listing 1: A demonstration that binaries can work due to shared objects being found by searching earlier paths

$ libtree /usr/bin/dbwrap_tool
libpopt-samba3-samba4.so [runpath]
libcli-smb-common-samba4.so [runpath]
libiov-buf-samba4.so [runpath]
libsmb-transport-samba4.so [runpath]
libsamba-sockets-samba4.so [runpath]
libgensec-samba4.so [runpath]
libsamba-modules-samba4.so [runpath]
libsamba-util.so.0 [default path]
libtalloc.so.2 [default path]
libsamba-errors.so.1 [default path]
libsamba-debug-samba4.so not found
libdbwrap-samba4.so [runpath]
libutil-tdb-samba4.so [runpath]
libsamba-debug-samba4.so [runpath]
libpopt.so.0 [default path]
libtalloc.so.2 [default path]
libsamba-errors.so.1 [default path]
libsmbconf.so.0 [default path]
libsamba-util.so.0 [default path]

Fig. 3: A paradoxical setup for RUNPATH where the desired libraries are dirA/liba.so and dirB/libb.so. (The figure shows two directories, dirA and dirB, each containing copies of liba.so and libb.so.)
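The libtree output in Listing 1 is ultimately derived from the dynamic section of each ELF object. A minimal sketch of reading the relevant entries (DT_NEEDED, DT_RPATH, DT_RUNPATH) is below; it assumes GNU readelf is on PATH and scrapes its human-readable output, so it is a rough first level of what libtree computes, not a replacement for it.

import re
import subprocess
import sys

def dynamic_entries(path):
    """Collect NEEDED, RPATH and RUNPATH values from `readelf -d path`."""
    out = subprocess.run(["readelf", "-d", path], capture_output=True,
                         text=True, check=True).stdout
    entries = {}
    for kind in ("NEEDED", "RPATH", "RUNPATH"):
        # readelf prints lines such as:
        #   0x0000000000000001 (NEEDED)  Shared library: [libtalloc.so.2]
        entries[kind] = re.findall(r"\(%s\).*\[([^\]]+)\]" % kind, out)
    return entries

if __name__ == "__main__":
    for kind, values in dynamic_entries(sys.argv[1]).items():
        print(kind, values)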
TABLE II: Evaluation of emacs stat/openat syscalls

To evaluate larger scale applications, we use the Pynamic [30] benchmark. Each test was run with a cold cache, and negative caching (caching the non-existence of a file) was disabled, as it is by default on LLNL systems. At the smallest size, 512 processes on four nodes, the normal executable took 169 seconds to launch, while the wrapped executable took 30.5 seconds, a speedup of 5.5x. At 2048 processes, the gap widens to 7.2x, with a total time-to-launch of 344.6 seconds for the normal executable. While this result is on the high end of what can be expected, the costs grow as the scale of the job increases, and the startup-time benefits only grow with it. Shrinkwrap applies here because, even though the libraries and Python modules are loaded dynamically by the application, they are known at build time and included in the needed list. If more of them were not known until runtime, it could be worthwhile to combine Shrinkwrap with an approach like Spindle to improve their load performance as well.
In fact, Nix patches away the linker's ability to refer to default system locations or ld.so.conf.
VIII. ACKNOWLEDGEMENTS

This work was in part supported by the National Science Foundation under Cooperative Agreement OAC-1836650, by the US Department of Energy ASCR DE-NA0003525 (FWP 20-023266, subcontract with Sandia National Labs), and by the Center for Research in Open Source Software (https://cross.ucsc.edu). This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344. Lawrence Livermore National Security, LLC.
REFERENCES

T. Gamblin, M. LeGendre, M. R. Collette, G. L. Lee, A. Moody, B. R. de Supinski, and S. Futral, "The spack package manager: bringing order to hpc software chaos," in Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, 2015, pp. 1-12.

K. Hoste, J. Timmerman, A. Georges, and S. D. Weirdt, "Easybuild: Building software with ease," in 2012 SC Companion: High Performance Computing, Networking Storage and Analysis, 2012, pp. 572-582.

E. Dolstra and A. Löh, "NixOS: A Purely Functional Linux Distribution," in Proceedings of the 13th ACM SIGPLAN International Conference on Functional Programming, ser. ICFP '08. New York, NY, USA: ACM, 2008, pp. 367-378.

E. Dolstra, M. de Jonge, and E. Visser, "Nix: A Safe and Policy-Free System for Software Deployment," in Proceedings of the 18th Large Installation System Administration Conference (LISA XVIII), ser. LISA '04. Berkeley, CA, USA: USENIX Association, 2004, pp. 79-92.

L. Courtès and R. Wurmus, "Reproducible and User-Controlled Software Environments in HPC with Guix," in 2nd International Workshop on Reproducibility in Parallel Computing (RepPar), Vienne, Austria, Aug. 2015. [Online]. Available: https://hal.inria.fr/hal-01161771

R. Russell, D. Quinlan, and C. Yeoh, "Filesystem hierarchy standard," V2, vol. 3, p. 29, 2004.

W. Bastian, R. Lortie, and L. Poettering, "Xdg base directory specification," 2010, https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html.

"systemd file hierarchy," https://www.freedesktop.org/software/systemd/man/file-hierarchy.html.

"How to check for ABI changes in a package - Fedora project wiki," https://fedoraproject.org/wiki/How_to_check_for_ABI_changes_in_a_package.

denelon, "Support for package dependencies," https://github.com/microsoft/winget-cli/issues/163.

E. Dolstra and A. Hemel, "Purely functional system configuration management," in Proceedings of the 11th USENIX Workshop on Hot Topics in Operating Systems, ser. HOTOS'07. USA: USENIX Association, 2007.

"nix-ld: Run unpatched dynamic binaries on NixOS," https://github.com/Mic92/nix-ld.

J. Smakov, "Debian wiki: RPATH issue," https://wiki.debian.org/RpathIssue.

"ld.so(8) - Linux manual page," https://man7.org/linux/man-pages/man8/ld.so.8.html.

ckamm, "Rpath and runpath," https://www.qt.io/blog/2011/10/28/rpath-and-runpath.

"Packaging kubernetes for debian," https://lwn.net/Articles/835599/.

"Fedora documentation - package dependencies," https://docs.fedoraproject.org/en-US/packaging-guidelines.

"Dynamic linking," https://drewdevault.com/dynlib.

""Static linking considered harmful" considered harmful," https://gavinhoward.com/2021/10/static-linking-considered-harmful-considered-harmful/.

"static linux," https://sta.li/.

"Zircon program loading and dynamic linking," https://fuchsia.dev/fuchsia-src/concepts/process/program_loading.

W. Frings, D. H. Ahn, M. LeGendre, T. Gamblin, B. R. de Supinski, and F. Wolf, "Massively parallel loading," in Proceedings of the 27th International ACM Conference on International Conference on Supercomputing, ser. ICS '13. New York, NY, USA: Association for Computing Machinery, 2013, pp. 389-398. [Online]. Available: https://doi.org/10.1145/2464996.2465020

"Dynamic linking of needed with absolute path differs than that of glibc," https://www.openwall.com/lists/musl/2021/12/21/1.

L. Courtès, "Taming the 'stat' storm with a loader cache," https://guix.gnu.org/blog/2021/taming-the-stat-storm-with-a-loader-cache/.

K. Khazem, E. T. Barr, and P. Hosek, "Making data-driven porting decisions with tuscan," in Proceedings of the 27th ACM SIGSOFT International Symposium on Software Testing and Analysis, ser. ISSTA 2018. New York, NY, USA: Association for Computing Machinery, 2018, pp. 276-286. [Online]. Available: https://doi.org/10.1145/3213846.3213855

H. Muhammad, L. C. V. Real, and M. Homer, "Taxonomy of package management in programming languages and operating systems," in Proceedings of the 10th Workshop on Programming Languages and Operating Systems, ser. PLOS'19. New York, NY, USA: Association for Computing Machinery, 2019, pp. 60-66. [Online]. Available: https://doi.org/10.1145/3365137.3365402
| []
|
[
"Lepto-axiogenesis and the scale of supersymmetry",
"Lepto-axiogenesis and the scale of supersymmetry"
]
| [
"Patrick Barnes \nLeinweber Center for Theoretical Physics\nDepartment of Physics\nUniversity of Michigan\n48109Ann ArborMIUSA\n",
"Raymond T Co \nWilliam I. Fine Theoretical Physics Institute\nSchool of Physics and Astronomy\nUniversity of Minnesota\n55455MinneapolisMNUSA\n",
"Keisuke Harigaya \nTheoretical Physics Department\nCERN\nGenevaSwitzerland\n",
"Aaron Pierce \nLeinweber Center for Theoretical Physics\nDepartment of Physics\nUniversity of Michigan\n48109Ann ArborMIUSA\n"
]
| [
"Leinweber Center for Theoretical Physics\nDepartment of Physics\nUniversity of Michigan\n48109Ann ArborMIUSA",
"William I. Fine Theoretical Physics Institute\nSchool of Physics and Astronomy\nUniversity of Minnesota\n55455MinneapolisMNUSA",
"Theoretical Physics Department\nCERN\nGenevaSwitzerland",
"Leinweber Center for Theoretical Physics\nDepartment of Physics\nUniversity of Michigan\n48109Ann ArborMIUSA"
]
| []
| If the Peccei-Quinn field containing the QCD axion undergoes rotations in the early universe, the dimension-five operator responsible for neutrino masses can generate a lepton asymmetry that ultimately gives rise to the observed baryon asymmetry of the Universe. This lepto-axiogenesis scenario requires a flat potential for the radial direction of the Peccei-Quinn field, naturally realized in supersymmetric models. We carefully compute the efficiency of this mechanism for the Dine-Fischler-Srednicki-Zhitnitsky (DFSZ) and Kim-Shifman-Vainshtein-Zakharov (KSVZ) axion models and place lower bounds on the masses of scalar superpartners required to reproduce the observed baryon asymmetry. For the KSVZ model, we find an efficiency for generation of the asymmetry six times larger than the previously extant computation after including scattering channels involving superpartners. In this case, the superpartner scale should be above ∼ 30 TeV for a domain wall number of one; the lower bound weakens for larger domain wall numbers. We find that the superpartner mass scale may also be as low as 30 TeV for the DFSZ model. In all cases, the lower bound on the superpartner masses is inversely proportional to the sum of the squares | 10.1007/jhep05(2023)114 | [
"https://export.arxiv.org/pdf/2208.07878v2.pdf"
]
| 251,622,349 | 2208.07878 | bfae0f821d23f1df7fa7dfa2de63a71f50e4d123 |
Lepto-axiogenesis and the scale of supersymmetry
Patrick Barnes
Leinweber Center for Theoretical Physics
Department of Physics
University of Michigan
48109Ann ArborMIUSA
Raymond T Co
William I. Fine Theoretical Physics Institute
School of Physics and Astronomy
University of Minnesota
55455MinneapolisMNUSA
Keisuke Harigaya
Theoretical Physics Department
CERN
GenevaSwitzerland
Aaron Pierce
Leinweber Center for Theoretical Physics
Department of Physics
University of Michigan
48109Ann ArborMIUSA
Lepto-axiogenesis and the scale of supersymmetry
(Dated: May 24, 2023)
If the Peccei-Quinn field containing the QCD axion undergoes rotations in the early universe, the dimension-five operator responsible for neutrino masses can generate a lepton asymmetry that ultimately gives rise to the observed baryon asymmetry of the Universe. This lepto-axiogenesis scenario requires a flat potential for the radial direction of the Peccei-Quinn field, naturally realized in supersymmetric models. We carefully compute the efficiency of this mechanism for the Dine-Fischler-Srednicki-Zhitnitsky (DFSZ) and Kim-Shifman-Vainshtein-Zakharov (KSVZ) axion models and place lower bounds on the masses of scalar superpartners required to reproduce the observed baryon asymmetry. For the KSVZ model, we find an efficiency for generation of the asymmetry six times larger than the previously extant computation after including scattering channels involving superpartners. In this case, the superpartner scale should be above ∼ 30 TeV for a domain wall number of one; the lower bound weakens for larger domain wall numbers. We find that the superpartner mass scale may also be as low as 30 TeV for the DFSZ model. In all cases, the lower bound on the superpartner masses is inversely proportional to the sum of the squares
INTRODUCTION
The Peccei-Quinn (PQ) symmetry [1,2] provides an attractive solution to the strong CP problem. The pseudo Nambu-Goldstone boson associated with this symmetry, the axion [3,4], can have important implications for cosmology. It is a cold dark matter candidate, and it can also play a central role in the generation of the matter-antimatter asymmetry.
One possibility is that axion dark matter can be generated by the misalignment mechanism [5][6][7], wherein the axion field is displaced from the zero-temperature minimum of its potential in the early universe. In this case, the axion begins its motion from rest when the mass generated by the QCD anomaly becomes comparable to the Hubble expansion rate. However, similar to fields in models of Affleck-Dine baryogenesis, the complex PQ field that contains the axion may receive a kick at early times and rotate in field space. This has ramifications for cosmology. First, axion dark matter may be produced not from the misalignment mechanism, but rather the so-called "kinetic misalignment mechanism" [8,9], wherein the energy contained in the motion in field space is converted to axions. The observed abundance of dark matter points to heavier, less weakly-coupled axions than in the conventional misalignment case. Second, there is a PQ charge associated with the angular momentum in field space. This is analogous to the baryon/lepton number carried by Affleck-Dine fields. In the presence of chirality-and baryon/lepton number-violating interactions, the PQ charge is converted to baryon number, a mechanism known as axiogenesis [10].
In its minimal form, axiogenesis does not simultaneously explain the dark matter and baryon abundances; once the dark matter abundance is fixed, too little baryon asymmetry is produced. A successful simultaneous prediction requires additional physics beyond the Standard Model [10][11][12][13][14][15] to increase the efficiency of the transfer of PQ charge to baryon number. A particularly simple solution takes advantage of lepton-number violation present when neutrino masses are explained by a Majorana mass, a scenario known as lepto-axiogenesis [16,17]. The Majorana mass allows transfer of the PQ charge to baryon minus lepton number B − L, which can eventually be converted to baryon number by weak sphalerons.
In this paper, we revisit lepto-axiogenesis, considering both Dine-Fischler-Srednicki-Zhitnitsky (DFSZ) [18,19] and Kim-Shifman-Vainshtein-Zakharov (KSVZ) [20,21] axion models. We focus on the case where lepto-axiogenesis is embedded in a supersymmetric model. As we will discuss below, supersymmetric scenarios provide the most natural setting for axiogenesis. As in the original lepto-axiogenesis proposal, lepton-number violation is provided by the supersymmetric generalization of the $\Delta L = 2$ Weinberg operator [22] that is responsible for neutrino masses, $(L H_u)(L H_u)$.
In the DFSZ case, the PQ field couples directly to the Higgs fields. Then, the nontrivial dynamics of the PQ field can impact the masses of the Higgs fields present in the Weinberg operator and therefore the transfer of the lepton asymmetry. On the other hand, in the KSVZ case the PQ field couples to heavy quarks and not directly to the fields of the Standard Model, so the above effect is absent.
The precise baryon asymmetry depends on the details of the cosmological history, including the reheat temperature T R of the universe following inflation. In our discussion, we pay attention to constraints placed on T R from, for example, avoiding disruption of Big Bang Nucleosynthesis (BBN) by superpartner decays [23,24]. We also carefully account for whether various Yukawa interactions are in equilibrium throughout the thermal history.
This can affect the efficiency of the asymmetry transfer.
In Sec. 2, we review the dynamics of the rotating field and how dark matter is produced in the kinetic misalignment mechanism. We then discuss the computation of the baryon asymmetry in Sec. 3. In comparison to Ref. [16], we take special care to account for the presence of superpartners, which impacts the rate at which the lepton asymmetry is generated. We then present detailed results for the DFSZ model including the thermalization of the PQ field in Sec. 4. The outcome of our analysis is a prediction for the minimum scale of supersymmetry-breaking scalar masses. We also find parameter space where dark matter and the baryon asymmetry may be simultaneously explained. The scalar superpartner masses are bounded from above (≲ 300 TeV), and the axion decay constant is predicted to be approximately 10^9 GeV. We also discuss the possible production of a non-topological soliton, which in principle could disrupt the prediction of the baryon asymmetry. In Sec. 5, we summarize the results. The scale of supersymmetry breaking required by this mechanism is consistent with that indicated by the observed Higgs boson mass.
DYNAMICS OF THE ROTATING FIELD
We define our complex PQ field P containing the axion as
$$P = \frac{f_a N_{DW} + S}{\sqrt{2}}\, e^{i\theta/N_{DW}}. \qquad (2.1)$$
Here f_a is the decay constant, N_DW is the domain wall number, S is the radial direction, which we call the saxion, and θ = a/f_a is the angular direction. We assume that the potential of S is nearly quadratic. This assumption allows large field values for S in the early universe. This is necessary for initiating the rotation in field space, as we will discuss below. A nearly quadratic potential can be naturally realized in supersymmetric theories, where the potential can be flat up to supersymmetry-breaking corrections. This is the case for a two-field model, with superpotential and soft masses given by
$$W = \lambda X \left(P\bar P - v_{PQ}^2\right), \qquad V_{\rm soft} = m_P^2 |P|^2 + m_{\bar P}^2 |\bar P|^2. \qquad (2.2)$$
Here, X is a chiral multiplet whose F-term potential fixes the PQ-charged fields P and P̄ to P P̄ = v_PQ². Without loss of generality, we take |P| ≫ v_PQ ≫ |P̄| in the early universe. We may then consider effective single-field dynamics for P with a nearly quadratic potential m_P²|P|², while P̄ is fixed to a small field value by P̄ = v_PQ²/P and is irrelevant. X will be fixed near the origin because of its large mass ≃ λP. A nearly quadratic potential is also achieved by a one-field model with logarithmic corrections [25],
$$V(P) = \frac{1}{2}\, m_S^2 |P|^2 \left( \ln\frac{2|P|^2}{f_a^2 N_{DW}^2} - 1 \right), \qquad (2.3)$$
with m_S the mass of the saxion. The logarithmic corrections arise from quantum corrections due to a Yukawa coupling of P, which can be the coupling with the KSVZ quark in the KSVZ model, while extra fields are required in the DFSZ model.
In the one-field model, the mass of the fermionic superpartner of the axion, the axino, is generated by one-loop corrections and is suppressed relative to the typical scale of scalar soft masses. This tends to make the axino the lightest supersymmetric particle (LSP).
In the two-field case, R- or supersymmetry-breaking effects will induce an axino mass of order the gravitino mass, and an axino LSP is less likely. If stable, an axino LSP has the potential to be problematic because it will typically overclose the universe.¹ The axino may decay if R-parity violation is introduced, or an axino LSP could be avoided if a bino and/or Higgsino were sufficiently light. See Appendix C for details, where we also discuss potential constraints from BBN.
In both the one- and two-field models, assuming the simplest mediation scheme of supersymmetry breaking by Planck-suppressed interactions, the saxion mass is expected to be of the same order as the soft scalar masses of the Minimal Supersymmetric Standard Model (MSSM). We will see that this curvature impacts the rotation of the axion in field space and the generation of the baryon asymmetry, so the scalar mass may be constrained or predicted. In the one-field model, the curvature of the potential depends logarithmically on the field value of S. When we present results, we neglect this logarithmic dependence.
So, they apply directly to the two-field case, but a small correction should be applied when interpreting results in the context of the one-field model, see Sec. 4.4.
Initiation and evolution of rotation
During inflation, the presence of a Hubble-induced mass term can induce a large field value for P [26]. Then, at these early times, operators that explicitly break the PQ symmetry of the form
$$W = \frac{1}{q}\,\frac{P^q}{M^{q-3}} \qquad (2.4)$$
can be enhanced, where q is an integer. Even if these operators are suppressed today so as to not spoil the solution to the strong CP problem, they can have important implications in the early universe.
The potential of P is, for S ≫ f_a,
$$V(P) = \left(m_S^2 - c_H H^2\right)|P|^2 + \frac{|P|^{2q-2}}{M^{2q-6}} + \left( A\,\frac{P^q}{M^{q-3}} + {\rm h.c.} \right), \qquad (2.5)$$
where H is the Hubble expansion rate, A is a constant coming from R-symmetry breaking, and c_H, the coefficient of the Hubble-induced mass term, is a constant expected to be O(1) [26].¹ Here, m_S is a soft supersymmetry-breaking mass, which in the two-field case would be identified with m_P. We focus on gravity-mediated scenarios, where A is of the same order as m_S. The superpotential in Eq. (2.4) preserves a linear combination of the PQ symmetry and the R symmetry, so the explicit breaking of the U(1) symmetry of P requires R-symmetry breaking. Assuming c_H > 0 and that the Hubble scale during inflation is larger than m_S, P is driven to a large field value where the Hubble-induced mass term and the |P|^{2q-2} term balance each other. After inflation, P follows the minimum where the two terms balance each other [26,27]. When 3H ≃ m_S, P begins oscillations. At the same time, the A-term provides a kick for P in the angular direction, and P begins to rotate.

¹ In general, we expect that the axino will thermalize via the supersymmetric analog of the couplings that thermalize the saxion (see Sec. 4.1), which would overproduce axinos. It is conceivable that saxion thermalization might not occur until temperatures near the EW scale, in which case supersymmetry-breaking masses would be non-negligible, and the axino might not thermalize even if the saxion does. However, we have checked that even in this case the suppressed freeze-in abundance of an axino LSP would be problematically large.
This occurs at a temperature T_osc,
$$T_{\rm osc} \simeq 4\times 10^9~{\rm GeV}\left(\frac{m_S}{\rm TeV}\right)^{1/4}\left(\frac{T_R}{10^9~{\rm GeV}}\right)^{1/2}\left(\frac{g_{\rm MSSM}}{g_*(T_{\rm osc})}\right)^{1/8} \quad {\rm for}~T_R < T_{\rm osc}, \qquad (2.6)$$
where T_R is the reheat temperature after inflation and g_* denotes the number of relativistic degrees of freedom in the bath, with the full MSSM value g_MSSM = 228.75. We assume that inflationary reheating proceeds via perturbative inflaton decay, and thus the scale factor R obeys R³ ∝ T⁻⁸ during reheating [28].
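As a numerical sketch, the scaling form of Eq. (2.6) can be evaluated directly; the benchmark inputs below are illustrative assumptions, and the applicability condition T_R < T_osc is checked explicitly.

G_MSSM = 228.75  # full MSSM relativistic degrees of freedom

def t_osc_gev(m_s_tev, t_r_gev, g_star=G_MSSM):
    """Oscillation temperature from the scaling form of Eq. (2.6)."""
    return 4e9 * m_s_tev**0.25 * (t_r_gev / 1e9)**0.5 * (G_MSSM / g_star)**0.125

t_r = 1e9
t = t_osc_gev(m_s_tev=1.0, t_r_gev=t_r)
assert t > t_r, "Eq. (2.6) applies only in the T_R < T_osc regime"
print(f"T_osc = {t:.1e} GeV")  # 4.0e9 GeV at this benchmark point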
The PQ charge density associated with the rotation is
$$n_\theta = \frac{i}{N_{DW}}\left(\dot P P^* - \dot P^* P\right) = -\dot\theta\left(f_a + \frac{S}{N_{DW}}\right)^2. \qquad (2.7)$$
We normalize the charge density so that it coincides with $-\dot\theta f_a^2$ for S = 0. The charge density normalized by the entropy density for T_R < T_osc can be computed as follows. The inflaton energy density ρ_inf scales in the same way as n_θ after the initiation of the rotation (as R⁻³), so n_θ/ρ_inf remains constant until T = T_R. The result is
$$Y_\theta \equiv \frac{n_\theta}{s} = \left.\frac{n_\theta}{\rho_{\rm inf}}\right|_{T_{\rm osc}} \times \left.\frac{\rho_{\rm inf}}{s}\right|_{T_R} \simeq \frac{10^3}{N_{DW}}\,\frac{A}{m_S}\left(\frac{\rm TeV}{m_S}\right)\left(\frac{T_R}{10^9~{\rm GeV}}\right)\left(\frac{S(T_{\rm osc})}{10^{16}~{\rm GeV}}\right)^{2}. \qquad (2.8)$$
The factor A/m_S is the ratio of the potential gradients in the angular and radial directions at T_osc.
There is also energy density ρ_S stored in the oscillations of the radial mode S. Whether ρ_S is of importance depends on the cosmological history and on the temperature T_th at which this mode is thermalized. The saxion energy density may come to dominate the energy of the universe if this thermalization is late. We comment on this scenario further at the end of this section. Following thermalization, the motion of the PQ field becomes circular due to PQ charge conservation: the radial mode dissipates, but much of the axial motion remains, for while part of the charge can be transferred into a charge asymmetry of particles in the thermal bath, it is free-energetically favored to keep almost all of the charge in the rotation [10,29].
The field will rotate around the body of the potential, with the radial direction eventually settling down to its minimum N_DW f_a. The energy in rotation, ρ_θ, accounting for both the potential and kinetic energy, is given as $-\dot\theta\, n_\theta$. Before the radial direction S reaches its minimum, which occurs at a temperature denoted by T_S, $\dot\theta$ is a constant, and conservation of the PQ charge implies the energy density of the rotation scales as matter, ρ_θ ∝ R⁻³. For T < T_S, the scaling of the rotational energy density resembles that of kination, ρ_θ ∝ R⁻⁶.
This scaling can be derived by noting that conservation of charge n_θ R³ at constant radial field value implies $\dot\theta \propto R^{-3}$. When the saxion settles to its minimum at T = T_S, we know both $\dot\theta \simeq N_{DW} m_S$ and the PQ yield $Y_\theta = -\dot\theta f_a^2/s$, so we can derive
$$T_S \simeq 1.4\times10^6~{\rm GeV}\left(\frac{100}{Y_\theta}\right)^{1/3}\left(\frac{f_a}{10^9~{\rm GeV}}\right)^{2/3}\left(\frac{m_S}{10~{\rm TeV}}\right)^{1/3}\left(\frac{N_{DW}}{3}\right)^{1/3}\left(\frac{g_{\rm MSSM}}{g_*(T_S)}\right)^{1/3}. \qquad (2.9)$$
If the energy of the rotation dominates the energy of radiation at this time and if the saxion has already undergone thermalization (i.e., T_th > T_S), then this T_S is also the temperature T_MK at which the universe transitions from a matter-dominated to a kination-dominated one. This history is illustrated in Fig. 1. We denote the temperature at which the universe transitions from radiation domination to matter domination as T_RM. We emphasize that the matter domination we refer to here is domination by an energy density of rotation that scales as matter, not ordinary matter. This occurs at temperature
$$T_{RM} = \frac{4}{3}\, N_{DW}\, m_S\, Y_\theta = 4\times10^6~{\rm GeV}\left(\frac{Y_\theta}{100}\right)\left(\frac{m_S}{10~{\rm TeV}}\right)\left(\frac{N_{DW}}{3}\right). \qquad (2.10)$$
This expression is general as long as no entropy is produced after T_RM, and Y_θ refers to the charge yield evaluated at T_RM. In particular, this result applies whether or not there was an era where the saxion came to dominate the energy density of the universe prior to T_RM. The kination-dominated era ends by the redshift of $\rho_\theta = \dot\theta^2 f_a^2/2 = n_\theta^2/(2 f_a^2)$ at temperature
$$T_{KR} = \left(\frac{135}{4\pi^2 g_*}\right)^{1/2}\frac{f_a}{Y_\theta} \simeq 1.2\times10^6~{\rm GeV}\left(\frac{100}{Y_\theta}\right)\left(\frac{f_a}{10^9~{\rm GeV}}\right)\left(\frac{g_{\rm MSSM}}{g_*(T_{KR})}\right)^{1/2}. \qquad (2.11)$$
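The milestone temperatures can be compared numerically using the scaling forms of Eqs. (2.9)-(2.11); the inputs below are illustrative. At this benchmark the ordering T_RM > T_S > T_KR reproduces the radiation-matter-kination-radiation history of Fig. 1.

G_MSSM = 228.75

def t_s(y, fa_gev, ms_10tev, ndw, g=G_MSSM):
    """Eq. (2.9): temperature at which the saxion reaches its minimum."""
    return (1.4e6 * (100 / y)**(1/3) * (fa_gev / 1e9)**(2/3)
            * ms_10tev**(1/3) * (ndw / 3)**(1/3) * (G_MSSM / g)**(1/3))

def t_rm(y, ms_10tev, ndw):
    """Eq. (2.10): rotation energy (scaling as matter) starts to dominate."""
    return 4e6 * (y / 100) * ms_10tev * (ndw / 3)

def t_kr(y, fa_gev, g=G_MSSM):
    """Eq. (2.11): end of the kination-dominated era."""
    return 1.2e6 * (100 / y) * (fa_gev / 1e9) * (G_MSSM / g)**0.5

y, fa, ms, ndw = 100, 1e9, 1.0, 3   # Y_theta, f_a [GeV], m_S [10 TeV], N_DW
print(f"T_RM = {t_rm(y, ms, ndw):.1e} GeV")    # 4.0e6: radiation -> matter
print(f"T_S  = {t_s(y, fa, ms, ndw):.1e} GeV")  # 1.4e6: matter -> kination
print(f"T_KR = {t_kr(y, fa):.1e} GeV")          # 1.2e6: kination -> radiation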
A matter-dominated era followed by a kination-dominated one would modify the primordial gravitational wave spectrum in a way that potentially provides a unique signal [30][31][32].
It is also possible that the energy density due to rotation remains subdominant to the thermal bath. As we will see, in this case the temperature T_S where the saxion reaches its minimum is still of significance for the determination of the baryon asymmetry, as it marks the time where $\dot\theta$ changes its scaling. However, this temperature would no longer mark the onset of a kination era because radiation remains dominant.
We now comment on the possibility that saxion thermalization is late, so that the saxion comes to dominate the energy density of the universe. We define the ratio r of the axion rotation to the saxion oscillation energy densities, which is in turn determined by the ratio of the potential gradients between the angular and radial modes,
$$r \equiv \frac{\rho_\theta}{\rho_S} \simeq \frac{A}{m_S}. \qquad (2.12)$$
This r is inversely related to the ellipticity of the initial motion, and r = 1 corresponds to nearly circular rotations. Here it is assumed that the angular direction is not accidentally close to the minimum of the potential in Eq. (2.5); otherwise r becomes smaller than A/m_S. The thermal bath created from the saxion is at a temperature T_th upon completion of thermalization. This fact allows us to predict T_RM because ρ_S × r = ρ_θ should hold at T_th.
This gives
$$\frac{\pi^2}{30}\, g_*\, T_{\rm th}^4 \times r = \frac{\pi^2}{30}\, g_*\, T_{RM}^4 \left(\frac{T_{\rm th}}{T_{RM}}\right)^3,$$
or equivalently,
$$T_{RM} = r\, T_{\rm th} \quad \text{(for saxion domination)}. \qquad (2.13)$$
Kinetic misalignment and production of axion dark matter
In the conventional misalignment mechanism, the value of the axion field is initially frozen by Hubble friction. But once 3H < m_a(T), the axion field begins to oscillate around the minimum of its potential. In the axiogenesis framework, on the other hand, the axion is not frozen; rather, the PQ field is already rotating with high velocity. This qualitatively changes the dark matter production story. The kinetic misalignment mechanism (KMM) operates when the kinetic energy of the axion field is greater than the potential energy. The KMM delays the onset of axion oscillations, and the charge yield it requires fixes, via Eq. (2.10), the temperature at the transition from radiation to matter domination,
$$T_{RM,\,{\rm KMM}} \simeq 2.9\times10^6~{\rm GeV}\left(\frac{m_S}{10~{\rm TeV}}\right)\left(\frac{f_a}{10^9~{\rm GeV}}\right)\left(\frac{N_{DW}}{3}\right). \qquad (2.17)$$
COMPUTATION OF THE BARYON ASYMMETRY
In this section, we describe the computation of the baryon asymmetry in lepto-axiogenesis.
Basics of lepto-axiogenesis
The axion rotation couples to the thermal bath via the gluon in the KSVZ theory and via the Higgs fields in the DFSZ theory. The PQ charge is transferred to a particle-antiparticle asymmetry of particles in the thermal bath, and in equilibrium the charge asymmetry in the bath is of order $\dot\theta T^2$ [16]. The total B − L charge vanishes in the absence of B − L violation, and the baryon asymmetry is fixed at the electroweak phase transition [10].
The B − L symmetry is broken if the observed non-zero neutrino mass is explained by a Majorana mass term. We consider the Majorana mass given by the Weinberg operator [22], whose supersymmetrization is given by the superpotential
$$W_\nu = c_{ij}\,\frac{(L_i H_u)(L_j H_u)}{2\Lambda}, \qquad (3.1)$$
which can be UV-completed by the seesaw mechanism [43-46]. This operator gives rise to neutrino mass terms, in terms of the vacuum expectation value $v_{H_u}$ of the up-type Higgs field, with $v_{H_u}^2 + v_{H_d}^2 \simeq (174~{\rm GeV})^2$,
$$m_\nu^{ij} = c_{ij}\,\frac{v_{H_u}^2}{\Lambda}, \qquad (3.2)$$
related to the mass eigenvalues through the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix,
$$U_{\rm PMNS}^T\, m_\nu\, U_{\rm PMNS} = {\rm diag}(m_1, m_2, m_3). \qquad (3.3)$$
The Weinberg operator will transfer the particle-antiparticle asymmetry of L_i and H_u to B − L through scattering between the lepton and Higgs fields (and their superpartners) in the bath. The scattering is not in equilibrium for temperatures $T \lesssim 10^{12}~{\rm GeV} \times (0.03~{\rm eV}/m_\nu)^2$, so the transfer of the PQ charge to B − L is suppressed by a factor of Γ_L/H, with Γ_L the lepton-number violating rate. That is, the B − L asymmetry is produced by a "freeze-in" process [10]. This B − L asymmetry is ultimately further processed by electroweak sphalerons to give a baryon asymmetry n_B ≃ (28/79) n_{B−L} [47].
To calculate the rate of B − L asymmetry production, we must account for all scattering processes due to the operator in Eq. (3.1). Each contribution takes the form
$$\dot n_{B-L} \supset 2\int d\Pi_a\, d\Pi_b\, d\Pi_c\, d\Pi_d\; e^{-\frac{E_a+E_b}{T}}\left(e^{\frac{\mu_a+\mu_b}{T}} - e^{\frac{\mu_c+\mu_d}{T}}\right)(2\pi)^4\,\delta^{(4)}(p_a+p_b-p_c-p_d)\,|\mathcal M|^2, \qquad (3.4)$$
where a, b, c, and d are field labels, momenta {p_a, p_b} are incoming and {p_c, p_d} are outgoing, and $d\Pi_X \equiv \frac{1}{(2\pi)^3}\frac{d^3 p_X}{2 E_X}$, with E_X the energy of field X. The initial factor of two is because the processes have ∆L = 2. Here, µ_X is the chemical potential of field X. See the Appendix of Ref. [14] for discussion in a similar context. For chemical potentials much smaller than the temperature, the sum over all scattering processes gives
$$\dot n_{B-L} = \frac{72}{\pi^5}\, T^5 \sum_i \sum_j \left|\frac{c_{ij}}{2\Lambda}\right|^2\left(\frac{1}{2}\left(\mu_{\ell_i}+\mu_{\ell_j}\right)+\mu_{H_u}+\mu_{\lambda}\right), \qquad (3.5)$$
with i and j running over the three generations. We have assumed that processes involving scattering between Higgsinos $\tilde H_u$, gauginos λ, and Higgs bosons are in equilibrium (and similarly for sleptons), which is typically the case. We have included scattering processes involving superpartners, which were neglected in Ref. [16]. We also go beyond the one-generation approximation used there; this has a smaller effect. We have also assumed that the masses of ℓ and H_u are smaller than T. As we will discuss in Sec. 3.3, this is not true for the DFSZ model at sufficiently high temperatures, since a large field value of S may impart a mass > T to H_u and H_d.
The coefficient of the Weinberg operator can be related to the neutrino masses and mixings as in Eqs. (3.2) and (3.3), so the production rate of the B − L asymmetry may be recast as
$$\dot n_{B-L} = C_i(T)\,\frac{m_{\nu_i}^2\,\dot\theta\, T^5}{v_{H_u}^4}, \qquad (3.6)$$
where the coefficients C_i(T) encode which interactions are in equilibrium in the bath. The yield follows from integrating over the thermal history,
$$Y_{B-L} = \int \frac{\dot n_{B-L}}{s}\,dt = -\int_{T_i}^{T_f} \frac{\dot n_{B-L}}{s\,H\,T}\,dT, \qquad H = \left(\frac{\pi^2 g_*}{90}\right)^{1/2}\frac{T^2}{M_{\rm Pl}}. \qquad (3.8)$$
We obtain an analytic result,
$$Y_{B-L} = \left(\frac{90}{\pi^2 g_*}\right)^{1/2}\frac{45}{2\pi^2 g_*}\,\frac{C_j\, m_{\nu_j}^2\, M_{\rm Pl}\, N_{DW}\, m_S}{v_{H_u}^4}\,\ln\frac{T_i}{T_f} \quad {\rm for}~T_i \gg T_f, \qquad (3.9)$$
where T_i and T_f mark the initial and final temperatures of the era during which ∆Y_{B−L} is a constant.
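The logarithm can be traced explicitly: inserting Eq. (3.6) with $\dot\theta = N_{DW} m_S$ into Eq. (3.8) and using $s = \frac{2\pi^2}{45} g_* T^3$, the $T^5$ in the numerator cancels against $sHT \propto T^6$, leaving a scale-invariant integrand. A sketch of the intermediate step:
$$Y_{B-L} = \int_{T_f}^{T_i} \frac{C_j m_{\nu_j}^2 N_{DW} m_S\, T^5}{v_{H_u}^4\, s\, H\, T}\, dT = \left(\frac{90}{\pi^2 g_*}\right)^{1/2}\frac{45}{2\pi^2 g_*}\,\frac{C_j m_{\nu_j}^2 M_{\rm Pl} N_{DW} m_S}{v_{H_u}^4}\int_{T_f}^{T_i}\frac{dT}{T},$$
which reproduces Eq. (3.9).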
Reproducing the observed baryon asymmetry, $Y_B^{\rm obs} = 8.7\times10^{-11}$ [48], requires a saxion mass
$$m_S \simeq 135~{\rm TeV}\; N_{DW}^{-1}\left(\frac{g_*}{g_{\rm MSSM}}\right)^{3/2}\left(\frac{0.01\times(0.005~{\rm eV})^2}{C_j\, m_{\nu_j}^2}\right)\left(\frac{7}{\ln(T_i/T_f)}\right). \qquad (3.10)$$
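Eq. (3.10) is straightforward to evaluate as a scaling relation. In the sketch below, the combination C_j m²_{ν_j} and ln(T_i/T_f) are treated as inputs (the C_i values themselves come from the equilibrium analysis of the Appendix, not reproduced here), and the function reproduces 135 TeV at its own normalization point by construction.

def m_s_tev(n_dw, c_msq_ev2, ln_ti_tf, g_ratio=1.0):
    """Eq. (3.10): required saxion mass in TeV.
    c_msq_ev2 = C_j * m_nu_j^2 in eV^2; g_ratio = g_* / g_MSSM."""
    return (135 / n_dw) * g_ratio**1.5 \
        * (0.01 * 0.005**2 / c_msq_ev2) * (7 / ln_ti_tf)

print(f"{m_s_tev(1, 0.01 * 0.005**2, 7.0):.0f} TeV")  # 135 TeV by construction
print(f"{m_s_tev(3, 0.01 * 0.005**2, 5.3):.0f} TeV")  # larger N_DW, shorter log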
For T > T R (or for T < T RM ), the universe is not radiation-dominated, and production becomes IR (or UV)-dominated. Again, this is summarized in Table IV and illustrated in the left panel of Fig. 2.
If the reheat temperature is lower than the temperature where the saxion settles to its minimum, i.e., T_R < T_S, then Eq. (3.9) does not hold because $\dot\theta$ is never constant during the radiation-dominated era; instead $\dot\theta \propto T^3$. In this case, B − L production peaks at T_S, which is illustrated in the right panel of Fig. 2. The asymmetry may then be obtained by first computing the redshift-invariant quantity $\dot n_{B-L}/(H\rho_{\rm inf})$, with the inflaton energy density denoted by ρ_inf. This quantity is readily evaluated at T_S, see Eq. (3.6), recalling that $\dot\theta \simeq N_{DW} m_S$ at this time. Then we can normalize the quantity to n_{B−L}/s at T_R:
$$Y_{B-L} = \left.\frac{\dot n_{B-L}}{H\rho_{\rm inf}}\right|_{T=T_S} \times \left.\frac{\rho_{\rm inf}}{s}\right|_{T=T_R} = \left(\frac{90}{\pi^2 g_*}\right)^{1/2}\frac{45}{2\pi^2 g_*}\,\frac{C_i\, m_{\nu_i}^2\, M_{\rm Pl}\, N_{DW}\, m_S}{v_{H_u}^4}\left(\frac{T_R}{T_S}\right)^7, \qquad (3.11)$$
where we have assumed inflationary reheating by perturbative decays of the inflaton, so $H(T) = H(T_R) \times (T/T_R)^4$ for T > T_R.
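The $(T_R/T_S)^7$ factor makes Eq. (3.11) extremely sensitive to the reheat temperature; a short numerical illustration:

# Suppression of Y_{B-L} in Eq. (3.11) as T_R drops below T_S
for ratio in (1.0, 0.9, 0.5):
    print(f"T_R/T_S = {ratio:.1f}: (T_R/T_S)^7 = {ratio**7:.3f}")
# halving T_R costs a factor 2^7 = 128 in the asymmetry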
The result depends on the choice of the neutrino spectrum. We will show results for a normal hierarchy (NH) or an inverted hierarchy (IH), assuming the lowest mass eigenvalue is negligible, so the overall mass scale is given by the mass differences determined by oscillations. Even if we saturate the upper bound Σm_ν < 0.12 eV from the Cosmic Microwave Background along with data from Baryon Acoustic Oscillations [48], the predictions for this case are not so different from those of the inverted hierarchy case. Precisely speaking, the values of C_i m²_{ν_i} with the upper bound saturated are 8% (normal hierarchy) and 16% (inverted hierarchy) larger than those for the inverted hierarchy with a negligible lightest neutrino mass.
KSVZ
The KSVZ model includes a coupling
$$W_{\rm KSVZ} = \lambda_\Psi\, P\, \Psi\bar\Psi \qquad (3.12)$$
with Ψ a new colored quark charged under the PQ symmetry such that the charges satisfy $PQ_\Psi + PQ_{\bar\Psi} + PQ_P = 0$. This coupling is the origin of the mixed PQ-QCD anomaly, which allows the axion to solve the strong CP problem. The λ_Ψ coupling plays an important role in the thermalization of the rotation.
The KSVZ model was carefully examined in Refs. [16,17]. We refer readers to these references for details, including the thermalization of the rotating PQ field. Here we focus on the implications of a factor-of-six enhancement in the baryon asymmetry production efficiency compared to Ref. [16]. This factor of six results from supersymmetrizing the Weinberg operator in Eq. (3.1), allowing lepton asymmetry production from scattering involving superpartners. This factor is independent of the UV completion of the axion and applies to the DFSZ case as well. The existence of superpartners in the bath also changes the efficiency of baryon asymmetry production by affecting the equilibrium Boltzmann equations and conserved quantities given in Appendix A.
As a benchmark, the observed baryon asymmetry is reproduced for a saxion mass
$$m_S \simeq 190~{\rm TeV}\left(\frac{1}{N_{DW}}\right)\left(\frac{g_*}{g_{\rm MSSM}}\right)^{3/2}\left(\frac{0.0106\times(0.005~{\rm eV})^2}{C_j\, m_{\nu_j}^2}\right)\left(\frac{5.3}{\ln(T_i/T_f)}\right) \quad {\rm (KSVZ)}, \qquad (3.13)$$
where we use C_i = 0.0106 based on Table III. In the determination of C_i, we have gone beyond the one-generation approximation of Ref. [16]. This value of C_i = 0.0106 applies when the anomaly coefficients for the weak and strong interactions are identical, c_W = c_g (= 1), and when all Yukawa couplings are in equilibrium. To get the benchmark value of 5.3, we take T_i = T_R = 2 × 10⁹ GeV and T_f = T_RM = 10⁷ GeV.
DFSZ
In the DFSZ case, the effective µ-term depends on the value of the scalar field. This effective µ-term arises from the superpotential coupling
$$W_\mu = \lambda\,\frac{P^n H_u H_d}{M^{n-1}}. \qquad (3.14)$$
The idea of relating the µ-term to the scale of Peccei-Quinn symmetry is sometimes known as the Kim-Nilles mechanism, which was originally explored for the n = 2 case in [49].
Because the value of P changes during the universe's history, so too will the masses of the Higgs fields. As discussed below, this can impact the way in which the lepton asymmetry is transferred to the bath via Eq. (3.1).
The superpotential of Eq. (3.14) gives a temperature-dependent µ(T) = λP^n/M^{n−1}. At temperatures before P settles to its minimum, this scales as R^{−3n/2}, which is proportional to T^{3n/2} during radiation domination. We define a temperature T_µ at which the temperature and the effective µ(T) are equal, where µ is the present-day value to which µ(T) settles for temperatures below T_S:
$$T_\mu = \left(\frac{T_S^{3n/2}}{\mu}\right)^{\frac{2}{3n-2}} \simeq 10^9~{\rm GeV} \qquad (3.15)$$
for benchmark parameter values. For temperatures T > T_µ, scattering via the Weinberg operator is ineffective, as the lepton-number violation is limited to even higher-dimension operators generated by integrating out the Higgs superfields.
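A small sketch of the relation in Eq. (3.15) as reconstructed above, treating T_S and the present-day µ as inputs (the values below are illustrative, not the paper's benchmarks):

def t_mu_gev(t_s_gev, mu_gev, n):
    """Eq. (3.15): mu(T) = mu (T/T_S)^(3n/2) above T_S, so the temperature
    where mu(T_mu) = T_mu is T_mu = (T_S^(3n/2)/mu)^(2/(3n-2))."""
    return (t_s_gev**(1.5 * n) / mu_gev) ** (2.0 / (3 * n - 2))

t_s, mu = 1.4e6, 1e4   # T_S ~ the Eq. (2.9) benchmark [GeV], mu = 10 TeV
for n in (1, 2):
    print(f"n = {n}: T_mu = {t_mu_gev(t_s, mu, n):.1e} GeV")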
So, the earliest temperature at which the chiral asymmetry may be effectively transferred to B − L is T_µ. However, the reheat temperature T_R is sometimes limited by BBN constraints [23,24] to values smaller than T_µ. In this case, the earliest temperature relevant for the transfer to B − L is T_R. Based on Eq. (3.10) and N_DW = 3n, a benchmark prediction for the saxion mass is
$$m_S \simeq 39~{\rm TeV} \times \frac{1}{n}\left(\frac{g_*}{g_{\rm MSSM}}\right)^{3/2}\left(\frac{0.0153\times(0.005~{\rm eV})^2}{C_j\, m_{\nu_j}^2}\right)\left(\frac{5.3}{\ln(T_i/T_f)}\right) \quad {\rm (DFSZ)}. \qquad (3.16)$$
To get the benchmark value of 5.3 in the parentheses, we have taken T i = T R = 2 × 10 9 GeV and T f = T S = 10 7 GeV. The benchmark value of C i corresponds to the case where all Yukawa interactions and the gaugino mass are in equilibrium; see Table II in the Appendix.
The saxion mass, which we assume to be of the same order as the soft scalar masses of the MSSM, may be O(10) TeV; this is consistent with the observed Higgs boson mass if the ratio of the Higgs field vacuum expectation values tan β ≫ 1. Larger m S is also possible, which could reproduce the Higgs boson mass for more modest values of tan β.
DETAILED ANALYSIS OF THE DFSZ MODEL
We now analyze the DFSZ model in detail. We discuss the thermalization of P via the coupling with the Higgs superfields in Eq. (3.14). We then show the allowed parameter space, determining both the minimum values of m S consistent with the generation of the baryon asymmetry and also the values of m S predicted by the production of both the baryon asymmetry and the dark matter abundance. We analyze the cases where the asymmetry is generated during reheating or the subsequent radiation-dominated era and the case where the saxion eventually dominates the energy. We discuss complications that may arise from the possible fragmentation of the rotation into Q-balls and explain how they can be avoided.
Thermalization
If the saxion does not thermalize sufficiently early, it will come to dominate the energy density of the universe. In this case, when it ultimately decays, it will produce entropy which can dilute the baryon asymmetry.
We assume that the dominant interactions of the saxion are via the coupling in the superpotential that gives the effective µ-term, Eq. (3.14). The saxion can then be thermalized via its interaction with the Higgsino at a rate given by
$$\Gamma_{S\tilde H\tilde H} \simeq 0.1\,\frac{\mu^2 (T + m_S)}{S^2}\left(\frac{S}{N_{DW} f_a}\right)^{2n}, \qquad (4.1)$$
where the term with T or m_S corresponds to the scattering or the decay rate, respectively.
For n = 1, the rate is independent of the evolution of the saxion field value S. The thermalization temperature is found by setting this rate equal to 3H, and is given by
$$T_{\rm th} \simeq \left(\frac{90}{\pi^2 g_*}\right)^{1/2}\frac{\mu^2 M_{\rm Pl}}{30\, N_{DW}^2 f_a^2} \simeq 200~{\rm TeV}\left(\frac{\mu}{\rm TeV}\right)^2\left(\frac{10^8~{\rm GeV}}{f_a}\right)^2\left(\frac{3}{N_{DW}}\right)^2\left(\frac{g_{\rm MSSM}}{g_*(T_{\rm th})}\right)^{1/2}, \qquad (4.2)$$
which is valid for T_th ≫ m_S, often the case for parameters of interest. The above expression assumes radiation domination. If the reheat temperature is below this T_th, thermalization instead occurs during the period of inflationary reheating, and the actual thermalization temperature becomes lower than that in Eq. (4.2) (but above T_R) due to an enhanced Hubble rate with respect to that of radiation domination. However, thermalization of the saxion during inflationary reheating will not create more entropy than is already created by the inflaton. So, the precise value of T_th will be irrelevant; instead, the value of T_R will be important for the analysis of the baryon asymmetry.
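The n = 1 thermalization temperature is again easy to evaluate in the scaling form of Eq. (4.2); the inputs are illustrative, and the expression assumes radiation domination and T_th ≫ m_S as stated above.

G_MSSM = 228.75

def t_th_tev(mu_tev, fa_gev, n_dw, g_star=G_MSSM):
    """Scaling form of Eq. (4.2) for saxion-Higgsino thermalization (n = 1)."""
    return (200 * mu_tev**2 * (1e8 / fa_gev)**2 * (3 / n_dw)**2
            * (G_MSSM / g_star)**0.5)

print(f"T_th = {t_th_tev(1.0, 1e8, 3):.0f} TeV")  # 200 TeV benchmark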
For n > 1 and S > N_DW f_a, the thermalization rate depends on S, so we need the scaling of S with temperature. Conservation of S number implies that S scales as R^{−3/2}. During radiation domination, R ∝ T^{−1}, so $\Gamma_{S\tilde H\tilde H} \propto (T + m_S) S^{2n-2} \propto (T + m_S) T^{3n-3}$ increases with increasing T faster than the radiation-dominated H ∝ T² does for any n > 1. Therefore, for n > 1 the saxion may thermalize at a high temperature but then decouple from the thermal bath when $\Gamma_{S\tilde H\tilde H}$ drops below the Hubble expansion rate. However, there is a maximum temperature at which the saxion can thermalize via Higgsino scattering, namely T ∼ T_µ. Above this temperature, Higgsinos are out of equilibrium because their mass exceeds the temperature. To test whether thermalization occurs at this point, we equate $\Gamma_{S\tilde H\tilde H}|_{T=T_\mu} = 3H(T_\mu)$. Using Eq. (3.15), for n = 2 we find that T_S drops out of this relation, and the following constraint on f_a may be derived:
$$f_a \lesssim 2\times10^9~{\rm GeV}\left(\frac{\mu}{10~{\rm TeV}}\right)^{1/2}\left(\frac{g_{\rm MSSM}}{g_*(T_\mu)}\right)^{1/4}\left(\frac{6}{N_{DW}}\right) \quad (n = 2). \qquad (4.3)$$
For f_a larger than this critical value, the coupling of the saxion is too weak for it to thermalize at T_µ. Instead, thermalization waits until after T_S and occurs at the lower T_th given in Eq. (4.2).

Another possible thermalization channel is via saxion scattering with the W gauge boson. This occurs at a rate given by
$$\Gamma_{SWW} = n^2 \times b\,\frac{T^3}{S^2}, \qquad (4.4)$$
where b ≃ 10 −5 [50][51][52]. Even when the saxion-W scattering does not completely thermalize the saxion, such scattering can play an important role in generating the thermal bath necessary for the Higgsinos to thermalize the saxion. This will be discussed in Sec. 4.3.2.
No saxion domination
The analysis of baryon asymmetry and dark matter production from axion rotations depends on whether the saxion comes to dominate the energy density and creates entropy upon its thermalization. In this section, we focus on the case where the saxion is thermalized sufficiently early so this does not occur. Then, for much of the parameter space, the baryon asymmetry production dominantly occurs during the radiation domination era following inflationary reheating. We also consider the possibility that the dominant production occurs during inflationary reheating, which can happen for low reheat temperatures. The case with saxion domination is analyzed in Sec. 4.3.
n = 1
In this section we give results for the n = 1 case, where the µ-term arises through the renormalizable coupling defined in Eq. (3.14). First, we discuss whether both the baryon asymmetry and dark matter may be generated by the dynamics of the axion field. In this n = 1 case, consistency with bounds on the axion decay constant from red giant cooling significantly constrains the ability to simultaneously realize the baryon asymmetry and axion dark matter. Moreover, if additional thermalization channels beyond those described in Sec. 4.1 are present, then it would be possible to increase T_th and hence the maximal yield. This could allow the KMM to reproduce the observed DM abundance for larger f_a; see Ref. [8].
Here, we do not include such channels. So, above this green line, an additional source of dark matter would be required. We assume that whatever produces the balance of the dark matter budget does not disturb the prediction of the baryon asymmetry. This would be the case, for example, if the dark matter were produced by thermal freeze-out of an LSP. The optimal cosmological evolution for obtaining the smallest m_S is as follows.
It is best to minimize T_f = max(T_S, T_RM) so that the logarithmic enhancement in Eq. (3.9) is maximized, but this should be done while avoiding entropy production that would dilute the asymmetry. Thus, the maximum baryon generation efficiency is achieved if neither the saxion nor the rotation comes to dominate the total energy density. This is accomplished if T_RM = min(T_th, T_S). This ensures that saxion thermalization (which occurs at T_th) happens early enough to avoid entropy production. It also ensures that the rapid redshift of the energy of rotation (which begins at T_S) occurs early enough that the rotation does not come to dominate; this makes the radiation-dominated era, and hence the period of logarithmically enhanced baryon production, as long as possible. This procedure also minimizes T_f.
To clarify the cosmological history that produces the maximum asymmetry, we show the relevant temperatures as functions of f_a for m_S = 30 TeV in Fig. 4. The left (right) panel is for T_R = 2 × 10⁹ GeV (10⁷ GeV). As discussed above, the optimal scenario for efficient baryon asymmetry production is to choose the PQ charge Y_θ (and hence the energy density in the saxion and rotation) so that T_RM = min(T_S, T_th). For f_a > 7.5 × 10⁸ GeV, this imposes T_RM = T_th, and we refer to this T_RM as T_RM,optimal, according to Eq. (2.13).

Fig. 4: The subscript "optimal" indicates the cosmological scenario where Y_B is most efficiently produced, leading to the smallest required m_S, corresponding to the curve segments above the green dotted line in Fig. 3. The optimal scenario is achieved by choosing the charge yield Y_θ such that T_RM,optimal = min(T_S,optimal, T_th) so as to avoid rotation or saxion domination. For f_a > 1.5 (7.5) × 10⁸ GeV, T_RM,optimal follows T_S (T_th), while T_µ,optimal and T_S,optimal change accordingly. Temperatures marked T_{y_d} indicate when interactions involving the down-Yukawa coupling come into equilibrium for different choices of tan β; different C_i apply above and below these lines, see Table III. Temperatures denoted T_µ show when Higgsinos come into thermal equilibrium.
Once we have fixed T RM in this way, T S and T µ may be determined. We have shown these as the T S, optimal and T µ, optimal curves, which deviate from the values required by the KMM shown in dashed lines.
There is one final minor complication. The region above the orange line in Fig. 3 would lead to a period of matter domination followed by kination domination had we assumed axion dark matter from the KMM, i.e., T_RM,KMM > T_S,KMM using Eqs. (2.16) and (2.17). However, the goal of Fig. 3 is to derive the minimum m_S rather than to require the KMM. The optimal choice of T_RM for finding this minimal m_S is not T_RM,KMM, but rather T_RM,optimal = T_S,optimal, and this choice is applied in the region between the green dotted and the orange lines in Fig. 3 with m_S ≳ 10 TeV. This is the case between f_a = (1.5-7.5) × 10⁸ GeV in both panels of Fig. 4, where T_RM,optimal = T_S,optimal. This optimal cosmology corresponds to Fig. 1, but with the blue curve shifted downwards and to the left so that the radiation energy density equals that of rotation at the kink in the rotation energy density. See Table II and Appendix A for details. It is possible that for sufficiently low T_R, the baryon asymmetry is dominantly produced during the period of inflationary reheating. We will discuss such effects below, after commenting on the constraints from BBN.
A reheat temperature T R = 2 × 10 9 GeV with m S ∼ TeV requires either i) R-parity violation, ii) a small gravitino mass m 3/2 ∼ 100 GeV, or iii) a sneutrino as the next-to-LSP that is nearly degenerate with the gravitino LSP. For m 3/2 ≳ 7 TeV the bound on T R rapidly weakens, so we expect that the dashed and dotted curves are valid without any additional assumptions. See Appendix C for more details on BBN constraints.
As T_R decreases, the generation of the baryon asymmetry is less efficient, and higher values of m_S are needed to reproduce the observed baryon abundance. At minimum, this is because lowering T_R reduces T_i, the onset of the radiation-dominated era that is responsible for the logarithmic enhancement in the generation of the asymmetry in Eq. (3.9). This explains why the dashed and dotted curves are shifted to higher m_S than the solid curves at low values of f_a. For higher f_a, the curve bends further and becomes a straight line because, although T_RM still follows T_th, eventually the resultant T_S,optimal exceeds T_R. When this occurs, Y_B is no longer dominantly produced during a radiation-dominated era but rather during inflationary reheating. The logarithmic enhancement disappears, and the baryon asymmetry is diluted by entropy produced from the reheating, as in Eq. (3.11). The result is that the baryon asymmetry is sensitive to T_S and therefore to f_a. This cosmological evolution may be clarified by examining the right panel of Fig. 4. There, T_S,optimal can be seen to deviate from T_S,KMM at f_a ≃ 1.5 × 10⁸ GeV (when T_RM,optimal starts to track T_S,optimal), change its slope at f_a ≃ 7.5 × 10⁸ GeV (when T_RM,optimal starts to track T_th), and then eventually exceed T_R = 10⁷ GeV for f_a ≳ 2 × 10⁹ GeV.

Fig. 5: [...] hierarchy. The blue and red contours show the reheat temperatures required to reproduce both the baryon asymmetry from lepto-axiogenesis and dark matter from kinetic misalignment. The kinetic misalignment mechanism predicts a period of matter domination followed by kination domination in the entire parameter space shown here. The green region leads to underproduction of axion dark matter from kinetic misalignment because of entropy production from saxion domination, i.e., T_RM > T_th using Eqs. (2.10) and (4.2). The purple region is excluded by red giant brightness observations. The brown region is excluded because the required PQ charge leads to an energy density of the PQ field ρ_P exceeding that of the inflaton, while the brown dotted contours show lower values of ρ_P/ρ_inf.

In the brown region, the combined energy density of the rotation and the saxion, ρ_P ≡ ρ_S + ρ_θ (≃ 2ρ_θ for A ≃ m_S, according to Eq. (2.12)), exceeds that of the inflaton. The origin of this region may be understood by noting that larger values of m_S require less efficient production of Y_B, which may be achieved through a smaller logarithmic enhancement during radiation domination, i.e., by decreasing the ratio between T_R and T_RM. For fixed T_R, this requires a larger T_RM in Eq. (2.10). However, eventually T_RM becomes as large as T_R/2, at which point ρ_P ≃ 2ρ_θ = ρ_inf, and an inconsistency arises because P would instead drive an epoch of inflation. Two brown dotted curves are shown for ρ_P/ρ_inf = 0.1 and 0.01, which are perhaps more realistic energy densities for P. In summary, the saxion mass is now predicted to have a strict upper limit of 125 (240) TeV for the inverted (normal) hierarchy, with lower m_S preferred for a realistic ρ_P. This mass range is intriguingly consistent with the supersymmetry-breaking scale determined from the observed value of the Higgs boson mass.
n = 2
We now move to the case of n = 2 and N DW = 6. Our focus in this case will be on the parameter space where both the baryon asymmetry and dark matter abundance result from the axion rotation. This case is presented in the left panel of Fig. 6 for µ = m S /5. The blue curve shows the minimum values of m S compatible with axion dark matter and the baryon asymmetry both arising from axion rotations. This minimum m S is achieved when T R is sufficiently high, as discussed below. The blue curve assumes the inverted hierarchy neutrino mass spectra as labeled, while the normal hierarchy case overproduces the baryon asymmetry in this parameter space. We will see that the requirement of the successful generation of both the baryon asymmetry and dark matter prefers a relatively small region of m S ranging from around 60 TeV to nearly 100 TeV and f a = (1-2) × 10 9 GeV.
Above the orange line in the left panel of Fig. 6, a period of matter domination followed by kination domination exists because T_RM > T_KR. In this case, the era of logarithmically enhanced baryon production ends at T_RM, when the matter domination begins. The region above the orange line also gives a potential signal in the modification of primordial gravitational waves [30-32]. This is discussed around Eq. (3.9) and illustrated by the red shaded region in the right panel of Fig. 6. To accurately obtain the final Y_B, rather than using the analytic estimate given in Eq. (3.9), we numerically integrate $\dot n_{B-L} R^3$ from T_µ to a temperature much lower than T_RM; this improves the precision of the prediction for m_S and changes the prediction by a factor as large as 2.
In the green shaded region, axions from kinetic misalignment cannot account for all of dark matter because the necessary axion yield Y_θ = 3rT_th/(4N_DW m_S) requires a T_th value that is higher than can be achieved from saxion-Higgsino scattering given in Eq. (4.2). In particular, above (below) the positively-sloped boundary of the green region, thermalization occurs below (at) T_µ; see Eq. (4.3). This thermalization condition is also the origin of the vertical green line labeled with KMM in the right panel, which shows various temperatures as functions of f_a for the benchmark point m_S = 70 TeV. On the other hand, below the negatively-sloped boundary of the green region, we have T_th = T_µ and T_th > T_RM, so that the saxion does not create entropy upon thermalization. Lastly, as can be seen in the right panel, the era that dominates the production of the baryon asymmetry begins at T_µ, and therefore the result is independent of T_R as long as T_R > T_µ. Using T_µ from Eq. (3.15) and T_S from Eq. (2.16), one finds

T_µ ≃ 6 × 10^7 GeV (f_a/10^9 GeV)^{1/2} (m_S/5µ)^{1/2}.

The calculated asymmetry will be valid for all T_R larger than this value.
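As a quick numerical cross-check, this scaling can be evaluated directly. The sketch below is illustrative only; the function name and benchmark inputs are ours rather than the paper's.

    # Illustrative evaluation of T_mu ~ 6e7 GeV (f_a/1e9 GeV)^(1/2) (m_S/5mu)^(1/2).
    def t_mu_gev(f_a_gev, m_s_gev, mu_gev):
        """Estimate T_mu [GeV] from the scaling quoted in the text."""
        return 6e7 * (f_a_gev / 1e9) ** 0.5 * (m_s_gev / (5.0 * mu_gev)) ** 0.5

    # Benchmark of Fig. 6: m_S = 70 TeV, mu = m_S/5, f_a = 1e9 GeV.
    m_s = 7e4          # GeV
    mu = m_s / 5.0     # GeV
    print(f"T_mu ~ {t_mu_gev(1e9, m_s, mu):.1e} GeV")  # -> 6.0e7 GeV

For the Fig. 6 benchmark (µ = m_S/5), the ratio m_S/5µ is unity, so T_µ sits at its reference value of 6 × 10^7 GeV, consistent with the right panel.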
In other words, by lowering T R , one can explain the baryon asymmetry and dark matter in the region to the right of the magenta line. However, there is a limit on how low T R can be:
in the low-T_R and high-m_S limits, T_R and T_RM approach each other, and when T_R = 2T_RM, the energy density of the complex field ρ_P ≡ ρ_S + ρ_θ is equal to that of the inflaton ρ_inf if A ≃ m_S, i.e., ρ_S ≃ ρ_θ, based on Eq. (2.12). The resulting upper bound on m_S is shown by the brown line; see a related discussion in Sec. 4.2.1. To the right of this brown curve, ρ_P > ρ_inf, which is inconsistent because P would drive inflation. The constraints involving T_R are obtained by calculating Y_B numerically, also including the coupled Boltzmann equations for inflationary reheating.

As the sum of the neutrino masses decreases, both the blue and brown curves move to the right, so it is possible to reproduce both the baryon asymmetry and the dark matter abundance in all of the white region to the right of the magenta curve. For small enough neutrino mass, the blue curve will reach the intersection of the purple and green regions at the right of the figure, at which point the window closes.

In summary, simultaneous production of dark matter and the baryon asymmetry is possible for m_S between 60-100 TeV, depending on the sum of the neutrino masses, and f_a should lie in the window (1-2) × 10^9 GeV. The NH case (with vanishing lowest eigenvalue) is excluded by observations of red giants.

Which neutrino spectra are allowed, however, depends on µ. We have assumed µ = m_S/5 in Fig. 6. A smaller µ would decrease the thermalization rate Γ_SH̃H̃. This would make the negatively-sloped boundary of the green region, set by Γ_SH̃H̃ = 3H at T_µ, shift downward. The positively-sloped boundary, set by T_µ = T_RM, would shift upward because T_µ ∝ µ^{−1/2} and T_RM ∝ f_a. Finally, a small µ increases T_µ and therefore the logarithmic enhancement in Y_{B−L}, which in turn requires a smaller θ̇ ∝ m_S to compensate for the increased efficiency in producing Y_{B−L}. This shifts the prediction curves to the left. Numerically, µ < m_S/10 makes the NH case with a vanishing lowest eigenvalue viable for a small range of saxion masses.

We do not analyze the baryon asymmetry in the green shaded region, where axion dark matter is underproduced by kinetic misalignment, because we find that in some of the parameter space the onset of the P rotation can be initiated by the saxion thermal mass. This is because the thermal mass is proportional to µ, which is in turn enhanced at high temperatures by S^{n−1}. Rotations initiated by the thermal mass complicate the determination of the optimal cosmological evolution for the most efficient baryon asymmetry production.
The thermal mass also leads to potential formation of Q-balls whose presence makes the baryon asymmetry calculation uncertain; see Sec. 4.5.
Saxion domination
In this subsection, we discuss a different cosmology in which both dark matter and the baryon asymmetry may still be produced by axion rotations. We do not optimize the production of the baryon asymmetry nor utilize T_R to explore the parameter space as in Sec. 4.2, but rather study the case where the saxion dominates the energy density before it is thermalized. In this case, the saxion creates entropy that dilutes the baryon asymmetry produced immediately after inflationary reheating. Consequently, the final asymmetry is dominantly produced after saxion thermalization and is therefore independent of the inflationary reheat temperature T_R. The predicted m_S is generically larger than in the optimal cases studied in Sec. 4.2, which give the most efficient baryon asymmetry production.

We require both Y_B and dark matter from axion rotations. This scenario then makes a prediction for (m_S, f_a) as a function of r, defined in Eq. (2.12) as the ratio of the axion rotation to the saxion oscillation energy densities. The reason for the unique prediction is as follows. For a given µ, the relation T_RM = rT_th from Eq. (2.13) is satisfied along a contour in the (m_S, f_a) plane because T_RM and T_th are independently determined by (m_S, f_a). In particular, T_RM is given by Eq. (2.10) with Y_θ from kinetic misalignment using Eq. (2.15); see Sec. 4.1 for discussions of T_th for different values of n. Finally, a successful production of Y_B picks out a single point along this contour once the neutrino mass spectrum has been specified. We find that, for µ = m_S, the predicted values of f_a are in tension with red giant bounds except for the normal hierarchy case with n = 1 and r ≃ 1.

We now comment on the effect of changing µ. If the value of µ is increased, this makes thermalization more efficient and would increase T_th. This breaks the relation T_RM = rT_th.
However, this relation can be restored by going to higher f a since T RM ∝ f a and T th decreases with increasing f a . The predictions for µ = 3m S will be shown and discussed.
n = 1
The thermalization temperature for n = 1 is given in Eq. (4.2). In the saxion domination case, T_RM = rT_th, with T_RM determined by requiring dark matter from kinetic misalignment, and this gives a relation between m_S and f_a for a given r. Furthermore, to accurately derive Y_B, we numerically solve the coupled Boltzmann equations of the saxion and radiation with the thermalization rate given in Eq. (4.1) and then integrate ṅ_{B−L}R³. This then makes a single-point prediction for (m_S, f_a) when given µ, r, and a neutrino spectrum. The final predictions are shown by the symbols connected by the solid lines in Fig. 7. (The diamonds connected by the dashed black curve are for n = 2 and will be discussed below.) The triangles at the top are the predictions assuming r = 1, while the circles below them show the predictions for smaller r, decreasing in steps of 0.2 until r = 0.2, below which the circles are for r = 0.03 and r = 0.01. The two colors indicate different neutrino mass spectra. The left (right) panel of Fig. 7 shows the predictions for µ = m_S (µ = 3m_S). The predictions lie in a small region with f_a = (1-3) × 10^9 GeV. The required values of m_S range from 100-360 TeV depending upon the choice of neutrino spectrum and are in an interesting range considering the observed Higgs boson mass.
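The numerical procedure described above can be sketched schematically. The following toy integration is a minimal sketch: the thermalization rate, the B−L source term, and the initial conditions are placeholders of ours rather than the paper's Eqs. (3.6) and (4.1).

    import numpy as np
    from scipy.integrate import solve_ivp

    MPL = 2.4e18      # reduced Planck mass [GeV]
    GSTAR = 228.75    # MSSM relativistic degrees of freedom

    def rhs(lnR, y, gamma_th, src_norm):
        """Toy evolution in e-folds lnR; y = (rho_S, rho_R, N_asym), where
        N_asym stands in for n_{B-L} R^3 in arbitrary units."""
        rho_s, rho_r, _ = y
        H = np.sqrt((rho_s + rho_r) / 3.0) / MPL
        T = (30.0 * rho_r / (np.pi ** 2 * GSTAR)) ** 0.25
        drho_s = -3.0 * rho_s - (gamma_th / H) * rho_s   # matter-like + transfer
        drho_r = -4.0 * rho_r + (gamma_th / H) * rho_s   # radiation + injection
        dN = src_norm * T ** 3 / H                       # placeholder source
        return [drho_s, drho_r, dN]

    T0 = 1e9  # initial temperature [GeV]; placeholder
    rho_r0 = np.pi ** 2 / 30.0 * GSTAR * T0 ** 4
    y0 = [0.1 * rho_r0, rho_r0, 0.0]
    sol = solve_ivp(rhs, (0.0, 15.0), y0, args=(1e-12, 1e-20), rtol=1e-8)
    print("mock integrated asymmetry (arb. units):", sol.y[2, -1])

In the actual analysis, the source term is Eq. (3.6) and the rate carries the field and temperature dependence of Eq. (4.1); the structure of the integration, however, is as above.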
n = 2
We continue to analyze the case where the saxion comes to dominate the energy density of the universe before being thermalized, i.e., T_RM > T_th, but now for n = 2. The results are given in Fig. 7. The diamonds show the points predicted by requiring both the baryon asymmetry and the dark matter abundance, which are in tension with the red-giant observations. We nevertheless analyze this n = 2 scenario to obtain the prediction from the dark matter abundance (black dashed curves), albeit with an underproduced baryon asymmetry. The prediction is sharp and points to m_S ≃ 10 TeV and f_a ≃ (1.2-1.3) × 10^9 GeV, as shown by the black dashed segment above the purple region. The truncation of the black dashed curves at low m_S is due to the thermalization constraint discussed below.

The thermalization analysis for n = 2 is more involved than for n = 1, since Γ_SH̃H̃ increases with (T + m_S)S^{2n−2}, and thermalization can potentially occur at temperatures higher than T_S, when the non-trivial scaling of S may matter. For the saxion to thermalize via scattering with Higgsinos, a thermal bath must be present with a temperature larger than the Higgsino mass parameter µ(T) = µ × (S(T)/N_DW f_a)². This bath can in principle originate from inflationary reheating or from the saxion scattering with the W gauge boson. In what follows, we assume the high-T_R and/or large initial S limit, so that the inflationary reheating contribution to the bath is negligible around thermalization. (For instance, for an initial S close to the Planck scale, T_R > O(10^7) GeV is sufficient for this assumption to hold.) The origin of the bath must then be the saxion-W scattering. The temperature of a bath that originates in this way increases during saxion reheating [57] because of the temperature dependence of the rate given in Eq. (4.4).
As the temperature increases, it may eventually become equal to the effective µ(T) at a temperature we call T_{th,i}, at which point thermalization via Higgsinos is initiated. Thermalization may then suddenly complete via saxion-Higgsino scattering, and the temperature increases abruptly to T_th as the saxion energy is suddenly converted to the bath. This occurs as long as Γ_SH̃H̃ > H. We assume the saxion field value does not change significantly after thermalization, S_th ≃ S_{th,i}, which is the case if the initial rotation is nearly circular (r ≃ 1). The temperature right after thermalization, T_th, can be computed by requiring: 1) conservation of energy,

ρ_S = m_S² S²_{th,i} = (π²/30) g_* T_th⁴;

2) the initial radiation created by W scattering,

ρ_S (Γ_SWW/H) = m_S² S²_{th,i} (Γ_SWW/H) = (π²/30) g_* T⁴_{th,i},

with the subscript "th, i" indicating evaluation right before the abrupt thermalization; and 3) the condition for Higgsinos to just come into thermal equilibrium, T_{th,i} = µ(T_{th,i}) = µ × (S_{th,i}/N_DW f_a)². We obtain
T_th = 7 × 10^6 GeV (N_DW/6)^{1/3} (m_S/20 TeV)^{1/2} (m_S/µ)^{1/6} (f_a/(6 × 10^8 GeV))^{1/3}.  (4.5)
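For orientation, Eq. (4.5) is straightforward to evaluate numerically; the reference inputs below are simply the normalization values appearing in the equation.

    def t_th_n2_gev(n_dw, m_s_gev, mu_gev, f_a_gev):
        """Evaluate Eq. (4.5): abrupt thermalization temperature for n = 2."""
        return (7e6 * (n_dw / 6.0) ** (1.0 / 3.0)
                    * (m_s_gev / 2e4) ** 0.5
                    * (m_s_gev / mu_gev) ** (1.0 / 6.0)
                    * (f_a_gev / 6e8) ** (1.0 / 3.0))

    # Reference point: N_DW = 6, m_S = 20 TeV, mu = m_S, f_a = 6e8 GeV.
    print(f"T_th ~ {t_th_n2_gev(6, 2e4, 2e4, 6e8):.1e} GeV")  # -> 7.0e6 GeV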
Using this expression, we find that along the black dashed line of Fig. 7, the thermalization rate can eventually overtake H. However, in this regime, we find that axion dark matter is underproduced because the low T_th gives an insufficient PQ charge yield Y_θ = 3rT_th/(4N_DW m_S).
In deriving this black dashed line, we have assumed r = 1. One may be tempted to naively extend the calculation to derive the prediction for lower values of r, because r seemingly affects only S_th/S_{th,i}. For r < 1, however, Γ_SH̃H̃ ∝ S² may first be larger than H when T reaches T_{th,i} but become smaller than H before complete thermalization is reached at T_th. The condition Γ_SH̃H̃ > H should then instead be evaluated at T_th with S_th, rather than at T_{th,i}.
However, we note that thermalization via Higgsino scattering may be further complicated by the fact that the value of S can get close to the origin during part of each cycle when the rotations are very elliptical, r ≪ 1. During this portion of the cycle, the saxion-Higgsino scattering may proceed because T > µ(S) near the origin, while T < µ(S) when P is far away from the origin. We do not pursue this possibility further.

Finally, we show diamonds along the black dashed line to indicate the prediction from lepto-axiogenesis for the different hierarchical neutrino mass spectra. In deriving these predictions, we again numerically solve the coupled Boltzmann equations for the saxion and radiation with the non-trivial scaling of the thermalization rate and then integrate ṅ_{B−L}R³ to obtain the final Y_B. As can be seen in the figure, lower values of f_a are preferred by the predictions, but such low f_a is in tension with the red giant constraints. In fact, even the degenerate limit of the neutrino spectrum leads to underproduction of the baryon asymmetry. Lowering µ may increase the predicted values of f_a, moving towards compatibility with the red-giant bound, but the black dashed line is truncated at lower f_a. As a result, we do not find viable parameter space for a sufficient baryon asymmetry after marginalizing over µ.
Interpretation of results for one-field model
In this subsection, we re-interpret the results presented in Figs. 3-7 in the context of the one-field model defined in Eq. (2.3). This model requires special treatment because, unlike the two-field model of Eq. (2.2), the curvature of the potential in the radial direction is logarithmically enhanced at large field values of S. In what follows, we define m_S^UV as the curvature at S = M_Pl. We expect this value to be comparable to other scalar masses in the UV. We will discuss how our earlier results are modified under the understanding that the x-axes of the figures now refer to m_S^UV. We denote by m_S(z) the curvature at lower energy scales; z may refer to the S field value or to a temperature indicating the corresponding field value S(T) at T, i.e., m_S(T) = m_S(S(T)).
The field dependence of m S (S) is shown in Fig. 8. Because the change of m S (z) is only logarithmic, the overall effect on the results is modest. In what follows, we will discuss this effect in detail. Our strategy will be to fix f a and find the value of m UV S that would reproduce the physics of a field-independent m S in each case.
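This matching strategy can be illustrated with a toy parametrization. Below we assume, purely for illustration, a logarithmic field dependence m_S(S) = m_S^UV [1 + b ln(S/M_Pl)]^{1/2} with an arbitrary small coefficient b; the actual field dependence is the one plotted in Fig. 8.

    import math

    M_PL = 2.4e18  # reduced Planck mass [GeV]

    def m_s_of_S(m_s_uv, S, b=0.02):
        """Assumed (illustrative) logarithmic running of the saxion curvature."""
        return m_s_uv * math.sqrt(1.0 + b * math.log(S / M_PL))

    def match_m_s_uv(m_s_target, S_ref, b=0.02):
        """Pick m_S^UV so that m_S(S_ref) reproduces a fixed-curvature value."""
        return m_s_target / math.sqrt(1.0 + b * math.log(S_ref / M_PL))

    # Example: demand m_S(S_ref) = 30 TeV at S_ref = 1e9 GeV:
    m_uv = match_m_s_uv(3e4, 1e9)
    print(f"m_S^UV ~ {m_uv:.3g} GeV")            # modestly above 30 TeV
    print(f"check: {m_s_of_S(m_uv, 1e9):.3g} GeV")  # -> 3e4

Because the running is only logarithmic, the required shift in m_S^UV is an O(1) factor, as anticipated in the text.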
We begin by discussing the effects of the evolution of m S (z) on saxion thermalization.
The green dotted line in Fig. 3, the green boundary of Fig. 5, and the positively-sloped green boundary of Fig. 6 are all determined by thermalization requirements, and they are set such that Y_θ = 3rT_th/(4N_DW m_S(T_th)) reproduces the required PQ charge yield Y_{θ,KMM} in Eq. (2.15). In Figs. 3 and 5, T_th is determined via Eq. (4.2) and scales as the low-energy value of the Higgsino mass squared, µ², which we expect to be (m_S^UV)². Thus Y_θ ∝ (m_S^UV)²/m_S(T_th). For a fixed f_a, we can find the correct value of m_S^UV such that m_S(T_th) is equal to the constant m_S of our previous analysis. Thermalization occurs when the saxion is at (close to) the minimum for low (high) m_S, as can be seen in Fig. 4. Therefore, m_S(T_th) ≃ (0.2-0.5) m_S^UV, i.e., m_S^UV ≃ (2-5) m_S(T_th). The green line/region will then shift to the left by a factor of 2-5 to compensate for this. On the other hand, for Fig. 6, the positively-sloped boundaries are in fact unaffected because the condition given in Eq. (4.3) depends only on µ ≈ m_S^UV. The negatively-sloped boundaries are set by T_th = T_RM, where T_th = T_µ. This condition translates to m_S(T_RM)/(m_S(T_S)/µ)^{1/2} being equal to the m_S derived for the fixed-curvature case; this condition exhibits an accidental numerical cancellation, so the boundaries do not move appreciably.

We now discuss how the predictions of m_S that reproduce the baryon asymmetry are affected by m_S(z). During the epoch where ∆Y_{B−L} is constant, the dependence of the total Y_{B−L} on m_S(T) is through the now slightly temperature-dependent θ̇(T) = N_DW m_S(T) for T_i > T > T_f. For Figs. 3 and 5 (n = 1), T_i is T_R and T_f is often T_S (see Fig. 4). For Fig. 6 (n = 2), T_i is T_µ, which is not much above T_S, and T_f is often T_S. Since m_S is lower at these lower temperatures compared to m_S^UV, the effect is to reduce the efficiency of Y_{B−L} production. To compensate for this, m_S^UV needs to increase by a factor of a few.
This results in a shift of the prediction curves to the right. As a result of this shift and the left shift of the green dotted line in Fig. 3, the hierarchical cases shown by blue and red curves are more easily compatible with the red giant bound and the green constraint for dark matter. In other words, a viable parameter space for dark matter would open up with a milder hierarchy between µ and m S than µ = 3m S assumed in Fig. 5. The shift would be more prominent in Fig. 6 than Fig. 3 because the former case involves m S (T ) only at temperatures close to T S .
The brown regions/curves in Figs. 5 and 6 are also affected by a changing m S . As explained in Secs. 4.2.1 and 4.2.2, these constraints occur because a successful production of Y B would require T RM > T R /2, which would result in a period of inflation driven by the saxion. First, as explained above, an evolving m S (T ) decreases the efficiency of production relative to the constant case, so to reproduce the baryon asymmetry, a larger m UV S is required. Second, because m S (T RM ) is smaller than m UV S , the saxion will take longer to come to dominate, and so T RM is smaller in the case where the saxion mass evolves. This means the constraint is relaxed, which also shifts the brown regions/curves to higher m UV S . Lastly, in Fig. 7, the dominant (logarithmically enhanced) era of asymmetry production is present between T th and T RM = rT th . For r = O(1), T RM ≃ T th ≫ T S , so m S (T ) during this era is O(0.5)m UV S , and the predicted points will shift to the right by a factor of 2 or so. The predicted points for r ≪ 1 are excluded by red giants whether or not we account for the effect of m S (z).
On balance, for the one-field model, larger values of m S are preferred than in the two-field case, often by a factor of few.
We now discuss the potential domain wall problem in the one-field model. After the initiation of the rotation but before thermalization, the rotation is generically not circular. For non-circular motion, fluctuations of the PQ-breaking field can be produced by parametric resonance [16,33-37,58]. The PQ symmetry may be non-thermally restored by the fluctuations and broken again once the fluctuations are reduced by the expansion of the universe. If this actually occurs, a domain wall-string network is produced, which is stable if N_DW > 1 and causes a domain wall problem. Unlike the case without angular momentum [59,60], it is not clear if the restoration actually occurs, since the non-zero angular momentum provides an effective potential that strongly disfavors the origin of field space. We leave the investigation of these dynamics via numerical lattice computation to future work, and only note that the one-field model may require N_DW = 1, or explicit PQ breaking that can destroy the domain walls for N_DW > 1 [61].
Q-balls
If the potential of the S field is nearly quadratic, a small correction may make the potential flatter than quadratic, in which case a non-topological soliton called a Q-ball may form [62-66]. Q-ball formation can complicate the thermal history. If formed, Q-balls will localize the PQ charge inside them. It is unclear what the spatial distribution of θ̇ will be as the universe evolves in the presence of these Q-balls, and this uncertainty would confuse the evaluation of the baryon asymmetry.
Most discussions of Q-balls have taken place in the context of potentials with minima near the origin of field space. It is possible that the symmetry-breaking potential of P allows the Q-balls to decay or even prevents their initial formation. While understanding the dynamics of Q-balls associated with a symmetry-breaking potential such as the one needed for the axion is of interest, we leave it for future work. For now, we assume that the properties of the Q-balls in the present setup are identical to the more familiar ones associated with potentials that have minima at the origin. We then comment on the cases in which Q-ball formation might confuse the calculation of the baryon asymmetry, while keeping in mind that future investigations might mitigate these concerns; histories that include Q-ball formation may ultimately prove viable.
For n = 1, the thermal potential, given in the second terms of Eqs. (4.6) and (4.8) below, is flatter than a quadratic one for both µ(S) > T and µ(S) < T , so once Q-balls are formed, they would remain stable until T ≪ m S when the quantum correction to the soft mass of S from interactions with the Higgs fields dominates over the thermal potential. In this case, the estimation of the baryon asymmetry would potentially be rendered invalid, because the Q-balls would be present during the epoch that is important for the generation of the asymmetry. We may avoid the era of a flat potential by coupling P to additional fields, W = y ψ P ψψ. Because we will require a large y ψ , the ψ fields receive a large mass from the large P field value and are not present in the thermal bath. Assuming that it is gauge-singlet, ψ also does not introduce a coupling of P to gauge bosons. So, the effect of ψ is to introduce a modification of the zero-temperature potential. Assuming that the soft mass squared of ψψ is positive, quantum corrections to the soft mass of P induced by this coupling steepens the zero-temperature potential and can destabilize Q-balls. So, with an O(1) coupling y ψ , for µ(S) > T the non-quadratic part of the potential of S is
V ⊃ κ m_S² S² ln(S/µ) + (α₂²/2) T⁴ ln(S/T),  (4.6)
where κ ∼ 1/(16π²). Q-ball solutions exist if V/S² is minimized at non-zero S. The above potential has a minimum at S² ∼ α₂² T⁴/(κ m_S²). Requiring self-consistency with the condition µ(S) > T, we obtain
T > (1/g₂²) (κ/(1/16π²))^{1/2} (N_DW m_S/µ) f_a   (Q-balls: µ(S) > T).  (4.7)
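The steps behind Eq. (4.7) can be made explicit; the following is our reconstruction of the algebra, using µ(S) = µS/(N_DW f_a) for n = 1 and α₂ = g₂²/4π:

    \frac{d}{dS}\frac{V}{S^2} = 0 \;\Rightarrow\; \frac{\kappa m_S^2}{S} \sim \frac{\alpha_2^2 T^4}{S^3} \;\Rightarrow\; S^2 \sim \frac{\alpha_2^2 T^4}{\kappa m_S^2},

    \mu(S) > T \;\Rightarrow\; \frac{\alpha_2 T^2}{\sqrt{\kappa}\, m_S} > \frac{T N_{\rm DW} f_a}{\mu} \;\Rightarrow\; T > \frac{\sqrt{\kappa}}{\alpha_2}\,\frac{N_{\rm DW} m_S}{\mu}\, f_a = \frac{1}{g_2^2}\left(\frac{\kappa}{1/16\pi^2}\right)^{1/2}\frac{N_{\rm DW} m_S}{\mu}\, f_a .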
The Q-ball solution may also exist in the regime µ(S) < T , for which the non-quadratic part of the potential of S is
V ⊃ κ m_S² S² ln(S/µ) − c_T y⁴ S⁴,  (4.8)
where y = µ/(f_a N_DW) is the coupling between P and H_u H_d, and c_T ∼ 1/(16π²). Note that the thermal trilinear term −y³S³T is absent, since the Higgs field obtains a large thermal mass ∼ gT and the IR singularity is removed. The minimum of V/S² is at S² ∼ κ m_S²/(c_T y⁴). For consistency, this should satisfy yS < T, so we obtain
T > (κ/c_T)^{1/2} (N_DW m_S/µ) f_a   (Q-balls: µ(S) < T).  (4.9)
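The analogous reconstruction for Eq. (4.9), again ours, minimizes V/S² from Eq. (4.8) (up to the logarithm) and imposes the consistency condition yS < T:

    \frac{d}{dS}\frac{V}{S^2} = 0 \;\Rightarrow\; S^2 \sim \frac{\kappa m_S^2}{c_T\, y^4},
    \qquad
    T > yS \sim \left(\frac{\kappa}{c_T}\right)^{1/2}\frac{m_S}{y} = \left(\frac{\kappa}{c_T}\right)^{1/2}\frac{N_{\rm DW}\, m_S}{\mu}\, f_a .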
Comparing Eq. (4.7) with Eq. (4.9), the latter gives a slightly stronger condition, so Q-balls disappear when Eq. (4.9) is violated. Unless T R ≫ f a , the production of B − L asymmetry dominantly occurs after Q-balls disappear, so the estimation of B − L asymmetry is not affected by the production of Q-balls. Given current bounds on f a and constraints on T R from BBN, we do not expect T R ≫ f a .
For n = 2, the potential of S is flatter than a quadratic one only for µ(S) > T . Therefore, even if Q-balls are formed, once the field value of S inside the Q-balls is such that µ(S) < T , Q-balls should disappear. However, this can occur only at a temperature below T µ , since the field value of S inside the Q-balls is larger than the average field value. With Q-balls at temperatures below T µ , the estimation of B − L asymmetry may be affected. We may avoid this by a coupling W = P ψψ as in n = 1. So, for n = 2, when the condition in Eq. (4.7) is violated, Q-balls disappear.
So, for both n = 1 and n = 2, even if Q-balls form at an early stage of the evolution of the axion rotation, they can disappear by the era when the B − L asymmetry is produced by lepto-axiogenesis, provided there exists a coupling to the extra fields ψψ̄. We stress again that these extra couplings may not be necessary, because the symmetry-breaking potential of S may lead to additional effects that destabilize the Q-balls.
We note that the Q-ball formation may lead to production of domain walls. Indeed, Q-ball formation is a result of the growth of fluctuations. As in the parametric resonance during the oscillation of the PQ symmetry breaking field [59,60], the growth of fluctuations may non-thermally restore the PQ symmetry and produce domain walls. Since N DW > 1 for the DFSZ model, domain walls are stable and will come to dominate the universe. However, we expect that the symmetry restoration would not occur in the two-field model since the PQ symmetry-breaking fields are fixed on the moduli space where the PQ symmetry is broken.
In the one-field model, on the other hand, the symmetry restoration might occur. Whether or not the domain wall production actually occurs should be investigated by numerical computation; it is possible that the non-zero angular momentum in field space tends to expel the field from the center and prevent the symmetry restoration.
In summary, it remains to be seen whether or not Q-balls, if formed, are ultimately problematic, and whether they disturb the calculation of the baryon asymmetry. However, coupling the PQ-field to other fields induces quantum corrections to the saxion potential that steepen it and can avert Q-ball production.
DISCUSSION
In this work we have explored the possibility that the observed baryon asymmetry arises from the interplay of early-universe dynamics of the axion and the origin of neutrino masses.
Under this assumption, we could obtain information on the mass of the saxion, the radial mode of the complex field that contains the axion. In models of gravity mediation, the mass of the saxion would be comparable to the masses of the MSSM particles. So, one can interpret the results as predictions for the masses of the superpartners. We have investigated the DFSZ model in detail including the successful thermalization of the saxion.
For a hierarchical neutrino mass spectrum, the scalar mass may be as low as O(10) TeV.
The observed Higgs boson mass in this case may be explained by moderately large tan β. For a scalar mass of O(10) TeV, the gaugino masses given by anomaly mediation [67,68] are below O(100) GeV, so singlet SUSY-breaking fields must be present to give phenomenologically viable gaugino masses. This generically leads to the Polonyi problem [69], which can be avoided by a large coupling between the SUSY-breaking fields and the inflaton [70-72] or a coupling between the SUSY-breaking fields and a pseudo-flat direction [73].
Successful thermalization of the rotation typically requires µ different from m S by an O(1) factor. For µ > m S , electroweak symmetry breaking requires the soft masses of the Higgs fields to be also larger than m S by an O(1) factor.
If reheat temperatures are somewhat lower than the maximum value considered here, or if the saxion comes to dominate the energy density of the universe at some point in its history, then the scalar mass is required to be larger. Interestingly, after requiring the kinetic misalignment mechanism to explain the observed dark matter abundance, we find the scalar mass is at most 300 TeV. (One can check that the predicted scalar mass is still small enough that the tachyonic instability creating a helical magnetic field is ineffective, so the associated overproduction of the baryon asymmetry recently noted in Ref. [74] is avoided.) A scalar mass of 300 TeV is compatible with the scenario without singlet supersymmetry-breaking fields [68], also known as mini-split SUSY, pure gravity mediation, spread SUSY, etc. In this scenario, the infamous Polonyi problem and the BBN gravitino problem are absent, the SUSY flavor/CP problem is mitigated, and the observed Higgs boson mass can be explained with tan β of order unity [75-84]. The dominant contribution to the gaugino mass is given by anomaly mediation [67,68], and the gauginos may be searched for at the LHC.
As for the axions, we find a preferred region that simultaneously predicts the dark matter and the baryon asymmetry with f a ∼ 10 9 GeV, just above the current bound from observations of red giants. This presents a target for experimental searches including the Broadband Reflector Experiment for Axion Detection (BREAD) [85], the Axion Resonant InterAction
Detection Experiment (ARIADNE) [86,87], or other future detectors [88].
Questions regarding the dynamics of the rotating axion field remain. As discussed in Sec. 4.5, Q-balls can form when the saxion potential is flatter than a quadratic one. The spatial distribution of the angular velocity of the axion field after Q-balls form, but prior to their decay, is of importance to accurately estimate the efficiency of axiogenesis scenarios. In Sec. 4.5, we introduced new couplings of the PQ-field to hasten the disappearance of the Qballs, rendering them irrelevant. However, even in the absence of these additional couplings,
we expect Q-balls to eventually decay, since the zero-temperature potential does not admit isolated Q-ball solutions. Precisely when this decay occurs requires additional investigation, perhaps with the help of a lattice computation. Because the requirement of a large initial field value constrains the potential of the saxion to be nearly quadratic in axiogenesis, the condition for an epoch of Q-ball formation should be satisfied rather generically. This makes the fate of axiogenesis in the presence of Q-balls a particularly interesting question.
Appendix A: Chemical potentials and the C_i coefficients

The equilibrium values of the chemical potentials are obtained by requiring the time derivative of each number asymmetry in the Boltzmann equations to vanish. The solution to this system of equations depends on the magnitudes of coupling constants. However, because the up-Yukawa coupling is small, it can be set to zero to a good approximation [14]. And while the goal is to find the (quasi-)equilibrium values for the case where the chiral symmetry is completely broken, this procedure, wherein we take the parameter which breaks the chiral symmetry the least (the up-Yukawa) to vanish, reproduces the leading contribution to the asymmetry. Then, with this prescription for the up-Yukawa coupling in place, taking the time derivatives to be zero is equivalent to applying the principle of detailed balance to each scattering process.
The equilibrium conditions for the remaining Yukawa interactions are
µ_{ℓ_i} + µ_{ē_i} + µ_{H_d} + µ_λ = 0,  (A.1)
µ_{Q_2} + µ_{ū_2} + µ_{H_u} + µ_λ = 0,  (A.2)
µ_{Q_3} + µ_{ū_3} + µ_{H_u} + µ_λ = 0,  (A.3)
µ_{Q_i} + µ_{d̄_j} + µ_{H_d} + µ_λ = 0.  (A.4)
We have chosen to express the equilibrium conditions in terms of the fermionic part of each chiral supermultiplet, and µ_λ is the chemical potential of the gauginos. Since the doublet quarks and squarks couple to all gauginos, as long as the gauge interactions are in thermal equilibrium, all gauginos have the same chemical potential. The scalar and fermionic chemical potentials, owing to in-equilibrium interactions with gauginos, are related by
µ_λ + µ_ψ − µ_ϕ = 0,  (A.5)
where ϕ and ψ represent the scalar and fermion parts of a chiral supermultiplet, respectively. While the charged-lepton and up-quark Yukawa interactions may be taken to be flavor diagonal, in general there will be off-diagonal components for the down quarks; see Eq. (A.4). The equilibrium conditions for the weak and strong sphaleron processes are

Σ_{k=1}^{N_g} (3µ_{Q_k} + µ_{ℓ_k}) + µ_{H_u} + µ_{H_d} + 4µ_λ + c_W µ_θ = 0,  (A.6)

Σ_{k=1}^{N_g} (2µ_{Q_k} + µ_{ū_k} + µ_{d̄_k}) + 6µ_λ + c_g µ_θ = 0,  (A.7)
where c_W and c_g are the weak and strong anomaly coefficients of the PQ symmetry. These anomaly coefficients are set to zero in the DFSZ case, but not in the KSVZ case. Because ρ_θ is given as −θ̇ n_θ, µ_θ must be −θ̇. Other interactions to consider are chiral-symmetry violation by the gaugino mass and either the standard MSSM µ-term (for KSVZ) or the interaction in Eq. (3.14) (for DFSZ), which give
µ_λ = 0,  (A.8)
µ_{H_u} + µ_{H_d} + (n/N_DW) µ_θ = 0.  (A.9)
Setting n = 0 in Eq. (A.9) corresponds to taking the standard MSSM µ-term for KSVZ, while taking n ≠ 0 corresponds to the DFSZ case.
In addition to the above detailed-balance relations, we must also impose the conservation laws to determine the asymmetry. With all interactions in thermal equilibrium, the only conservation laws are those of weak hypercharge, Y = 0, and of B/3 − L_i for each generation i. B/3 − L_i is violated by the superpotential in Eq. (3.1), but this interaction is never close to equilibrium, and it is therefore a small perturbation that may be neglected for the computation of the chemical potentials. For µ_i, m_i ≪ T, the net fermion and boson number densities are given by n_ψ − n_{ψ†} = (g/6) T² µ_ψ and n_ϕ − n_{ϕ̄} = (g/3) T² µ_ϕ, so the hypercharge and B/3 − L_i conservation conditions can be expressed in terms of chemical potentials as

Σ_{k=1}^{N_g} (µ_{Q_k} − µ_{ℓ_k} − 2µ_{ū_k} + µ_{d̄_k} + µ_{ē_k}) + µ_{H_u} − µ_{H_d} = 0,  (A.10)

Σ_{k=1}^{N_g} (2µ_{Q_k} − µ_{ū_k} − µ_{d̄_k}) − 6µ_{ℓ_i} + 3µ_{ē_i} − 2µ_λ = 0.  (A.11)
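The extraction of the C_i amounts to linear algebra on these conditions. The following sketch solves the DFSZ system in the fully equilibrated case (all Yukawas except the first-generation up, µ_λ = 0, c_W = c_g = 0), in units where (n/N_DW)µ_θ = 1. The mapping of the printed combinations onto C_i proceeds through Eq. (3.5), which is not reproduced here; variable names are ours.

    import sympy as sp

    Q = sp.symbols('muQ1:4')   # doublet quarks, generations 1..3
    u = sp.symbols('muu1:4')   # right-handed up-type quarks
    d = sp.symbols('mud1:4')   # right-handed down-type quarks
    l = sp.symbols('mul1:4')   # lepton doublets
    e = sp.symbols('mue1:4')   # right-handed charged leptons
    Hu, Hd = sp.symbols('muHu muHd')

    eqs = []
    # Charged-lepton Yukawas, Eq. (A.1), with mu_lambda = 0:
    eqs += [l[i] + e[i] + Hd for i in range(3)]
    # Up-type Yukawas for generations 2 and 3, Eqs. (A.2)-(A.3):
    eqs += [Q[i] + u[i] + Hu for i in (1, 2)]
    # Down-type Yukawas including off-diagonal entries, Eq. (A.4):
    eqs += [Q[i] + d[j] + Hd for i in range(3) for j in range(3)]
    # Weak and strong sphalerons, Eqs. (A.6)-(A.7), DFSZ (c_W = c_g = 0):
    eqs += [sum(3*Q[k] + l[k] for k in range(3)) + Hu + Hd]
    eqs += [sum(2*Q[k] + u[k] + d[k] for k in range(3))]
    # mu-term with the theta-dot source, Eq. (A.9), units (n/N_DW)mu_theta = 1:
    eqs += [Hu + Hd + 1]
    # Hypercharge neutrality, Eq. (A.10):
    eqs += [sum(Q[k] - l[k] - 2*u[k] + d[k] + e[k] for k in range(3)) + Hu - Hd]
    # B/3 - L_i conservation, Eq. (A.11), with mu_lambda = 0:
    eqs += [sum(2*Q[k] - u[k] - d[k] for k in range(3)) - 6*l[i] + 3*e[i]
            for i in range(3)]

    sol = sp.solve(eqs, list(Q) + list(u) + list(d) + list(l) + list(e) + [Hu, Hd],
                   dict=True)
    assert sol, "the detailed-balance system should be consistent"
    for i in range(3):
        print(f"mu_l{i+1} + mu_Hu =", sol[0][l[i]] + sol[0][Hu])

Repeating the exercise with the electron- or down-Yukawa conditions removed, and the corresponding conservation laws of this appendix added, generates the other columns of Tables II and III.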
Out of equilibrium Yukawa interactions
The conservation law that is broken at the lowest temperature is ē₁ number conservation. This is finally broken when the scattering rate involving the electron Yukawa coupling, of order α₂ y_e² T, overtakes the Hubble expansion rate. When ē₁ number is conserved, Eq. (A.1) for i = 1 should be replaced by
2(2µ_{Q_1} − µ_{ū_1} − µ_{d̄_1}) − (2µ_{Q_2} − µ_{ū_2} − µ_{d̄_2}) − (2µ_{Q_3} − µ_{ū_3} − µ_{d̄_3}) = 0.  (A.14)
Other Yukawa interactions could be out of equilibrium, but this would occur at high enough temperatures that the B − L production by lepto-axiogenesis is subdominant.
The temperatures at which these different conservation laws are broken are shown in Fig. 9, with the Yukawa couplings evolved using the RGEs [90] of the MSSM. In Fig. 9, it is assumed that the Hubble parameter is that of a radiation-dominated universe with g_* = g_MSSM = 228.75. A universe with fewer relativistic degrees of freedom would break the symmetries at a higher temperature, while a universe not dominated by radiation would break them at a lower temperature.
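For a rough sense of scale, one can equate the quoted rate α₂ y_e² T for electron-Yukawa-mediated scattering with the Hubble rate of a radiation-dominated MSSM plasma; the numerical inputs below (coupling values, tan β = 1 Yukawa) are order-of-magnitude choices of ours.

    import math

    M_PL = 1.22e19    # Planck mass [GeV]
    G_STAR = 228.75   # MSSM relativistic degrees of freedom
    ALPHA_2 = 1.0 / 30.0
    Y_E = 2.9e-6      # electron Yukawa at tan(beta) = 1; grows with tan(beta)

    def hubble(T):
        """H = 1.66 sqrt(g*) T^2 / M_Pl during radiation domination."""
        return 1.66 * math.sqrt(G_STAR) * T ** 2 / M_PL

    # Gamma = alpha_2 y_e^2 T equals H(T) at:
    T_eq = ALPHA_2 * Y_E ** 2 * M_PL / (1.66 * math.sqrt(G_STAR))
    assert math.isclose(hubble(T_eq), ALPHA_2 * Y_E ** 2 * T_eq, rel_tol=1e-9)
    print(f"electron-Yukawa equilibration: T ~ {T_eq:.1e} GeV")  # ~ 1e5 GeV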
Whether or not the high temperatures where these new conservation laws apply are compatible with constraints from supersymmetric relics, see Appendix C, depends on the details of the spectrum.
Out of equilibrium gaugino masses and µ-term
At sufficiently high temperatures, scattering due to the gaugino mass or the µ-term may be ineffective.
First, we discuss the gaugino mass. The rate of chiral-symmetry violation by the gaugino mass is Γ ∼ m_λ²/T. Equating this rate with the Hubble expansion rate during a radiation-dominated era, we find this interaction goes out of equilibrium for temperatures above, parametrically,

T ∼ (m_λ² M_Pl/(1.66 √g_*))^{1/3} ≈ 8 × 10^7 GeV (m_λ/TeV)^{2/3},  (A.15)

where the numerical value uses g_* = 228.75. Above these temperatures, the chemical potential associated with the gauginos, µ_λ, will no longer vanish, and it will enter the equations that result from the Yukawa interactions, Eqs. (A.1)-(A.4). To find the new conservation law, we must identify the relevant R-symmetry. It should be non-anomalous with respect to SU(3) and SU(2). One set of R-charge assignments for this symmetry for the MSSM superfields is given in Table I. Because this symmetry is an R-symmetry, the charge of the fermions is Q_ψ = Q_R − 1. Gauginos have Q_λ = 1.
TABLE I. Charges of the MSSM superfields under the R-symmetry (Q_R) and the W W' symmetry defined in the text.

        Q     U^c_1   U^c_{2,3}   D^c   E^c     L      H_u   H_d
Q_R     1     0       0           0     4/3     -1/3   1     1
W W'    -1    6       0           0     -10/3   7/3    1     1
Contributions to the would-be anomalies are 2N_c from the gauginos and Q_R × N from the chiral superfields, where N counts the multiplicity. This allows us to verify that the would-be R-SU(2)-SU(2) anomaly cancels between the winos and the leptons (4 − (4/3)N_g = 0). The would-be R-SU(3)-SU(3) anomaly cancels between the gluinos and the right-handed quarks (6 + 2N_g(−1) = 0), where we have combined the contributions from the up-type and down-type quarks. In terms of chemical potentials, the conservation condition for this R-symmetry is
12µ_λ + Σ_{k=1}^{N_g} [ −3(µ_{ū_k} + µ_{d̄_k}) + (1/3)µ_{ē_k} − (8/3)µ_{ℓ_k} + 12(µ_{Q_k} + µ_λ) + (8/3)(µ_{ē_k} + µ_λ) − (4/3)(µ_{ℓ_k} + µ_λ) ] + 4(µ_{H_u} + µ_{H_d} + 2µ_λ) = 0.  (A.16)
If the axino-gluino-gluon or axino-Higgs-Higgsino coupling were in thermal equilibrium, the axino would contribute µ λ or µ Hu + µ H d + µ λ to this expression for the conserved charge.
This affects the values of C i by at most a few percent, so we ignore these possibilities.
In the case of the KSVZ axion, the µ-term is ineffective at temperatures higher than that of Eq. (A.15) with m_λ replaced by µ. We do not expect that there is such a temperature regime in the DFSZ case, because the µ-term itself increases with temperature, keeping it in equilibrium. If such a temperature regime does exist, it results in yet another conserved symmetry, which can be taken to be a linear combination of the Weinberg-Wilczek Peccei-Quinn (WW) symmetry, wherein Q_{H_u} = Q_{H_d} = 1 and Q_Q = Q_L = −1, and two additional symmetries: B + L, and the symmetry Q_{u1} under which only the right-handed up quark is charged (Q(U^c_1) = 1). The resulting charges for a non-anomalous symmetry are given by

W W' = W W + (5/3)(B + L) + 6 Q_{u1} − (5/3)(B − L),
where we have added a multiple of the non-anomalous B − L symmetry to give the more convenient charge assignments shown in Table I. In terms of chemical potentials, the W W ′ conservation condition is
18(3µ_{ū_1} + 2µ_λ) + 2(3µ_{H_u} + 3µ_{H_d} + 4µ_λ) + Σ_{k=1}^{N_g} [ −6(3µ_{Q_k} + 2µ_λ) − (10/3)(3µ_{ē_k} + 2µ_λ) + (14/3)(3µ_{ℓ_k} + 2µ_λ) ] = 0.  (A.17)

4. Results for C_i

Solving the relevant system of equations for the µ_i allows determination of the C_i; see Eqs. (3.5) and (3.6).
DFSZ:
In Table II, we show C_i for the various cases in the DFSZ model. When all Yukawa interactions and the gaugino mass term are in equilibrium, C_1 = C_2 = C_3 = 0.0459 n/N_DW, independent of the PMNS mixing angles. When the electron Yukawa is out of equilibrium, the C_i coefficients are slightly different from each other and depend on the PMNS mixing angles. But using the PMNS mixing angles θ₁₂ = 34°, θ₂₃ = 48°, and θ₁₃ = 8.5°, the C_i coefficients are still all 0.046 n/N_DW to two significant digits. Whether the down-Yukawa interaction is out of equilibrium has a more significant impact; in this case the coefficients become C_1 = 0.0229 n/N_DW, C_2 = 0.0203 n/N_DW, and C_3 = 0.0182 n/N_DW, and the resulting asymmetry can be affected by more than a factor of 2. Whether or not the off-diagonal Yukawa interactions with the down quark are in equilibrium has no effect on C_i.
Whether the gaugino mass term is in equilibrium has a small effect. For the cases when the down Yukawa is in equilibrium, the difference is roughly 3%, pushing C_i to 0.0446 n/N_DW. The effect on the cases where the down Yukawa is out of equilibrium is similarly small.

TABLE III. C_i coefficients in the KSVZ model when different reactions are in equilibrium. The first group of rows corresponds to the case when scattering via the gaugino mass and the µ-term are in equilibrium. The second group corresponds to the case where the µ-term is in equilibrium but the gaugino mass is not. The third group corresponds to the case where both the gaugino mass and the µ-term are out of equilibrium. The first column corresponds to the low-temperature case when all Yukawa interactions are in equilibrium. The second gives results when only the interactions via the electron Yukawa are out of equilibrium, and the third also has down-Yukawa interactions out of equilibrium. In the standard normalization of the axion-gluon coupling, c_g = 1.

m_λ and µ efficient:
       All Yukawas efficient      y_e inefficient            y_e and y_d inefficient
C_1    0.0037c_g + 0.0069c_W      0.0016c_g + 0.0082c_W      -0.0063c_g + 0.0083c_W
C_2    0.0037c_g + 0.0069c_W      0.0033c_g + 0.0072c_W      -0.0055c_g + 0.0074c_W
C_3    0.0037c_g + 0.0069c_W      0.0047c_g + 0.0064c_W      -0.0050c_g + 0.0066c_W

m_λ inefficient:
C_1    0.0037c_g + 0.0089c_W      0.0016c_g + 0.0098c_W      -0.0063c_g + 0.0083c_W
C_2    0.0037c_g + 0.0089c_W      0.0033c_g + 0.0091c_W      -0.0055c_g + 0.0074c_W
C_3    0.0037c_g + 0.0089c_W      0.0047c_g + 0.0085c_W      -0.0050c_g + 0.0067c_W

m_λ and µ inefficient:
C_1    -0.0107c_g + 0.0063c_W     -0.0126c_g + 0.0072c_W     -0.0127c_g + 0.0071c_W
C_2    -0.0107c_g + 0.0063c_W     -0.0111c_g + 0.0064c_W     -0.0112c_g + 0.0064c_W
C_3    -0.0107c_g + 0.0063c_W     -0.0098c_g + 0.0059c_W     -0.0101c_g + 0.0058c_W
KSVZ: In Table III, we show C_i for the various cases in the KSVZ model. Whether or not the off-diagonal Yukawa interactions with the down quark are in equilibrium has no effect when scattering via the µ-term is efficient, and an effect only at the level of several percent when the µ-term is inefficient.
Appendix B: Scaling of the B − L asymmetry production

TABLE IV. Scaling of quantities relevant for the estimation of the B − L asymmetry. Positive (negative) exponents of R in the final two columns indicate IR (UV)-dominated production. The case that scales as R⁰ has equal contributions per Hubble time and so receives a logarithmic enhancement; see text for details. We note that ρ_matter represents either ρ_inf or ρ_S, depending on which one dominates and creates entropy.

Epoch                  Regime     H         T         Γ_L       ρ_matter   θ̇        ∆(n_{B−L}/s)   ∆(n_{B−L}/ρ_matter)
MD_inf^NA              T > T_S    R^{-3/2}  R^{-3/8}  R^{-9/8}  R^{-3}     R^0       -              R^{21/8}
                       T < T_S    R^{-3/2}  R^{-3/8}  R^{-9/8}  R^{-3}     R^{-3}    -              R^{-3/8}
RD                     T > T_S    R^{-2}    R^{-1}    R^{-3}    -          R^0       R^0            -
                       T < T_S    R^{-2}    R^{-1}    R^{-3}    -          R^{-3}    R^{-3}         -
MD_osc^NA (Γ_SWW)      T > T_S    R^{-3/2}  R^{3/2}   R^{9/2}   R^{-3}     R^0       -              R^{12}
MD_osc^NA (Γ_SH̃H̃)     T > T_S    R^{-3/2}  R^{-1/2}  R^{-3/2}  R^{-3}     R^0       -              R^{2}
MD_rot^A               T > T_S    R^{-3/2}  R^{-1}    R^{-3}    -          R^0       R^{-1/2}       -
KD                     T < T_S    R^{-3}    R^{-1}    R^{-3}    -          R^{-3}    R^{-2}         -
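The qualitative content of the R⁰ entries can be checked with a toy integration: when each Hubble time contributes equally, the accumulated yield grows only logarithmically with the range of scale factor, whereas tilted scalings are dominated by one end. The normalizations here are arbitrary.

    import numpy as np

    def integrated_yield(p, lnR_max=10.0, n=2000):
        """Integrate dY/dlnR proportional to exp(p lnR) over 0 < lnR < lnR_max."""
        lnR = np.linspace(0.0, lnR_max, n)
        return np.trapz(np.exp(p * lnR), lnR)

    for p, label in [(0.0, "R^0  (log-enhanced)"),
                     (-1.0, "UV-dominated"),
                     (1.0, "IR-dominated")]:
        print(f"{label}: total = {integrated_yield(p):.3g}")
    # p = 0 grows linearly in lnR_max, i.e., logarithmically in the
    # temperature ratio; p != 0 is dominated by one endpoint.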
The discussion so far assumed that the saxion completes thermalization before dominating the total energy density. If instead the saxion comes to dominate and subsequently creates a large amount of entropy from its thermalization, any previously produced baryon asymmetry can be sufficiently diluted that the production after saxion thermalization dominates. As discussed in Sec. 4.3, during the non-adiabatic era before the end of thermalization, the relevant thermalization processes are saxion-Higgsino and saxion-W scatterings for n = 1 and n = 2, respectively. The production of n_{B−L} per Hubble time is listed in Table IV for these two cases with the label MD_osc^NA, and one can see that production is IR-dominated in both cases. (We do not show the scaling for T < T_S here; it is never realized in our parameter space.) This verifies that the contribution produced subsequent to the thermalization of the saxion dominates over that produced during thermalization.
The production after thermalization is again logarithmically enhanced during a radiation-dominated era, now between T_th and max(T_RM, T_S), with T_RM given by Eq. (2.13). The results for the saxion domination scenario are presented in Sec. 4.3.
Appendix C: Constraints from supersymmetric relics
In this supersymmetric framework, there are a number of potentially long-lived relics.
These relics may provide constraints on the theory. The constraints depend upon the identity of both the LSP, and if long-lived, the next-to-lightest supersymmetric particle (NLSP). The predictions of Big Bang Nucleosynthesis (BBN) must not be disturbed, and, if stable, the LSP density may not exceed the dark matter density.
Non-gravitino/axino LSP: We first consider the case of the LSP being a superpartner of a Standard Model particle. The constraint on the mass spectrum and/or the reheat temperature from BBN is discussed in [23]. If the gravitino mass m_{3/2} ∼ TeV, late gravitino decays will disturb BBN unless the reheat temperature T_R ≲ 10^6 GeV. The bound can be relaxed if the LSP is a slepton, but a charged LSP is strongly constrained by searches for heavy hydrogen [91], and a sneutrino LSP is excluded by direct detection experiments.

Because this value of T_R is close to the typical T_RM (or even smaller), any logarithmic enhancement, see Eq. (3.9), is necessarily absent, and the prediction for m_S is somewhat modified (increased). If m_{3/2} ∼ 10 TeV, the upper bound is T_R ≲ 10^8 GeV.

It is conceivable that m_{3/2} is quite large, with mass ≳ 100 TeV, in which case the gravitino decays might be early enough to avoid conflicts with BBN and larger reheat temperatures might be allowed. However, in this case, the scalar mass must also be O(100) TeV; otherwise, for m_{3/2} ≫ m_S, the A-term in Eq. (2.5) becomes much larger than m_S, and P is trapped at a minimum with large S. Additionally, in order for the thermal freeze-out abundance of the LSP (say, a wino or Higgsino) not to be too large, a hierarchy of the type m_LSP ≪ m_S ∼ m_{3/2} is required. Moreover, even if gravitino decays during BBN are avoided and the LSP thermal abundance is not too large, there is still a danger of non-thermal overproduction of the LSP from gravitino decays. For a gravitino mass of 100 TeV and an LSP mass of a TeV, this constrains the reheat temperature to T_R < 2 × 10^9 GeV [23].

The above upper bounds on the reheat temperature could be relaxed if R-parity is violated. In this case, we may assume a slepton LSP, thereby weakening the BBN constraints from gravitino decays, without conflicting with heavy isotope searches or direct detection. If there are no sparticles between the gravitino and the slepton(s), the upper bound becomes T_R < 10^9 (10^11) GeV for m_{3/2} ∼ 1 (10) TeV. For m_{3/2} > 100 TeV, the LSP overproduction bound disappears and T_R may be much above 10^9 GeV.
Gravitino LSP: Avoiding overproduction of a gravitino LSP from thermal processes requires a reheat temperature T_R < 2 × 10^9 GeV × (TeV/m_{3/2}). In this case a logarithmic enhancement as in Eq. (3.9) can remain. However, avoiding disruption of BBN via decay of the (visible-sector) NLSP puts strong constraints on the parameter space if T_R is above the sparticle masses. The strongest constraints arise [23,24] when the NLSP has a large branching ratio to hadrons; the constraints are minimized for a sneutrino NLSP. This can be realized by taking the soft mass of the 5̄ to be smaller than that of the 10. Then, the dominant constraint comes from a three-body decay involving a weak boson, whose branching ratio is O(10^{-2}). For a gravitino mass of a TeV, the sneutrino NLSP lifetime is around 10^5 s, and from the constraint on the decay into weak gauge bosons derived in [24], we obtain

m_ν̃ Y_ν̃ × Br(three-body) < 10^{-14} ⇒ m_ν̃ Y_ν̃ < 10^{-12}.  (C.1)
The freeze-out abundance of a TeV-scale sneutrino would violate this bound. Evading the bound requires m_ν̃ > 10 TeV, so that the lifetime of the sneutrino is shorter than 100 s. We may also avoid the bound by taking m_ν̃ − m_{3/2} < m_Z. In this case, the decay mode relevant for the BBN constraints becomes a four-body decay with a branching ratio ∼ 10^{-4}, and the constraint is marginally satisfied.
If the cutoff scale is below the Planck scale, m_{3/2} ≪ m_NLSP and the NLSP lifetime may be shorter. For example, with the cutoff scale around the string scale ∼ 10^17 GeV, m_{3/2} ∼ 100 GeV with m_NLSP ∼ a few TeV is possible. The lifetime is then shorter than 100 s, and m_NLSP Y_NLSP < 10^{-10} and 10^{-7} is required for an NLSP with a leading hadronic decay mode and for the sneutrino NLSP, respectively. This is satisfied for m_NLSP = O(1) TeV.
The bound on the mass spectrum may be avoided if R-parity is violated, since the NLSP can decay much before BBN. The gravitino may still be long-lived enough to be dark matter.
In this case, R-parity violating couplings could provide an additional source of asymmetry, see [14], but this is small if the couplings are not large.
Axino LSP: The axino should not be the LSP unless R-parity violation is introduced.
To see why this is so, recall that the saxion is thermalized. Unless this thermalization occurs below the masses of the sparticles, the axino is also thermalized. Unless the axino mass m_ã is below O(100) eV, axino dark matter is then overproduced. Moreover, even a subdominant component of hot dark matter is constrained, so a stronger bound m_ã ≲ O(10) eV applies [92].
While in some of the parameter space saxion thermalization does occur at T_th ≲ m_S, and it may be possible to avoid thermalization of the axino, the axino is nevertheless potentially produced in dangerous amounts via freeze-in at higher temperatures. Indeed, unless m_ã ≪ TeV, which is difficult in gravity mediation, the axino is still overproduced. The two-field model gives m_ã ≃ m_{3/2} because a non-zero vacuum expectation value for X is induced by a supergravity tadpole. In the one-field model, although m_ã vanishes at tree level, it is still generated by one-loop quantum corrections. The dominant contribution comes from the Yukawa coupling yPψψ̄ and the associated A-term, where this interaction is also responsible for the generation of the logarithmic potential.
The axino LSP can be viable if R-parity violation allows the axino to decay. For example, for m_ã above the electroweak scale, the axino can decay before BBN via the LH_u operator without giving a too-large neutrino mass. The contribution to the baryon asymmetry from axiogenesis via R-parity violation [14] is subdominant compared to the lepto-axiogenesis contribution. Such a large m_ã is readily obtained in the two-field model. In the one-field model, generating a loop-induced m_ã exceeding the electroweak scale places bounds on the supersymmetry-breaking scale. In gravity mediation with a singlet supersymmetry-breaking field, A ∼ m_S, so m_ã above the electroweak scale requires m_S > 10 TeV. In gravity mediation without singlets, A ∼ 0.01 m_S, so m_S > 10^6 GeV would be required.
The upper bound on T_R from BBN is not relaxed in comparison with the other cases. Although the gravitino can have the dominant decay G̃ → ã a if it is the NLSP, the axino decays into SM particles in any case, so the BBN constraint still applies.
FIG. 1. An example evolution of energy densities as a function of time for radiation (red), oscillations in the radial direction (orange), and rotations of the PQ-breaking field (blue). Relevant temperatures are labeled in gray, and the corresponding cosmological eras are labeled in black.
FIG. 2. The baryon minus lepton asymmetry produced per Hubble time, ∆Y_{B−L}, as a function of time on log-log scales during radiation-dominated and matter-dominated eras. Relevant temperatures are labeled in gray, and the corresponding cosmological eras are labeled in black.
Here m_{ν_i} is the i-th neutrino mass eigenvalue. The coefficients C_i(T) (which are in general a function of PMNS mixing) are determined by calculating the relevant chemical potentials. Their values depend on which interactions are in equilibrium at a given temperature as well as on the choice of axion model. Results for C_i(T), generally of order 10^{−2}-10^{−3}, and the details of their computation are given in Appendix A. The yield of the B − L asymmetry produced per Hubble time may then be estimated as ∆Y_{B−L} ≃ ṅ_{B−L}/(sH), evaluated with the time average of θ̇. During radiation domination, H ∝ T², so for T > T_S, where ⟨θ̇⟩ ≃ N_DW m_S is a constant [16], the temperature dependence of ∆Y_{B−L} in Eq. (3.7) is especially simple: it is independent of the temperature, except for a small implicit dependence through the determination of C_i. On the other hand, ∆Y_{B−L} decreases with temperature once T < T_S, because then θ̇ ∝ T³. The scaling of ∆Y_{B−L} during different epochs is summarized in Table IV in Appendix B and is illustrated in the left panel of Fig. 2. An era of constant ∆Y_{B−L} indicates a logarithmic enhancement in the integrated production of Y_{B−L}. For the case of a long radiation-dominated era, we derive the expression for the final asymmetry Y_{B−L} by integrating ṅ_{B−L}/s over time from T_i to T_f using Eq. (3.6).
FIG. 3. Minimum m_S for n = 1, domain wall number N_DW = 3, and µ = m_S. The baryon asymmetry can be correctly reproduced on and to the right of the (blue/red) lines, with the associated cosmology described in (II) and (III). Different colors distinguish the assumed neutrino mass spectra. In the left panel with T_R = 2 × 10^9 GeV, solid curves are valid for all tan β ≥ 35, while dot-dashed curves correspond to tan β = 5. In the right panel, the solid, dashed, and dotted line styles indicate reheat temperatures T_R = 2 × 10^9 GeV, 2 × 10^8 GeV, and 10^7 GeV. The effects of tan β and T_R are described in (III). Above the green dotted line, as discussed in (I), kinetic misalignment underproduces axion dark matter. The possibility of generating sufficient dark matter using a larger µ is discussed in (IV), with results shown in Fig. 5. The purple region is excluded by observations of red giants [53,54].

The generation of dark matter is discussed in (I) below. Then, independent of the origin of dark matter, we focus on the determination of the lowest scale of supersymmetry breaking consistent with the successful generation of the baryon asymmetry. In (II) we outline how to find this minimum scale. In (III) we present results, including the dependence on tan β and the reheat temperature T_R. Finally, drawing on the knowledge from (III), we present and discuss the parameter space for achieving both the baryon asymmetry and axion dark matter in (IV), including the effects of the reheat temperature.

(I) Axion dark matter? Below the green dotted line in Fig. 3, the dark matter abundance would be successfully explained by the kinetic misalignment mechanism. Above the green dotted line, axion dark matter is necessarily underproduced. This is because even the maximum possible charge yield, achieved when the saxion dominates, Y_θ = 3rT_th/(4N_DW m_S) with T_th given in Eq. (4.2), is too low to provide axion dark matter. Low values of f_a in the purple shaded region of Fig. 3 are excluded by red giant brightness observations that bound axion-electron couplings [53,54]. The incompatibility of these regions shows that generation of all of the dark matter is not possible with the parameters shown. Here we have assumed µ = m_S. Higher values of µ relative to m_S shift the green dotted line upward, eventually allowing compatibility with the bound. We discuss this possibility further in (IV).
(II) Finding the minimum m_S: Even in cases where it is impossible to reproduce the full DM abundance, it is nonetheless of interest to understand what sets the minimum superpartner scale m_S consistent with the production of the baryon asymmetry. Since the size of the baryon asymmetry is proportional to θ̇ and hence m_S, this minimum m_S scale can be found by maximizing the efficiency of the baryon asymmetry production.
FIG. 4. Cosmologically relevant temperatures as functions of f_a for fixed m_S = 30 TeV with T_R = 2 × 10^9 GeV (left panel) and T_R = 10^7 GeV (right panel). The red shaded region between the reheat temperature T_R and max(T_S, T_RM) indicates the range of temperatures where the dominant baryon asymmetry is produced. During this epoch, the baryon asymmetry produced per Hubble time is constant, so the total Y_{B−L} receives a logarithmic enhancement; see Eq. (3.9). The subscript "KMM" refers to values that ensure axion dark matter is produced by the kinetic misalignment mechanism; see Eqs. (2.16) and (2.17) for example. This can be satisfied only to the left of the vertical green line, corresponding to the region below the green dotted line in Fig. 3.
(III) Results on minimum m_S: The blue and red curves in Fig. 3 show the minimum values of m_S for which the baryon asymmetry may be achieved, with different colors corresponding to the choice of the neutrino mass spectrum. The solid curves in both panels of Fig. 3 are identical and assume T_R = 2 × 10^9 GeV. Although the dot-dashed curves in the left panel also assume T_R = 2 × 10^9 GeV, they assume a different value of tan β, whose effect is discussed below. For high T_R such as this, Y_B primarily depends on m_S, and the dependence on T_R is logarithmic because of its role in setting T_i in Eq. (3.9). For these curves in the left panel of Fig. 3, the dependence on f_a is also only logarithmic and enters via its impact on T_f = T_S. This explains the nearly vertical segments of the curves starting at low values of f_a. Indeed, starting at the bottom of these curves, the baryon asymmetry is generated during a radiation-dominated era with a logarithmic enhancement. Moving to larger f_a, it eventually becomes impossible to reproduce the dark matter abundance above the green dotted line, as explained in (I). Above this point, the most efficient generation of the asymmetry may be found by ensuring that the saxion does not come to dominate the energy density (and thus generate entropy), as described in (II). Consequently, a kink in the curve develops here because the PQ charge must be such that T_RM coincides with the thermalization temperature T_th, so as to avoid this dilution. For the curve segments below the green dotted line and above the orange line, the PQ yield needs to be chosen such that the rotation does not dominate the energy density either; see the discussion in (II).

Effects of tan β: The dot-dashed curves in the left panel assume a lower value of tan β = 5 than the solid curves. The value of tan β can impact the baryon asymmetry via its effect on the down- and electron-Yukawa couplings. When interactions involving the down- or electron-Yukawa coupling are out of equilibrium, this may change the chemical potentials and hence C_i(T) in the baryon asymmetry of Eq. (3.7). In constructing Fig. 3, we have used the relevant C_i for each temperature range; see Fig. 9 for the temperatures at which the Yukawa interactions come into equilibrium. We find that the values of C_i depend most sensitively on whether the down-Yukawa interaction is in equilibrium, and they are relatively insensitive to whether the electron-Yukawa interaction is. In the left panel of Fig. 3, dot-dashed lines assume tan β = 5, while solid lines assume down-Yukawa interactions are in equilibrium. The solid lines apply for all tan β ≥ 35 because the interactions come into equilibrium at temperatures higher than the reheat temperature T_R = 2 × 10^9 GeV assumed in this panel. The dot-dashed lines with lower tan β shift to higher m_S compared to the solid lines with higher tan β because out-of-equilibrium down-Yukawa couplings reduce the coefficients C_i, and a larger θ̇ is needed to compensate. All lines in the right panel of Fig. 3 assume that down-Yukawa interactions are in equilibrium, which is valid for tan β > 10 (2) in the case of the dashed (dotted) lines with T_R = 2 × 10^8 GeV (10^7 GeV).

Impact of reheat temperature: In Fig. 3, the solid and dot-dashed curves in the left panel and the solid lines in the right panel assume T_R = 2 × 10^9 GeV, whereas the dashed (dotted) lines in the right panel are for T_R = 2 × 10^8 GeV (10^7 GeV). The predictions are affected because the logarithmic enhancement of Eq. (3.9), if present, starts at T_i = T_R.
(IV) Results on dark matter: We now focus on the region where axion dark matter can be accounted for by kinetic misalignment, i.e., below the dotted green line in Fig. 3. As can be seen in that figure and explained in (I), if µ = m_S, this possibility is in tension with bounds from observations of red giants. However, if this strict relation between µ and m_S is modified, we find that it is possible to produce dark matter in this way. For larger µ, the saxion thermalization rate in Eq. (4.1) is enhanced and therefore the maximum yield Y_θ = 3rT_th/(4N_DW m_S) increases, so the green dotted line in Fig. 3 shifts upward. For µ = 3m_S, the green line is above the purple boundary for m_S ≳ 30 TeV, and axion dark matter from kinetic misalignment becomes viable. This benchmark case is shown in Fig. 5. Given the narrow range in m_S of interest there, we improve the precision of the prediction by going beyond the analytic evaluation of Y_B that relies on estimating the production of asymmetry per Hubble time, ∆Y_B. We instead numerically solve the coupled Boltzmann equations of the inflaton and radiation, adding an energy component from the axion rotation on top of this background evolution, and numerically integrate ṅ_{B−L}R³ using Eq. (3.6) to obtain the baryon asymmetry. We find the predictions for m_S are modified (increased) by up to a factor of two at fixed T_R using this more sophisticated treatment.

In the left (right) panel of Fig. 5, an inverted (normal) neutrino mass hierarchy is assumed, and the predictions are shown by the blue (red) contours. We include contours of T_R to show how the reheat temperature affects the prediction. The brown region is excluded because the required energy density in the complex field P, comprised of contributions from the saxion and the axion rotation, exceeds that of the inflaton.
FIG. 6. Analysis of the n = 2, N_DW = 6 case. Left: The baryon asymmetry and the dark matter abundance are correctly reproduced along the blue dashed line for the case of an inverted-hierarchy neutrino spectrum, for sufficiently high reheat temperatures. Lower reheat temperatures lead to higher m_S, up to the brown dashed line where the P field starts to drive inflation. In contrast, the normal-hierarchy case leads to overproduction of the baryon asymmetry. The purple region is excluded by observations of red giants, while the green region underproduces dark matter. Right: Temperatures in this combined dark matter/baryon asymmetry scenario for fixed m_S = 70 TeV. The temperature T_{S,KMM} (magenta dashed) indicates where the saxion reaches its minimum. T_{µ,KMM} indicates the temperature below which the Higgsinos are in thermal equilibrium. T_RM (yellow dashed) indicates the temperature where the rotational energy would come to dominate. For low f_a, T_{S,KMM} is reached first, and no era of rotational-energy domination occurs. The green shaded region corresponds to the green shaded region in the left panel, where the KMM is unable to fully reproduce the dark matter density.
FIG. 7. Predictions for m_S and f_a from the baryon asymmetry via lepto-axiogenesis and axion dark matter from kinetic misalignment in the scenario where the saxion dominates. The left (right) panel is for µ = m_S (µ = 3m_S). The symbols connected by the solid (dashed) lines are for n = 1 (n = 2). The triangle and diamond symbols assume the saxion has the same energy as the axion rotation, i.e., r = 1 as defined in Eq. (2.12). The circles below the triangles denote lower values of r, in steps of 0.2 from r = 0.8 down to r = 0.2, after which they denote r = 0.03 and 0.01. The colors refer to the chosen neutrino mass spectrum as labeled. The colored lines connect the predictions of both kinetic misalignment and lepto-axiogenesis for n = 1 with various values of r, whereas the dashed black line is the prediction of kinetic misalignment alone for n = 2 assuming r = 1 (predictions for n = 2, r < 1 are not included due to complications involving thermalization; see text). In the regions above the orange curves, axion dark matter from kinetic misalignment predicts eras with matter and kination domination, which may leave imprints in primordial gravitational waves [30-32].
The prediction is sharp and points to m_S ≃ 10 TeV and f_a ≃ (1.2-1.3) × 10^9 GeV, as shown by the black dashed segment above the purple region. The truncation of the black dashed curves at low m_S is due to the thermalization constraint discussed below. The thermalization analysis for n = 2 is more involved than for n = 1, since Γ_{SH̃H̃} increases as (T + m_S) S^{2n-2}, and thermalization can potentially occur at temperatures higher than T_S, when the non-trivial scaling of S may matter. For the saxion to thermalize via scattering with Higgsinos, a thermal bath must be present with a temperature larger than the Higgsino mass parameter µ(T) = µ × (S(T)/N_DW f_a)^2. This bath can in principle originate from inflationary reheating or from the saxion scattering with the W gauge boson.
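Spelled out, the scalings just quoted read (this restates the text, it adds no new input): for n = 2,

$$
\Gamma_{S\tilde{H}\tilde{H}} \;\propto\; (T + m_S)\, S^{2n-2}\Big|_{n=2} \;=\; (T + m_S)\, S^{2},
\qquad
\mu(T) \;=\; \mu \left( \frac{S(T)}{N_{\rm DW} f_a} \right)^{2},
$$

so the bath from reheating or saxion-W scattering must satisfy T > µ(T) before Higgsino scattering can proceed.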
Finally, we show diamonds along the black dashed line to indicate the prediction from lepto-axiogenesis for the different hierarchical neutrino mass spectra. In deriving these predictions, we again numerically solve the coupled Boltzmann equations for the saxion and radiation with a non-trivial thermalization rate scaling and then integrate ṅ_{B-L} R^3 to obtain the final Y_B. As can be seen in the figure, lower values of f_a are preferred by the predictions.
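Since the text repeatedly relies on numerically solving the coupled saxion/radiation Boltzmann equations, a stripped-down sketch may help fix ideas. The constant rate GAMMA below is a placeholder; the actual computation uses the field-dependent thermalization rate of Eq. (4.1):

    import numpy as np
    from scipy.integrate import solve_ivp

    M_PL = 2.4e18   # reduced Planck mass [GeV]
    GAMMA = 1e-14   # constant placeholder thermalization rate [GeV]

    def rhs(lna, y):
        rho_S, rho_R = y
        H = np.sqrt((rho_S + rho_R) / 3.0) / M_PL    # Friedmann equation
        return [-3.0 * rho_S - (GAMMA / H) * rho_S,  # saxion: matter-like + transfer
                -4.0 * rho_R + (GAMMA / H) * rho_S]  # radiation: redshift + source

    # integrate over 30 e-folds starting from saxion domination (rho in GeV^4)
    sol = solve_ivp(rhs, [0.0, 30.0], [1e40, 1e36], method="LSODA", rtol=1e-6)
    rho_S_end, rho_R_end = sol.y[:, -1]
    print(f"rho_S/rho_R after 30 e-folds: {rho_S_end / rho_R_end:.3e}")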
FIG. 8. Curvature of the PQ-field potential normalized to the UV value in the one-field model as a function of the ratio of the field value S to the axion decay constant f_a.
We now turn to the one-field model defined in Eq. (2.3). This model requires special treatment because, unlike the two-field model of Eq. (2.2), the curvature of the potential in the radial direction is logarithmically enhanced at large field values of S.
The green lines of Figs. 3 and 5, and the positively-sloped green boundary of Fig. 6, are all determined by thermalization requirements, and they are set such that Y_θ = 3 r T_th / [4 N_DW m_S(T_th)] reproduces the required PQ charge yield Y_θ,KMM in Eq. (2.15). In Figs. 3 and 5, T_th is determined via Eq. (4.2) and scales as the low-energy value of the Higgsino mass squared, µ^2, which we expect to be (m_S^UV)^2. Thus, Y_θ ∝ (m_S^UV)^2 / m_S(T_th). For a fixed f_a, we can find the value of m_S^UV for which m_S(T_th) is equal to the constant m_S of our previous analysis. Thermalization occurs when the saxion is at (close to) the minimum for low (high) m_S, as can be seen in Fig. 4. Therefore, m_S(T_th) is close to m_S^UV for low m_S, while m_S(T_th) = (2-5) m_S^UV for high m_S. The green line/region will then shift to the left by a factor of 2-5 to compensate for this. On the other hand, for Fig. 6, the positively-sloped boundaries are in fact unaffected because the condition given in Eq. (4.3) depends only on µ ≈ m_S^UV. The negatively-sloped boundaries are set by T_th = T_RM, where T_th = T_µ. This condition translates to m_S(T_RM)/(m_S(T_S)/µ)^{1/2} being equal to the m_S derived for the fixed-curvature case; this condition has an accidental numerical cancellation, so the boundaries do not move appreciably. We now discuss how the predictions of m_S that reproduce the baryon asymmetry are affected by m_S(z). During the epoch where ∆Y_{B-L} is constant, the dependence of the total Y_{B-L} on m_S(T) is through a now slightly temperature-dependent θ̇(T) = N_DW m_S(T).
Note that four of the nine equations in Eq. (A.4) are redundant. Among the linearly dependent equilibrium conditions, it is convenient to use those which violate conservation laws with the largest rate. It is that rate which sets the temperature at which the conservation law is broken and the corresponding equilibrium condition is satisfied. We choose the (i, j) …
… result in a solution to the chemical potentials for leptons that depends on the generation. The conservation law that persists to the next lowest temperature is ū_1 − d̄_1 number, which is broken by the down-Yukawa interaction with a rate α_3 |Y_d,11|^2 T. When ū_1 − d̄_1 number is conserved, the (i, j) = (1, 1) component of Eq. (A.4) should be replaced by

µ_ū1 − µ_d̄1 = 0.    (A.13)

The last symmetry we consider is 3B_1 − B, which is broken by off-diagonal down-type Yukawa interactions of the first generation with the second and third generations. Because of the large charm and top Yukawa couplings, we take a quark basis where the down-type Yukawa matrix is Y_d = V_CKM diag(y_d, y_s, y_b) and the up-type Yukawa is diagonal. In this basis, the dominant contributions to 3B_1 − B breaking are from the interactions of Q_1 with d̄_2 and d̄_3, so the rate of symmetry breaking is α_3 (|Y_d,12|^2 + |Y_d,13|^2) T. When 3B_1 − B is conserved, the (i, j) = (1, 2) component of Eq. (A.4) should be replaced by …
These temperatures are shown in Fig. 9. They are functions of tan β because of the dependence of the MSSM Yukawa matrices on tan β. Ignoring threshold corrections from integrating out superpartners, Y_u = Y_u^SM / sin β, Y_d = Y_d^SM / cos β, and Y_e = Y_e^SM / cos β. We take the gauge and SM Yukawa couplings defined at a scale of 10 TeV from Ref. [89] and then run them using the 1-loop renormalization group equations.

FIG. 9. Temperatures at which conservation laws are broken by Yukawa interactions coming into equilibrium. These temperatures are functions of tan β, the ratio of Higgs field vacuum expectation values. The ē_1 number is broken by the electron-Yukawa interaction, ū_1 − d̄_1 by the down-quark-Yukawa interaction, and 3B_1 − B by off-diagonal down-type-quark-Yukawa interactions.
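The bookkeeping described above (one "sum of chemical potentials = 0" row per in-equilibrium reaction, with each out-of-equilibrium reaction swapped for a conservation-law row) reduces to a linear solve. A minimal sketch on a hypothetical three-potential toy system, not the nine-equation MSSM system of Eq. (A.4):

    import numpy as np

    species = ["mu_Q", "mu_u", "mu_H"]

    rows, rhs = [], []
    rows.append([1, -1, 1]); rhs.append(0.0)  # Yukawa in equilibrium:
                                              #   mu_Q - mu_u + mu_H = 0
    rows.append([3,  0, 0]); rhs.append(0.0)  # sphaleron-like condition: 3 mu_Q = 0
    rows.append([1,  1, 2]); rhs.append(1.0)  # conservation-law row, fixed by an
                                              # assumed initial charge asymmetry

    mu = np.linalg.solve(np.array(rows, float), np.array(rhs))
    print(dict(zip(species, mu)))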
T_λ ≃ (m_λ^2 M_Pl)^{1/3} = 10^8 GeV (m_λ / 1 TeV)^{2/3}
… and weak and strong sphalerons; see Eqs. (A.1)-(A.4), (A.6), and (A.7). Another chiral symmetry, the R-symmetry, is present, and we must impose an additional conservation law in our system of equations.
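As a quick numerical check of the T_λ estimate above (assuming the reduced Planck mass M_Pl ≈ 2.4 × 10^18 GeV, which is what the quoted prefactor implies):

$$
T_\lambda \simeq \left(m_\lambda^2 M_{\rm Pl}\right)^{1/3}
= \left[(10^3~{\rm GeV})^2 \times 2.4\times 10^{18}~{\rm GeV}\right]^{1/3}
\approx 1.3\times 10^{8}~{\rm GeV} \quad (m_\lambda = 1~{\rm TeV}),
$$

with the quoted scaling T_λ ∝ m_λ^{2/3} following directly from the cube root.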
Along the black dashed line, the PQ charge yield Y_θ matches the value required by the observed dark matter abundance via kinetic misalignment, i.e., Eq. (2.15). The black dashed line is truncated at low m_S because Γ_{SH̃H̃} < H when T = T_th,i = µ(T_th,i). That is to say, even though a bath has been created via saxion-W scattering that allows Higgsinos to come into equilibrium, the interaction rate between Higgsinos and the saxion is still too small to complete thermalization at this time. In this case, only a small fraction, Γ_{SH̃H̃}/H, of the saxion energy density is transferred into the bath. And since Γ_{SH̃H̃} decreases faster than H while S is still away from the minimum at N_DW f_a, thermalization is only possible after S settles to the minimum, so that Γ_{SH̃H̃} ∝ (T + m_S).
TABLE I. Charge assignments for the superfields for the additional symmetries present at high temperatures. The R-symmetry Q_R is present for large temperatures when the gaugino mass is ineffective. The W W′ symmetry (not an R-symmetry) is present when the µ-term is ineffective. The symmetries are chosen to be non-anomalous with respect to SU(2) and SU(3); see text.
We show C_i for the various cases in the KSVZ model. The case when the off-diagonal Yukawa interactions with the down quark are out of equilibrium is not shown because it has a small impact on the result. Whether or not these interactions …

DFSZ                 All Yukawas Efficient   y_e Inefficient   y_e and y_d Inefficient
m_λ Efficient
  C_1                0.0459 n/N_DW           0.0455 n/N_DW     0.0229 n/N_DW
  C_2                0.0459 n/N_DW           0.0458 n/N_DW     0.0203 n/N_DW
  C_3                0.0459 n/N_DW           0.0461 n/N_DW     0.0182 n/N_DW
m_λ Inefficient
  C_1                0.0446 n/N_DW           0.0439 n/N_DW     0.0213 n/N_DW
  C_2                0.0446 n/N_DW           0.0444 n/N_DW     0.0189 n/N_DW
  C_3                0.0446 n/N_DW           0.0449 n/N_DW     0.0170 n/N_DW

TABLE II. C_i coefficients in the DFSZ model when different reactions are in equilibrium. The first group of rows corresponds to the case when scattering through the gaugino mass is in equilibrium, and the second group corresponds to the case where it is not. The first column of numbers corresponds to the low-temperature case when all Yukawa interactions are in equilibrium. The second corresponds to the case when only interactions via the electron-Yukawa are out of equilibrium, and the third also has down-Yukawa interactions out of equilibrium.
Given the superpartner scales considered here, this might require a hierarchy between gaugino/Higgsino masses and scalar masses. Note that a thermally produced wino LSP would require a cored DM profile to avoid indirect detection bounds; see, e.g., [55, 56].
Appendix A: Computation of chemical potentials

In this appendix, we calculate the chemical potentials for Eq. (3.5), which in turn allow us to compute the C_i(T) of Eq. (3.6) that are necessary to evaluate the baryon asymmetry. To calculate the chemical potentials, we apply the principle of detailed balance to scattering processes in equilibrium [47]. This sets the sum of the chemical potentials participating in a given reaction to zero. If a certain scattering process is out of equilibrium, we replace the equilibrium condition with a corresponding conservation law. Solving the resulting system of equations allows for the determination of the chemical potentials. We will discuss the equilibrium condition for each scattering process and the corresponding conservation laws. In the present case, the scattering processes include Yukawa interactions, electroweak and strong sphaleron processes, gaugino masses, and the µ-term.

All interactions in equilibrium

At low temperatures, all Yukawa couplings, sphaleron processes, and mass terms are in thermal equilibrium. Because of the explicit PQ breaking by the QCD anomaly, the rotation is slowly washed out, and it would vanish in true thermal equilibrium. However, the washout rate is much smaller than the Hubble expansion rate, and true thermal equilibrium is never reached [10]. Instead, we should consider a quasi-equilibrium state where θ̇ is taken to be constant, with its value determined by the potential of the saxion. The quasi-equilibrium can be found by taking the time derivatives of the MSSM particle …

For example, if T_S > T_R, the table shows that production peaks at T_S during inflationary reheating, labeled as the inflaton non-adiabatic, matter-dominated era MD^inf_NA. This is because ∆n_{B-L}/ρ_inf is IR-dominated (UV-dominated) before (after) T_S during MD^inf_NA, while ∆n_{B-L}/s stays UV-dominated in all subsequent eras with T < T_S. This result is illustrated in the right panel of Fig. 2. On the other hand, if T_S < T_R, the baryon asymmetry is produced in equal amounts in each Hubble time, ∆n_{B-L}/s ∝ R^0, during a radiation-dominated era labeled RD until T_f = max(T_S, T_RM). We first discuss the case without saxion domination, i.e., with early thermalization. At this T_f, the production subsequently becomes UV-dominated because, if T < T_S during radiation domination, ∆n_{B-L}/s ∝ R^{-3}, or if T < T_RM (but T > T_S), there is a matter-dominated era driven by the rotation energy density, MD^rot_A, and ∆n_{B-L}/s ∝ R^{-1/2} in this era. This (adiabatic) matter-dominated era MD^rot_A does not result in any entropy production, as the energy density ultimately becomes subdominant to radiation due to the era where it scales as kination. This is the case where continuous production leads to the logarithmic enhancement discussed around Eq. (3.9). This case is illustrated in the left panel of Fig. 2. In the above discussion, we assumed that the saxion energy density was depleted by …
[1] R. D. Peccei and H. R. Quinn, "CP Conservation in the Presence of Instantons," Phys. Rev. Lett. 38, 1440-1443 (1977).
[2] R. D. Peccei and H. R. Quinn, "Constraints Imposed by CP Conservation in the Presence of Instantons," Phys. Rev. D 16, 1791-1797 (1977).
[3] S. Weinberg, "A New Light Boson?" Phys. Rev. Lett. 40, 223-226 (1978).
[4] F. Wilczek, "Problem of Strong P and T Invariance in the Presence of Instantons," Phys. Rev. Lett. 40, 279-282 (1978).
[5] J. Preskill, M. B. Wise, and F. Wilczek, "Cosmology of the Invisible Axion," Phys. Lett. B 120, 127-132 (1983).
[6] L. F. Abbott and P. Sikivie, "A Cosmological Bound on the Invisible Axion," Phys. Lett. B 120, 133-136 (1983).
[7] M. Dine and W. Fischler, "The Not So Harmless Axion," Phys. Lett. B 120, 137-141 (1983).
[8] R. T. Co, L. J. Hall, and K. Harigaya, "Axion Kinetic Misalignment Mechanism," Phys. Rev. Lett. 124, 251802 (2020), arXiv:1910.14152 [hep-ph].
[9] C. Eröncel, R. Sato, G. Servant, and P. Sørensen, "ALP dark matter from kinetic fragmentation: opening up the parameter window," JCAP 10, 053 (2022), arXiv:2206.14259 [hep-ph].
[10] R. T. Co and K. Harigaya, "Axiogenesis," Phys. Rev. Lett. 124, 111602 (2020), arXiv:1910.02080 [hep-ph].
[11] R. T. Co, L. J. Hall, and K. Harigaya, "Predictions for Axion Couplings from ALP Cogenesis," JHEP 01, 172 (2021), arXiv:2006.04809 [hep-ph].
[12] K. Harigaya and I. R. Wang, "Axiogenesis from SU(2)_R phase transition," JHEP 10, 022 (2021), [Erratum: JHEP 12, 193 (2021)], arXiv:2107.09679 [hep-ph].
[13] S. Chakraborty, T. H. Jung, and T. Okui, "Composite neutrinos and the QCD axion: Baryogenesis, dark matter, small Dirac neutrino masses, and vanishing neutron electric dipole moment," Phys. Rev. D 105, 015024 (2022), arXiv:2108.04293 [hep-ph].
[14] R. T. Co, K. Harigaya, Z. Johnson, and A. Pierce, "R-parity violation axiogenesis," JHEP 11, 210 (2021), arXiv:2110.05487 [hep-ph].
[15] R. T. Co, T. Gherghetta, and K. Harigaya, "Axiogenesis with a heavy QCD axion," JHEP 10, 121 (2022), arXiv:2206.00678 [hep-ph].
[16] R. T. Co, N. Fernandez, A. Ghalsasi, L. J. Hall, and K. Harigaya, "Lepto-Axiogenesis," JHEP 03, 017 (2021), arXiv:2006.05687 [hep-ph].
[17] J. Kawamura and S. Raby, "Lepto-axiogenesis in minimal SUSY KSVZ model," JHEP 04, 116 (2022), arXiv:2109.08605 [hep-ph].
[18] A. R. Zhitnitsky, "On Possible Suppression of the Axion Hadron Interactions. (In Russian)," Sov. J. Nucl. Phys. 31, 260 (1980).
[19] M. Dine, W. Fischler, and M. Srednicki, "A Simple Solution to the Strong CP Problem with a Harmless Axion," Phys. Lett. B 104, 199-202 (1981).
[20] J. E. Kim, "Weak Interaction Singlet and Strong CP Invariance," Phys. Rev. Lett. 43, 103 (1979).
[21] M. A. Shifman, A. I. Vainshtein, and V. I. Zakharov, "Can Confinement Ensure Natural CP Invariance of Strong Interactions?" Nucl. Phys. B 166, 493-506 (1980).
[22] S. Weinberg, "Baryon and Lepton Nonconserving Processes," Phys. Rev. Lett. 43, 1566-1570 (1979).
[23] M. Kawasaki, K. Kohri, T. Moroi, and A. Yotsuyanagi, "Big-Bang Nucleosynthesis and Gravitino," Phys. Rev. D 78, 065011 (2008), arXiv:0804.3745 [hep-ph].
[24] M. Kawasaki, K. Kohri, T. Moroi, and Y. Takaesu, "Revisiting Big-Bang Nucleosynthesis Constraints on Long-Lived Decaying Particles," Phys. Rev. D 97, 023502 (2018), arXiv:1709.01211 [hep-ph].
[25] P. Moxhay and K. Yamamoto, "Peccei-Quinn Symmetry Breaking by Radiative Corrections in Supergravity," Phys. Lett. B 151, 363-366 (1985).
[26] M. Dine, L. Randall, and S. D. Thomas, "Baryogenesis from flat directions of the supersymmetric standard model," Nucl. Phys. B 458, 291-326 (1996), arXiv:hep-ph/9507453.
[27] K. Harigaya, M. Ibe, M. Kawasaki, and T. T. Yanagida, "Dynamics of Peccei-Quinn Breaking Field after Inflation and Axion Isocurvature Perturbations," JCAP 1511, 003 (2015), arXiv:1507.00119 [hep-ph].
[28] E. W. Kolb and M. S. Turner, The Early Universe, Vol. 69 (1990).
[29] V. Domcke, K. Harigaya, and K. Mukaida, "Charge transfer between rotating complex scalar fields," JHEP 08, 234 (2022), arXiv:2205.00942 [hep-ph].
[30] R. T. Co, D. Dunsky, N. Fernandez, A. Ghalsasi, L. J. Hall, K. Harigaya, and J. Shelton, "Gravitational wave and CMB probes of axion kination," JHEP 09, 116 (2022), arXiv:2108.09299 [hep-ph].
[31] Y. Gouttenoire, G. Servant, and P. Simakachorn, "Revealing the Primordial Irreducible Inflationary Gravitational-Wave Background with a Spinning Peccei-Quinn Axion," (2021), arXiv:2108.10328 [hep-ph].
[32] Y. Gouttenoire, G. Servant, and P. Simakachorn, "Kination cosmology from scalar fields and gravitational-wave signatures," (2021), arXiv:2111.01150 [hep-ph].
[33] A. Dolgov and D. Kirilova, "On Particle Creation by a Time Dependent Scalar Field," Sov. J. Nucl. Phys. 51, 172-177 (1990).
[34] J. H. Traschen and R. H. Brandenberger, "Particle Production During Out-of-equilibrium Phase Transitions," Phys. Rev. D 42, 2491-2504 (1990).
[35] L. Kofman, A. D. Linde, and A. A. Starobinsky, "Reheating after inflation," Phys. Rev. Lett. 73, 3195-3198 (1994), arXiv:hep-th/9405187.
[36] Y. Shtanov, J. H. Traschen, and R. H. Brandenberger, "Universe reheating after inflation," Phys. Rev. D 51, 5438-5455 (1995), arXiv:hep-ph/9407247.
[37] L. Kofman, A. D. Linde, and A. A. Starobinsky, "Towards the theory of reheating after inflation," Phys. Rev. D 56, 3258-3295 (1997), arXiv:hep-ph/9704452.
[38] J. Jaeckel, V. M. Mehta, and L. T. Witkowski, "Monodromy Dark Matter," JCAP 01, 036 (2017), arXiv:1605.01367 [hep-ph].
[39] J. Berges, A. Chatrchyan, and J. Jaeckel, "Foamy Dark Matter from Monodromies," JCAP 08, 020 (2019), arXiv:1903.03116 [hep-ph].
[40] N. Fonseca, E. Morgante, R. Sato, and G. Servant, "Axion fragmentation," JHEP 04, 010 (2020), arXiv:1911.08472 [hep-ph].
[41] E. Morgante, W. Ratzinger, R. Sato, and B. A. Stefanek, "Axion fragmentation on the lattice," JHEP 12, 037 (2021), arXiv:2109.13823 [hep-ph].
[42] R. T. Co, K. Harigaya, and A. Pierce, "Gravitational waves and dark photon dark matter from axion rotations," JHEP 12, 099 (2021), arXiv:2104.02077 [hep-ph].
[43] T. Yanagida, "Horizontal gauge symmetry and masses of neutrinos," in Proceedings: Workshop on the Unified Theories and the Baryon Number in the Universe, Tsukuba, Japan, February 13-14, 1979, Conf. Proc. C7902131, 95-99 (1979).
[44] M. Gell-Mann, P. Ramond, and R. Slansky, "Complex Spinors and Unified Theories," in Supergravity Workshop, Stony Brook, New York, September 27-28, 1979, Conf. Proc. C790927, 315-321 (1979), arXiv:1306.4669 [hep-th].
[45] P. Minkowski, "µ → eγ at a Rate of One Out of 10^9 Muon Decays?" Phys. Lett. 67B, 421-428 (1977).
[46] R. N. Mohapatra and G. Senjanovic, "Neutrino Mass and Spontaneous Parity Nonconservation," Phys. Rev. Lett. 44, 912 (1980).
[47] J. A. Harvey and M. S. Turner, "Cosmological baryon and lepton number in the presence of electroweak fermion number violation," Phys. Rev. D 42, 3344-3349 (1990).
[48] N. Aghanim et al. (Planck), "Planck 2018 results. VI. Cosmological parameters," Astron. Astrophys. 641, A6 (2020), arXiv:1807.06209 [astro-ph.CO].
[49] J. E. Kim and H. P. Nilles, "The mu Problem and the Strong CP Problem," Phys. Lett. B 138, 150-154 (1984).
[50] D. Bodeker, "Moduli decay in the hot early Universe," JCAP 0606, 027 (2006), arXiv:hep-ph/0605030.
[51] M. Laine, "On bulk viscosity and moduli decay," Prog. Theor. Phys. Suppl. 186, 404-416 (2010), arXiv:1007.2590 [hep-ph].
[52] K. Mukaida and K. Nakayama, "Dynamics of oscillating scalar field in thermal environment," JCAP 1301, 017 (2013), arXiv:1208.3399 [hep-ph].
[53] F. Capozzi and G. Raffelt, "Axion and neutrino bounds improved with new calibrations of the tip of the red-giant branch using geometric distance determinations," Phys. Rev. D 102, 083007 (2020), arXiv:2007.03694 [astro-ph.SR].
[54] O. Straniero, C. Pallanca, E. Dalessandro, I. Dominguez, F. R. Ferraro, M. Giannotti, A. Mirizzi, and L. Piersanti, "The RGB tip of galactic globular clusters and the revision of the axion-electron coupling bound," Astron. Astrophys. 644, A166 (2020), arXiv:2010.03833 [astro-ph.SR].
[55] T. Cohen, M. Lisanti, A. Pierce, and T. R. Slatyer, "Wino Dark Matter Under Siege," JCAP 10, 061 (2013), arXiv:1307.4082 [hep-ph].
[56] J. Fan and M. Reece, "In Wino Veritas? Indirect Searches Shed Light on Neutralino Dark Matter," JHEP 10, 124 (2013), arXiv:1307.4400 [hep-ph].
[57] R. T. Co, E. Gonzalez, and K. Harigaya, "Increasing Temperature toward the Completion of Reheating," JCAP 11, 038 (2020), arXiv:2007.04328 [astro-ph.CO].
[58] R. T. Co, L. J. Hall, K. Harigaya, K. A. Olive, and S. Verner, "Axion Kinetic Misalignment and Parametric Resonance from Inflation," JCAP 08, 036 (2020), arXiv:2004.00629 [hep-ph].
[59] S. Kasuya, M. Kawasaki, and T. Yanagida, "Cosmological axion problem in chaotic inflationary universe," Phys. Lett. B 409, 94-100 (1997), arXiv:hep-ph/9608405.
[60] S. Kasuya and M. Kawasaki, "Topological defects formation after inflation on lattice simulation," Phys. Rev. D 58, 083516 (1998), arXiv:hep-ph/9804429.
[61] P. Sikivie, "Of Axions, Domain Walls and the Early Universe," Phys. Rev. Lett. 48, 1156-1159 (1982).
[62] S. R. Coleman, "Q-balls," Nucl. Phys. B 262, 263 (1985), [Addendum: Nucl. Phys. B 269, 744 (1986)].
[63] A. Kusenko, "Solitons in the supersymmetric extensions of the standard model," Phys. Lett. B 405, 108 (1997), arXiv:hep-ph/9704273.
[64] A. Kusenko and M. E. Shaposhnikov, "Supersymmetric Q balls as dark matter," Phys. Lett. B 418, 46-54 (1998), arXiv:hep-ph/9709492.
[65] S. Kasuya and M. Kawasaki, "Q ball formation through Affleck-Dine mechanism," Phys. Rev. D 61, 041301 (2000), arXiv:hep-ph/9909509.
[66] M. Dine and A. Kusenko, "The Origin of the matter-antimatter asymmetry," Rev. Mod. Phys. 76, 1 (2003), arXiv:hep-ph/0303065.
[67] L. Randall and R. Sundrum, "Out of this world supersymmetry breaking," Nucl. Phys. B 557, 79-118 (1999), arXiv:hep-th/9810155.
[68] G. F. Giudice, M. A. Luty, H. Murayama, and R. Rattazzi, "Gaugino mass without singlets," JHEP 12, 027 (1998), arXiv:hep-ph/9810442.
[69] G. D. Coughlan, W. Fischler, E. W. Kolb, S. Raby, and G. G. Ross, "Cosmological Problems for the Polonyi Potential," Phys. Lett. B 131, 59-64 (1983).
[70] A. D. Linde, "Relaxing the cosmological moduli problem," Phys. Rev. D 53, R4129-R4132 (1996), arXiv:hep-th/9601083.
[71] F. Takahashi and T. T. Yanagida, "Why have supersymmetric particles not been observed?" Phys. Lett. B 698, 408-410 (2011), arXiv:1101.0867 [hep-ph].
[72] K. Nakayama, F. Takahashi, and T. T. Yanagida, "Gravity mediation without a Polonyi problem," Phys. Lett. B 714, 256-261 (2012), arXiv:1203.2085 [hep-ph].
[73] K. Harigaya, M. Ibe, K. Schmitz, and T. T. Yanagida, "A Simple Solution to the Polonyi Problem in Gravity Mediation," Phys. Lett. B 721, 86-89 (2013), arXiv:1301.3685 [hep-ph].
[74] V. Domcke, K. Kamada, K. Mukaida, K. Schmitz, and M. Yamada, "A new constraint on primordial lepton flavour asymmetries," (2022), arXiv:2208.03237 [hep-ph].
[75] J. D. Wells, "Implications of supersymmetry breaking with a little hierarchy between gauginos and scalars," in 11th International Conference on Supersymmetry and the Unification of Fundamental Interactions (2003), arXiv:hep-ph/0306127.
[76] N. Arkani-Hamed and S. Dimopoulos, "Supersymmetric unification without low energy supersymmetry and signatures for fine-tuning at the LHC," JHEP 06, 073 (2005), arXiv:hep-th/0405159.
[77] G. F. Giudice and A. Romanino, "Split supersymmetry," Nucl. Phys. B 699, 65-89 (2004), [Erratum: Nucl. Phys. B 706, 487-487 (2005)], arXiv:hep-ph/0406088.
[78] J. D. Wells, "PeV-scale supersymmetry," Phys. Rev. D 71, 015013 (2005), arXiv:hep-ph/0411041.
[79] M. Ibe, T. Moroi, and T. T. Yanagida, "Possible Signals of Wino LSP at the Large Hadron Collider," Phys. Lett. B 644, 355-360 (2007), arXiv:hep-ph/0610277.
[80] B. S. Acharya, K. Bobkov, G. L. Kane, P. Kumar, and J. Shao, "Explaining the Electroweak Scale and Stabilizing Moduli in M Theory," Phys. Rev. D 76, 126010 (2007), arXiv:hep-th/0701034.
[81] L. J. Hall and Y. Nomura, "Spread Supersymmetry," JHEP 01, 082 (2012), arXiv:1111.4519 [hep-ph].
[82] M. Ibe and T. T. Yanagida, "The Lightest Higgs Boson Mass in Pure Gravity Mediation Model," Phys. Lett. B 709, 374-380 (2012), arXiv:1112.2462 [hep-ph].
[83] A. Arvanitaki, N. Craig, S. Dimopoulos, and G. Villadoro, "Mini-Split," JHEP 02, 126 (2013), arXiv:1210.0555 [hep-ph].
[84] N. Arkani-Hamed, A. Gupta, D. E. Kaplan, N. Weiner, and T. Zorawski, "Simply Unnatural Supersymmetry," (2012), arXiv:1212.6971 [hep-ph].
[85] J. Liu et al. (BREAD), "Broadband Solenoidal Haloscope for Terahertz Axion Detection," Phys. Rev. Lett. 128, 131801 (2022), arXiv:2111.12103 [physics.ins-det].
[86] A. Arvanitaki and A. A. Geraci, "Detecting high-frequency gravitational waves with optically-levitated sensors," Phys. Rev. Lett. 110, 071105 (2013), arXiv:1207.5320 [gr-qc].
[87] A. A. Geraci et al. (ARIADNE), "Progress on the ARIADNE axion experiment," Springer Proc. Phys. 211, 151-161 (2018), arXiv:1710.05413 [astro-ph.IM].
[88] C. B. Adams et al., "Axion Dark Matter," in 2022 Snowmass Summer Study (2022), arXiv:2203.14923 [hep-ex].
[89] S. Antusch and V. Maurer, "Running quark and lepton parameters at various scales," JHEP 11, 115 (2013), arXiv:1306.6879 [hep-ph].
[90] S. P. Martin and M. T. Vaughn, "Two loop renormalization group equations for soft supersymmetry breaking couplings," Phys. Rev. D 50, 2282 (1994), [Erratum: Phys. Rev. D 78, 039903 (2008)], arXiv:hep-ph/9311340.
[91] S. Burdin, M. Fairbairn, P. Mermod, D. Milstead, J. Pinfold, T. Sloan, and W. Taylor, "Non-collider searches for stable massive particles," Phys. Rept. 582, 1-52 (2015), arXiv:1410.1374 [hep-ph].
[92] W. L. Xu, J. B. Muñoz, and C. Dvorkin, "Cosmological constraints on light but massive relics," Phys. Rev. D 105, 095029 (2022), arXiv:2107.09664 [astro-ph.CO].
| []
|
[
"MODIFY: MODEL-DRIVEN FACE STYLIZATION WITHOUT STYLE IMAGES",
"MODIFY: MODEL-DRIVEN FACE STYLIZATION WITHOUT STYLE IMAGES"
]
| [
"Yuhe Ding \nSchool of Computer Science\nAnhui University\n\n",
"Jian Liang \nCRIPAC & MAIS\nInstitute of Automation\nChinese Academy of Sciences\n\n",
"Jie Cao \nCRIPAC & MAIS\nInstitute of Automation\nChinese Academy of Sciences\n\n",
"Aihua Zheng \nSchool of Artificial Intelligence\nAnhui University\n\n\nCRIPAC & MAIS\nInstitute of Automation\nChinese Academy of Sciences\n\n",
"Ran He "
]
| [
"School of Computer Science\nAnhui University\n",
"CRIPAC & MAIS\nInstitute of Automation\nChinese Academy of Sciences\n",
"CRIPAC & MAIS\nInstitute of Automation\nChinese Academy of Sciences\n",
"School of Artificial Intelligence\nAnhui University\n",
"CRIPAC & MAIS\nInstitute of Automation\nChinese Academy of Sciences\n"
]
| []
| Existing face stylization methods always acquire the presence of the target (style) domain during the translation process, which violates privacy regulations and limits their applicability in real-world systems. To address this issue, we propose a new method called MODel-drIven Face stYlization (MOD-IFY), which relies on the generative model to bypass the dependence of the target images. Briefly, MODIFY first trains a generative model in the target domain and then translates a source input to the target domain via the provided style model. To preserve the multimodal style information, MODIFY further introduces an additional remapping network, mapping a known continuous distribution into the encoder's embedding space. During translation in the source domain, MOD-IFY fine-tunes the encoder module within the target stylepersevering model to capture the content of the source input as precisely as possible. Our method is extremely simple and satisfies versatile training modes for face stylization. Experimental results on several different datasets validate the effectiveness of MODIFY for unsupervised face stylization. Code will be released at https://github.com/YuheD/MODIFY. | 10.1109/icassp49357.2023.10095222 | [
"https://export.arxiv.org/pdf/2303.09831v1.pdf"
]
| 257,623,067 | 2303.09831 | e8279ec7b5637497b88e30c0201327d0c96cef2e |
MODIFY: MODEL-DRIVEN FACE STYLIZATION WITHOUT STYLE IMAGES
Yuhe Ding
School of Computer Science
Anhui University
Jian Liang
CRIPAC & MAIS
Institute of Automation
Chinese Academy of Sciences
Jie Cao
CRIPAC & MAIS
Institute of Automation
Chinese Academy of Sciences
Aihua Zheng
School of Artificial Intelligence
Anhui University
CRIPAC & MAIS
Institute of Automation
Chinese Academy of Sciences
Ran He
MODIFY: MODEL-DRIVEN FACE STYLIZATION WITHOUT STYLE IMAGES
Index Terms-face stylization, test-time training
Existing face stylization methods always require the presence of the target (style) domain during the translation process, which violates privacy regulations and limits their applicability in real-world systems. To address this issue, we propose a new method called MODel-drIven Face stYlization (MODIFY), which relies on a generative model to bypass the dependence on target images. Briefly, MODIFY first trains a generative model in the target domain and then translates a source input to the target domain via the provided style model. To preserve the multimodal style information, MODIFY further introduces an additional remapping network, mapping a known continuous distribution into the encoder's embedding space. During translation in the source domain, MODIFY fine-tunes the encoder module within the target style-preserving model to capture the content of the source input as precisely as possible. Our method is extremely simple and satisfies versatile training modes for face stylization. Experimental results on several different datasets validate the effectiveness of MODIFY for unsupervised face stylization. Code will be released at https://github.com/YuheD/MODIFY.
INTRODUCTION
Face stylization [1, 2] aims to translate face images from the source (content) domain into the target (style) domain. It has been widely used in popular software such as Photoshop, Instagram, Beauty Camera, and Tik Tok. Recent methods have shown impressive results on unsupervised face stylization [3, 4, 5]. These methods require training data from both the content (source) and style (target) domains. This requirement limits the real-world applications of face stylization due to privacy issues and expensive copyright costs.
Inspired by existing privacy-preserving methods [6, 7, 8, 9, 10, 11, 12, 13, 14, 15], we present MODel-drIven Face stYlization (MODIFY), which delivers the style information through a pretrained model to build a bridge between the private source and target datasets. Generally, the training process of our method consists of two stages, i.e., style encapsulation and face stylization. At the encapsulation stage, we train an auto-encoder to reconstruct the input with the provided target dataset only. Specifically, the generative model consists of an FPN encoder [16, 2], encoding the input images into the latent space; a remapping network, mapping a known distribution such as a Gaussian into the higher-resolution parts of the latent space; and a StyleGAN decoder [17, 18], rendering a constant vector to an image with source content and target style given the latent code. We enforce the decoder to reconstruct the input from the full code output by the encoder and to generate a multimodal version of the input from the fused code, which is a concatenation of the lower-resolution part of the encoder's output and the remapping network's output. To this end, we introduce the swapping loss to ensure these two modules share a common embedding space. At the stylization stage, with only the source data available, our goal is to learn a mapping from the source data to the latent space without style images. In the first stage, the encoder has learned a mapping that serves the target data. To overcome the loss of identity information caused by the gap between the source and target datasets, we fine-tune the encoder with the adversarial loss, pixel-level reconstruction loss, ID loss [19], and LPIPS loss [20] while freezing the other modules. Therefore, MODIFY allows versatile training modes, including online, offline, and test-time training.
To summarize, our contributions are three-fold. 1) To address the copyright and privacy limitations in the unsupervised face stylization problem, we first study an interesting and challenging problem: model-driven face stylization without style images, which avoids the presence of target datasets. 2) We propose a new method called MODIFY, which consists of a generative model and a swapping loss, to store the style information in the proposed model. 3) We perform quantitative and qualitative evaluations to demonstrate the superiority of MODIFY and its generalizability across different training modes and style domains.
METHOD
The framework, as illustrated in Fig. 1, consists of a feature pyramid encoder E [16], a StyleGAN decoder D [18], a remapping network M with a cascade of multiple fully connected layers, and a discriminator Dis. Given an input x ∈ X, where X denotes the source domain, our goal is to translate x into x′ ∈ Y, where Y denotes the target domain. The gray, blue, and yellow parts with notation E, M, and D denote our encoder, remapping network, and decoder, respectively. The encoder, with a standard feature pyramid over a ResNet backbone, maps the input image into a latent code with 18 layers. The remapping network, with a cascade of several fully connected layers, maps random noise to the higher ξ layers of the latent code. The decoder renders a constant vector to an image given the latent code.
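As a concrete illustration of this architecture, the sketch below implements a remapping network as a cascade of fully connected layers, together with the latent-code fusion; the layer count and width are our assumptions, not the paper's exact configuration:

    import torch
    import torch.nn as nn

    class RemappingNetwork(nn.Module):
        """Maps z ~ N(0, I) into the higher xi style layers of the 18x512 code."""
        def __init__(self, z_dim=512, xi=6, n_layers=4):
            super().__init__()
            layers = []
            for _ in range(n_layers - 1):
                layers += [nn.Linear(z_dim, z_dim), nn.LeakyReLU(0.2)]
            layers += [nn.Linear(z_dim, xi * 512)]
            self.mlp = nn.Sequential(*layers)
            self.xi = xi

        def forward(self, z):
            # one 512-d style vector per remapped layer
            return self.mlp(z).view(-1, self.xi, 512)

    def fuse(w_l, w_z):
        # lower (18 - xi) layers from the encoder, higher xi layers from M
        return torch.cat([w_l, w_z], dim=1)   # -> (batch, 18, 512)

    M = RemappingNetwork()
    w_z = M(torch.randn(2, 512))
    w_l = torch.randn(2, 12, 512)             # stand-in for the encoder's lower part
    w_fused = fuse(w_l, w_z)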
Style Encapsulation
At this stage, only the target (style) dataset Y is accessible. As shown in Fig. 1, given the input y ∈ Y, the encoder outputs a latent code w ∈ W^{18×512}; each 1 × 512 vector corresponds to one of the 18 AdaIN [21] modules of the StyleGAN decoder [17, 18, 22, 2]. w = {w_l, w_h} is obtained by concatenating w_l and w_h, where w_l ∈ W^{(18−ξ)×512} is the lower-resolution part, w_h ∈ W^{ξ×512} is the higher-resolution part, and ξ is a given hyper-parameter. Benefiting from the superior disentanglement capability of StyleGAN, w_l and w_h describe the output's content and style, respectively. In this manner, the space W can be disentangled into the content space W_c^{(18−ξ)×512} and the style space W_s^{ξ×512}; w_l denotes the content code, and w_h denotes the style code. We reconstruct the input strictly, enforcing the decoder to map the W space into the target data domain and the encoder to map the input into the W space. During reconstruction, inspired by some theoretical works [23, 24], we follow the setting of PSP [2], adding the reconstruction loss L_r to ensure pixel-level correspondence, the LPIPS loss L_lp [20] to keep the content information, and the ID loss L_id [19] to preserve the identity of the input:
L_r = ||y − y_r||_2,  L_lp = ||F(y) − F(y_r)||_2,  L_id = 1 − ⟨R(y), R(y_r)⟩,    (1)
where y_r = D(E(y)) is the reconstruction of y, F(·) denotes the perceptual feature extractor, ⟨a, b⟩ denotes the cosine similarity between a and b, and R(·) denotes the pretrained ArcFace [19] network for face recognition.
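A minimal PyTorch sketch of Eq. (1); here perceptual_net and arcface are placeholders for the LPIPS feature extractor F(·) and the pretrained ArcFace network R(·), which are external pretrained models not defined in this snippet:

    import torch
    import torch.nn.functional as F_t

    def reconstruction_losses(y, y_r, perceptual_net, arcface):
        l_r = torch.norm(y - y_r, p=2)                        # pixel-level L2
        l_lp = torch.norm(perceptual_net(y) - perceptual_net(y_r), p=2)
        e1 = F_t.normalize(arcface(y), dim=-1)
        e2 = F_t.normalize(arcface(y_r), dim=-1)
        l_id = 1.0 - (e1 * e2).sum(dim=-1).mean()             # 1 - cosine similarity
        return l_r, l_lp, l_id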
Additionally, the adversarial loss helps reduce blur and artifacts, and we employ WGAN [25] here:
L^r_adv = E[Dis(y′)] − E[Dis(y_r)],    (2)
where y′ ∈ Y is a re-sampled target image, different from the input image y.
We use the remapping network M to capture the distribution of the higher-resolution style code. Given z ∈ N, where N is the standard Gaussian distribution, M maps z into w_z = M(z). Then, we get the fused code w_fused = {w_l, w_z} and feed it into the decoder:
y_z = D(w_fused),    (3)
where y_z and y_r have the same content code but different style codes; thus, y_z is a multimodal version of y_r. Inspired by Nie et al. [26], to enforce that the higher-resolution part of W and the embedding space of the remapping network M share a common distribution, we introduce the swapping loss:
L_swap = ||w_z − w̃_z||_2,    (4)
where w̃_fused = {w̃_l, w̃_z} = E(y_z) is the code obtained by re-encoding y_z. The swapping loss ensures that the remapping network learns a meaningful mapping and avoids mode collapse of M. The adversarial loss is also imposed on y_z:
L^z_adv = E[Dis(y′)] − E[Dis(y_z)].    (5)
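Putting Eqs. (3) and (4) together, a short sketch of the fused-code path and the swapping loss (E, D, M are assumed to be the trained modules from the earlier sketches; ξ = 6 is an arbitrary choice):

    import torch

    def swapping_loss(E, D, M, w_l, z, xi=6):
        w_z = M(z)                               # style code from noise
        y_z = D(torch.cat([w_l, w_z], dim=1))    # multimodal image, Eq. (3)
        w_rec = E(y_z)                           # re-encode: (batch, 18, 512)
        w_z_rec = w_rec[:, 18 - xi:, :]          # higher-resolution part
        return torch.norm(w_z - w_z_rec, p=2)    # Eq. (4)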
In summary, the full objective function at this stage can be defined as:

min_{E,D,M} max_{Dis} L_s1 = λ^r_adv L^r_adv + λ^z_adv L^z_adv + λ_r L_r + λ_lp L_lp + λ_id L_id + λ_swap L_swap,    (6)
where λ^r_adv, λ^z_adv, λ_r, λ_lp, λ_id, and λ_swap are weighting parameters of the different loss functions.
Face Stylization
At this stage, we can only access the source dataset. To avoid identity shifting, we fine-tune the encoder to make it adapt to the unseen source domain. As illustrated in Fig. 1, we replicate the trained encoder: one copy, denoted E′, is fine-tuned to map the input x into the latent space W, and the other, denoted E, is fixed to provide the pseudo ground truth in the adversarial loss. The generative stream is similar to the first stage: the input x from the source dataset is fed to the encoder E′ and the decoder D to obtain the input image with target style, x′ = D(E′(x)). The remapping network does not participate in training at this stage, and the decoder is frozen. Besides, our model supports offline, online, and test-time training.
Offline Training. In the standard training mode, i.e., offline training, the entire training set is available during training, and only when training is completed can the model be used for prediction. We first introduce the objective function of the standard version:

min_{E′} max_{Dis} L_s2 = λ^x_adv L^x_adv + λ_r L_r + λ_lp L_lp + λ_id L_id,    (7)
where the adversarial loss L^x_adv is imposed on x′, and the reconstruction loss L_r, LPIPS loss L_lp, and ID loss L_id are calculated between x and x′.
Online Training. The online training mode processes data sequentially. The model is continuously updated during operation as more training data arrive. Briefly, we only have one image per iteration. MODIFY solves the same optimization problem as in Eq. (7) with batch size 1.
Test-time Training. Existing image translation methods remain notoriously weak at generalization under distribution shifts. Even seemingly minor differences between training and test data can defeat state-of-the-art models [29]. Test-time training [30] is a new training mode that does not anticipate the distribution shifts but instead learns from them at test time. When a test source input arrives, MODIFY fine-tunes the encoder adaptively for this specific input; the loss function is again the same as Eq. (7). Unlike the other training modes, only 50 iterations are required in test-time training, which is acceptable for test efficiency.
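A minimal sketch of this test-time loop, assuming compute_stage2_loss wraps the Eq. (7) objective (including the adversarial term supplied by the frozen encoder copy and the discriminator, which are not spelled out here):

    import copy
    import torch

    def test_time_adapt(encoder, decoder, x, compute_stage2_loss, steps=50):
        E_live = copy.deepcopy(encoder)          # fine-tuned copy of the encoder
        E_live.train()
        for p in decoder.parameters():
            p.requires_grad_(False)              # decoder stays frozen
        opt = torch.optim.Adam(E_live.parameters(), lr=1e-4,
                               betas=(0.9, 0.999))
        for _ in range(steps):                   # ~50 iterations per test image
            x_styled = decoder(E_live(x))
            loss = compute_stage2_loss(x, x_styled)
            opt.zero_grad()
            loss.backward()
            opt.step()
        return decoder(E_live(x)).detach()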
EXPERIMENTS
Implementation details. In the style encapsulation stage, we train the model on the public MetFaces dataset [31]. There are 1336 face images extracted from works of art in this dataset. We adopt a weight-decay training schedule: specifically, {λ_swap, λ_lp, λ^r_adv, λ^z_adv, λ_r, λ_id} = {0, 0.8, 0.1, 0, 0.8, 1} during the first 150,000 iterations, and {1.0, 0, 0, 0.1, 0, 0} in the next 20,000 iterations. In the face stylization stage, to balance the sample numbers of the two domains, we select 2000 photos from the high-quality face dataset FFHQ [17] and set the weights {λ_r, λ_lp, λ_id, λ^x_adv} = {0.5, 0.8, 1, 0.01}. In both stages, we use the Adam [32] optimizer with β_1 = 0.9, β_2 = 0.999 and a learning rate of 1e−4. The model is trained for 170,000 steps with a batch size of 4 in the first stage. In the second stage, the model is trained for 20,000 steps for offline and online training, and for 50 steps for test-time training. We use the MindSpore framework in our implementation and train our model on an RTX TITAN GPU with 24 GB of memory.
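The two-phase loss-weight schedule above can be written as a simple lookup; a minimal sketch (the dictionary keys are our shorthand, the values are the paper's):

    def stage1_weights(iteration):
        """Loss weights {swap, lp, adv_r, adv_z, r, id} for the two phases."""
        if iteration < 150_000:
            return dict(swap=0.0, lp=0.8, adv_r=0.1, adv_z=0.0, r=0.8, id=1.0)
        return dict(swap=1.0, lp=0.0, adv_r=0.0, adv_z=0.1, r=0.0, id=0.0)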
Comparisons with State-of-the-art
Since this is the first model-driven translation method, we evaluate three existing state-of-the-art methods under a data-driven condition, in contrast to ours, for comparison. We compare MODIFY, trained in online, offline, and test-time modes, to the one-shot translation method OST [27] and the one-shot and standard versions of StarGAN v2 [28] and PSP [2]. The one-shot versions of these two methods are trained on one source image and the whole target dataset. Note that our datasets are unpaired, which does not match the setting of PSP. Therefore, we modify the ID loss, L2 loss, and LPIPS loss of PSP to be calculated between input and output instead of between output and the ground truth as in the original version. We also add a discriminator to calculate the adversarial loss between the output and the target images, thereby extending PSP to an unpaired version. The unpaired PSP can be seen as a data-driven version of our MODIFY and can be considered the best result that we could achieve.

Qualitative Results. Fig. 2 shows the qualitative comparison results. Fig. 2 (d), (f), (h) and (i) use the entire source dataset, while Fig. 2 (b), (c), (e) and (g) use one sample from the source dataset. Note that Fig. 2 (b), (c), (e) use a randomly selected source image for training, and our method shown in Fig. 2 (i) adaptively updates the model for the input test image without separate training. In the one-shot scenario, OST preserves too much input information, thereby ignoring the style transfer. Extremely little data leads to mode collapse in unpaired PSP. StarGAN v2 still performs very well in style transfer but poorly preserves the identity of the input. These three methods can only learn from a single seen source image during training; therefore, they cannot perfectly predict all unseen test images. In comparison, our test-time training mode is more flexible and therefore more robust.

Fig. 2. Results of the online, offline, and test-time training versions of MODIFY compared to three state-of-the-art approaches (i.e., OST [27], StarGAN v2 [28], and PSP [2]; PSP* denotes the unpaired version of PSP). The per-column annotations in the figure give the number of source images used (1 or All), whether the method is data-driven (True/False), and the training mode (Offline/Online/Test-time).

Quantitative Results. We conduct quantitative experiments using two widely used metrics: Fréchet Inception Distance (FID) and a user study. We compare the FIDs of the standard version of StarGAN v2 and unpaired PSP to the offline version of our method. As shown in the second column of Table 1, we achieve the best results even under the harder model-driven condition. The poor performance of unpaired PSP is not due to poor image quality but to a lack of diversity; it cannot capture the entire target distribution.
Ablation Study
Swapping Loss. To validate the importance of the swapping loss, we train a variant of MODIFY for comparison by removing L_swap and show the results in Fig. 3. Without the swapping loss, the embedding spaces of the remapping network and the encoder are different, so the style information is not correctly encapsulated in the remapping network. As a result, there are many artifacts in the generated images. Fusion Layer. The hyper-parameter ξ indicates at which layer the style code output by the remapping network M is fused with the latent code output by the encoder E. To explore the influence of ξ on the synthesized image, we train several versions of MODIFY with different ξ. As shown in Fig. 4, the larger the ξ, the more identity information is lost.
CONCLUSION
In this paper, we propose MODIFY, with two training stages, to solve the face stylization problem under a data privacy condition. In the first stage, MODIFY trains a generative model with an FPN encoder, a StyleGAN decoder, and a remapping network to disentangle and preserve the style information. We propose a swapping loss to enforce that the encoder and the remapping network share a common embedding space. In the second stage, MODIFY adapts the model to the unseen source domain. Our method is extremely simple and satisfies versatile training modes for face stylization.
ACKNOWLEDGEMENTS
This work is funded by National Natural Science Foundation of China (Grant No. 62276256, 62206277) and CAAI-Huawei MindSpore Open Fund.
Fig. 1. Illustration of the style encapsulation training stage.
Fig. 3 . 15 Fig. 4 .
3154Ablation study of the swapping loss L swap .(a) Input (b) ξ = 3 (c) ξ = 6 (d) ξ = 9 (e) ξ = 12 (f) ξ = Ablation study of the fusion layer ξ.
Table 1. The quantitative results. The second column is FID (lower is better). The third column is the voting results of our user study (higher is better).

Method | FID ↓ | Percentage ↑
StarGAN v2 [28] | 65.5 | 8.9%
Unpaired PSP [2] | 74.2 | 45.9%
MODIFY | 58.6 | 45.2%

There are 1336 face images extracted from works of art in this dataset. We adopt a weight-decay training schedule. Specifically, {λ_swap, λ_lp, λ_r^adv, λ_z^adv, λ_r, λ_id} = {0, 0.8, 0.1, 0, 0.8, 1} during the first 150,000 iterations, while
[1] Yichun Shi, Debayan Deb, and Anil K Jain, "Warpgan: Automatic caricature generation," in Proc. CVPR, 2019.
[2] Elad Richardson, Yuval Alaluf, Or Patashnik, Yotam Nitzan, Yaniv Azar, Stav Shapiro, and Daniel Cohen-Or, "Encoding in style: a stylegan encoder for image-to-image translation," in Proc. CVPR, 2021.
[3] Yifang Men, Yuan Yao, Miaomiao Cui, Zhouhui Lian, Xuansong Xie, and Xian-Sheng Hua, "Unpaired cartoon image synthesis via gated cycle mapping," in Proc. CVPR, 2022.
[4] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," in Proc. ICCV, 2017.
[5] Zhiheng Li, Martin Renqiang Min, Kai Li, and Chenliang Xu, "Stylet2i: Toward compositional and high-fidelity text-to-image synthesis," in Proc. CVPR, 2022.
[6] Jakub Konečnỳ, H Brendan McMahan, Daniel Ramage, and Peter Richtárik, "Federated optimization: Distributed machine learning for on-device intelligence," arXiv preprint arXiv:1610.02527, 2016.
[7] Jakub Konečnỳ, H Brendan McMahan, Felix X Yu, Peter Richtárik, Ananda Theertha Suresh, and Dave Bacon, "Federated learning: Strategies for improving communication efficiency," arXiv preprint arXiv:1610.05492, 2016.
[8] H Brendan McMahan, Eider Moore, Daniel Ramage, and Blaise Agüera y Arcas, "Federated learning of deep networks using model averaging," arXiv preprint arXiv:1602.05629, 2016.
[9] Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor, "Our data, ourselves: Privacy via distributed noise generation," in Annual International Conference on the Theory and Applications of Cryptographic Techniques, 2006.
[10] Liyang Xie, Kaixiang Lin, Shu Wang, Fei Wang, and Jiayu Zhou, "Differentially private generative adversarial network," arXiv preprint arXiv:1802.06739, 2018.
[11] Reihaneh Torkzadehmahani, Peter Kairouz, and Benedict Paten, "Dp-cgan: Differentially private synthetic data and label generation," in Proc. CVPR, 2019.
[12] Jinsung Yoon, James Jordon, and Mihaela van der Schaar, "PATE-GAN: Generating synthetic data with differential privacy guarantees," in Proc. ICLR, 2019.
[13] Marcel Neunhoeffer, Zhiwei Steven Wu, and Cynthia Dwork, "Private post-gan boosting," in Proc. ICLR, 2021.
[14] Jian Liang, Dapeng Hu, and Jiashi Feng, "Do we really need to access the source data? source hypothesis transfer for unsupervised domain adaptation," in Proc. ICML, 2020.
[15] Jian Liang, Dapeng Hu, Yunbo Wang, Ran He, and Jiashi Feng, "Source data-absent unsupervised domain adaptation through hypothesis transfer and labeling transfer," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 11, pp. 8602-8617, 2021.
[16] Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie, "Feature pyramid networks for object detection," in Proc. CVPR, 2017.
[17] Tero Karras, Samuli Laine, and Timo Aila, "A style-based generator architecture for generative adversarial networks," in Proc. CVPR, 2019.
[18] Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila, "Analyzing and improving the image quality of stylegan," in Proc. CVPR, 2020.
[19] Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou, "Arcface: Additive angular margin loss for deep face recognition," in Proc. CVPR, 2019.
[20] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang, "The unreasonable effectiveness of deep features as a perceptual metric," in Proc. CVPR, 2018.
[21] Xun Huang and Serge Belongie, "Arbitrary style transfer in real-time with adaptive instance normalization," in Proc. ICCV, 2017.
[22] Yujun Shen, Jinjin Gu, Xiaoou Tang, and Bolei Zhou, "Interpreting the latent space of gans for semantic face editing," in Proc. CVPR, 2020.
[23] Ran He, Bao-Gang Hu, and Xiao-Tong Yuan, "Robust discriminant analysis based on nonparametric maximum entropy," in Proc. ACML, 2009.
[24] Ran He, Baogang Hu, XiaoTong Yuan, and Wei-Shi Zheng, "Principal component analysis based on non-parametric maximum entropy," Neurocomputing, vol. 73, no. 10-12, pp. 1840-1852, 2010.
[25] Martin Arjovsky, Soumith Chintala, and Léon Bottou, "Wasserstein generative adversarial networks," in Proc. ICML, 2017.
[26] Weili Nie, Tero Karras, Animesh Garg, Shoubhik Debnath, Anjul Patney, Ankit Patel, and Animashree Anandkumar, "Semi-supervised stylegan for disentanglement learning," in Proc. ICML, 2020.
[27] Sagie Benaim and Lior Wolf, "One-Shot Unsupervised Cross Domain Translation," in Proc. NeurIPS, 2018.
[28] Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Jung-Woo Ha, "Stargan v2: Diverse image synthesis for multiple domains," in Proc. CVPR, 2020.
[29] Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar, "Do cifar-10 classifiers generalize to cifar-10?," arXiv preprint arXiv:1806.00451, 2018.
[30] Yu Sun, Xiaolong Wang, Zhuang Liu, John Miller, Alexei Efros, and Moritz Hardt, "Test-time training with self-supervision for generalization under distribution shifts," in Proc. ICML, 2020.
[31] Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, and Timo Aila, "Training generative adversarial networks with limited data," in Proc. NeurIPS, 2020.
[32] Diederik P Kingma and Jimmy Ba, "Adam: A method for stochastic optimization," in Proc. ICML, 2015.
| [
"https://github.com/YuheD/MODIFY."
]
|
[
"UHECR Echoes from the Council of Giants",
"UHECR Echoes from the Council of Giants"
]
| [
"A M Taylor \nDeutsches Elektronen-Synchrotron\nPlatanenallee 6ZeuthenGermany\n",
"J H Matthews \nDepartment of Physics, Astrophysics\nUniversity of Oxford\nDenys Wilkinson Building, Keble RoadOX1 3RHOxfordUK\n\nInstitute of Astronomy\nUniversity of Cambridge\nMadingley RoadCB3 0HACambridgeUK\n",
"A R Bell \nClarendon Laboratory\nUniversity of Oxford\nParks RoadOX1 3PUOxfordUK\n\nCentral Laser Facility\nSTFC Rutherford Appleton Laboratory\nOX11 0QXHarwell, OxfordUK\n"
]
| [
"Deutsches Elektronen-Synchrotron\nPlatanenallee 6ZeuthenGermany",
"Department of Physics, Astrophysics\nUniversity of Oxford\nDenys Wilkinson Building, Keble RoadOX1 3RHOxfordUK",
"Institute of Astronomy\nUniversity of Cambridge\nMadingley RoadCB3 0HACambridgeUK",
"Clarendon Laboratory\nUniversity of Oxford\nParks RoadOX1 3PUOxfordUK",
"Central Laser Facility\nSTFC Rutherford Appleton Laboratory\nOX11 0QXHarwell, OxfordUK"
]
| [
"MNRAS"
]
| Recent anisotropy studies of UHECR data at energies ≳ 40 EeV have disclosed a correlation of their angular distribution with the extragalactic local structure, specifically with either local starburst galaxies or AGN. Using Monte Carlo simulations taking into account photo-disintegration processes, we further explore a framework in which these UHECRs were accelerated by Centaurus A in a recent powerful outburst before being scattered by magnetic fields associated with local, Council of Giants, extragalactic structure. We find that the observed intermediate-scale anisotropies can be accounted for by the Council of Giants structure imposing a response function on the initial outburst of UHECRs from a single source located at Centaurus A's position. The presence of these local structures creates 'echoes' of UHECRs after the initial impulse, and focusing effects. The strongest echo wave has a lag of ∼ 20 Myr, comparable to the age of synchrotron-emitting electrons in the giant Centaurus A lobes. Through consideration of the composition of both the direct and echo wave components, we find that the distribution of the light (1 < ln A < 1.5) component across the sky offers exciting prospects for testing the echo model using future facilities such as AugerPrime. Our results demonstrate the potential that UHECR nuclei offer, as "composition clocks", for probing propagation scenarios from local sources. | null | [
"https://export.arxiv.org/pdf/2302.06489v1.pdf"
]
| 256,827,015 | 2302.06489 | 19bffe3baa8edd23eb2792041d770169f0e3a4c7 |
UHECR Echoes from the Council of Giants
A M Taylor
Deutsches Elektronen-Synchrotron
Platanenallee 6ZeuthenGermany
J H Matthews
Department of Physics, Astrophysics
University of Oxford
Denys Wilkinson Building, Keble RoadOX1 3RHOxfordUK
Institute of Astronomy
University of Cambridge
Madingley RoadCB3 0HACambridgeUK
A R Bell
Clarendon Laboratory
University of Oxford
Parks RoadOX1 3PUOxfordUK
Central Laser Facility
STFC Rutherford Appleton Laboratory
OX11 0QXHarwell, OxfordUK
MNRAS 000, 1-?? (2022). Preprint 14 February 2023, compiled using the MNRAS LaTeX style file v3.0.
Keywords: cosmic rays - acceleration of particles - magnetic fields
Recent anisotropy studies of UHECR data at energies ≳ 40 EeV have disclosed a correlation of their angular distribution with the extragalactic local structure, specifically with either local starburst galaxies or AGN. Using Monte Carlo simulations taking into account photo-disintegration processes, we further explore a framework in which these UHECRs were accelerated by Centaurus A in a recent powerful outburst before being scattered by magnetic fields associated with local, Council of Giants, extragalactic structure. We find that the observed intermediate-scale anisotropies can be accounted for by the Council of Giants structure imposing a response function on the initial outburst of UHECRs from a single source located at Centaurus A's position. The presence of these local structures creates 'echoes' of UHECRs after the initial impulse, and focusing effects. The strongest echo wave has a lag of ∼ 20 Myr, comparable to the age of synchrotron-emitting electrons in the giant Centaurus A lobes. Through consideration of the composition of both the direct and echo wave components, we find that the distribution of the light (1 < ln A < 1.5) component across the sky offers exciting prospects for testing the echo model using future facilities such as AugerPrime. Our results demonstrate the potential that UHECR nuclei offer, as "composition clocks", for probing propagation scenarios from local sources.
INTRODUCTION
The question as to the origin of the highest energy cosmic rays, with energies in excess of 10^20 eV, which have been detected at Earth over the past 60 years (Linsley 1963), continues to drive observational and theoretical studies in high energy astrophysics. Despite the time passed since their first detection, the answer to this question remains unresolved.
On theoretical grounds, the Hillas criterion (Hillas 1984) and the Hillas-Lovelace condition indicate that the most promising candidates are objects possessing fast outflows with high kinetic energy luminosities (Lovelace 1976; Waxman 2004; Norman et al. 1995; Blandford 2000), such as active galactic nuclei (AGN) and gamma-ray bursts (GRBs). Additionally, the limited propagation distance of ultra high energy cosmic ray (UHECR) nuclei through extragalactic radiation fields further constrains the number of potential candidate objects to only those in the relatively local extragalactic vicinity (Taylor et al. 2011; Lang et al. 2020). For AGN, only a few local candidate sources exist, such as Centaurus A (Cen A) (O'Sullivan et al. 2009; Rieger & Aharonian 2009).
Recently, new insights into this UHECR origins problem have been provided by the Pierre Auger Observatory (PAO), which has reported a correlation of the UHECR hotspots seen in their skymaps with local structure in the southern hemisphere sky, specifically with either nearby star-forming galaxies (which they referred to as starburst galaxies) or AGN (Aab et al. 2018b; Abreu et al. 2022). Likewise, similar correlations of UHECR hotspots, for energies above 40 EeV, with local structure in the northern hemisphere sky have been reported by the Telescope Array (TA) collaboration (Abbasi et al. 2014). With the significance of the PAO-reported starburst galaxy correlation already larger than 4σ (post-trial), the origin of such a correlation appears worthy of deeper consideration.
★ E-mail: [email protected]
The existence of this correlation, assuming that it is a real correlation and not simply a statistical fluctuation, raises the question as to whether it can be compatible with a scenario in which a local AGN, namely Cen A, is the source of the UHECR driving the anisotropy signal detected by the PAO. We here explore the possibility that a correlation of UHECR with local structure is brought about by the deflection of UHECR, initially released by Cen A, off nearby galaxy systems, a question first raised by Bell & Matthews (2022, hereafter BM22).
In section 2 we consider the Milky Way's local extragalactic neighbourhood. In section 3 we describe the setup used to study the propagation of UHECR from Cen A to Earth, considering their scattering by the magnetic fields associated with local galaxies, and their energy- and species-dependent photo-disintegration in extragalactic radiation fields. In section 4 the key findings from our simulations are discussed. In section 5 we discuss these results, outlining the limitations of our approach and indicating further aspects to be explored. In section 6 we draw our conclusions.
THE LOCAL EXTRAGALACTIC ENVIRONMENT
Following the growth of structure formation via gravitational collapse over cosmological timescales, the Universe at the present epoch is inhomogeneous on small scales (≲ 100 Mpc). In the current study we zoom in on the inhomogeneous patch of the Universe in which the Milky Way (MW) resides. Specifically, we focus here on very local distances, ≲ 10 Mpc around the MW, in a region with distinct kinematics known as the Local Sheet (Tully et al. 2008). The most massive galaxies in this region form a ring approximately surrounding the Local Group, and are described as the "Council of Giants" (CoG) by McCall (2014); we adopt this CoG naming convention hereafter.
The CoG or Local Sheet structure has a predominantly planar (i.e. 2D) geometry and is approximately circular. We consider here all members of the CoG listed by McCall (2014). Fig. 1 depicts the CoG objects on which we focus in this study, in local sheet coordinates. The position of the MW is also indicated in this figure in blue, located close to the origin of the local sheet coordinate system. The position of the centre of the best-fit circle describing the CoG members' locations is indicated in Fig. 1 as a black cross, located ≈ 0.8 Mpc from the MW.
The plane of these CoG objects, as observed by a terrestrial observer, is shown in Fig. 2 in a Galactic coordinate representation (Hammer-Aitoff projection). In both Figs. 1 and 2, the position of Cen A within the CoG group is indicated in pink. The Galactic coordinates of, and distances to, the CoG members are given in Appendix A, together with stellar masses, estimated star formation rates (SFRs) and infra-red luminosities.
Within the CoG group, only Cen A is known to demonstrate clear recent AGN jet activity, although Circinus may also exhibit some evidence of such activity, see Elmouttie et al. (1998). Definitive evidence for this activity in Cen A is revealed by the radio emission from two giant inflated lobe structures extending out to ≈ 300 kpc, a distance scale comparable to the virial radius of its host galaxy (Sheridan 1958). In addition to this it also exhibits smaller inner lobes, indicating the onset of more recent AGN activity (Croston et al. 2009). Amongst the CoG members, no other objects display such prominent AGN jet activity, although the galaxies NGC 253 and M 82 do reveal heightened levels of star formation around their nuclear regions, with thermal X-ray images of these objects indicating the presence of outflow-like structures emanating from them (Bregman et al. 1995;Pietsch et al. 2000). Such outflows could potentially pollute the environment out to and beyond their virial radius with hot gas and magnetic field, as has been suggested to have occurred from recent analysis of a group of local galaxies (including NGC 253, M64, M81, M83, and M94) (Bregman et al. 2022).
As well as affecting their environments powerful AGN and galactic outflows can accelerate particles to high energies. The maximum characteristic particle energy can be estimated from the Hillas-Lovelace condition, given by max
( KE ℏ) 1/2 ≈ 10 KE 3 × 10 43 erg s −1 1/2 EeV. (1)
Here is velocity of the outflowing magnetised jet plasma in speed of light units, KE is the kinetic power, is the atomic number, describes the scattering rate in units of the Bohm level scattering, is the electromagnetic fine structure constant, and ℏ = ℎ/2 where ℎ is Plancks constant. This condition can be used to identify viable sites of UHECR acceleration. Energetically, the contents of the lobes of Cen A are estimated to be 10 59−60 erg, suggesting a time-averaged luminosity of ∼ 5×10 43 erg s −1 assuming a slow subsonic inflation of The "Council of Giants" within the Local Sheet: A 2D diagram of the source (Cen A; pink circle), observer (Milky Way; blue "+") and 9 scattering galaxies (black circles) used in this work. The solid black line marks a circle of radius 3.746 Mpc and centred on = 0.362 Mpc, = 0.718 Mpc (see "×" in diagram), as defined by McCall (2014). The object positions in the diagram are provided in local sheet coordinates, in which the objects are predominantly located in the -plane. the lobes (Wykes et al. 2013). For an inflation velocity faster than this, the timescale for inflating the lobes is shorter, requiring a significantly higher jet power, potentially approaching the Eddington luminosity value. By comparison, an estimate of the kinetic luminosity of the winds of local starburst galaxies is more than an order of magnitude smaller than this (Heckman et al. 1990). Furthermore, the velocities of these winds are themselves orders of magnitude smaller than AGN outflow velocities. The kinetic luminosity for Cen A, and its outflow velocity, therefore indicates that it is unique amongst the CoG group as being the only member capable of satisfying the Hillas-Lovelace condition for particle acceleration to multi-EV rigidities (see eqn 1).
The thermal and magnetic pressures between galaxies within the CoG, and how the pressures at the centres of the galaxies reduce with increasing distance from them, remain poorly understood. Observationally, there is a growing body of evidence that a "warm-hot intergalactic medium" (WHIM) permeates the space between galaxies (Macquart et al. 2020). Related, and conceptually similar, is the circumgalactic medium (CGM), usually defined as the region beyond the galactic disc but within the galactic virial radius (Tumlinson et al. 2017), though it may in fact extend beyond this radius (Wilde et al. 2021a). Collectively, the WHIM and CGM appear to account for a significant fraction of the "missing baryons" (Gupta et al. 2012; Nicastro et al. 2018; Martynenko 2022). Although low in density (n ∼ 10^-(4-5) cm^-3), the high temperature of this gas (kT ∼ 300 eV) indicates that it provides significant thermal pressure within the extended galaxy, out to distances comparable to the galactic virial radius (∼ 300 kpc). Should the strengths of the magnetic fields, B, embedded within this gas be in approximate equipartition with the thermal energy density, B²/8π ≈ u_th (i.e. the ratio of thermal to magnetic pressures, β_B, is of order unity), magnetic field strengths within the range 0.05-0.2 μG would also be expected out at these extended galactic distances. In reality, β_B in the CGM is not well constrained and could lie in the range 1-100 (Pakmor et al. 2020; Faucher-Giguere & Oh 2023), likely varying within and between different objects. However, as discussed by BM22, there is good evidence for large-scale magnetic fields surrounding M82, and our estimate of a ∼ 0.1 μG field at the virial radius is consistent with recent results from CGM modelling (Pakmor et al. 2020; van de Voort et al. 2021; Faerman et al. 2020; Faerman & Werk 2023), supporting the earlier suggestion that giant magnetised haloes (which we hereon refer to as scattering haloes) around nearby galaxies give rise to the anisotropy in UHECR skymaps (BM22).
As the CoG members possess a variety of SFRs (see Appendix A), particularly in their Galactic nuclear regions, the levels of magnetisation of their scattering haloes at the virial radius are likely to vary considerably. However, given the current uncertainty in the physics dictating how magnetic field and gas material are driven out to fill this region, for the sake of simplicity we here approximate the scattering haloes of all CoG members to have the same sizes and magnetic field strengths; we discuss the possible hierarchy of circumgalactic magnetic field strengths and coherence lengths in CoG members (and their relative effectiveness as UHECR scatterers) further in section 5.1.
SIMULATION SETUP
To simulate the propagation of cosmic rays from Cen A through the CoG structure to Earth, we adopt a Monte Carlo description. This description traces the spatial trajectories of the cosmic ray nuclei through the CoG system for a simulation timescale of 45 Myr, with the first particles launched at t = 0 such that the first particles arrive at Earth after ≈ 12 Myr. The initial conditions of the simulation are 10^8 particles distributed uniformly within a sphere of radius 300 kpc, centred on Cen A. The injected particles represent cosmic ray nuclei, with weighting factors adopted so as to take into account their injected energy spectrum (see section 3.1 below). An initial isotropic momentum distribution of these cosmic rays is adopted. The positions of the scattering regions within our simulations are provided in Table A1. A planar view of the CoG system through which the cosmic ray nuclei propagate is provided in Fig. 1.
Injected Energy Spectrum
We inject particles into the system at Cen A with a spectral energy distribution of the form

dN/dE = Σ_(s=1)^(s_max) n_s N_0 E^-2 e^(-E/(Z_s R_max)), (2)

where a spectral index of 2 has been adopted, as motivated by Fermi diffusive shock acceleration theory for the case of strong shocks (Jones 1994). In the above expression, the terms n_s are the abundances of the species s, N_0 is a normalisation constant, Z_s are the species' charges, and R_max is the maximum rigidity up to which the UHECR source accelerates particles. A value of 30 EeV is adopted for E_0, the minimum energy at which particles are injected. This value for E_0 is adopted so as to focus our simulations on the energy scale at which small-scale anisotropies are observed in the UHECR skymap data (see section 1). For our simulations, R_max = 30 EeV, which is compatible with the expectations found for scenarios in which the UHECR originate from a local source (Taylor et al. 2015; Aab et al. 2017a).
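One way the per-particle Monte Carlo weighting implied by eqn (2) could be implemented is sketched below. The log-uniform sampling, the upper sampling bound, and the 50/50 species draw (with abundance carried in the weight) are our own illustrative choices, not details given in the text.

```python
# Sketch of the weighting implied by eqn (2): energies are drawn
# log-uniformly above E_0 (sampling density ~ 1/E), so each particle
# carries an importance weight n_s E^-2 exp(-E/(Z_s R_max)) * E.
import numpy as np

rng = np.random.default_rng(0)
E0, R_MAX = 30.0, 30.0                            # EeV, values quoted in the text
SPECIES = {"He": (2, 0.868), "Fe": (26, 0.132)}   # name -> (Z_s, n_s)

def draw_particles(n, e_hi=3000.0):
    """e_hi is an assumed upper sampling bound, well past the cutoff."""
    E = np.exp(rng.uniform(np.log(E0), np.log(e_hi), n))
    names = rng.choice(list(SPECIES), n)          # species drawn 50/50
    Z = np.array([SPECIES[s][0] for s in names])
    n_s = np.array([SPECIES[s][1] for s in names])
    w = n_s * E**-2 * np.exp(-E / (Z * R_MAX)) * E
    return E, names, w
```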
For the simulations considered here, a two-species setup is adopted (i.e. s_max = 2), consisting of He and Fe nuclei. Our choice of only a two-component, light and heavy, mixed nuclear composition is simplistic, but deliberate. The fragility of the light He species above an energy of 10^19.5 eV (Hooper et al. 2007; Wykes et al. 2018) motivates it as a natural diagnostic of the propagation time of the UHECR in the extragalactic radiation field environment. Likewise, the relative stability of the heavy Fe species at these energies provides a contrasting reference population of particles with which to compare the light species abundance. In this description, the heavy species injected at the source can be considered a crude proxy for species heavier than He, which are considerably more stable than He at the energies considered.
We adopt abundance ratios for He and Fe injected at the source of n_He = 0.868 and n_Fe = 0.132, which, due to the impact of the cutoff already being felt at our adopted minimum energy, result in a He : Fe ratio of 80 : 20 at energy E_0. We adopt this He : Fe ratio so as to give a comparable level of signal (within a factor of 3) in the Model C skymaps from both the direct and echoed waves. Our composition diagnostics are designed to be illustrative rather than providing a realistic match to the UHECR composition as inferred from, e.g., X_max distributions (REFs); we reserve this exercise for future work.
Scattering Rates
Our description adopts isotropic scattering rates for the interaction of the cosmic rays with the magnetic fields local to the CoG objects. Due to the currently poor knowledge of the magnetic fields on scales of the virial radius surrounding galactic structures, for simplicity we adopt an energy-independent isotropic scattering rate for all cosmic ray nuclei in the system, with a scattering length

λ_sc = { c τ_sc, if r ≤ r_sc; ∞, otherwise, (3)

where r_sc is the galactic scattering radius for the CoG members, which we fix to a size of 300 kpc for all objects, a value close to the expected virial radius of a 10^12 M_⊙ galaxy, and r is the cosmic ray's distance from the CoG object. For our description of the scattering events, we allow large-angle isotropic scatterings to occur once a particle comes within the scattering radius r_sc of a CoG galaxy. Our scattering description here differs in several ways from that adopted in BM22: we assume large-angle scattering from all CoG members, whereas BM22 adopted a small-angle scattering description from only the local objects with the largest SFRs. For our results here, a scattering time of τ_sc = 0.5 Myr is adopted (i.e. λ_sc ≈ 150 kpc). In comparison to this length scale, the Larmor radius of a 10 EV cosmic ray in a 0.1 μG magnetic field is ≈ 100 kpc. Outside of the CoG galaxy regions, we assume that no scattering events take place.
Photo-disintegration Rates
We consider photo-disintegration of the cosmic rays in both the cosmic microwave background (CMB) and extragalactic background light (EBL) radiation fields. The photo-disintegration rates of UHECR nuclei in the background radiation fields are determined by a convolution of the photo-disintegration cross-section with the radiation field spectral energy distribution (Hooper et al. 2007, see their eq. 3). For the photo-disintegration cross-section, we use the family of Lorentzian models proposed by Khan et al. (2005). For the EBL radiation field, we adopt the model from Franceschini et al. (2008).
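For concreteness, the sketch below evaluates the standard isotropic-field interaction-rate convolution for the CMB alone, with a toy Gaussian giant-dipole-resonance cross-section standing in for the Khan et al. (2005) Lorentzian models; the threshold, resonance energy, width and peak cross-section are illustrative round numbers.

```python
# Sketch of the photo-disintegration rate convolution for an isotropic
# photon field (the CMB), using the standard double integral
#   rate = c/(2 g^2) Int d_eps n(eps)/eps^2 Int_{eth}^{2 g eps} ep sigma(ep) d_ep.
import numpy as np

C = 2.998e10          # cm s^-1
HBARC = 1.973e-5      # eV cm
KT = 2.35e-4          # eV, CMB temperature today

def n_cmb(eps):       # blackbody photons per cm^3 per eV
    return eps**2 / (np.pi**2 * HBARC**3 * np.expm1(eps / KT))

def sigma_toy(ep):    # toy GDR cross-section [cm^2]; parameters illustrative
    return 2e-27 * np.exp(-((ep - 2.0e7) / 5.0e6) ** 2)

def rate(gamma, eth=1.0e7):
    eps = np.logspace(np.log10(KT / 50.0), np.log10(KT * 50.0), 400)
    inner = np.zeros_like(eps)
    for i, e in enumerate(eps):
        if 2.0 * gamma * e > eth:
            ep = np.linspace(eth, 2.0 * gamma * e, 400)
            inner[i] = np.trapz(ep * sigma_toy(ep), ep)
    return C / (2.0 * gamma**2) * np.trapz(n_cmb(eps) / eps**2 * inner, eps)

gamma_he = 1e20 / (4.0 * 9.38e8)        # a 100 EeV He nucleus
r = rate(gamma_he)
print(f"rate ~ {r:.1e} s^-1, loss length ~ {C / r / 3.086e24:.0f} Mpc")
```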
Coordinate System
A coordinate system which aligns with that of the local sheet is adopted for the Monte Carlo simulations. This coordinate system can be related to the Galactic coordinate system via the rotation

x_Gal = M x_ls,

where the rotation matrix M is given by

M = ( c_α c_γ − s_α c_β s_γ   −c_α s_γ − s_α c_β c_γ    s_α s_β
      s_α c_γ + c_α c_β s_γ   −s_α s_γ + c_α c_β c_γ   −c_α s_β
      s_β s_γ                  s_β c_γ                   c_β ). (4)

The angles for this rotation are α = 172°, β = 225° and γ = 47.7°. Note that the expression given in eqn 4 utilises a shorthand notation in which s_θ is used as an abbreviation for sin(θ) and c_θ as an abbreviation for cos(θ).
Cen A Emission and Release Models
In the following we describe the results for 3 different source evolution models. These models vary both the UHECR source luminosity evolution with time and the escape time of particles from the source region. We label these models A, B and C. The basic premise here is that model A allows us to explore the impact of the CoG structure on the UHECR signals, whereas models B and C can be considered representations of plausible physical scenarios.
In Model A, we consider the case in which the UHECR source (Cen A) releases a single pulse of particles at t = 0, with the particles subsequently escaping immediately from the source region. This model has a source term which is a δ-function in time, whose resultant transmission through the system to an observer at Earth essentially provides a response or transfer function of the UHECR signal at Earth to the CoG structure.
In Model B, we consider the case in which Cen A's UHECR source luminosity decreases exponentially over time after an initial outburst episode. For this model, once produced by the source, the particles escape immediately from the source region. Using the time of the initial outburst as a reference timescale for our results (t = 0), the subsequent UHECR luminosity is given by

L_UHECR = L_0 e^(-t/τ_dec), (for t > 0) (5)

where τ_dec is the decay time of the UHECR source luminosity, which we set to 3 Myr. Such a short activity timescale would be consistent with the AGN flickering model proposed to describe the activity evolution of other local AGN (Saikia & Jamrozy 2009). However, such a description is a crude approximation to the true variability in the UHECR luminosity of Cen A. We apply an additional Gaussian smoothing, with a standard deviation of 1 Myr, to the launch times of the CR particles. The purpose of this smoothing (which is applied to both Models B and C, but not Model A) is to limit the sensitivity of our results to the exact timestamps we present, an appropriate choice given the uncertain time evolution of Cen A.
In Model C, we consider the case in which Cen A injects a pulse of particles, with the particles subsequently residing longer within the source region. As was done for Model B, particles are injected with a Gaussian distribution of times centred on t = 0, with a standard deviation of 1 Myr (i.e. Gaussian smoothing in the time domain). In contrast to the rigidity-independent description we adopt for particle propagation through the CoG structure, we approximate the physics of diffusive escape out of the magnetised lobes of Cen A by imposing an additional rigidity-dependent escape time for each particle, given by

τ_esc = τ_10 (E / (Z × 10 EV))^-1, (6)

where E is the particle energy, Z the particle charge, and τ_10 is the escape time of a 10 EV rigidity particle, for which we choose τ_10 = 1.5 Myr. Such an escape time for particles with rigidity 10 EV is consistent with these particles experiencing around 1 scattering event before being able to escape from their host environment. While t < τ_esc, a given particle can undergo photo-disintegration loss interactions, but it does not move from its starting position; only after t = τ_esc does the particle start propagating. Although this description fails to capture any change in the rigidity of the particle as it undergoes energy losses, such rigidity changes during photo-disintegration are minor.
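The scaling of eqn (6) is easy to evaluate directly; a short sketch:

```python
# Sketch of the rigidity-dependent escape delay of eqn (6):
# tau_esc = tau_10 * (E / (Z * 10 EV))^-1, with E in EeV so E/Z is in EV.
def tau_esc_myr(E_eev: float, Z: int, tau_10: float = 1.5) -> float:
    rigidity_ev = E_eev / Z              # rigidity in exavolts
    return tau_10 * (rigidity_ev / 10.0) ** -1

print(tau_esc_myr(30, 2))    # 30 EeV He: R = 15 EV  -> 1.0 Myr
print(tau_esc_myr(30, 26))   # 30 EeV Fe: R ~ 1.2 EV -> ~13 Myr
```

The example shows why Model C delays the heavy component: at a fixed energy, Fe nuclei have a much lower rigidity than He and so are held in the source region an order of magnitude longer.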
RESULTS
Particle Spatial Distribution
Following the propagation of cosmic ray nuclei through the CoG system, the arrival of multiple waves of particles at the MW location is observed. Fig. 3 shows a z = 0, Δz = 0.6 Mpc slice of the particle spatial distribution in the system, in the x-y (local sheet) plane, for four key timescales: 0 Myr, 12 Myr, 21 Myr, and 33 Myr. Each panel of Fig. 3 shows a snapshot of the logarithm (base 10) of the binned density in the simulation (bin size 0.03 Mpc) at one of these four times. Also shown in this figure are the positions of the CoG objects (empty circles), Cen A (pink filled circle), and the MW location (black vertical cross). From the snapshots of the particle density at the four timescales, the arrival of waves at the MW location at 12 Myr, 21 Myr, and 33 Myr can be seen.
Direct and Echoed Waves
The four key timescales noted relate to the initial spatial distribution (0 Myr), the arrival of the direct wave from Cen A (12 Myr), and the arrival of two echoed waves (21 Myr and 33 Myr). The arrival of the direct and echoed waves can be easily appreciated from the blue line in Fig. 4, which shows the arriving UHECR density as a function of time after the initial outburst from Cen A. Three major peaks are observed in this figure, namely the direct wave at 12 Myr after the initial outburst, and the echoed waves at 21 Myr and 33 Myr after the outburst. The widths of the peaks of the waves seen in the figure result from the finite size of the scattering regions. An understanding of the different timescales at which these waves arrive, and of the specific sources responsible for contributing to the echo signal, can be gained from Fig. 5. This figure provides a dissection of the echoed waves, connecting their contributions to sources located on common concentric ellipses, whose two foci are located at Cen A and the MW. The colour scale in the figure indicates the incurred delay time for each concentric ellipse.
Focusing Effects
As appreciated directly from the particle density snapshots shown in Fig. 3, the scattering of particles off the CoG objects results in the arrival at the MW of focussed waves of particles. To understand the origin of this focussing effect, Fig. 5 shows a family of concentric ellipses (of varying eccentricity), with each ellipse having Cen A and the MW at its two focal points. These curves represent isotemporal contours for signals from Cen A which arrive at the MW at the same time. As observed in this figure, the CoG objects are approximately located on specific concentric ellipses (blue and yellow thick solid lines in Fig. 5), where the colour of the ellipse indicates the corresponding delay time incurred.
The eccentricity, ε, of an ellipse with the source and observer at the two foci can be related to the (straight-line, ballistic) time of arrival as ε = d/(c t). Here d is the distance to the CR source, and t is the time of arrival of scattered CRs with respect to the initial burst. With these definitions, the first CRs arrive at t ≈ d/c ≈ 12 Myr. Subsequent echoes from sources lying on an ellipse with eccentricity ε arrive at t ≈ d/(c ε), with a delay with respect to the light travel time of Δt ≈ d(1 − ε)/(c ε). This geometrical argument assumes ballistic trajectories in between scatterers. Any additional small-angle scattering introduced during particle propagation in the IGM would also introduce delays, though such delays would be expected to be safely negligible if the IGM scattering angle is itself small. Fig. 5 shows that the two echo waves after the initial outburst are caused by the collective influence of CoG members that are located approximately on the t ≈ 20 Myr and t ≈ 33 Myr isodelay contours. Specifically, the first echo wave is caused by scattering off M 83 and Circinus, and the second wave is caused by six sources located approximately on the same isodelay contour. One particular source, NGC 4945, is also responsible for a number of interesting effects in our simulations, due in part to its close proximity to Cen A. NGC 4945 intercepts rather a large fraction of the CRs coming from the source, some of which are scattered towards Earth, enhancing the signal from the approximate Cen A direction slightly and smearing out the arrival times. There is also a shielding effect from NGC 4945, which happens to lie on an approximately straight-line path between Cen A and NGC 253. CRs are attenuated and fewer CRs reach NGC 253, which acts to weaken the NGC 253 echo signal in the resulting skymaps (see section 4.4).
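These relations are simple to verify numerically; in the sketch below the two path legs of the first echo are illustrative round numbers chosen only to reproduce a ∼ 21 Myr arrival, not measured distances.

```python
# Check of the isodelay-ellipse relations: a path Cen A -> scatterer ->
# Earth with leg lengths d1, d2 arrives at t = (d1 + d2)/c; the ellipse
# through the scatterer has eccentricity eps = d/(c t).
C_KPC_MYR = 306.6                 # speed of light in kpc/Myr
d = 3700.0                        # Cen A - Earth distance in kpc (~3.7 Mpc)

def arrival_time_myr(d1, d2):
    return (d1 + d2) / C_KPC_MYR

print(arrival_time_myr(d, 0.0))            # direct wave: ~12 Myr
t1 = arrival_time_myr(3200.0, 3300.0)      # first echo (illustrative legs)
eps = d / (C_KPC_MYR * t1)
print(t1, d * (1.0 - eps) / (C_KPC_MYR * eps))   # ~21 Myr, delay ~9 Myr
```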
Local Skymaps
The angular distribution of particles arriving at an observer located in the MW (i.e. the arriving particle skymap), after scattering off the CoG objects, is shown in Galactic coordinates in Fig. 6 for the model A scenario. The different panels in this figure show arriving cosmic ray skymaps at 11.7, 20.6, and 33.3 Myr after a Cen A outburst of UHECR. To produce these skymaps, we binned the arrival directions into solid angle bins, in Galactic coordinates, using the Healpy python implementation (Zonca et al. 2019) of the HEALPix scheme (Górski et al. 2005). The colour scale in these skymaps encodes the number of particles per HEALPix pixel (i.e. solid angle bin), initially calculated at a resolution of N_side = 64 covering the sky. In contrast to BM22, we do not include small-angle scattering in the regions between galactic scattering haloes. Instead, we apply a 20° (full-width at half-maximum [FWHM]) Gaussian smoothing to the skymaps to approximate this process. Such a smoothing can be considered an approximation of a constant amount of IGM scattering as a function of sky position, and is broadly appropriate given that the source and scatterers are all located at a similar distance from Earth; nevertheless, we discuss this limitation further in section 5.1. The adopted 20° FWHM of the Gaussian smoothing is larger than the ≈ 5° angular radius subtended by a 300 kpc scattering halo at 3.7 Mpc.
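The skymap construction just described can be sketched in a few lines with healpy; the resolution and smoothing match the text, while the toy input directions are purely illustrative.

```python
# Sketch of the skymap construction: bin Galactic (l, b) arrival directions
# into a HEALPix map at N_side = 64 and apply a 20-degree FWHM Gaussian
# smoothing, approximating unmodelled small-angle IGM scattering.
import numpy as np
import healpy as hp

def make_skymap(l_deg, b_deg, nside=64, fwhm_deg=20.0):
    pix = hp.ang2pix(nside, l_deg, b_deg, lonlat=True)
    counts = np.bincount(pix, minlength=hp.nside2npix(nside)).astype(float)
    return hp.smoothing(counts, fwhm=np.radians(fwhm_deg))

# toy excess from the Cen A direction (l ~ 309.5 deg, b ~ 19.4 deg):
rng = np.random.default_rng(1)
m = make_skymap(309.5 + 5.0 * rng.standard_normal(1000),
                19.4 + 5.0 * rng.standard_normal(1000))
```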
As expected from the spatial distribution results discussed above, the early-time direct wave (12 Myr) originates from Cen A. The first echo wave arriving at the MW at 21 Myr originates from the CoG objects close to Cen A (NGC 4945, M83 and Circinus), as expected from the delay time ellipses shown in Fig. 5. In contrast to these two earlier skymaps, the second echo wave arriving at the MW at 33 Myr originates from CoG objects located further from Cen A, on the side opposite to the location where Cen A resides. The results in Fig. 6 show how the CoG structure reverberates to a pulse of CRs, and can thus be thought of as a sparse representation of a spatially-resolved response function, analogous to the transfer and response functions used in spectroscopic reverberation mapping of AGN (e.g. Blandford & McKee 1982; Peterson 1993). The observed signal is then a convolution of the results from Model A with the underlying activity evolution of the source.
Similar plots are shown in Figs. 7 and 8 for the model B and model C outburst scenarios, respectively, focusing now only on the t = 33.3 Myr snapshot. In our framework, and as also suggested by BM22, we consider this time period to be a reasonable approximation to the present day, in the sense that it represents a characteristic time elapsed since Cen A was at its peak of UHECR activity. At earlier times in these simulations the snapshots show only small variations in the anisotropy and are dominated by signal from the Cen A direction (as can be seen from the online animations). However, the arriving UHECR flux at late times (33 Myr) allows for bright spots of comparable intensity in the skymap both from the direction towards Cen A and from the CoG members located furthest from Cen A. These skymaps, for both model B and model C, show striking similarities with the observational results from both the PAO and TA (Abbasi et al. 2014; Aab et al. 2018b; Abreu et al. 2022), in particular when compared to the all-sky anisotropy patterns (Biteau et al. 2019; di Matteo et al. 2020a). Specifically, a bright hotspot region is observed from the direction of Cen A, with a ring of additional hotspots produced by echoes from the directions of Maffei/IC 342, M81/M82, M94 and M64.
It is worth commenting on the conspicuous absence of NGC 253 from the late-time skymaps. As noted above in section 4.3, NGC 4945 creates a shielding effect that significantly decreases the UHECR flux impinging on the NGC 253 scattering halo; this effect is responsible for the negligible signal at late times from the direction of NGC 253. To demonstrate this, in Fig. 9 we present results from a simulation identical to Model B, but with NGC 4945 removed. In this modified simulation a hotspot is indeed produced from the direction of NGC 253 at southern Galactic latitudes. There is a fairly prominent excess in this region of the sky in the PAO maps, so for the echo model to explain this we would either require some variation between the scattering haloes' ability to scatter UHECRs (as might be expected anyway; see section 5.1), or for additional scattering in the IGM to allow UHECRs to be deflected around NGC 4945. Alternatively, an additional local source near to the Galactic south pole could contribute, such as the Fornax A radio galaxy (Matthews et al. 2018; Eichmann et al. 2018).
Our results from both Models B and C are fairly similar to those presented by BM22 (who show skymaps in equatorial coordinates), with a few differences. BM22 focused mainly on the TA hotspot and the influence of the M82 galaxy, before presenting a simulation which included M82, NGC 253 and IC 342. Our results show that this qualitative match to the observed skymaps does not disappear when photo-disintegration losses are included, as would be expected for the relatively short propagation times. Furthermore, we have included additional sources and so observe additional hotspots in the directions of M94 and M64, while Maffei 1&2 act to smear out and enhance the feature near IC 342. Finally, we note the influence of NGC 4945, Circinus and M83. These sources are close to Cen A on the sky and, depending on the model and timestamp, can act to produce a smeared-out or elongated pattern in the direction of Cen A. In particular, in some cases the 'Cen A' feature resembles a lop-sided dumb-bell shape, better correlated with M83. This is an interesting general point given that the hotspot observed by the PAO is somewhat diffuse (a top-hat search radius of ≈ 25° is found by Abreu et al. 2022), and not perfectly aligned with Cen A (Aab et al. 2018a); we therefore suggest that scatterers local to the source may be important in determining the morphology of any observed excesses.
Composition-dependent Skymaps
In order to obtain further insights into the results shown in Figs. 7 and 8, it is helpful to consider a breakdown of these results into the contributions from different logarithmic nuclear mass (i.e. ln A) groups.
In Fig. 10 such a decomposition of the model B skymaps of Fig. 7 into composition groups is shown, for the mass ranges 1 < ln A < 1.5 and ln A > 3.5. As seen from this figure, for model B the He component (1 < ln A < 1.5) of the arriving flux from the CoG members furthest from Cen A is considerably depleted relative to the He component from Cen A. Contrary to this, the Fe signal contributions (ln A > 3.5) from these two regions of the sky are similar in magnitude.
In Fig. 11 a decomposition of the model C skymaps of Fig. 8 into composition groups is shown, for the mass ranges 1 < ln A < 1.5 and ln A > 3.5. This figure shows that, for the model C case, the signal from Cen A at late times is almost purely Fe dominated. As noted earlier in section 3.1, it should be borne in mind here that the Fe species in these results should be considered a proxy for species heavier than He. In contrast, the signal at late times from the CoG objects furthest from Cen A is almost purely He dominated in our simulations.
These composition-dependent skymap results for both models B and C demonstrate the insight that composition information can provide, in addition to the usual angular information. UHECR nuclei, operating as "composition clocks", can provide an additional third dimension to skymaps, giving rise to a clear skymap signature for a particular propagation scenario from a local source.
DISCUSSION
This work builds further on the possibility that UHECR at the highest energy may have a local extragalactic origin (Wykes et al. 2018, BM22). Such a possibility appears compatible with the evidence both that a local UHECR source must exist (Taylor et al. 2011;Lang et al. 2020), and that a small number of such sources are contributing to the UHECR flux observed at Earth (Ehlert et al. 2022). We now discuss our results within the wider astrophysical context, focusing on the key uncertainties in our model, before exploring the prospects for testing the echoes model in the future.
Scattering in Local Extragalactic Magnetic Fields
One of the key aspects of our work is that the magnetised CGM of galaxies within the CoG must represent an effective barrier to UHECRs if they are to produce UHECR echoes. As discussed in section 1 and by BM22, while the magnetic fields in the CGM are uncertain, the field strengths required to deflect UHECRs are plausible. In our work, we made the simplifying assumption that each CoG member has the same optical depth and scattering halo size. Provided that the optical depth is larger than 1, the results presented here are not found to be qualitatively sensitive to the specific value adopted. However, in detail neither of these assumptions is likely to hold, even if the pressures and densities in the respective CGMs are comparable. In particular, there is likely to be variation in the plasma beta value, β_B, since magnetic fields can be amplified and stretched by dynamos and dynamical interactions, or transported from the galaxy to the CGM through outflows. It is important to note that the SFR within the Galactic nuclear regions of the CoG members varies considerably. Assuming that the level of this central SFR activity dictates a galaxy's ability to drive material out into its scattering halo region, a large variety of scattering halo magnetic field strengths would also be expected. Subsequently, CoG members possessing the largest nuclear SFR would be expected to possess the largest optical depths.
In addition, the structure of the magnetic field is important, because there must be some ordering of the field on the scale of the UHECR Larmor radius. A discussion of the ability of M82 to produce large-scale, ordered magnetic fields is given by BM22, but we also draw attention to the results of Pakmor et al. (2020), who examine the magnetised CGM in spiral, MW-like galaxies with "zoom-in" cosmological MHD simulations. They find the CGM is magnetised by an in situ turbulent dynamo, which can create a magnetic field of strength ∼ 0.1 μG by z = 0. However, they also show that large-scale ordered fields are only produced in the presence of strong galactic outflows.
Taking all the above evidence together, the likely variation of the strength and structure of the circumgalactic magnetic field between galaxies would be expected to naturally create a hierarchy: some galaxies, perhaps those undergoing interactions or that have recently undergone a burst of star formation, would be effective UHECR scatterers, while others could be more or less transparent to UHECRs. Such a hierarchy is likely to be important for explaining the apparent correlation of UHECR arrival directions with star-forming or starburst galaxies (Aab et al. 2018b; Abreu et al. 2022), and possibly even necessary for explaining why the MW is not opaque to UHECRs (see discussion in section 5.2).
Finally, we note the additional simplifying assumptions we have made. We neglected particle scattering in extragalactic space beyond the virial radius of the CoG objects (although we did approximate this effect by smoothing the skymaps), and within the virial radius of the MW (see discussion in section 5.2). Additionally, the contribution to the UHECR skymap from more distant sources has been neglected, and the scattering in the scattering haloes was treated independently of rigidity. Each of these assumptions warrants further interrogation, which we leave to future work.
Propagation Within the Milky Way's Magnetic Field
Given that only ∼ 10% of the UHECR detected by the PAO above 40 EeV appear anisotropic (Aab et al. 2018b), correlating with the CoG structure (van Vliet et al. 2022), the results presented in section 4 are specifically focused on accounting for the origin of this anisotropic component. The origin of the remaining quasi-isotropic signal has been intentionally neglected. One possible way in which a level of quasi-isotropic signal could also be accounted for, expanding on the setup adopted in this study, would be the inclusion of particle scattering within the MW's own virial radius. The main effect expected from UHECR propagation within the MW is a smearing out of the arriving anisotropic UHECR signal. Provided that certain lines-of-sight probed by UHECR propagation through the MW gave rise to large enough angular spreading, an additional component of events, which might be described as quasi-isotropic, would also be produced.
Additionally, at sufficiently low energies, UHECR diffusion within the MW from an external source would be expected to give rise to a skymap with a largely dipolar anisotropy component (Giacinti et al. 2011). Whether local propagation at these lower energies (> 8 EeV) could account for the recent discovery by the PAO of a dipole of magnitude 7% in the UHECR skymap (Aab et al. 2017b), consistent with the weaker evidence for a dipole also seen by TA (Abbasi et al. 2020), remains an open question. With the magnitude of this dipole increasing with energy, potentially reaching more than 10% above an energy of 40 EeV (Aab et al. 2020), a possible connection to the scattering scenario we put forward here seems warranted for future investigation.
Cen A's Activity Evolution
A key astrophysical aspect of the echoes scenario is that the original UHECR source must be variable, which is necessary for any of the echo waves or hotspots to be of comparable significance to the direct flux. More specifically, the source (in our case, Cen A) needs to have declined significantly in UHECR luminosity over a ∼ 20 Myr timescale if the hotspots from the echoes are to be of approximately the same intensity as the hotspot from the region of Cen A. This decline can be achieved by a direct corresponding change in source power, or by a combination of a change in power and in the CR spectral index. In our modelling, the ratio of the UHECR luminosity 20 Myr ago to the present-day luminosity is ≈ 700. This factor, however, scales inversely with the cross-sectional area of the haloes (i.e. with the square of the scattering halo radius). Increasing the scattering halo radius to 800 kpc, one of the values considered by BM22 and motivated by Wilde et al. (2021b) and Lehner et al. (2020), would decrease this required magnitude of UHECR variability in Cen A to ≈ 100. The larger radius, and a consequently stronger echo, might be appropriate for galaxies such as M82 which are more strongly star-forming.
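The quoted reduction follows directly from the inverse-area scaling; a one-line check:

```python
# Required variability factor scales as r_sc^-2: 300 kpc -> 800 kpc haloes.
print(700 * (300 / 800) ** 2)   # ~98, i.e. ~100 as quoted
```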
As discussed previously in other papers (Matthews et al. 2018), evidence exists supporting the possibility that Cen A possessed enhanced activity in its 'recent' history. Specifically, the inferred age of the synchrotron-emitting electrons in the giant radio lobes, ∼ 20 − 30 Myr (Hardcastle et al. 2009), is comparable to the timescales of the echo waves considered here. Furthermore, Cen A's giant lobes have an estimated total energy content of ∼ 10^(59-60) erg (Wykes et al. 2013; Eilek 2014). If inflated over a similar ∼ 20 Myr timescale, this would require a mean jet power of ∼ 10^(44-45) erg s^-1, some 1.5-15% of Cen A's potential Eddington luminosity, 7 × 10^45 erg s^-1, assuming a black hole mass of 5.5 × 10^7 M_⊙ (Cappellari et al. 2009). These luminosity estimates for Cen A are consistent with the kinetic energy luminosity required for the source to be considered capable of accelerating UHECRs, as discussed in section 2 (see Eqn. 1).

Figure 10. Composition-dependent skymap in Galactic coordinates (Hammer-Aitoff projection) for Model B, the declining source scenario. The left-hand panel (purple) shows the results for 1 < ln A < 1.5, spanning the He mass range (A = 3 − 4), and the right-hand panel (orange) shows the results for ln A > 3.5, spanning the Fe mass range (A = 34 − 56). As in Fig. 6, the plots are constructed using the HEALPix scheme and a Gaussian smoothing function of 20°. In this model, 'echo' features from the CoG members at large angular distances from Cen A are only significant in the higher mass bin (ln A > 3.5), and the low-mass bin is dominated by relatively He-rich CRs that were accelerated more recently by Cen A.

Figure 11. As Fig. 10, but for Model C, the delayed escape scenario. In this model, 'echo' features from the CoG members at large angular distances from Cen A are only significant in the lighter mass bin (1 < ln A < 1.5), and the high-mass bin is dominated by Fe-rich CRs that have escaped Cen A more recently and scattered off NGC 4945, M83 and Circinus.
Predictions and Outlook
The hotspot maps obtained from our simulations, shown in Figs. 10 and 11, can be compared with the full-sky joint PAO/TA hotspot map, combining the data from the PAO above 40 EeV and from TA above 53 EeV (di Matteo et al. 2020b, see their Fig. 4, which also highlights the alignment of both the local sheet and supergalactic planes with these hotspots). As apparent from a comparison of these simulation maps with the observational map, consistency between them can be found for either the model B or the model C simulation scenario.
The advent of AugerPrime (Castellina & Pierre Auger Collaboration 2019) provides an exciting test bed for further probing the anisotropy signature reported by the PAO (Aab et al. 2018b; Abreu et al. 2022) and TA (Abbasi et al. 2014). Observations by AugerPrime are anticipated to allow the composition of air showers to be probed on a shower-by-shower basis. We consider here how composition-dependent skymaps will allow the model explored here to be tested. Although models B and C lead to apparently similar skymaps at a time period of around 20 Myr after the Cen A outburst event (see Figs 7 and 8), the composition-dependent skymaps for this same time window, shown in Figs 10 and 11, are noticeably different.
One general feature found is that the He-like fluxes from the Cen A region and from the CoG region away from Cen A are strongly different. In model B, the He flux in the echo signal from the region away from Cen A is small compared to the He flux from the Cen A region. In contrast, in model C, the He signal from the region away from Cen A is large compared to the He signal from the Cen A region.
The geometrical nature of the delayed signal from Cen A, produced by UHECR echoes off the CoG structure, predicts a similar level of brightness of the UHECR signal from sources on the same isodelay contour (see fig. 5), as can be appreciated from figs 7 and 8. Furthermore, the composition of the signal echoed off objects on the same isodelay contour should also match, as can be appreciated from figs 10 and 11. The approximately equal brightness of the sources seen in these results, however, partly comes from the assumption of equal-sized scattering regions for all CoG members. This assumption was made on the basis of simplicity rather than from observational motivations. In contrast to this dependence on the underlying assumptions, the expectation of equal composition from objects on the same isodelay contour is a more robust prediction, being insensitive to the scattering region size.
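The isodelay geometry can be made concrete with a short calculation. The sketch below (our illustration; Cen A's position, l ≈ 309.5°, b ≈ 19.4°, d ≈ 3.8 Mpc, and the conversion c ≈ 0.307 Mpc/Myr are standard values assumed here, with scatterer coordinates taken from Table A1) computes the ballistic echo arrival time (d₁ + d₂)/c for a few CoG members:

```python
import numpy as np

def cartesian(l_deg, b_deg, d_mpc):
    """Galactic (l, b, distance) -> Cartesian position in Mpc."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    return d_mpc * np.array([np.cos(b) * np.cos(l),
                             np.cos(b) * np.sin(l),
                             np.sin(b)])

C = 0.3066                               # speed of light in Mpc/Myr
cen_a = cartesian(309.5, 19.4, 3.8)      # assumed Cen A position

scatterers = {                           # (l, b, distance) from Table A1
    "NGC 4945": (305.27, 13.34, 3.3),
    "M83":      (314.58, 31.97, 4.9),
    "M81":      (142.09, 40.91, 3.7),
    "NGC 253":  (97.36, -87.96, 3.5),
}

for name, (l, b, d) in scatterers.items():
    pos = cartesian(l, b, d)
    d1 = np.linalg.norm(pos - cen_a)     # Cen A -> scatterer leg
    d2 = d                               # scatterer -> Milky Way leg
    print(f"{name:9s} echo arrival ~ {(d1 + d2) / C:5.1f} Myr")
```

With these inputs, NGC 4945 echoes at ≈ 13 Myr, just after the ≈ 12 Myr direct wave, M83 lies near the ≈ 21 Myr contour, and M81 and NGC 253 lie near the ≈ 33 Myr contour, consistent with the signal times quoted in the conclusions below.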
Aspects of our findings here are more general than the specific Cen A source scenario that we consider. Provided that the primary UHECR source resides sufficiently close, the composition of the direct and echoed waves of UHECRs, following their release from the source, offers a key diagnostic for probing both the location of the UHECR source and the local magnetic environment.
CONCLUSIONS
Here we explore a potential origin of the observed correlation of UHECRs with nearby extragalactic structure, reported by the PAO above an energy of 40 EeV (Aab et al. 2018b; Abreu et al. 2022) and by the TA above an energy of 50 EeV (Abbasi et al. 2014). Specifically, we investigate whether such a correlation can result from the echo signal of UHECRs, originally accelerated and released by Cen A, off the local extragalactic structure, developing further a scenario initially considered by Bell & Matthews (2022).
Focussing our attention on the CoG structure, the dominant extragalactic structure at distances < 10 Mpc from the MW, we consider ballistic propagation of UHECRs beyond 300 kpc from the members of the CoG, with the UHECRs undergoing large-angle scattering when approaching any member object more closely than this. We find that the propagation of a pulse of UHECRs from Cen A through this structure gives rise to three distinct signals. The first, at 12 Myr, is produced by the direct wave from Cen A. The second and third are the two echo waves, at 21 Myr and 33 Myr.
Beyond these pulse results, we additionally consider the effects introduced by Cen A's activity evolution over the last 30 Myr (model B) and by the rigidity dependence of UHECR escape from Cen A (model C). In both cases, we show that, under reasonable assumptions for these two processes, the late-time (> 30 Myr) skymap following the initial outburst from Cen A contains hotspots corresponding to the CoG members (see figs 7 and 8).
Through consideration of the propagation of He and Fe nuclear species in the UHECR signal, and of the photo-disintegration of these species en route, we obtain composition-dependent skymaps. These skymaps are produced by a mixture of direct and echoed signals. We demonstrate that the apparent degeneracy between the late-time skymaps of models B and C can be broken using the spatial distribution of the light-component regions (see figs 10 and 11). Furthermore, the echo origin of the correlation with CoG objects quite generally predicts a common signal composition from all CoG members located on a common isodelay contour (see fig. 5).
Our results suggest that the use of "composition clocks", that is, UHECR composition as a measure of the travel time of the UHECRs as a function of arrival direction and/or energy, has more general and exciting prospects as a probe of the UHECR time domain, with the potential for testing the UHECR echo model as well as other UHECR source scenarios.
ACKNOWLEDGMENTS
AT acknowledges support from DESY (Zeuthen, Germany), a member of the Helmholtz Association HGF. JHM acknowledges funding from the Royal Society and, previously, from the Herchel Smith Fund at the University of Cambridge. ARB acknowledges the support of an Emeritus Fellowship from the Leverhulme Trust. We would like to thank Alan Watson, Foteini Oikonomou, Yakov Faerman and Arjen van Vliet for helpful discussions. This work was performed using resources provided by the Cambridge Service for Data Driven Discovery (CSD3) operated by the University of Cambridge Research Computing Service (www.csd3.cam.ac.uk), provided by Dell EMC and Intel using Tier-2 funding from the Engineering and Physical Sciences Research Council (capital grant EP/T022159/1), and DiRAC funding from the Science and Technology Facilities Council (www.dirac.ac.uk). We gratefully acknowledge the use of the following software packages: healpy (Zonca et al. 2019), astropy (Astropy Collaboration et al. 2013, 2018) and matplotlib (Hunter 2007).
DATA AVAILABILITY
Data and accompanying scripts to reproduce Figures 1 to 5 in this paper, together with animations of all skymaps and Fig. 3, are available in a github repository (https://github.com/jhmatthews/uhecr-echo-vis) with an associated Zenodo DOI: 10.5281/zenodo.7634625. The additional raw data to reproduce the skymaps are available from the authors upon request.
Figure 1. The "Council of Giants" within the Local Sheet: a 2D diagram of the source (Cen A; pink circle), observer (Milky Way; blue "+") and 9 scattering galaxies (black circles) used in this work. The solid black line marks a circle of radius 3.746 Mpc, centred at (0.362, 0.718) Mpc (see "×" in the diagram), as defined by McCall (2014). The object positions in the diagram are provided in local sheet coordinates, in which the objects are predominantly located in a single plane.

Figure 3. Particle position maps from a slice of thickness 0.6 Mpc about the mid-plane, from Model A at four timesteps (3.9 Myr, 11.7 Myr, 20.6 Myr, 33.3 Myr), following the impulsive release of particles from Cen A. The corresponding plots for models B and C can be found in the appendix, and an animated version can be found in the supplementary material. The position maps are presented as binned particle densities with bin sizes of 0.03 Mpc and a density floor of 10^-10 bin^-1 in arbitrary units.

Figure 4. The CR particle density in a local box of size 300 kpc, centered on the Milky Way location, following the injection of particles from Cen A. The three colours show results from the three models considered: the single pulse (blue), declining source (orange) and rigidity-dependent escape (green). The red dotted line shows an exponential with a decay time of 3 Myr, and the dashed vertical lines mark t = 11.7, 20.6, 33.3 Myr, the times at which the particle positions in Fig. 3 and the skymaps in Figs 6, 7 and 8 are shown.

Figure 5. A family of 'isodelay contours', which form concentric ellipses with a variety of eccentricity values, colour-coded by the ballistic time of arrival. The ellipses are plotted as dashed lines from 11.7 to 52.8 Myr at 2.94 Myr intervals, with additional thick solid lines overlaid for t = 20.6 Myr and t = 33.3 Myr (see also Fig. 4). The two focal points of the ellipses are centered on Cen A and the Milky Way, respectively, and the positions of the Council of Giants are marked with open circles. The relationship between the ballistic arrival time and eccentricity is given in the text, with larger eccentricities corresponding to earlier arrivals.

Figure 6. Three skymaps in Galactic coordinates (Hammer-Aitoff projection) from Model A at 11.7 Myr (top), 20.6 Myr (middle), and 33.3 Myr (bottom) after the impulsive cosmic ray release from Cen A. The colour-scale encodes the number of particles per HEALpix pixel, initially calculated with 32 × 32 pixels covering the sky, which has then been smoothed with a Gaussian symmetric beam with full-width at half-maximum of 20°. Animations of all skymaps are available in an online repository (see Data Availability).

Figure 7. Skymap in Galactic coordinates (Hammer-Aitoff projection) at 33.3 Myr, for Model B, the declining source scenario, for which a decay time of τ_dec = 3 Myr has been adopted. The map is calculated in the same way as in Fig. 6.

Figure 8. As Fig. 7, but for Model C, the delayed escape scenario in which particles have a rigidity-dependent escape time from equation 6 with τ_10 = 1.5 Myr.

Figure 9. As Fig. 7, but for Model B with NGC 4945 removed from the simulation. Removing the shielding impact of NGC 4945 results in a stronger excess in the direction of NGC 253 at southern Galactic latitudes, as discussed further in the text.
Figure 2. A skymap showing the positions in the sky of the Council of Giants/Local Sheet objects (M83, Cen A, M81 & M82, NGC 253, IC 342, Circinus, NGC 4945, M94, M64, Maffei 1 & 2). Cen A is marked with a pink circle, the other council members are marked with black circles and the supergalactic plane is shown as a dotted line.
Table A1. Object names, Galactic coordinates (l, b), distances, stellar masses (M*), 12 μm luminosities (L_12μm), and estimated SFRs of the CoG members included as UHECR scatterers in our simulations. Distances are taken from McCall (2014, table 1). Stellar masses, infrared luminosities, and estimated SFRs of objects are taken from the WISE catalogue for extended sources (Jarrett et al. 2019).

Galaxy      l (°)     b (°)    Distance (Mpc)   M* (10^10 M_⊙)   L_12μm (10^9 L_⊙)   est. SFR (M_⊙ yr^-1)
NGC 253     97.36    -87.96    3.5               1.7              3.5                  5.4
M64        315.68     84.42    5.0              11.5              1.3                  2.3
M81        142.09     40.91    3.7               7.1              0.4                  0.8
M82        141.41     40.57    3.5               1.3              7.8                 10.7
M83        314.58     31.97    4.9               2.7              3.4                  5.2
M94        123.36     76.01    4.5               3.8              0.9                  1.6
NGC 4945   305.27     13.34    3.3               1.2              1.8                  3.0
IC 342     138.17     10.58    3.4               2.7              2.1                  3.5
Maffei 1   135.86     -0.55    3.3               6.2              -                    -
Maffei 2   136.50     -0.33    3.4               1.2              0.9                  1.5
Circinus   311.33     -3.81    4.3               1.5              6.2                  8.8
APPENDIX A: TABLE OF GALAXY PROPERTIES

In Table A1, we show the complete list of the CoG objects included in our calculations, together with their positions, stellar masses, infrared luminosities, and estimated SFRs. References for the sources of these estimates and measurements are given in the table caption, as are the symbol definitions.
Aab A., et al., 2017a, JCAP, 04, 038
Aab A., et al., 2017b, Science, 357, 1266
Aab A., et al., 2018a, ApJ, 853, L29
Aab A., et al., 2018b, ApJ, 853, L29
Aab A., et al., 2020, ApJ, 891, 142
Abbasi R. U., et al., 2014, ApJ, 790, L21
Abbasi R. U., et al., 2020, ApJ, 898, L28
Abreu P., et al., 2022, ApJ, 935, 170
Astropy Collaboration et al., 2013, A&A, 558, A33
Astropy Collaboration et al., 2018, AJ, 156, 123
Bell A. R., Matthews J. H., 2022, MNRAS, 511, 448
Biteau J., et al., 2019, in European Physical Journal Web of Conferences, p. 01005 (arXiv:1905.04188)
Blandford R. D., 2000, Physica Scripta Volume T, 85, 191
Blandford R. D., McKee C. F., 1982, ApJ, 255, 419
Bregman J. N., Schulman E., Tomisaka K., 1995, ApJ, 439, 155
Bregman J. N., Hodges-Kluck E., Qu Z., Pratt C., Li J.-T., Yun Y., 2022, ApJ, 928, 14
Cappellari M., Neumayer N., Reunanen J., van der Werf P. P., de Zeeuw P. T., Rix H. W., 2009, MNRAS, 394, 660
Castellina A., Pierre Auger Collaboration, 2019, in European Physical Journal Web of Conferences, p. 06002 (arXiv:1905.04472)
Croston J. H., et al., 2009, MNRAS, 395, 1999
Ehlert D., Oikonomou F., Unger M., 2022
Eichmann B., Rachen J. P., Merten L., van Vliet A., Becker Tjus J., 2018, JCAP, 2018, 036
Eilek J. A., 2014, New Journal of Physics, 16, 045001
Elmouttie M., Haynes R. F., Jones K. L., Sadler E. M., Ehle M., 1998, MNRAS, 297, 1202
Faerman Y., Werk J. K., 2023, arXiv e-prints, arXiv:2302.00692
Faerman Y., Sternberg A., McKee C. F., 2020, ApJ, 893, 82
Faucher-Giguere C.-A., Oh S. P., 2023
Franceschini A., Rodighiero G., Vaccari M., 2008, A&A, 487, 837
Giacinti G., Kachelriess M., Semikoz D. V., Sigl G., 2011, Astroparticle Physics, 35, 192
Górski K. M., Hivon E., Banday A. J., Wandelt B. D., Hansen F. K., Reinecke M., Bartelmann M., 2005, ApJ, 622, 759
Gupta A., Mathur S., Krongold Y., Nicastro F., Galeazzi M., 2012, ApJ, 756, L8
Hardcastle M. J., Cheung C. C., Feain I. J., Stawarz Ł., 2009, MNRAS, 393, 1041
Heckman T. M., Armus L., Miley G. K., 1990, ApJS, 74, 833
Hillas A. M., 1984, ARA&A, 22, 425
Hooper D., Sarkar S., Taylor A. M., 2007, Astroparticle Physics, 27, 199
Hunter J. D., 2007, Computing in Science & Engineering, 9, 90
Jarrett T. H., Cluver M. E., Brown M. J. I., Dale D. A., Tsai C. W., Masci F., 2019, ApJS, 245, 25
Jones F. C., 1994, ApJS, 90, 561
Khan E., Goriely S., Allard D., Parizot E., Suomijarvi T., Koning A. J., Hilaire S., Duijvestijn M. C., 2005, Astroparticle Physics, 23, 191
Lang R. G., Taylor A. M., Ahlers M., de Souza V., 2020, Phys. Rev. D, 102, 063012
Lehner N., et al., 2020, ApJ, 900, 9
Linsley J., 1963, Phys. Rev. Lett., 10, 146
Lovelace R. V. E., 1976, Nature, 262, 649
Macquart J. P., et al., 2020, Nature, 581, 391
Martynenko N., 2022, MNRAS, 511, 843
Matthews J. H., Bell A. R., Blundell K. M., Araudo A. T., 2018, MNRAS, 479, L76
Matthews J. H., Bell A. R., Blundell K. M., Araudo A. T., 2019, MNRAS, 482, 4303
McCall M. L., 2014, MNRAS, 440, 405
Nicastro F., et al., 2018, Nature, 558, 406
Norman C. A., Melrose D. B., Achterberg A., 1995, ApJ, 454, 60
O'Sullivan S., Reville B., Taylor A. M., 2009, MNRAS, 400, 248
Pakmor R., et al., 2020, MNRAS, 498, 3125
Peterson B. M., 1993, PASP, 105, 247
Pietsch W., Vogler A., Klein U., Zinnecker H., 2000, A&A, 360, 24
Rieger F. M., Aharonian F. A., 2009, A&A, 506, L41
Saikia D. J., Jamrozy M., 2009, Bulletin of the Astronomical Society of India, 37, 63
Sheridan K. V., 1958, Australian Journal of Physics, 11, 400
Taylor A. M., Ahlers M., Aharonian F. A., 2011, Phys. Rev. D, 84, 105007
Taylor A. M., Ahlers M., Hooper D., 2015, Phys. Rev. D, 92, 063011
Tully R. B., Shaya E. J., Karachentsev I. D., Courtois H. M., Kocevski D. D., Rizzi L., Peel A., 2008, ApJ, 676, 184
Tumlinson J., Peeples M. S., Werk J. K., 2017, ARA&A, 55, 389
Waxman E., 2004, Pramana, 62, 483
Wilde M. C., et al., 2021a, ApJ, 912, 9
Wilde M. C., et al., 2021b, ApJ, 912, 9
Wykes S., et al., 2013, A&A, 558, A19
Wykes S., Taylor A. M., Bray J. D., Hardcastle M. J., Hillas M., 2018, Nucl. Part. Phys. Proc., 297-299, 234
Zonca A., Singer L., Lenz D., Reinecke M., Rosset C., Hivon E., Gorski K., 2019, The Journal of Open Source Software, 4, 1298
di Matteo A., et al., 2020a, arXiv e-prints, arXiv:2001.01864
di Matteo A., et al., 2020b, PoS, ICRC2019, 439
van Vliet A., Palladino A., Taylor A., Winter W., 2022, MNRAS, 510, 1289
van de Voort F., Bieri R., Pakmor R., Gómez F. A., Grand R. J. J., Marinacci F., 2021, MNRAS, 501, 4888
| [
"https://github.com/jhmatthews/"
]
|
[
"Pressure Fluctuations in Natural Gas Networks caused by Gas-Electric Coupling",
"Pressure Fluctuations in Natural Gas Networks caused by Gas-Electric Coupling"
]
| [
"Misha Chertkov [email protected] \nLANL Los Alamos\nEECS, MIT\nT-4 & CNLS, DSA-4, LANLLos Alamos, CambridgeNM, NM, MA\n",
"Michael Fisher [email protected] \nLANL Los Alamos\nEECS, MIT\nT-4 & CNLS, DSA-4, LANLLos Alamos, CambridgeNM, NM, MA\n",
"MPA, LANLScott Backhaus \nLANL Los Alamos\nEECS, MIT\nT-4 & CNLS, DSA-4, LANLLos Alamos, CambridgeNM, NM, MA\n",
"Los Alamos \nLANL Los Alamos\nEECS, MIT\nT-4 & CNLS, DSA-4, LANLLos Alamos, CambridgeNM, NM, MA\n",
"Nm Backhaus@lanl Gov \nLANL Los Alamos\nEECS, MIT\nT-4 & CNLS, DSA-4, LANLLos Alamos, CambridgeNM, NM, MA\n",
"Russell Bent [email protected] \nLANL Los Alamos\nEECS, MIT\nT-4 & CNLS, DSA-4, LANLLos Alamos, CambridgeNM, NM, MA\n",
"Sidhant Misra [email protected] \nLANL Los Alamos\nEECS, MIT\nT-4 & CNLS, DSA-4, LANLLos Alamos, CambridgeNM, NM, MA\n"
]
| [
"LANL Los Alamos\nEECS, MIT\nT-4 & CNLS, DSA-4, LANLLos Alamos, CambridgeNM, NM, MA",
"LANL Los Alamos\nEECS, MIT\nT-4 & CNLS, DSA-4, LANLLos Alamos, CambridgeNM, NM, MA",
"LANL Los Alamos\nEECS, MIT\nT-4 & CNLS, DSA-4, LANLLos Alamos, CambridgeNM, NM, MA",
"LANL Los Alamos\nEECS, MIT\nT-4 & CNLS, DSA-4, LANLLos Alamos, CambridgeNM, NM, MA",
"LANL Los Alamos\nEECS, MIT\nT-4 & CNLS, DSA-4, LANLLos Alamos, CambridgeNM, NM, MA",
"LANL Los Alamos\nEECS, MIT\nT-4 & CNLS, DSA-4, LANLLos Alamos, CambridgeNM, NM, MA",
"LANL Los Alamos\nEECS, MIT\nT-4 & CNLS, DSA-4, LANLLos Alamos, CambridgeNM, NM, MA"
]
| []
| The development of hydraulic fracturing technology has dramatically increased the supply and lowered the cost of natural gas in the United States, driving an expansion of natural gas-fired generation capacity in several electrical interconnections. Gas-fired generators have the capability to ramp quickly and are often utilized by grid operators to balance intermittency caused by wind generation. The time-varying output of these generators results in time-varying natural gas consumption rates that impact the pressure and line-pack of the gas network. As gas system operators assume nearly constant gas consumption when estimating pipeline transfer capacity and for planning operations, such fluctuations are a source of risk to their system. Here, we develop a new method to assess this risk. We consider a model of gas networks with consumption modeled through two components: forecasted consumption and small spatio-temporarily varying consumption due to the gasfired generators being used to balance wind. While the forecasted consumption is globally balanced over longer time scales, the fluctuating consumption causes pressure fluctuations in the gas system to grow diffusively in time with a diffusion rate sensitive to the steady but spatially-inhomogeneous forecasted distribution of mass flow. To motivate our approach, we analyze the effect of fluctuating gas consumption on a model of the Transco gas pipeline that extends from the Gulf of Mexico to the Northeast of the United States. | 10.1109/hicss.2015.330 | [
"https://arxiv.org/pdf/1507.06601v1.pdf"
]
| 3,976,420 | 1507.06601 | 030a5997a6e41b21eea38207e1b1266c2c139156 |
Pressure Fluctuations in Natural Gas Networks caused by Gas-Electric Coupling
Misha Chertkov, Michael Fisher, Scott Backhaus, Russell Bent, Sidhant Misra

LANL, Los Alamos, NM (T-4 & CNLS, DSA-4, MPA); EECS, MIT, Cambridge, MA; EECS, U of Michigan, Ann Arbor, MI

Index Terms: Natural Gas Networks; Gas-Electric Coupling; Stochasticity; Reliability
The development of hydraulic fracturing technology has dramatically increased the supply and lowered the cost of natural gas in the United States, driving an expansion of natural gas-fired generation capacity in several electrical interconnections. Gas-fired generators have the capability to ramp quickly and are often utilized by grid operators to balance intermittency caused by wind generation. The time-varying output of these generators results in time-varying natural gas consumption rates that impact the pressure and line-pack of the gas network. As gas system operators assume nearly constant gas consumption when estimating pipeline transfer capacity and for planning operations, such fluctuations are a source of risk to their system. Here, we develop a new method to assess this risk. We consider a model of gas networks with consumption modeled through two components: forecasted consumption and small spatio-temporarily varying consumption due to the gasfired generators being used to balance wind. While the forecasted consumption is globally balanced over longer time scales, the fluctuating consumption causes pressure fluctuations in the gas system to grow diffusively in time with a diffusion rate sensitive to the steady but spatially-inhomogeneous forecasted distribution of mass flow. To motivate our approach, we analyze the effect of fluctuating gas consumption on a model of the Transco gas pipeline that extends from the Gulf of Mexico to the Northeast of the United States.
I. INTRODUCTION
A dominant new load on gas pipeline systems is natural gas-fired generators [1], [2]. An example of this dramatic change is seen on the gas pipelines that supply the electrical grid controlled by the Independent System Operator of New England (ISO-NE), where natural gas-fired electrical generation increased from 5% of total capacity to 50% in a span of 20 years [3]. A parallel development in many U.S. electrical grids is the expansion of intermittent renewable generation such as wind and photovoltaic (PV) generation, a trend that is expected to continue as utilities work to meet renewable portfolio standards [4], [5] that mandate a certain fraction of electrical generation be derived from renewable sources. In contrast to traditional coal, hydro or gas-fired generation, these intermittent renewable generators have limited controllability. To maintain balance of generation and load, other grid resources must respond to counteract these new fluctuations. Although many different types of advanced control of non-traditional resources are under consideration to provide balancing services, e.g. grid-scale battery storage and demand response, the control of fast-responding traditional generation (i.e. gas) is the current state of practice.

Gas pipelines have traditionally supplied Load Distribution Companies (LDCs) that primarily serve space- or water-heating loads, which evolve slowly throughout the day in a relatively well-known pattern that is predicted based on historical information and weather forecasts. Other traditional pipeline customers are industrial loads that change from day to day, but are very predictable over the span of twenty-four hours. The combination of expanded natural gas-fired generation and its use to balance intermittent renewable generation is creating loads on natural gas pipelines that are significantly different from historical behavior and will challenge the current pipeline operating paradigm that is used to control gas pressure.

The flow in a natural gas pipeline is determined via bilateral transactions between buyers and sellers in a day-ahead market, with market clearing and gas flow scheduling done in advance of the subsequent 24-hour period of gas delivery. Scheduling consists of determining the locations and constant rates of gas injections. The initial market clearing assumes that gas consumptions are uniform over the subsequent 24-hour delivery period. Over the gas day, gas buyers improve their estimate of actual gas needs, and mid-course corrections are allowed through the transaction and scheduling of gas flows in two subsequent intra-day markets at 10 and 14 hours after the start of the 24-hour delivery period. When serving traditional gas loads, the variability during the gas day is relatively small and slow and is well managed by linepack, i.e. compressed gas stored in the pipeline.

The pressure in a gas transmission pipeline ranges between a maximum set by engineering limits and a minimum delivery pressure set by contracts. A typical maximum pressure is around 800 psi, and flow of the gas causes the pressure to fall along the pipeline. As the minimum pressure (∼ 500 psi) is approached, gas compressors installed along the pipeline are used to boost the pressure back near the maximum. Typical spacing between compressors is ∼ 50-100 km. The relatively high operating pressures enable large gas transfer rates, and the spread between maximum and minimum pressure allows the pipeline to operate with an imbalance of gas injections and consumptions for hours at a time.
An injection-consumption imbalance modifies the amount of gas stored in the pipeline via pressure changes, i.e. changes to the linepack. Linepack is sufficient to buffer the imbalance when serving traditional gas loads. However, the hydraulic fracturing-driven expansion of natural gas-fired generation capacity [6] and its use to balance intermittent renewable fluctuations will result in larger and faster fluctuations in consumption (and possibly production), creating challenges to historical pipeline operations and reliability. The analysis in this manuscript is motivated by these new challenges.

Our approach is built on top of any solution of the steady gas flow problem that determines the spatial dependence of gas flow and pressure and the dispatch of gas compressors to maintain pressure. For example, the steady flow solution can be found by solving an optimal gas flow (OGF) problem [7] or using a model that approximates compressor dispatch decisions in current gas pipeline operations. Using these steady solutions, we build on the ideas of [8] and develop analysis tools to provide a probabilistic measure of the impact on pipeline reliability created by stochastic deviations of gas consumption from the forecasted values used in the steady solution (and scheduled during market clearing). These new tools are based on a linearization of the basic gas flow equations around the forecasted solution that retains the effect of stochasticity in consumption. The effect of this stochasticity is assessed on a model of the Transco gas pipeline that extends from the Gulf of Mexico to the Northeast of the United States (see Fig. 1 and [7]).

This manuscript builds on recent work [7], [8] to develop a theoretical and computational approach to analyze the evolution of pressure in a gas system over time and space when the system is imbalanced. The three main contributions of this approach are:
• An analysis of the spatiotemporal behavior of line-pack when the pipeline is subjected to stochastic gas consumption. We observe that, even when the fluctuations of consumption and production are zero on average, the pressure fluctuations grow diffusively with time. We coin the term "diffusive jitter" of pressure fluctuations to describe this effect.
• We show that the diffusive jitter of pressure is a non-local phenomenon in which the pressure swings at one location depend on the behavior at all other locations.
• We show that diffusive jitter is spatially inhomogeneous and dependent on the spatial distribution of the forecasted (stationary) solution.
The rest of the manuscript is organized as follows. Sections II and III provide a technical introduction to gas pipeline modeling. Section IV provides a brief summary of approaches used to solve the steady gas flow problem. Section V describes a generalization of [8] that linearizes the gas flow equations around the steady solution while retaining the effect of stochastic gas consumption. The asymptotic solution of these linearized equations describes the diffusive jitter of the pressure fluctuations. Section VI applies the theoretical results to a model of the Transco pipeline. Section VII summarizes our main results and offers a brief discussion of future work.
II. DYNAMIC GAS FLOW (DGF) OVER A SINGLE PIPE
Before analyzing a pipeline network, we introduce the gas flow equations and notation for a single pipe. Major transmission pipelines are typically 16-48 inches in diameter and operate at high pressures (e.g. 200 to 1500 psi) and high mass flows (millions of cubic feet of gas per day) [9], [10]. Under these conditions, the pressure drop and energy loss due to shear are modeled by a nearly constant phenomenological friction factor f. The resulting gas flow model is a set of nonlinear partial differential equations (PDEs) with one spatial dimension x (along the pipe axis) and one time dimension [11], [12], [13]:
$$\partial_t \rho + \partial_x(u\rho) = 0, \qquad (1)$$

$$\partial_t(\rho u) + \partial_x(\rho u^2) + \partial_x p = -\frac{f}{2d}\,\rho u|u| - \rho g \sin\alpha, \qquad (2)$$

$$p = \rho Z R T. \qquad (3)$$
Fig. 2. Schematic illustration of a gas network and associated notation. a) A schematic illustration of a single edge (i, j) of a network. Nodes at either end are indicated by open circles and labeled by their nodal pressures p_i and p_j. Compressors are indicated with filled squares. Mass flow φ_ij is directed from i to j and nodal injections q_i and q_j contribute to this flow. Nodal pressure p_i is modified by the compression ratio α_{i→j}, yielding p_ij(x_ij = 0). The pressure falls along {i, j}, reaching p_ij(x_ij = L_ij). If compressor α_{j→i} is not present, then p_ij(x_ij = L_ij) = p_j. b) A schematic of many edges connected in a meshed network. Nodes are indexed by i = 0, 1, ..., where node 0 is typically reserved for the swing node, the node where pressure is maintained constant throughout the dynamics. Compressors, injections and edge mass flows are the same as in a).

Here, u, p, and ρ are the spatially-dependent velocity, pressure, and density, respectively; Z is the gas compressibility factor; T is the temperature, R is the gas constant, and d is the diameter of the pipe. Eqs. (1, 2, 3) describe mass conservation, momentum balance and the ideal gas thermodynamic relation, respectively. The first term on the right-hand side (rhs) of Eq. (2) describes the friction losses in the pipe. The second term on the rhs of Eq. (2) includes the gain or loss of momentum due to gravity g when the pipe is tilted by angle α. The frictional losses typically dominate the gravity term, which is often dropped. Because the flow velocities are usually small compared to the sound velocity, the gas inertia term ∂_t(ρu) and the advection term ∂_x(ρu²) are typically small compared to the frictional losses and can also be dropped [11], [12], [13]. For simplicity of presentation, we have also assumed that the temperature does not change significantly along the pipe. Under these assumptions, Eqs. (1, 2, 3) are rewritten in terms of the pressure p and the mass flux φ = uρ:
$$c_s^{-2}\,\partial_t p + \partial_x \phi = 0, \qquad (4)$$

$$\partial_x p + \frac{\beta}{2d}\,\frac{\phi|\phi|}{p} = 0, \qquad (5)$$
where $c_s \equiv \sqrt{ZRT}$ and $\beta \equiv fZRT$ are considered constant. The solution of Eqs. (4, 5) for $t \in [0, \tau]$ and $x \in [0, L]$ requires initial and boundary conditions,
$$\forall x \in [0, L]: \quad \phi(0; x) = \phi_0(x), \qquad (6)$$

$$\forall t: \quad \phi(t; 0) = q^{(in)}(t), \quad \phi(t; L) = q^{(out)}(t), \qquad (7)$$

which are consistent, i.e. $\phi_0(0) = q^{(in)}(0)$ and $\phi_0(L) = q^{(out)}(0)$, in addition to fixing the initial pressure at the beginning of the pipe, e.g. $p(0; 0) = p_0$.
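To make the structure of Eqs. (4)-(7) concrete, the following sketch (ours, with illustrative parameter values, not the paper's code) integrates the friction-dominated system on a single pipe with an explicit scheme: Eq. (5) determines the mass flux from the gradient of p², and Eq. (4) then advances the pressure:

```python
import numpy as np

# Illustrative parameters (assumed values, not from the paper)
L, N = 100e3, 200                 # pipe length (m), number of cells
dx = L / N
Z, R, T, f, d = 0.9, 518.0, 293.0, 0.01, 0.9
cs2 = Z * R * T                   # c_s^2, with c_s = sqrt(ZRT)
beta = f * Z * R * T              # beta = f ZRT, defined after Eq. (5)

p = np.full(N, 5.0e6)             # initial pressure: uniform 5 MPa
q_in, q_out = 180.0, 180.0        # boundary mass fluxes (kg m^-2 s^-1), Eq. (7)
dt = 0.02                         # s; small step for explicit stability

for _ in range(50_000):
    # Eq. (5) on interior faces: phi|phi| = -(d/beta) d(p^2)/dx
    dp2 = np.diff(p**2) / dx
    phi = np.concatenate(([q_in],
                          np.sign(-dp2) * np.sqrt(d / beta * np.abs(dp2)),
                          [q_out]))
    # Eq. (4): dp/dt = -c_s^2 d(phi)/dx
    p -= dt * cs2 * np.diff(phi) / dx

# with balanced in/out flux the profile relaxes toward the steady
# solution of Eq. (14) below
print(f"inlet {p[0]/1e6:.2f} MPa -> outlet {p[-1]/1e6:.2f} MPa")
```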
III. DYNAMIC GAS FLOW (DGF) OVER A NETWORK

Next, we generalize the equations for a single pipe to a gas network. The network is modeled by a graph G = (V, E) with vertices V and edges E, where the edges are directed (i, j) or undirected {i, j} depending on the context. Each vertex i ∈ V represents a node with a gas mass injection or consumption rate q_i. Each edge (i, j) ∈ E is a single pipe with mass flow φ_ij. The flow along each edge is described by a set of PDEs adapted from Eqs. (4, 5):
$$\forall t \in [0, \tau],\ \forall \{i,j\} \in E,\ \forall x \in [0, L_{ij}]:$$

$$c_s^{-2}\,\partial_t p_{ij}(t,x) + \partial_x \phi_{ij}(t,x) = 0, \qquad (8)$$

$$\partial_x p_{ij}(t,x) + \frac{\beta}{2d}\,\frac{\phi_{ij}(t,x)\,|\phi_{ij}(t,x)|}{p_{ij}(t,x)} = 0, \qquad (9)$$
where $p_{ij}(t,x)$ and $\phi_{ij}(t,x)$ are the pressure and mass flow, respectively, at time t and position x along edge (i, j) of length $L_{ij}$. Here, $p_{ij} = p_{ji}$, $\phi_{ij} = -\phi_{ji}$, and $L_{ij} = L_{ji}$. See Fig. 2a for a schematic description of the variables. The flow of gas creates a pressure gradient, and compressor stations (potentially located at both ends of each edge {i, j}) are used to boost pressure. $\alpha_{i\to j}$ denotes the compression ratio of the station adjacent to node i that boosts pressure for flow toward node j, while $\alpha_{j\to i}$ denotes the compression ratio adjacent to node j that boosts pressure for flow toward node i. We choose to place compressors at both ends of every edge for generality, which also simplifies the notation in the following discussion. In reality there is no more than one compressor on any particular edge of the graph, and α = 1 when there is no compressor. Note that $\alpha_{i\to j}$ may be larger or smaller than 1, allowing the modeling of compression or decompression. If only compression is allowed, then $\alpha_{i\to j} \ge 1$. The schematic in Fig. 2 displays the spatial relationships between nodes, edges, and compression ratios. These are expressed mathematically as
$$\forall t \in [0,\tau],\ \forall (i,j) \in E: \quad p_{ij}(t,0) = p_{i\to j}(t), \quad p_{ij}(t, L_{ij}) = p_{j\to i}(t), \quad p_{i\to j} = p_i\,\alpha_{i\to j}, \quad p_{j\to i} = p_j\,\alpha_{j\to i}, \qquad (10)$$
where $p_i$ and $p_{i\to j}$ are the pressure at node i and the pressure after the compression ratio $\alpha_{i\to j}$, respectively. If there is no compressor, then $\alpha_{i\to j} = 1$. Under current operating practices, compression ratios do not change frequently.¹ Thus, we assume that $\alpha_{i\to j}$ does not depend on time. Eqs. (9, 10) are complemented with mass conservation at all nodes of the network:
$$\forall t \in [0,\tau],\ \forall i \in V: \quad \sum_{j:(i,j)\in E} \phi_{ij}(t, 0) = q_i(t). \qquad (11)$$
When the gas injections $q(t) = (q_i(t)\,|\,i \in V)$ are given for $t \in [0, \tau]$, the nodal conditions (11) generalize the single-pipe boundary conditions in (7) to a pipe network. Eqs. (8, 9, 10, 11) constitute a complete set of equations describing the Dynamic Gas Flow (DGF) problem when they are supplemented with compression ratios, i.e. $\alpha = (\alpha_{i\to j}\,|\,(i,j) \in E)$, initial conditions on the flows
$$t = 0,\ \forall \{i,j\} \in E,\ \forall x_{ij} \in [0, L_{ij}]: \quad \phi_{ij}(0; x_{ij}) = \phi^{(in)}_{ij}(x_{ij}), \qquad (12)$$
and the pressure at one slack node, $p_{i=0}(0) = p_0$.
IV. OPTIMUM GAS FLOW APPROACHES
Later in the manuscript, we analyze the DGF problem by linearizing the fluctuations about a stationary solution. Here, we summarize two approaches to finding this stationary solution. We first solve a stationary version of the DGF problem, i.e., the Gas Flow (GF) problem, where the steady-state pressures and flows are expressed in terms of compression ratios. The time-independent compression ratios are then determined via solution of the OGF problem.

¹ Compressors are automated to a degree, in that they are run in modes; the mode considered in this manuscript is a constant compression ratio. Pressure fluctuations at the inlet of a compressor are then amplified at the outlet of that compressor. Other modes of compressor operation may lead to different results for the fluctuation amplitude. There is relatively fast local control on the compressor to maintain this ratio; however, the set points for the ratio are updated very rarely. Adjusting this set point is the main effect the pipeline operator has on the system, and this adjustment is done rarely throughout the operating day.
A. Stationary Gas Flow
In the GF problem, all input parameters (consumptions/injections, compression ratios, and the pressure at the slack bus) are constant in time. The total injection and consumption are balanced:
$$\sum_{i\in V} q_i^{(st)} = 0. \qquad (13)$$
The steady solution of Eq. (8) is a uniform mass flow along each pipe in the network, $\forall \{i,j\}: \phi_{i\to j} = \text{const}$. Substituting this result into Eq. (9) and integrating over space yields the algebraic relationship between the pressure at position $x \in [0, L_{ij}]$, compression, and the (constant) flow through the pipe:
$$\forall (i,j) \in E: \quad p^{(st)}_{i\to j} = p^{(st)}_i\,\alpha_{i\to j}; \qquad \left(p^{(st)}_{ij}(x)\right)^2 = \left(p^{(st)}_{i\to j}\right)^2 - \frac{\beta x}{d}\,\phi^{(st)}_{ij}\left|\phi^{(st)}_{ij}\right|. \qquad (14)$$
The GF problem has a unique solution provided the compression ratios are known, and the GF solution in (14) is the basis for many approaches to solving the OGF problem.
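A minimal numerical sketch of Eq. (14) (ours; all parameter values are illustrative assumptions, not Transco data) chains the stationary solution through a line of pipes and applies a greedy compressor rule of the kind described in the next subsection:

```python
import numpy as np

Z, R, T, f, d = 0.9, 518.0, 293.0, 0.01, 0.9   # illustrative gas/pipe parameters
beta = f * Z * R * T

def pipe_outlet(p_in, phi, L):
    """Outlet pressure of one pipe segment from Eq. (14)."""
    return np.sqrt(p_in**2 - (beta * L / d) * phi * abs(phi))

p, phi = 5.5e6, 180.0            # ~800 psi inlet; constant mass flux (kg m^-2 s^-1)
p_min, alpha_max = 3.4e6, 1.6    # ~500 psi contractual floor; max compression
for seg in range(10):            # ten 80 km segments with a compressor at each joint
    p = pipe_outlet(p, phi, 80e3)
    if p < p_min:                # greedy rule: boost only when the floor is reached
        p = min(alpha_max * p, 5.5e6)
    print(f"after segment {seg}: {p/1e6:.2f} MPa")
```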
B. Optimum Gas Flow
The solution to the GF problem leaves the time-independent compression ratios α unknown. These are chosen by the pipeline operators based on a combination of economic and operational factors. Here, we describe two approaches for selecting the α. The first is a greedy algorithm that approximates current pipeline operations in the US. The guiding principle is that compressors are activated when the pressure prior to the next compressor drops below the acceptable lower bound. When activated, a compressor is set to its maximum compression ratio $\bar{\alpha}$. This algorithm is described in detail in [7]. The second approach is based on solutions to the optimal gas flow (OGF) problem [14], [15], [16], [7]. Here, we summarize a Geometric Programming (GP) approach to solving the OGF problem that minimizes the total compressor power used to move the gas. The total power used in pipeline gas compression (assuming that the gas is ideal and compression is isentropic) is
$$\sum_{(i,j)\in E} \frac{c_{i\to j}\,\phi^{(st)}_{ij}}{\eta_{i\to j}}\left(\max\{\alpha^m_{i\to j}, 1\} - 1\right), \qquad (15)$$
where $c_{i\to j}$ is a constant that depends on the compressor, $m = (\gamma - 1)/\gamma$ where γ is the gas heat-capacity ratio, and $\eta_{i\to j}$ is the efficiency factor of the compressor. It is important to note that fluctuations caused by compressor consumption are negligible when compared to gas loads. The term $\phi^{(st)}_{ij}$ denotes the directional mass flow for edge {i, j} when the edge is oriented from i to j. The OGF formulation assumes that the flow through the compressor is from i to j, i.e., $\phi^{(st)}_{ij} > 0$; thus the direction of flow must be selected beforehand. For tree networks, the magnitudes and directions of the flows can be computed exactly a priori and do not depend on the choice of compression ratios. For an edge {i, j}, let $G_i$ and $G_j$ be the two disjoint subgraphs obtained by removing (i, j). The flows $\phi^{(st)}_{ij}$ are computed as

$$\phi^{(st)}_{ij} = \sum_{k\in G_i} q^{(st)}_k = -\sum_{k\in G_j} q^{(st)}_k. \qquad (16)$$
In networks with loops, flow direction is chosen using heuristics or through the introduction of binary variables [16].
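The tree-flow computation of Eq. (16) can be implemented in a few lines. The sketch below (ours; the function and example values are made up for illustration) roots the tree and accumulates subtree injections:

```python
from collections import defaultdict

def tree_edge_flows(edges, q):
    """edges: (i, j) pairs forming a tree; q: node -> injection (kg/s),
    positive = injection, negative = consumption; sum must balance, Eq. (13).
    Returns {(i, j): mass flow oriented from i to j}, per Eq. (16)."""
    assert abs(sum(q.values())) < 1e-9, "injections must balance"
    adj = defaultdict(list)
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    flows = {}

    def subtree_injection(node, parent):
        total = q[node]
        for nb in adj[node]:
            if nb != parent:
                total += subtree_injection(nb, node)
        if parent is not None:
            # the flow on (parent, node) equals minus the net injection of
            # the subtree hanging below node (the G_j side of Eq. 16)
            flows[(parent, node)] = -total
        return total

    subtree_injection(next(iter(q)), None)
    return flows

# Example: line network 0 - 1 - 2; injection at node 0 serves loads at 1 and 2
print(tree_edge_flows([(0, 1), (1, 2)], {0: 30.0, 1: -10.0, 2: -20.0}))
# -> {(1, 2): 20.0, (0, 1): 30.0}
```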
Using the cost function in Eq. (15), the OGF problem is formulated as
$$\min_{\alpha,\,p}\ \sum_{(i,j)\in E} \frac{c_{i\to j}\,\phi_{ij}}{\eta_{i\to j}}\left(\max\{\alpha^m_{i\to j}, 1\} - 1\right) \qquad (17)$$

$$\text{s.t.} \quad \forall (i,j) \in E: \quad \alpha^2_{i\to j} = \frac{p_j^2 + \frac{\beta L_{ij}}{d_{ij}}\,\phi^2_{ij}}{p_i^2}, \qquad (18)$$

$$\forall i \in V: \quad 0 \le \underline{p}_i \le p_i \le \overline{p}_i, \qquad (19)$$

$$\forall (i,j) \in E: \quad \underline{\alpha}_{i\to j} \le \alpha_{i\to j} \le \overline{\alpha}_{i\to j}, \qquad (20)$$
where Eq. (18) is obtained from Eq. (14). The upper bound in Eq. (19) represents engineering limits on pipes and the lower bound represents contractual obligations. The upper and lower bounds in Eq. (20) refer to the maximum allowed compression and decompression at each compressor. If decompression is not allowed, $\underline{\alpha}_{i\to j} = 1$.
There are a variety of methods for solving the OGF over trees, and in this paper we use the geometric programming (GP) approach described in [7]. The GP approach relaxes the lower bound $\underline{\alpha}_{i\to j}$ in Eq. (20) and recasts the problem in transformed variables,

$$\text{s.t.} \quad 2\log(\underline{p}_i) \le \beta_i \le 2\log(\overline{p}_i), \qquad (21)$$

$$0 \le t_{ij} \le \log(\bar{\alpha}_{ij}), \qquad (22)$$

$$\log\!\left(e^{\beta_j - \beta_i - t_{ij}} + \delta^1_{ij}\, e^{-\beta_i - t_{ij}}\right) \le 0, \qquad (23)$$

$$\forall (i,j) \in E. \qquad (24)$$
The transformed variables are related to the original ones via the following equations:

$$p_i^2 = e^{\beta_i}, \qquad \delta^1_{ij} = \frac{\beta L_{ij}}{d_{ij}}\,\phi^2_{ij}. \qquad (25)$$
The OGF in Eqs. (21)-(24) is solved using convex optimization. When decompression is not allowed ($\underline{\alpha}_{i\to j} = 1$ in Eq. (20)), we use a signomial programming (SP) method, which is a heuristic version of GP based on solving a sequence of convex programs [7]. Here, we use SP to solve the OGF.
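To see where the constraint (23) comes from (our own short derivation; here we read $t_{ij}$ as $\log \alpha^2_{i\to j}$), substitute the transformed variables of Eq. (25) into Eq. (18):

$$\alpha^2_{i\to j} = \frac{p_j^2 + \delta^1_{ij}}{p_i^2} = \frac{e^{\beta_j} + \delta^1_{ij}}{e^{\beta_i}} = e^{t_{ij}} \;\Longrightarrow\; e^{\beta_j - \beta_i - t_{ij}} + \delta^1_{ij}\,e^{-\beta_i - t_{ij}} = 1.$$

Relaxing the equality to $\le 1$ and taking the logarithm gives Eq. (23). The left-hand side is a log-sum-exp of functions affine in $(\beta, t)$ and is therefore convex, which is what makes the relaxed OGF amenable to convex optimization.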
V. DIFFUSIVE JITTER OF PRESSURE FLUCTUATIONS
The main contribution of this manuscript builds on the solution of the OGF by introducing a model of stochastic gas consumption and analyzing its effects on pressure fluctuations, the diffusive jitter. Our approach linearizes the DGF equations (Eqs. 8, 9) around a solution to the GF problem (augmented with compression ratios from the OGF or the greedy compression scenario).
The linearized model captures the relationship between the fluctuating consumption and the fluctuating pressure. Asymptotically, the accumulated changes in pressure provide an indication of how fast the pressure will drift (the jitter) and exceed an operating limit in the absence of operator intervention. As the DGF solution drifts further from the original GF solution, the quality of the linearization degrades. However, we expect that the linearized solution to the DGF remains a strong relative indicator of how quickly a system will experience problems due to stochastic consumption. The solution approach is an extension of the work in [8].

Formally, the stochastic consumption is defined by $q(t) = q^{(st)} + \xi(t)$, where the components of $\xi(t) = (\xi_i(t)\,|\,i \in V)$ are time-varying but relatively small in comparison to $q^{(st)}$. We assume a linearized solution of the DGF problem of the form $p(t) = p^{(st)} + \delta p(t)$ and $\phi(t) = \phi^{(st)} + \delta\phi(t)$, where the respective corrections are small, i.e. $|\delta p(t)| \ll p^{(st)}$ and $|\delta\phi(t)| \ll \phi^{(st)}$. The linearized versions of Eqs. (8, 9, 10, 11) are

$$\forall t \in [0,\tau],\ \forall \{i,j\} \in E,\ \forall x \in [0, L_{ij}]:$$

$$c_s^{-2}\,\partial_t\,\delta p_{ij} + \partial_x\,\delta\phi_{ij} = 0, \qquad (26)$$

$$\partial_x\,\delta p_{ij} + \frac{\beta}{2d}\left(\frac{\delta\phi_{ij}\,\big|\phi^{(st)}_{ij}\big|}{p^{(st)}_{ij}} + \frac{\phi^{(st)}_{ij}\,|\delta\phi_{ij}|}{p^{(st)}_{ij}} - \frac{\delta p_{ij}\,\phi^{(st)}_{ij}\big|\phi^{(st)}_{ij}\big|}{\big(p^{(st)}_{ij}\big)^2}\right) = 0, \qquad (27)$$

$$\forall t \in [0,\tau],\ \forall (i,j) \in E: \quad \delta p_{i\to j} = \delta p_i\,\alpha_{i\to j}, \qquad (28)$$
Following [8], we solve Eqs. (26, 27) for each pipe using a proposed solution of the form
$$\delta p_{ij} = a_{ij}(t)\,Z_{ij}(x) + b_{ij}(t, x), \qquad (31)$$
where the $a_{ij}(t)$ depend on time. In [8], it was argued that the $a_{ij}(t)Z_{ij}(x)$ term represents the asymptotic contribution to the gas pressure fluctuations that grows in time. In contrast, $b_{ij}(t, x)$ represents smaller contributions to the pressure fluctuations that do not grow in time. Here, we focus on the contribution from the $a_{ij}(t)Z_{ij}(x)$ term, which is asymptotically dominant at long times. Substitution of the proposed solution (31) into Eqs. (26, 27) yields an equation for $Z_{ij}$, i.e.
$$\partial_x Z_{ij} - \frac{\beta}{2d}\,\frac{\phi^{(st)}_{ij}\big|\phi^{(st)}_{ij}\big|}{\big(p^{(st)}_{ij}\big)^2}\,Z_{ij} = 0, \qquad (32)$$
where $Z_{ij}(x)$ counts x from node i. Integrating Eq. (32) using the spatial dependence of the stationary profile (14) yields
$$Z_{ij}(x) = \frac{p^{(st)}_{i\to j} + p^{(st)}_{j\to i}}{2\,p^{(st)}_{ij}(x)}, \qquad (33)$$
where the normalization constant is chosen to guarantee,
where $c_{ij} = c_{ji}$ is an edge-specific constant.
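As a consistency check (our own two-line verification), the profile (33) indeed solves Eq. (32): since $Z_{ij} \propto 1/p^{(st)}_{ij}$,

$$\partial_x Z_{ij} = -\frac{Z_{ij}}{p^{(st)}_{ij}}\,\partial_x p^{(st)}_{ij} = \frac{\beta}{2d}\,\frac{\phi^{(st)}_{ij}\big|\phi^{(st)}_{ij}\big|}{\big(p^{(st)}_{ij}\big)^2}\,Z_{ij},$$

where the second equality uses the stationary momentum balance, Eq. (9), $\partial_x p^{(st)}_{ij} = -(\beta/2d)\,\phi^{(st)}_{ij}|\phi^{(st)}_{ij}|/p^{(st)}_{ij}$.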
To compute the global time-dependent factor a(t), we sum the mass conservation equation over all the nodes of the graph,

$$\sum_{i\in V} \xi_i = \sum_{\{i,j\}\in E} \left(\delta\phi_{ij}(t,0) - \delta\phi_{ij}(t,L_{ij})\right), \qquad (36)$$

integrate over time and define

$$\Xi(t) \doteq \int_0^t dt' \sum_{i\in V} \xi_i(t'), \qquad (37)$$

and finally sum Eq. (35) over all edges:

$$a(t) = \frac{c_s^2\,\Xi(t)}{\sum_{\{i,j\}\in E} c_{ij}}. \qquad (38)$$
Therefore, $\forall t,\ \forall \{i,j\} \in E,\ x \in [0, L_{ij}]$:

$$\delta p_{ij}(t,x) \approx \frac{c_s^2\,\Xi(t)}{\sum_{\{k,l\}\in E} c_{kl}}\; c_{ij}\, Z_{ij}(x). \qquad (39)$$

The unknown edge constants $c_{ij}$ are derived by substituting Eq. (39) into Eqs. (28, 35), yielding

$$\forall i,\ \forall j,k \ \text{s.t.}\ (i,j),(i,k)\in E:\quad \frac{c_{ij}\, Z_{ij}(0)}{\alpha_{i\to j}} = \frac{c_{ik}\, Z_{ik}(0)}{\alpha_{i\to k}}. \qquad (40)$$
Eqs. (39, 40, 33) express the complete asymptotic (zero-mode) solution of the DGF problem. Finally, we make several observations to connect this solution for the pressure fluctuations to a probability distribution over the pressure fluctuations. First, the random gas load fluctuations $\xi_i(t)$ are zero-mean, temporally homogeneous, and relatively short-correlated in both time (the correlation time is less than τ) and space (the correlation length is less than the spatial extent of the network). Second, the fluctuations of $\delta p_{ij}$ in Eq. (39) are given by a time integral and spatial sum of the fluctuations. According to Large Deviation theory, these observations imply that the pressure fluctuations form a Gaussian random process which jitters diffusively in time. Specifically, the Probability Distribution Function (PDF) of $\delta p_{ij}(t,x)$ is
$$P\left(\delta p_{ij}(t,x) = \delta\right) \to \left(2\pi t D_{ij}(x)\right)^{-1/2} \exp\!\left(-\frac{\delta^2}{2\,t\,D_{ij}(x)}\right), \qquad (41)$$

$$D_{ij}(x) = \left(\frac{c_s^2\,c_{ij}\,Z_{ij}(x)}{\sum_{\{k,l\}\in E} c_{kl}}\right)^{2}\left\langle\left(\sum_{n\in V}\xi_n(t')\right)^{2}\right\rangle, \qquad (42)$$
where the correlation function on the right-hand side does not depend on t' due to the assumed statistical homogeneity of ξ.
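The diffusive growth predicted by Eq. (41) is easy to reproduce numerically. The sketch below (ours; the time step, which doubles as the assumed fluctuation correlation time, and all other values are illustrative) builds Ξ(t) from white-noise nodal fluctuations and checks that the variance of the zero-mode pressure fluctuation grows linearly in t:

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_steps, n_trials = 70, 2000, 500
dt = 60.0                   # s; also the assumed correlation time of xi
sigma = 20.0 / 3.0          # per-node fluctuation scale, kg/s (cf. Eq. 43)
prefactor = 1.0             # c_s^2 c_ij Z_ij(x) / sum(c_kl), set to 1 here

# summed nodal fluctuations: N independent zero-mean nodes per step
xi_sum = sigma * np.sqrt(n_nodes) * rng.standard_normal((n_trials, n_steps))
Xi = np.cumsum(xi_sum, axis=1) * dt      # Eq. (37)
delta_p = prefactor * Xi                 # Eq. (39), up to Z_ij(x)

t = dt * np.arange(1, n_steps + 1)
ratio = delta_p.var(axis=0) / t          # -> constant D, per Eq. (41)
print("Var[dp]/t at late times:", ratio[-3:])
```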
VI. NUMERICAL EXPERIMENTS
Inspection of Eq. (42) shows that the variance of the pressure fluctuations as a function of position in the network is related to the coefficients $D_{ij}(x)$, referred to collectively as D. Higher values of D correspond to larger pressure fluctuations and a higher likelihood of the pressure violating an engineering or contractual limit. By analogy with related physical processes, the coefficients D are similar to a diffusion coefficient, and we refer to them this way in the remainder of the manuscript. The origins of D are primarily twofold. Once the gas consumptions and injections are fixed, the spatial dependence of D arises from the particular stationary solution of pressures, flows, and compression ratios through the $Z_{ij}(x)$. The magnitude of D is also set by the average global strength of the consumption fluctuations, $\langle(\sum_{n\in V}\xi_n(t'))^2\rangle$.

We apply the results described above to the Transco pipeline shown schematically in Fig. 1. We use data for the total consumption at each node over a 24-hour period from December 29, 2012 to fix the forecasted consumption for the stationary GF solution. These data represent relatively stressed operations for the Transco pipeline. The Transco pipeline has a small number of loops, which we partition to create a tree topology [7] that is very nearly linear but with a few small branches. We resolve these branches in the solution of the GF (or OGF) problem; however, when analyzing the pressure fluctuations, we aggregate these short branches into nodal consumptions and only analyze the fluctuations as a function of distance along the mainline.

The Transco operational data do not include information on the deviations of the gas flows from their average or scheduled values. Instead, we estimate the global mean-square consumption fluctuations as
$$\left\langle\left(\sum_{n\in V}\xi_n(t')\right)^{2}\right\rangle \approx \left(\frac{\phi_0}{3}\right)^{2} N. \qquad (43)$$
Here, $\phi_0 \approx 20$ kg/s is a typical average consumption for a node in the Transco pipeline, and $N \approx 70$ is the number of consumption nodes, e.g. city-gates or power plants. This estimate of the gas consumption fluctuations assumes that the fluctuations at neighboring nodes are uncorrelated. If these neighboring nodes are gas-fired turbine generators that are both being used to balance renewable fluctuations, the assumption of independence may lead to an underestimation in Eq. (43). For presentation purposes, it is convenient to find a suitable normalization for D. Motivated by Eqs. (41, 43), we normalize D by
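Before turning to the normalization, a quick numerical check of the estimate (43) (our arithmetic):

$$\Big\langle\Big(\sum_{n\in V}\xi_n\Big)^2\Big\rangle \approx \left(\frac{20\ \mathrm{kg/s}}{3}\right)^2 \times 70 \approx 3.1\times10^{3}\ \mathrm{kg^2\,s^{-2}},$$

corresponding to a root-mean-square global imbalance of ≈ 56 kg/s, roughly three times the average consumption of a single node.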
To determine the effect of overall consumption and injection on D, we uniformly scaled the base case consumptions and injections by a constant factor-a scaling that preserves the balance of consumptions and injections required for the existence of a stationary solution. Figure 4 displays the results for D computed using the OGF stationary solutions. The stationary solutions show small local differences in D caused by the deployment of compression. However, the major impact stems from the increase (or decrease) in mass flows. The uniform scaling does not affect the location of the flow reversal, so the peak in D appears at the same place. However, the larger (smaller) flows lead to faster (slower) growth rates for Z i j (see Eq. 32) and an overall higher (lower) peak in D.
In recent years the Marcellus Shale has become a large supplier of gas, and its injection capability is expected to increase [17]. To model the effect of this expansion we scaled all injections from the Marcellus Shale by a constant factor and removed a corresponding amount of gas from the injections at the Gulf to preserve the global balance of consumption and injection. Fig. 5 displays the results for D along the Transco pipeline for the OGF solution. Although the injection from the Marcellus is increased, the major gas load centers in New York and New Jersey keep the flow reversal point, and therefore the peak in D, pinned at more or less the same location. Larger Marcellus injections show slightly lower peak amplitudes of D indicating that, as gas injections are shifted from the Gulf to the Marcellus Shale, the reliability of pipeline operations is improved. We conjecture that this change is due to moving the source of gas injections closer to the major load centers. However, we again note that the OGF methods used to find the stationary solution do not account for the expected pressure fluctuations and their inclusion will likely lead to a modification of these results. Although scaling overall consumption and Marcellus supply are directly relevant to operators and planners, they do not exhibit a shift in the location of the peak in D or the appearance of multiple local maxima. To study these possibilities, we next imposed some less realistic changes. In particular, the loads in New York are shifted to points closer to the Gulf and Marcellus Shale, but the New Jersey loads were left unaffected. retirement of coal and fuel oil-fired generation in favor of natural gas-fired generation because of environmental concerns and the increased availability and low cost of natural gas and the ability of gas-fired generation to respond quickly to the variability of renewable generation. Although gas pipelines have the ability store gas in the form of increased pressure in the piepline, i.e. linepack, this storage is limited. In the future, linepack will be increasingly exercised as more gasfired generation is used to balance increasing amounts of wind generation. Larger swings in gas pipeline pressure (linepack) affect the ability of the pipeline to deliver gas to the generators, creating reliability implications that cascaded across these two infrastructures.
In this initial work, we have assessed the impact of fluctuating consumption by gas generators on pipeline pressure. We start by splitting the gas flow equations for pipelines into two parts. The first is a stationary part that is time-independent and reflects the gas flows scheduled by the gas markets and the gas compressor deployment determined by the pipeline operator. The second is a representation of the fluctuations around the scheduled flows, created by linearizing the gas flow equations about the scheduled flows and compressor operations. From this linearized model, we can predict the probability that a set of stochastic gas loads will cause the pipeline pressure to violate an engineering or contractual pressure limit and create a reliability concern for the pipeline operator or the electrical grid operator. By making assumptions about the nature of the gas consumption fluctuations, this probability can be expressed in an algebraic form that is convenient for integration into a gas flow/gas compressor optimization problem, where the probability can be a constraint or part of the objective to limit the likelihood of a pipeline reliability issue.

We applied the theoretical results to a realistic model based on the Transco pipeline. Our computational experiments with the Transco model revealed the following interesting observations. First, the probability of large pressure fluctuations is highest at locations in the pipeline where the gas flow experiences a reversal (in and around the New York/New Jersey area for the Transco pipeline). Second, increasing the stress on the pipeline by increasing gas flow rates leads to higher probabilities of large pressure fluctuations. Third, rearranging pipeline flows, e.g. by increasing purchases from the Marcellus Shale at the expense of gas from the Gulf, can decrease the probability of large pressure fluctuations by moving the gas sources closer to the gas loads. The results of this paper suggest a number of interesting directions for future research.
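A minimal sketch of such a violation-probability estimate, assuming Gaussian fluctuations whose variance grows linearly in time; the paper's Eqs. (41,42) are algebraic expressions of this type, and the numbers below (operating pressure, limit, fluctuation scale) are hypothetical:

```python
from math import erf, sqrt

def violation_probability(p_st, p_max, D, t):
    """P(p_st + delta_p > p_max) for a Gaussian delta_p with variance D*t."""
    sigma = sqrt(D * t)
    z = (p_max - p_st) / sigma
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))  # one-sided Gaussian tail

# 650 psi operating pressure, 800 psi limit, fluctuation std of 84 psi
# after 15 minutes (so D*t = 84**2 with t = 900 s).
print(violation_probability(650.0, 800.0, D=84.0**2 / 900.0, t=900.0))
```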
• The linearization and asymptotic assumptions described here need to be validated against direct dynamic (transient) simulations of gas flows in a variety of situations. Most existing work on such validations [18], [12], [19], [20], [21], [22] uses single pipe models. The challenge is to develop fast computational algorithms for transient problems with mixed (initial and boundary) conditions over large and loopy gas networks.
• Our dynamic method applies to gas networks with loops, back flows, bi-directional compression and other complications. We will extend the experimental study to other current and planned networks in the U.S. and Europe.
• Extending the probabilistic risk framework to the complications mentioned above requires extending the methods of [7] to create efficient optimization algorithms for gas networks with loops.
• Compressor positions are assumed fixed by the OGF and greedy solution methods presented here. However, compressor position has a significant effect on pressure fluctuations in its vicinity. A future direction will be to formulate a compressor dispatch scheme and use it to analyze the effect of varying compressor position on pressure fluctuations near the compressors.
• Incorporation of the probabilistic risk measures into OGF formulations to directly account for this risk. A promising direction is the chance-constrained methodology developed in [23]; a sketch of such a constraint follows this list.
• This work suggests new mathematical, statistical and computational foundations necessary to address the comprehensive strategic problems of re-organizing the existing system of energy trading (in the U.S. and elsewhere). Such a reorganization is required to reduce inefficiencies in how power and gas markets interact [1], [2], [24].
• It is not realistic to expect (at least not in the U.S.) that gas and power markets will merge in the near future. However, it is important to account for the effects of mutual dependencies. In particular, incorporating the effects of gas pressure fluctuations and uncertainty into the planning and operation of power systems with significant penetrations of renewables, and with gas turbines involved in balancing the renewable fluctuations, is a very promising direction for future research. On the other hand, it is just as important to account for the effect of ramps in gas consumption at generators on the gas flow optimization.
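The chance-constrained idea mentioned above can be stated compactly. A minimal sketch, assuming Gaussian pressure fluctuations with variance D(x)t as derived here (the formulation of [23] itself addresses power flows, not gas):

```latex
\mathbb{P}\!\left[\, p^{(st)}(x) + \delta p(t,x) \le p_{\max} \right] \ge 1-\varepsilon
\quad\Longleftrightarrow\quad
p^{(st)}(x) + z_{1-\varepsilon}\,\sqrt{D(x)\,t} \;\le\; p_{\max},
```

where z_{1−ε} is the standard normal quantile; the left form is the probabilistic constraint and the right form is its deterministic equivalent, suitable for direct inclusion in an OGF-type optimization.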
Fig. 1. Schematic representation of the Transco gas transmission network.
, i.e. α_{i→j} = 0. Under this relaxation, the OGF is transformed into a GP in the m_{t_ij}, ∀i ∈ V.

∀{i, j} ∈ E: δp_ij(t, 0) = δp_{i→j}(t),  δp_ij(t, L_ij) = δp_{j→i}(t),   (29)
∀t ∈ [0, τ], ∀i ∈ V: Σ_{j:(i,j)∈E} δφ_ij(t, 0) = ξ_i(t).   (30)

We seek asymptotic solutions to the PDE of Eqs. (26, 27, 28, 29, 30), where asymptotic implies finding solutions for times τ longer than the correlation time of the fluctuating consumption ξ. In addition, we seek solutions of Eqs. (26, 27, 28, 29, 30) that connect the nodal quantities by algebraic relationships, thereby eliminating the complexity of the original PDE.

for the time-dependent factor a_ij(t) by substituting δp_ij ∼ a_ij(t) Z_ij(x) into Eq. (26) and integrating the result over the entire spatial extent of the pipe {i, j}, yielding δφ_ij(t, 0) − δφ_ij(t, L).   (34)
In the asymptotic limit where δp_ij ∼ a_ij(t) Z_ij(x) for every pipe (graph edge), Eqs. (29) can only be satisfied if the a_ij(t) have the same functional dependence on time, i.e., ∀{i, j} ∈ E: a_ij(t) = a(t) c_ij,
p_0 = 800 psi ≈ 5.5 × 10^6 Pa is the upper bound on allowed pressure in the pipes, and t_0 = 15 min ≈ 10^3 s is a representative time period where we expect the developed theory to work well. We consider the base case of December 29th, 2012 and several modifications of this base case to investigate the effects of changing operations. Fig. 3 displays D as a function of location along the mainline for two different stationary solutions for the base case: the OGF solution described in Section IV and the greedy algorithm from [7]. For a characteristic time of 15 min ≈ 10^3 s, D/D_o = 1 corresponds to a variance in pressure fluctuations of (266 psi)^2. Pressures in the Transco Pipeline range between 500 psi and 800 psi, so the pressure fluctuation standard deviation is 33-53% of the pressures in the pipeline for D/D_o = 1. The same characteristic time and D/D_o = 0.1 yield a variance of (84 psi)^2, which gives pressure fluctuation standard deviations of 10-16% of pipeline pressures. Since the pressure variance grows linearly in time, often over several 15 min intervals, these fluctuations can quickly grow to exceed pressure bounds without proper intervention. As the plots show, most pressure fluctuations are above D/D_o = 0.1 throughout the pipeline, and therefore the fluctuations are of concern in any regions of the pipeline where the pressure is near its upper or lower bound. The two solutions display similarities. Both show a build up of D from milepost 800 nearer to the Gulf of Mexico, a peak at milepost 1771 near New York and New Jersey, and a decay to a smaller value at milepost 2000 near the injection point for the Marcellus Shale in Pennsylvania.
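A quick consistency check of the numbers just quoted, using the fact that the standard deviation scales like the square root of D:

```latex
\sigma\big|_{D/D_o=0.1} = \sqrt{0.1}\times 266~\mathrm{psi} \approx 84~\mathrm{psi},
\qquad
\frac{84}{800}\approx 0.105,\qquad \frac{84}{500}\approx 0.168,
```

i.e. the 10-16% range of pipeline pressures stated in the text.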
Fig. 3. Diffusion coefficient as a function of distance along the Transco mainline with stationary solutions given by the OGF and the greedy algorithm. Both show a peak at milepost 1771, but the magnitude of this peak is much higher for the greedy algorithm than for the OGF, indicating larger pressure fluctuations for the greedy algorithm.
Fig. 4. Diffusion coefficient as a function of distance along the Transco mainline with stationary solutions given by the OGF with global consumption and injection scaled by a uniform factor. All show a peak at milepost 1771, but higher scaling factors have higher magnitudes at their peaks, indicating larger pressure fluctuations for larger system loads.
Fig. 5. Diffusion coefficient as a function of distance along the Transco mainline with stationary solutions given by the OGF with Marcellus Shale injections scaled by a factor and the corresponding amount of injections removed from the Gulf. All show a peak at milepost 1771, but higher scaling factors have slightly lower magnitude at the peak, indicating smaller pressure fluctuations. Higher scaling factors also have much lower magnitudes in the Marcellus Shale, indicating smaller pressure fluctuations when injections are shifted from the Gulf to the Marcellus Shale.
Fig. 6. Diffusion coefficient as a function of distance along the Transco mainline with load redistributed from the large load in New York to the Gulf and Marcellus Shale, leaving the large load in New Jersey unaltered. This causes the appearance of a new global maximum at milepost 1319, which is the location of a large load in North Carolina. Since the New Jersey load was not redistributed, a local maximum remains at milepost 1771.
Fig. 7. Diffusion coefficient as a function of distance along the Transco mainline with load redistributed from the large loads in New York and New Jersey closer to the Gulf and Marcellus Shale. The global maximum at milepost 1319 remains, but the local maximum at milepost 1771 disappears since the large load has been removed from this area.
Bounds for the second term are derived and solved using inhomogeneous linear equations for b_ij.
[1] "The Future of Natural Gas," MIT Energy Initiative, http://mitei.mit.edu/system/files/NaturalGas_Report.pdf, 2010.
[2] "Growing concerns, possible solutions: The interdependency of natural gas and electricity systems," http://mitei.mit.edu/system/files/2014-MITEI-Report-Growing-Concerns-Possible-Solutions.pdf, 2014.
[3] "ISO New England: Addressing Gas Dependence," http://www.iso-ne.com/committees/comm_wkgrps/strategic_planning_discussion/materials/natural-gas-white-paper-draft-july-2012.pdf, 2012.
[4] "Renewable portfolio standards in the states: Balancing goals and implementation strategies," http://www.nrel.gov/docs/fy08osti/41409.pdf, 2007.
[5] "Levelized cost of electricity renewable energy technologies," http://www.ise.fraunhofer.de/en/publications/veroeffentlichungen-pdf-dateien-en/studien-und-konzeptpapiere/study-levelized-cost-of-electricity-renewable-energies.pdf, 2013.
[6] T. J. Considine, R. Watson, and S. Blumsack, "The Economic Impacts of the Pennsylvania Marcellus Shale Natural Gas Play: An Update," 2010.
[7] S. Misra, M. W. Fisher, S. Backhaus, R. Bent, M. Chertkov, and F. Pan, "Optimal compression in natural gas networks: a geometric programming approach," IEEE Transactions on Control of Network Systems (CONES), 2015.
[8] M. Chertkov, V. Lebedev, and S. Backhaus, "Cascading of Fluctuations in Interdependent Energy Infrastructures: Gas-Grid Coupling," ArXiv e-prints, Nov. 2014.
[9] CRANE, "Flow of fluids: Through valves, fittings and pipe," Crane Company, New York, Technical Paper 410M, 1982.
[10] S. Mokhatab, W. A. Poe, and J. G. Speight, Handbook of Natural Gas Transmission and Processing. Houston: Gulf Professional Publishing, 2006.
[11] A. Osiadacz, Simulation and Analysis of Gas Networks. Gulf Pub. Co., 1987. [Online]. Available: http://books.google.com/books?id=cMxTAAAAMAAJ
[12] A. Thorley and C. Tiley, "Unsteady and transient flow of compressible fluids in pipelines: a review of theoretical and some experimental studies," International Journal of Heat and Fluid Flow, vol. 8, no. 1, pp. 3-15, 1987. [Online]. Available: http://www.sciencedirect.com/science/article/pii/0142727X87900440
[13] S. A. Sardanashvili, Computational Techniques and Algorithms (Pipeline Gas Transmission) [in Russian]. FSUE Oil and Gaz, I.M. Gubkin Russian State University of Oil and Gas, 2005.
[14] P. Wong and R. Larson, "Optimization of natural-gas pipeline systems via dynamic programming," IEEE Transactions on Automatic Control, vol. 13, no. 5, pp. 475-481, 1968.
[15] S. Wu, R. Ríos-Mercado, E. Boyd, and L. Scott, "Model relaxations for the fuel cost minimization of steady-state gas pipeline networks," Mathematical and Computer Modelling, vol. 31, no. 2-3, pp. 197-220, 2000. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0895717799002320
[16] C. Borraz-Sánchez, "Optimization methods for pipeline transportation of natural gas," Ph.D. dissertation, Department of Informatics, University of Bergen, Norway, October 2010.
[17] "2013 special reliability assessment: Accommodating an increased dependence on natural gas for electric power. Phase II: A vulnerability and scenario assessment for the North American bulk power system," http://www.nerc.com/pa/RAPA/ra/Reliability%20Assessments%20DL/NERC_PhaseII_FINAL.pdf, 2013.
[18] A. Osiadacz, "Simulation of transient gas flows in networks," International Journal for Numerical Methods in Fluids, vol. 4, no. 1, pp. 13-24, 1984. [Online]. Available: http://dx.doi.org/10.1002/fld.1650040103
[19] W. Tao and H. Ti, "Transient analysis of gas pipeline network," Chemical Engineering Journal, vol. 69, no. 1, pp. 47-52, 1998. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S1385894797001095
[20] J. Zhou and M. A. Adewumi, "Simulation of transients in natural gas pipelines using hybrid TVD schemes," International Journal for Numerical Methods in Fluids, vol. 32, no. 4, pp. 407-437, 2000. [Online]. Available: http://dx.doi.org/10.1002/(SICI)1097-0363(20000229)32:4<407::AID-FLD945>3.0.CO;2-9
[21] C. Dorao and M. Fernandino, "Simulation of transients in natural gas pipelines," Journal of Natural Gas Science and Engineering, vol. 3, no. 1, pp. 349-355, 2011. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S1875510011000059
[22] R. Alamian, M. Behbahani-Nejad, and A. Ghanbarzadeh, "A state space model for transient flow simulation in natural gas pipelines," Journal of Natural Gas Science and Engineering, vol. 9, pp. 51-59, 2012. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S1875510012000662
[23] D. Bienstock, M. Chertkov, and S. Harnett, "Chance-constrained optimal power flow: Risk-aware network control under uncertainty," SIAM Review, vol. 56, no. 3, pp. 461-495, 2014. [Online]. Available: http://dx.doi.org/10.1137/130910312
[24] R. Tabors and S. Adamson, "Measurement of energy market inefficiencies in the coordination of natural gas & power," in System Sciences (HICSS), 2014 47th Hawaii International Conference on, Jan 2014, pp. 2335-2343.
| []
|
[
"arXiv:physics/0012037v1 [physics.atom-ph] Differential and partial cross sections of elastic and inelastic positronium-helium-atom scattering",
"arXiv:physics/0012037v1 [physics.atom-ph] Differential and partial cross sections of elastic and inelastic positronium-helium-atom scattering"
]
| [
"Sadhan K Adhikari \nInstituto de Física Teórica\nUniversidade Estadual Paulista\n01.405-900São Paulo, São PauloBrazil\n"
]
| [
"Instituto de Física Teórica\nUniversidade Estadual Paulista\n01.405-900São Paulo, São PauloBrazil"
]
| []
| Scattering of positronium (Ps) by helium atom has been investigated in a three-Ps-state coupled-channel model including Ps(1s,2s,2p) states using a recently proposed time-reversal-symmetric regularized electron-exchange model potential. Specifically, we report results of differential cross sections for elastic scattering and target-elastic Ps excitations. We also present results for total and different partial cross sections and compare them with experiment and other calculations. PACS Number(s): 34.10.+x, 36.10.Dr. With this objective we reinvestigate the problem of Ps scattering by He at higher energies using the time-reversal-symmetric form of the exchange potential. We consider the three-Ps-state coupled-channel model with Ps(1s,2s,2p) states for calculating different elastic and inelastic cross sections of Ps-He scattering. We calculate the different Ps-He differential cross sections, which are of great interest to experimentalists [5], in addition to the different angle-integrated partial cross sections. The differential cross sections carry detailed information about the scattering process. Cross sections for higher excitations and ionization of Ps are calculated by the Born approximation and added to the above Ps(1s,2s,2p) cross sections to yield the total cross section, which is compared with experiment. The theory for the coupled-channel study of Ps-He scattering with the regularized model potential has already appeared in the literature [7,12,13,15]. It is worthwhile to quote the relevant working equations here. For target-elastic Ps-He scattering we solve the following Lippmann-Schwinger scattering integral equation in momentum space for the total electronic doublet spin state | 10.1103/physreva.62.062708 | [
"https://export.arxiv.org/pdf/physics/0012037v1.pdf"
]
| 15,931,134 | physics/0012037 | 8da618fffd43053501abd303ef2242a4386ba8c3 |
arXiv:physics/0012037v1 [physics.atom-ph] Differential and partial cross sections of elastic and inelastic positronium-helium-atom scattering
17 Dec 2000
Sadhan K Adhikari
Instituto de Física Teórica
Universidade Estadual Paulista
01.405-900 São Paulo, São Paulo, Brazil
arXiv:physics/0012037v1 [physics.atom-ph] Differential and partial cross sections of elastic and inelastic positronium-helium-atom scattering
17 Dec 2000 (March 31, 2022)
Scattering of positronium (Ps) by helium atom has been investigated in a three-Ps-state coupled-channel model including Ps(1s,2s,2p) states using a recently proposed time-reversal-symmetric regularized electron-exchange model potential. Specifically, we report results of differential cross sections for elastic scattering and target-elastic Ps excitations. We also present results for total and different partial cross sections and compare them with experiment and other calculations. PACS Number(s): 34.10.+x, 36.10.Dr

With this objective we reinvestigate the problem of Ps scattering by He at higher energies using the time-reversal-symmetric form of the exchange potential. We consider the three-Ps-state coupled-channel model with Ps(1s,2s,2p) states for calculating different elastic and inelastic cross sections of Ps-He scattering. We calculate the different Ps-He differential cross sections, which are of great interest to experimentalists [5], in addition to the different angle-integrated partial cross sections. The differential cross sections carry detailed information about the scattering process. Cross sections for higher excitations and ionization of Ps are calculated by the Born approximation and added to the above Ps(1s,2s,2p) cross sections to yield the total cross section, which is compared with experiment.

The theory for the coupled-channel study of Ps-He scattering with the regularized model potential has already appeared in the literature [7,12,13,15]. It is worthwhile to quote the relevant working equations here. For target-elastic Ps-He scattering we solve the following Lippmann-Schwinger scattering integral equation in momentum space for the total electronic doublet spin state
Scattering of the exotic ortho-positronium atom, with its long lifetime (142 ns), by neutral gas atoms and molecules is of fundamental interest in both physics and chemistry. Recent high-precision measurements of positronium (Ps) scattering by H₂, N₂, He, Ne, Ar, C₄H₁₀, and C₅H₁₂ [1-6] have enhanced theoretical activity [7-11] in this subject. Due to internal symmetry, the direct static Born potential for elastic and even-parity transitions for these processes is zero, and exchange correlation plays an important role for a correct description at low energies [10,11].
Recently, we suggested [12] a regularized nonlocal electron-exchange model potential with a single parameter C and used it in the successful study of Ps scattering by H [13,14], He [12,15], Ne [15], Ar [15] and H₂ [16,17]. Our results were in agreement with experimental total cross sections [1,3], especially at low energies for He, Ne, Ar and H₂. In our initial calculations we used a non-symmetric form of the model exchange potential for Ps scattering by H [14], He [12], and H₂ [16]. Subsequent studies yielded improved results with a time-reversal-symmetric form of the model potential for Ps scattering by H [13] and H₂ [17]. For H it was found [13] that the symmetric potential yielded excellent results for S-wave singlet Ps-H binding and resonance energies, in agreement with accurate variational calculations [18]. The symmetric potential also led to very good results [15] for low-energy cross sections for Ps scattering by He, Ne, Ar, and H₂, in excellent agreement with experiment [3].
The problem of Ps-He scattering is of relevance to both experimentalists and theoreticians. Theoretically, it is the simplest of all Ps-scattering problems for which reliable experimental cross sections exist. Once a good theoretical understanding of this system is obtained, we can try to understand the problem of Ps scattering by complex atoms and molecules.
D_{κ,κ′} = (k_i² + k_f²)/8 + C²[(α_κ² + α_κ′²)/2 + (β_ν² + β_ν′²)/2]   (5)
where l and l′ are the angular momenta of the initial and final Ps states, k_i and k_f are the initial and final Ps momenta, Q = k_i − k_f, α_κ²/2 and α_κ′²/2 and β_ν² and β_ν′² are the binding energy parameters of the initial and final He orbitals and Ps states in atomic units, respectively, and C is the only parameter of the potential. Normally, the parameter C is taken to be unity, which leads to reasonably good results [15,17,23]. However, it can be varied slightly from unity to get a precise fit to a low-energy observable. This variation of C has no effect on the scattering observables at high energies, and the model exchange potential reduces to the Born-Oppenheimer exchange potential [19] at high energies. In the present study we use the value C = 0.84 throughout. This value of C leads to a very good fit of the elastic Ps-He cross section to the experiment of Skalsey et al. [3]. This exchange potential for Ps scattering is considered [12] to be a generalization of the Ochkur-Rudge exchange potential for electron scattering [20].
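As an illustration, Eq. (5) can be transcribed directly. In the sketch below, C = 0.84 is the paper's value and β = 0.5 a.u. follows from the Ps(1s) binding energy of 0.25 a.u.; the momenta and the He-orbital parameter α are illustrative assumptions, not values from the paper:

```python
def D_factor(k_i, k_f, C, alpha, alpha_p, beta, beta_p):
    """Direct transcription of Eq. (5); all quantities in atomic units."""
    return (k_i**2 + k_f**2) / 8.0 + C**2 * (
        (alpha**2 + alpha_p**2) / 2.0 + (beta**2 + beta_p**2) / 2.0
    )

# Elastic Ps(1s)He(1s1s) channel at k_i = k_f = 1 a.u.; alpha = 1.19 is a
# placeholder for the He(1s) binding-energy parameter.
print(D_factor(1.0, 1.0, 0.84, 1.19, 1.19, 0.5, 0.5))
```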
After a partial-wave projection, the system of coupled equations (1) is solved by the method of matrix inversion. A maximum number of partial waves J_max is included in solving the system of coupled equations. The differential and angle-integrated partial cross sections so calculated are augmented by Born results for higher partial waves J > J_max. A maximum of 40 Gauss-Legendre quadrature points are used in the discretization of each momentum-space integral. The calculations are performed with the exact Ps wave functions and the HF orbitals for the He ground state [21]. Although it is relatively easy to obtain converged results for angle-integrated partial cross sections, special care is needed to obtain converged results for differential cross sections at higher energies. Converged results for partial cross sections are obtained with J_max = 30 at all energies. For obtaining convergent differential cross sections, we need to take J_max = 150 partial waves at 100 eV. However, J_max = 30 is sufficient for obtaining convergent differential cross sections at 20 and 30 eV.
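The matrix-inversion scheme can be sketched in a single-channel toy setting. Only the structure (Gauss-Legendre discretization of the momentum integral followed by one linear solve) mirrors the method described; the kernel V and propagator G below are placeholders, not the paper's exchange potential:

```python
import numpy as np

n = 40                                      # quadrature points, as in the paper
x, w = np.polynomial.legendre.leggauss(n)
q = 5.0 * (x + 1.0) / 2.0                   # map nodes from [-1, 1] to [0, 5] a.u.
wq = 5.0 * w / 2.0                          # corresponding weights

V = np.exp(-np.abs(q[:, None] - q[None, :]))  # toy potential matrix element
G = 1.0 / (1.0 + q**2)                        # toy (regularized) propagator

# Discretized T = V + V G T  ->  (I - V diag(G w)) T = V, one matrix solve.
A = np.eye(n) - V * (G * wq)[None, :]
T = np.linalg.solve(A, V)
print(T.shape)
```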
Here we present results of Ps-He scattering using the three-Ps-state model that includes the following states: Ps(1s)He(1s1s), Ps(2s)He(1s1s), and Ps(2p)He(1s1s). The Born terms for the excitation of He are found to be small and are not considered here in the coupled-channel scheme. First, we present the elastic Ps(1s)He(1s1s) differential cross section and the inelastic differential cross sections to the Ps(2s)He(1s1s) and Ps(2p)He(1s1s) states at different energies.
In order to show the general trend of the differential cross sections, we perform calculations at the following incident positronium energies: 20, 30, 40, 60, 80 and 100 eV. We exhibit the differential cross sections for elastic scattering at these energies in Fig. 1. In Figs. 2 and 3 we show the inelastic cross sections for transitions to the Ps(2s)He(1s1s) and Ps(2p)He(1s1s) states. From all these figures we find that, as expected, the differential cross sections are more isotropic at low energies, where only the low partial waves contribute. At higher energies more and more partial waves are needed to achieve convergence, and the differential cross sections are more anisotropic. The small oscillations of the differential cross sections at larger angles and energies are due to numerical difficulties.
Recently, Garner et al. [5] have provided an experimental estimate of the average differential cross section across the energy range 10 to 100 eV, with respect to any process in Ps-He scattering, for forward scattering angles: dσ/dΩ = (34 ± 12) × 10⁻²⁰ m² sr⁻¹ = (121 ± 43) a₀² sr⁻¹. However, it is not possible to make a meaningful comparison between the present differential cross sections and this estimate, since the latter is averaged over energy and over all scattering processes.
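The two quoted units are mutually consistent; with a₀ = 5.292 × 10⁻¹¹ m:

```latex
a_0^2 \approx 2.80\times 10^{-21}~\mathrm{m^2},
\qquad
\frac{34\times 10^{-20}~\mathrm{m^2\,sr^{-1}}}{2.80\times 10^{-21}~\mathrm{m^2}}
\approx 121~a_0^2~\mathrm{sr^{-1}}.
```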
We calculate the different angle-integrated partial cross sections for Ps-He scattering. In addition to the Ps(1s,2s,2p) cross sections calculated using the coupled-channel method, we also calculate the higher Ps(7 > n > 2)-excitation and Ps-ionization cross sections using the Born approximation with the present exchange potential. These results are shown in Fig. 4, where we plot the angle-integrated elastic, Ps(n=2) [≡ Ps(2s+2p)], inelastic Ps(7 > n > 2), and Ps-ionization cross sections. The total cross section calculated from these partial cross sections is also shown in this plot and compared with the experiments of Refs. [1,3] and the total cross section of the 22-Ps-state R-matrix calculation of Ref. [8]. The agreement between theory and experiment is quite good up to 70 eV. The target-inelastic processes ignored in this work are expected to play an important role at higher energies, which may be the cause of the deterioration of the agreement of the present results with experiment above 70 eV. There is a qualitative disagreement between the present total cross section and that of the 22-state calculation of Ref. [8], on which we comment below.
As the Ps-He system is of fundamental interest to both theoreticians and experimentalists, it is appropriate to critically compare our results with other theories and experiments. The only other recent experiment on Ps-He is the one by Nagashima et al. [4], who obtained a cross section of (13 ± 4)πa₀² for an average energy of 0.15 eV, in striking disagreement with the present calculation yielding 2.58πa₀² at 0.9 eV as well as with the experiment of Skalsey et al. [3], who obtained (2.61 ± 0.5)πa₀² at about 0.9 eV. An independent measurement [22] of the pick-off quenching rate of Ps on He can be used [23] to resolve the stalemate. It is argued [23] that a large low-energy Ps-He elastic cross section implies a large repulsive exchange potential between the Ps and He atoms in the elastic channel. In the presence of a large repulsive potential it will be difficult for the Ps atom to approach the He atom. Consequently, one will have a small value for the pick-off quenching rate. From a study of the pick-off quenching rates of different models, we concluded [23] that a small low-energy cross section, as obtained by us, will lead to a large pick-off quenching rate, in agreement with experiment. The large low-energy cross sections obtained in other theoretical models [7,8,10,11] will lead to a much too small pick-off quenching rate, in disagreement with experiment. This substantiates that the present low-energy cross section and the experiment of Skalsey et al. [3] are consistent with the pick-off quenching rate measurement [22]. It would be difficult to reconcile the low-energy cross section of Nagashima et al. [4] and other theoretical results [7,8,10,11] with the measurement of the pick-off quenching rate.
We note that a model calculation by G. Peach [24], performed before the experiment of Skalsey et al. [3], is also in reasonable agreement with the present calculation and the low-energy experiments. The model of Peach was constructed by fitting to known positron-helium [25] and electron-helium [26] scattering data.
In Table I we compare the angle-integrated partial cross sections to the Ps(1s,2s,2p) states from different theoretical calculations. The present Ps(1s) Born cross sections are much smaller than the Born-Oppenheimer cross sections [19] used as input to the close-coupling [7] and R-matrix [8] schemes. There have been different static-exchange calculations on Ps-He since the 1960s [7,8,10,11]. These calculations yielded similar results, and in the static-exchange (SE) column of Table I we quote the recent cross sections of Refs. [7,8]. Although these SE cross sections are much smaller than the corresponding Born-Oppenheimer cross sections, they are much larger than those of the present calculation. The 22-Ps-state R-matrix calculation [8] yields elastic cross sections marginally smaller than the SE cross sections, and it seems unlikely that a "converged" R-matrix calculation would lead to elastic cross sections comparable to the present ones. However, the measured pick-off quenching rate [22] favors [23] a weak exchange potential and small Ps(1s) cross sections at low energies, and future measurements of low-energy Ps-He elastic cross sections will decide which of the results are more realistic. Although the present elastic Ps(1s) cross sections are much smaller than those of the R-matrix calculation, the reverse is true for the excitation cross sections to the Ps(2) states, as can be seen from Table I. The large Ps-excitation (and Ps-ionization) cross sections of the present calculation, together with the small low-energy elastic cross sections, are collectively responsible for the pronounced peak in the total cross section in Fig. 4 near 15-20 eV, in agreement with the experiments of Refs. [1] and [3]. This peak is also present in the calculation of Peach [24] and is clearly absent in the close-coupling [7] and 22-Ps-state R-matrix analyses [8]. Similar peaks also appear in the total cross sections of Ps-H₂ and Ps-Ar scattering [5].
To summarize, we have performed a three-Ps-state coupled-channel calculation of Ps-He scattering at low and medium energies using a regularized symmetric nonlocal electron-exchange model potential recently suggested by us and successfully used in other Ps scattering problems. We present results for differential cross sections at several incident Ps energies between 20 eV and 100 eV for elastic scattering and inelastic excitation to the Ps(2s,2p)He(1s1s) states. We also present the angle-integrated partial cross sections and compare them with those of other calculations. The present total cross sections are in agreement with the data of Refs. [1,3]. However, there is an alarming discrepancy between the present cross sections and those of conventional R-matrix [8] and close-coupling [7] calculations. These latter calculations are in agreement with a recent measurement of the low-energy cross section by Nagashima et al. [4]. At low energies, the present elastic cross sections are much smaller than those of Refs. [7,8]. However, the present total cross section develops a pronounced maximum near 15-20 eV, as can be seen in Fig. 4, in agreement with the general experimental trend [5]. Although comparison with the pick-off quenching measurement data [22] at low energy favors [23] the results of the present model, further precise measurements of the total and Ps(2) excitation cross sections at low energies will finally resolve the stalemate.
The work is supported in part by the Conselho Nacional de Desenvolvimento Científico e Tecnológico, Fundação de Amparo à Pesquisa do Estado de São Paulo, and Financiadora de Estudos e Projetos of Brazil.
Table I: Angle-integrated Ps-He partial cross sections in πa₀² at different positronium energies: EB - first Born with present exchange; BO - first Born with Born-Oppenheimer exchange; SE - static exchange of Refs. [7,8]; 3St - three-Ps-state with present exchange; 22St - 22-Ps-state R-matrix calculation of Ref. [8].

Energy   Ps(1s)  Ps(2s)  Ps(2p)  Ps(1s)  Ps(1s)  Ps(1s)  Ps(1s)  Ps(2s)  Ps(2p)  Ps(2)  Ps(2)
(eV)     EB      EB      EB      BO      SE      22St    3St     3St     3St     3St    22St
0        15.82                           14.6    13.2    3.34
0.068    15.33                   132     14.4    13.0    3.15
0.612    12.11                   98      12.9            2.75
1.088    10.04                   78      12.1    11.3    2.48
1.7      8.08                    59      11.3            2.18
2.448    6.38                    44      10.5    9.4     1.88
4.352    3.91                    23      9.0             1.26
5        3.39                            8.6     7.1     1.00
5.508    3.06    0.070   1.44                            0.96    0.071   1.15    1.22   0.24
6        2.79    0.091   1.78            8.1     6.1     0.97    0.083   1.35    1.43   0.42
6.8      2.42    0.100   1.89    12      7.7             0.96    0.074   1.47    1.54
8        1.99    0.097   1.77            7.1     4.8     0.92    0.056   1.45    1.51   0.50
10       1.51    0.080   1.48            6.7     3.8     0.84    0.048   1.29    1.34   0.51
15       0.86    0.048   0.97    3.
[1] A. J. Garner, G. Laricchia, and A. Özen, J. Phys. B 29, 5961 (1996).
[2] N. Zafar, G. Laricchia, M. Charlton, and A. Garner, Phys. Rev. Lett. 76, 1595 (1996); A. J. Garner and G. Laricchia, Can. J. Phys. 74, 518 (1996); A. J. Garner, A. Özen, and G. Laricchia, Nucl. Instrum. & Methods Phys. Res. B 143, 155 (1998).
[3] M. Skalsey, J. J. Engbrecht, R. K. Bithell, R. S. Vallery, and D. W. Gidley, Phys. Rev. Lett. 80, 3727 (1998).
[4] Y. Nagashima, T. Hyodo, K. Fujiwara, and A. Ichimura, J. Phys. B 31, 329 (1998).
[5] A. J. Garner, A. Özen, and G. Laricchia, J. Phys. B 33, 1149 (2000).
[6] H. H. Andersen, E. A. G. Armour, J. W. Humberston, and G. Laricchia, Nucl. Instrum. & Methods Phys. Res. B 143, U10 (1998).
[7] N. K. Sarkar and A. S. Ghosh, J. Phys. B 30, 4591 (1997); N. K. Sarkar, P. Chaudhury, and A. S. Ghosh, ibid. 32, 1657 (1999).
[8] J. E. Blackwood, C. P. Campbell, M. T. McAlinden, and H. R. J. Walters, Phys. Rev. A 60, 4454 (1999).
[9] H. Ray, J. Phys. B 32, 5681 (1999); 33 (2000), in press; Phys. Lett. A 252, 316 (1999).
[10] M. I. Barker and B. H. Bransden, J. Phys. B 1, 1109 (1968); 2, 730 (1969).
[11] P. A. Fraser, J. Phys. B 1, 1006 (1968); P. A. Fraser and M. Kraidy, Proc. Phys. Soc. London 89, 553 (1966).
[12] P. K. Biswas and S. K. Adhikari, Phys. Rev. A 59, 363 (1999).
[13] S. K. Adhikari and P. K. Biswas, Phys. Rev. A 59, 2058 (1999).
[14] P. K. Biswas and S. K. Adhikari, J. Phys. B 31, 3147 (1998).
[15] P. K. Biswas and S. K. Adhikari, Chem. Phys. Lett. 317, 129 (2000); P. K. Biswas, Radiat. Phys. Chem. 58, 443 (2000); Phys. Rev. A 60, 012502 (2000).
[16] P. K. Biswas and S. K. Adhikari, J. Phys. B 31, L737 (1998); 31, L315 (1998).
[17] P. K. Biswas and S. K. Adhikari, J. Phys. B 33, 1575 (2000).
[18] Z. C. Yan and Y. K. Ho, Phys. Rev. A 59, 2697 (1999); 60, 5098 (1999); A. M. Frolov and V. H. Smith, Jr., ibid. 55, 2662 (1997); N. Jiang and D. M. Schrader, Mat. Sc. Forum 255-2, 312 (1997).
[19] J. R. Oppenheimer, Phys. Rev. 32, 361 (1928).
[20] M. H. R. Rudge, Proc. Phys. Soc. London 86, 763 (1965); V. I. Ochkur, Zh. Eksp. Teor. Fiz. 45, 734 (1963) [English transl.: Sov. Phys. JETP 18, 503 (1964)].
[21] E. Clementi and C. Roetti, At. Data Nucl. Data Tables 14, 177 (1974).
[22] B. G. Duff and F. F. Heymann, Proc. R. Soc. London, Ser. A 270, 517 (1962); F. F. Heymann, P. E. Osmon, J. J. Veit, and W. F. Williams, Proc. Phys. Soc. London 78, 1038 (1961).
[23] S. K. Adhikari, P. K. Biswas, and R. A. Sultanov, Phys. Rev. A 59, 4829 (1999).
[24] G. Peach, unpublished (1995), as quoted in Refs. [1,8].
[25] J. W. Humberston and R. I. Campeanu, J. Phys. B 13, 4907 (1980).
Figure Caption: 1. Differential cross section (in units of a 2 0 ) for elastic Ps-He scattering at the following incident Ps energies: 20 eV (dashed-dotted line), 30 eV (dashed-double-dotted line), 40 eV (dashed-triple-dotted line), 60 eV (full line), 80 (long dashed line), and 100 eV (short dashed line). 2. Differential cross section. R K Nesbet, Phys. Rev. A. 2058in units of a 2 0 ) for inelastic Ps-He scattering toR. K. Nesbet, Phys. Rev. A 20, 58 (1979). Figure Caption: 1. Differential cross section (in units of a 2 0 ) for elastic Ps-He scattering at the following incident Ps energies: 20 eV (dashed-dotted line), 30 eV (dashed-double-dotted line), 40 eV (dashed-triple-dotted line), 60 eV (full line), 80 (long dashed line), and 100 eV (short dashed line). 2. Differential cross section (in units of a 2 0 ) for inelastic Ps-He scattering to
He(1s1s) state at the following incident Ps energies: 20 eV (dashed-dotted line). Ps, 30Ps(2s)He(1s1s) state at the following incident Ps energies: 20 eV (dashed-dotted line), 30
Differential cross section (in units of a 2 0 ) for inelastic Ps-He scattering to. Differential cross section (in units of a 2 0 ) for inelastic Ps-He scattering to
He(1s1s) state at the following incident Ps energies: 20 eV. Ps, dashed-dotted linePs(2p)He(1s1s) state at the following incident Ps energies: 20 eV (dashed-dotted line),
Partial and total cross sections (in units of 10 −16 cm 2 ) of Ps-He scattering at different Ps energies: Ps(1s) (dashed-triple-dotted line), Ps(n=2) (dashed-dotted line), Ps(7 > n > 2) (dashed-double-dotted line), Ps-ionization (dashed line), total (full line), total (full line with crosses from Ref. and data points with error bars from Refs. [1,3Partial and total cross sections (in units of 10 −16 cm 2 ) of Ps-He scattering at different Ps energies: Ps(1s) (dashed-triple-dotted line), Ps(n=2) (dashed-dotted line), Ps(7 > n > 2) (dashed-double-dotted line), Ps-ionization (dashed line), total (full line), total (full line with crosses from Ref. [8]), and data points with error bars from Refs. [1,3].
| []
|
[
"ON THE NATURE OF THE FBS BLUE STELLAR OBJECTS AND THE COMPLETENESS OF THE BRIGHT QUASAR SURVEY. II",
"ON THE NATURE OF THE FBS BLUE STELLAR OBJECTS AND THE COMPLETENESS OF THE BRIGHT QUASAR SURVEY. II"
]
| [
"A M Mickaelian ",
"M.-P Véron-Cetty ",
"P Véron ",
"\nByurakan Astronomical Observatory\nArmenian national Academy of Sciences\n378433ByurakanArmenia A\n",
"\nC. Gonçalves ESO\nKarl Schwarzschild Strasse 2D-85748Garching bei MünchenGermany\n",
"\nObservatoire de Haute Provence\nCNRS\nF-04870Saint-Michel l'ObservatoireFrance\n"
]
| [
"Byurakan Astronomical Observatory\nArmenian national Academy of Sciences\n378433ByurakanArmenia A",
"C. Gonçalves ESO\nKarl Schwarzschild Strasse 2D-85748Garching bei MünchenGermany",
"Observatoire de Haute Provence\nCNRS\nF-04870Saint-Michel l'ObservatoireFrance"
]
| []
| In Paper I (Mickaelian et al. 1999), we compared the surface density of QSOs in the Bright Quasar Survey (BQS) and in the First Byurakan Survey (FBS) and concluded that the completeness of the BQS is of the order of 70% rather than 30-50% as suggested by several authors. A number of new observations recently became available, allowing a re-evaluation of this completeness. We now obtain a surface density of QSOs brighter than B = 16.16 in a subarea of the FBS covering ∼2 250 deg², equal to 0.012 deg⁻² (26 QSOs), implying a completeness of 53±10%. | 10.1023/a:1010933009400 | [
"https://export.arxiv.org/pdf/astro-ph/0006331v1.pdf"
]
| 119,462,529 | astro-ph/0006331 | 2611f1db2851bee4f875183e31150664dd337539 |
ON THE NATURE OF THE FBS BLUE STELLAR OBJECTS AND THE COMPLETENESS OF THE BRIGHT QUASAR SURVEY. II
arXiv:astro-ph/0006331v1 23 Jun 2000
A M Mickaelian
M.-P Véron-Cetty
P Véron
Byurakan Astronomical Observatory
Armenian National Academy of Sciences
378433 Byurakan, Armenia
A. C. Gonçalves
ESO, Karl Schwarzschild Strasse 2, D-85748 Garching bei München, Germany
Observatoire de Haute Provence
CNRS
F-04870 Saint-Michel l'Observatoire, France
ON THE NATURE OF THE FBS BLUE STELLAR OBJECTS AND THE COMPLETENESS OF THE BRIGHT QUASAR SURVEY. II
arXiv:astro-ph/0006331v1 23 Jun 2000. Subject headings: Quasars - Surveys
In Paper I (Mickaelian et al. 1999), we compared the surface density of QSOs in the Bright Quasar Survey (BQS) and in the First Byurakan Survey (FBS) and concluded that the completeness of the BQS is of the order of 70% rather than 30-50% as suggested by several authors. A number of new observations recently became available, allowing a re-evaluation of this completeness. We now obtain a surface density of QSOs brighter than B = 16.16 in a subarea of the FBS covering ∼2 250 deg², equal to 0.012 deg⁻² (26 QSOs), implying a completeness of 53±10%.
INTRODUCTION
In Paper I, by comparing the FBS (Markarian et al. 1989) and BQS (Green et al. 1986) surveys in their area in common, we derived a completeness of ∼70% for the BQS. A number of bright AGNs have since been discovered in the area which, together with our new spectroscopic observations, allowed us to refine our previous estimate of the BQS completeness.
OBSERVATIONS
We have obtained new spectra for 11 FBS objects. The observations were carried out on November 25, 1998 and January 14-15, 1999 at the Byurakan Astrophysical Observatory (BAO) and at the Observatoire de Haute-Provence (OHP), respectively. The journal of observations is given in Table 1, together with relevant data.
Seven of the newly observed objects are stars, including FBS 2308+425 (at b = −16.3°), which is associated with a ROSAT RASS (Voges et al. 1999) X-ray source (Table 2, Paper I).
FBS 0950+664 (RXS J09540+6608) has been identified on an objective prism plate as an AGN (Bade et al. 1998); our spectrum shows it to be a Seyfert 1 at z = 0.172. Our new spectra of FBS 1235+699 and FBS 1324+448 confirm their redshifts (z = 0.521 and 0.331, respectively). FBS 1715+406 is Zw 225.094 or MCG 07.35.061, a 15.4 mag galaxy at z = 0.030 (Marzke et al. 1996); according to Abramian & Mickaelian (1994), it is an emission line galaxy; our spectrum shows that it is an absorption line galaxy, with a weak [N ii] λ6583 line in emission at z = 0.029.
The spectra of the four extragalactic objects are displayed in Fig. 1.
NEW PUBLISHED DATA
Since the publication of Paper I, 51 new bright (B < 17.0) AGNs have been discovered at |b| > 30° in the subarea of the FBS survey studied in this paper, bringing the total number of known such objects to 108.
MAGNITUDE ESTIMATE
We have extracted, when available, the O magnitudes of these 108 objects from the APS database (Pennington et al. 1993); these magnitudes are missing for five objects only (all at δ > 63°). We have also extracted the O magnitudes from the USNO-A2.0 catalogue (Monet et al. 1996) which, like the APS, is based on measurements of the O plates of the Palomar Observatory Sky Survey I (POSS-I). To estimate the accuracy of these magnitudes, we proceeded as for the APS O magnitudes (see Paper I): we compared the differences between the USNO O and photoelectric B magnitudes of 102 PG UV-excess stars with their photoelectric U − B colours (Fig. 2); we found a negligible colour equation, an rms dispersion of 0.31 mag (compared to 0.25 mag for the APS magnitudes), and a relatively large offset <O − B> = −0.38 mag (to be compared with <O − B> = −0.16 mag for the APS). Fig. 3 shows a comparison of the APS and USNO O magnitudes for the bright AGNs; they are in reasonable agreement, except for five objects for which the USNO magnitudes are brighter by more than one magnitude than the APS magnitudes; these five objects are low-luminosity Seyfert 1 galaxies at relatively small redshifts (z < 0.13), which probably explains the large magnitude differences; it seems that the USNO magnitudes for extended objects are grossly underestimated.
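The offset and dispersion statistics described here amount to a mean and an rms of magnitude differences. A minimal sketch with placeholder arrays (the actual comparison used 102 PG UV-excess stars; the values below are hypothetical):

```python
import numpy as np

O = np.array([15.1, 15.8, 16.3, 14.9])   # hypothetical catalogue O magnitudes
B = np.array([15.5, 16.1, 16.7, 15.3])   # hypothetical photoelectric B magnitudes

diff = O - B
print("offset <O - B> =", diff.mean())        # systematic offset
print("rms dispersion =", diff.std(ddof=1))   # scatter about the offset
```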
In Table 2, we list all bright (B < 17.0) AGNs found in the FBS subarea at |b| > 30° with their APS and USNO O magnitudes (when available) and the absolute B magnitudes computed using the APS O magnitudes increased by 0.16 mag (or the USNO O magnitudes increased by 0.4 mag), excluding the bright QSOs of our "complete" sample (listed in Table 3).
In the case of RXS J12110+7005, for which the APS magnitude is not available, Schwope et al. (2000) give B = 17.0, while the USNO O magnitude is 14.3; but this object has a moderate redshift (z = 0.127); moreover its APM O magnitude (Irwin et al. 1994) is 17.66; we therefore adopted the Schwope et al. magnitude and excluded the object from the "complete" sample.
THE NEW RADIO AND X-RAY BRIGHT QSOs
The FIRST Bright Quasar Survey (FBQS) was built by matching the VLA FIRST survey with the Cambridge Automated Plate Measuring Machine (APM) catalog of POSS-I objects (Irwin et al. 1994); it covers an area of 2 682 deg² in the north Galactic cap and contains 1 238 objects brighter than 17.8 mag on the POSS-I E plates (White et al. 2000). About 1 180 square degrees are within the FBS area; they contain 38 FIRST radio sources identified with an AGN brighter than B = 17.0 at |b| > 30°; nine are bright QSOs (O_APS < 16.0), three of them (CSO 900, FIRST J1306+3915 and RXS J17102+3344) being new. Although the numbers are small, this suggests that the "complete" sample we built in Paper I is only 67±15% complete.
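The FBQS-style construction rests on a positional cross-match between the radio and the optical catalogues. A minimal sketch with illustrative coordinates and matching radius (the actual FBQS criteria are described in White et al. 2000):

```python
import numpy as np

radio = np.array([[183.1, 43.2], [201.4, 38.9]])        # RA, Dec in degrees
optical = np.array([[183.1003, 43.2001], [150.0, 20.0]])

radius_deg = 2.0 / 3600.0   # assumed 2-arcsec matching radius
for ra, dec in radio:
    # small-angle separation, with the RA term scaled by cos(Dec)
    d2 = (np.cos(np.radians(dec)) * (optical[:, 0] - ra))**2 \
         + (optical[:, 1] - dec)**2
    j = int(np.argmin(d2))
    if d2[j] < radius_deg**2:
        print(f"radio ({ra}, {dec}) -> optical {optical[j]}")
```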
According to White et al. (2000), QSOs with radio emission above the FIRST 1 mJy limit constitute about 25% of all QSOs brighter than B ∼ 17.6, but for QSOs brighter than B = 16.4, the FBQS QSO density is indistinguishable from the density of optically selected QSOs. Nevertheless, of the 15 bright QSOs known prior to the FIRST survey in the area common to the FIRST and FBS surveys, only six (40%) have been detected as FIRST radio sources; therefore the complete identification of the FIRST sources with bright starlike objects could not yield a complete survey of bright QSOs.
A number of recent papers are devoted to the optical identification of RASS sources (Cao et al. 1999; Grazian et al. 2000; Schwope et al. 2000; Xu et al. 1999). One of the new identifications is RXS J12043+4330, a QSO at z = 0.663; it is also FBS 1201+437 (FBS #302) or PG 1201+436, which had been classified as a DC white dwarf by Green et al. (1986). Its APS O magnitude is 16.23; it is therefore not bright enough to be included in our "complete" sample.
Nineteen RASS sources are now identified with a bright QSO in the area discussed in this paper (including the three new FIRST QSOs); of the 17 FBS or BQS bright QSOs in our sample (Table 3), 12 (70%) are ROSAT All Sky Survey (RASS) X-ray sources, suggesting that the total number of bright QSOs is equal to 19/0.70 = 27 (if all optically bright, X-ray sources have been discovered).
DISCUSSION
Our "complete" sample of bright QSOs (Table 3) contains 29 objects brighter than B = 16.16 (O APS < 16.00), three of them (indicated by a "N" in the last column of Table 3) are not within the PG area. The area common to the PG and FBS surveys at |b| > 30 • (∼2 250 deg 2 ) contains 26 bright QSOs (13 PG QSOs and 13 others) (but there are 17 PG QSOs with B PG < 16.16 in the Table 3: Bright QSOs (B < 16, and M B < −24) in the FBS subarea at |b| > 30 • . The columns are the same as in Table 2 with however two additional columns; an X in col. 14 indicates that the object is a ROSAT RASS source; a Y or an N in col. 15 indicates if the object lies or not in the PG area area; this larger number is probably due to the Eddington (1940) effect, the PG magnitudes being affected by relatively large errors, σ ∼ 0.37 mag). From these data, we derive a surface density of 0.012 deg −2 , which is to be compared with the original value of the PG survey: 0.0064 deg −2 , implying a maximum completeness of 53±10% for the PG survey. Grazian et al. (2000) have cross-correlated the RASS with photometric databases in an 8 164 deg 2 area of the northern sky at |b| > 30 • , selecting all coincidences brighter than R ∼ 15.4; from this, they derive a surface density of bright (B < 15.5) QSOs (defined as AGNs with M B < −23.0) of 10±2 10 −3 deg −2 and conclude that the true surface density of such objects is about three times larger than that derived from the PG survey. However, they do not specify how the B magnitude of their objects was derived. Their sample contains 46 QSOs; 15 of them have z > 0.20; we have extracted from the APS catalogue the O magnitudes for 12 of them (for the three others, these magnitudes are unavailable); it turns out that only one (J172320.5+341756) has O < 15.34, corresponding to B < 15.5, suggesting that the O magnitudes used by Grazian et al. are underestimated and, consequently, the surface density overestimated. Lamontagne et al. (2000) claim that they found a surface density of bright QSOs three times larger than the PG value. They have searched for UV-excess stellar-like objects with B < 16.5 and U − B < −0.6 in a 840 deg 2 area covering the south Galactic cap; the errors in the B magnitudes are estimated to be 0.30 mag rms. They have found 228 such objects which have all been spectroscopically identified; 32 are AGNs, out of which only eleven are brighter than B = 16.16 and M B = −24.0 (including 0117−2837 which, according to Grupe et al. (1999), has a redshift of 0.349 rather than 0.055). We derive a surface density or 0.013 deg −2 , in agreement with our value and only twice the PG value.
α(B1950) δ(B1950) APS O US O Name z b M B ref
CONCLUSION
In Paper I, we compared the surface density of QSOs in the Bright Quasar Survey and in the First Byurakan Survey and concluded that the completeness of the BQS is of the order of 70%; Wisotzki et al. (2000) have found that the BQS is 68% complete from a comparison with the Hamburg/ESO survey, in agreement with our previoys estimate. Based on a number of recently published data, as well as on our own new observations, we redetermined the surface density of QSOs brighter than B = 16.16 in the BQS area to be ∼0.012 deg −2 , implying that the completeness of the BQS is 53±10%. It should be stressed however that the numbers involved are quite small, and that larger areas should be investigated before a definitive value of the surface density of bright QSOs could be determined.
Fig. 1 .
1-Spectra of the four extragalactic objects in
Fig. 2 .
2-Plot of the differences between the USNO O and the photoelectric B magnitudes vs the photoelectric U − B colors for 102 PG objects
Fig. 3 .
3-Plot of the USNO vs the APS O magnitudes for the bright AGNs listed in tables 2 and 3
Table 1 :
1New spectra. Col. 1 gives the name, col. 2 the FBS number, col. 3 the original FBS classification, col. 4 the magnitude, cols. 5 and 6 the place and date of observation, col. 7 the galactic latitude, col. 8 the classification and col.9 the redshift
FBS #
mag
date
b
z
FBS 0228+447
227
B2
15.5 BAO 25.11.98 −14.4 *
FBS 0744+818 1 055
B1e 16.4 OHP 14.01.99
29.1 *
FBS 0747+729
966
N3e 15.8 BAO 25.11.98
30.4 *
FBS 0929+733
876
B3e 16.3 BAO 25.11.98
37.2 *
FBS 0944+713
878
B3e 18.6 OHP 14.01.99
39.3 *
FBS 0950+664
785
N2
16.7 BAO 25.11.98
42.4 S1 0.172
FBS 1049+803 1 068
N1e 17.1 BAO 25.11.98
35.7 *
FBS 1235+699
894
N1e 17.9 OHP 15.01.99
47.4 Q 0.521
FBS 1324+448
322
B1
17.
OHP 15.01.99
71.1 Q 0.331
FBS 1715+406
936
se
16.
OHP 18.01.99
34.5 G 0.029
FBS 2308+425
418
B1
13.5 OHP 14.01.99 −16.3 *
Table 2: Bright AGNs (B < 17.0) in the FBS subarea at |b| > 30°, excluding the bright QSOs listed in Table 3.

Table 2: (continued)
α(B1950) | δ(B1950) | APS O | US O | Name | z | b | M_B | ref.
11 27 23.0 | 41 32 52 | 16.20 | 15.7 | KUV 11274+4133 | 1.530 | 68.1 | −28.4 |
11 33 57.2 | 39 16 41 | 16.11 | 15.7 | FIRST J1136+3900 | 0.795 | 70.4 | −27.5 | (4)
11 34 17.3 | 34 49 12 | 17.04 | 16.8 | FIRST J1136+3432 | 0.192 | 72.4 | −23.1 | (4)
11 37 9.3 | 66 4 28 | 16.25 | 15.7 | FBS 1137+661 | 0.652 | 49.7 | −26.6 |
11 40 56.8 | 68 1 34 | 16.82 | 15.9 | FBS 1140+680 | 0.796 | 48.1 | −26.6 |
11 47 46.0 | 67 15 28 | 16.69 | 16.2 | FBS 1147+673 | 1.020 | 49.1 | −27.4 |
11 48 41.7 | 38 39 2 | 16.20 | 15.6 | FIRST J1151+3822 | 0.336 | 73.1 | −25.2 | (4)
11 48 53.3 | 38 42 33 | 17.34 | 16.8 | B2 1148+38 | 1.304 | 73.1 | −27.4 | (4)
11 50 16.5 | 33 24 0 | 16.30 | 16.1 | FBS 1150+334 | 1.389 | 76.0 | −28.6 |
11 58 17.6 | 35 25 13 | 16.76 | 16.3 | HS 1158+3525 | 1.700 | 76.6 | −28.5 | (4)
12 1 51.1 | 43 47 38 | 16.23 | 16.0 | FBS 1201+437 | 0.663 | 71.1 | −26.7 | (6)
12 8 37.8 | 70 22 12 | - | 14.3 | RXS J12110+7005 | 0.127 | 46.6 | −24.8 | (5)
12 11 32.8 | 33 26 26 | 17.26 | 16.6 | B2 1211+33 | 1.598 | 79.9 | −27.9 | (4)
12 18 5.9 | 39 9 55 | 16.67 | 16.3 | FIRST J1220+3853 | 0.376 | 76.6 | −25.1 | (4)
12 35 12.9 | 69 58 13 | 17.96 | 17.1 | FBS 1235+699 | 0.522 | 47.4 | −24.4 |
12 42 46.1 | 34 12 33 | 17.52 | 16.9 | FBS 1242+342 | 0.717 | 83.1 | −25.6 |
12 48 26.6 | 40 7 58 | 16.33 | 16.3 | PG 1248+401 | 1.032 | 77.3 | −27.8 |
12 55 1.7 | 44 45 47 | 16.48 | 16.1 | FBS 1255+447 | 0.30 | 72.6 | −24.7 |
12 57 26.8 | 34 39 31 | 17.21 | 16.8 | B 201 | 1.375 | 82.5 | −27.6 | (4)
13 12 37.0 | 42 34 9 | 15.31 | 15.4 | NPM1G+42.0343 | 0.073 | 74.1 | −22.7 | (5)
13 24 54.6 | 44 50 36 | 18.09 | 16.8 | FBS 1324+448 | 0.331 | 71.1 | −23.3 |
13 28 40.2 | 41 44 22 | 16.60 | 16.9 | RXS J13308+4128 | 0.182 | 73.5 | −23.5 | (5)
13 29 29.8 | 41 17 23 | 16.78 | 16.8 | FBS 1329+412 | 1.937 | 73.8 | −28.9 |
13 38 28.6 | 40 51 48 | 16.82 | 17.0 | RXS J13406+4036 | 0.161 | 73.1 | −23.0 | (5)
13 38 52.0 | 41 38 22 | 16.50 | 16.4 | FBS 1338+416 | 1.204 | 72.5 | −28.0 |
13 39 47.8 | 37 22 16 | 16.89 | 16.7 | CSO 1010 | 1.106 | 75.4 | −27.4 | (4)
13 51 46.3 | 64 0 29 | 14.55 | 14.5 | FBS 1351+640 | 0.088 | 52.0 | −23.9 |
13 54 2.3 | 41 50 53 | 16.69 | 15.8 | RXS J13561+4136 | 0.697 | 70.4 | −26.4 | (4)
14 0 50.9 | 33 34 26 | 16.12 | 16.2 | RXS J14030+3320 | 0.342 | 73.4 | −25.4 | (5)
14 15 57.2 | 43 25 43 | 17.51 | 16.8 | RXS J14179+4311 | 0.079 | 66.2 | −20.7 | (5)
14 16 43.3 | 42 47 29 | 16.34 | 15.6 | HS 1416+4247 | 0.421 | 66.5 | −25.6 | (4)
14 22 57.6 | 42 27 36 | 16.42 | 15.9 | RX J14249+422 | 0.316 | 65.7 | −24.9 |
14 24 29.2 | 39 17 10 | 17.91 | 15.6 | RXS J14265+3903 | 0.081 | 66.9 | −20.4 | (5)
14 29 20.9 | 40 5 55 | 16.62 | 15.9 | CSO 464 | 1.217 | 65.7 | −28.0 | (4)
14 29 52.1 | 34 30 2 | 16.84 | 16.5 | FIRST J1431+3416 | 0.704 | 67.3 | −26.3 | (4)
Table 2: (end)
α(B1950) | δ(B1950) | APS O | US O | Name | z | b | M_B | ref.
15 21 59.0 | 39 24 39 | 16.93 | 16.6 | HS 1521+3924 | 0.657 | 56.2 | −26.0 | (4)
15 26 52.0 | 65 58 32 | 16.90 | 16.2 | FBS 1526+659 | 0.345 | 44.4 | −24.6 |
15 43 15.9 | 35 2 6 | 17.18 | 16.4 | RXS J15451+3452 | 0.518 | 52.3 | −25.1 | (4)
16 11 13.3 | 37 24 49 | 15.88 | 13.6 | MCG 06.36.003 | 0.070 | 46.7 | −22.0 | (5)
16 12 59.6 | 37 53 34 | 16.87 | 16.6 | FIRST J1614+3746 | 1.532 | 46.4 | −28.2 | (4)
16 30 15.1 | 37 44 8 | 16.62 | 16.0 | FBS 1630+377 | 1.478 | 42.9 | −28.4 |
16 31 19.4 | 39 30 42 | 16.48 | 16.7 | KUV 16313+3931 | 1.023 | 42.8 | −27.6 | (4)
16 39 36.8 | 35 56 0 | 16.54 | 16.3 | FIRST J1641+3550 | 1.438 | 40.9 | −28.4 | (4)
17 1 36.2 | 37 41 32 | 15.60 | 15.6 | RXS J17033+3737 | 0.065 | 36.8 | −22.1 | (5)
17 3 3.4 | 38 6 9 | 16.83 | 16.0 | FIRST J1704+3802 | 0.063 | 36.6 | −20.9 | (4)
17 6 17.5 | 69 1 29 | 16.04 | 15.8 | HS 1706+6901 | 0.449 | 34.6 | −26.0 |
17 11 17.2 | 35 27 1 | 16.84 | 16.3 | FIRST J1713+3253 | 0.083 | 34.5 | −21.5 | (4)
17 27 18.3 | 38 40 46 | 17.19 | 16.7 | B3 1727+386 | 1.386 | 32.0 | −27.7 | (4)
17 32 26.6 | 40 39 50 | 16.20 | 16.1 | FIRST J1734+4037 | 0.356 | 31.4 | −25.4 | (4)
In Paper I, we claimed that the position accuracy of the FBS objects in the last seven papers by Abramian & Mickaelian is much better than in the first four papers of the series; in fact, the objects #924 to #939 in paper IX (Abramian & Mickaelian 1994) have an accuracy as poor as in the first four papers.
Assuming H_0 = 50 km s⁻¹ Mpc⁻¹.
Abramian G.B., Mickaelian A.M. 1994, Astrophysics 37, 224
Bade N., Engels D., Voges W. 1998, A&AS 127, 145
Beuermann K., Thomas H.-C., Reinsch K. et al. 1999, A&A 347, 47
Cao L., Wei J.Y., Hu J.Y. 1999, A&AS 135, 243
Eddington A.S. 1940, MNRAS 100, 354
Grazian A., Cristiani S., D'Odorico V., Omizzolo V., Pizella A. 2000, AJ (astro-ph/0002183)
Green R.F., Schmidt M., Liebert J. 1986, ApJS 61, 305
Grupe D., Beuermann K., Mannheim K., Thomas H.-C. 1999, A&A 350, 805
Irwin M., Maddox S., McMahon R. 1994, Spectrum 2, 14
Lamontagne R., Demers S., Wesemael F., Fontaine G., Irwin M.J. 2000, AJ 119, 241
Markarian B.E., Lipovetsky V.A., Stepanian J.A., Erastova L.K., Shapavalova A.I. 1989, Commun. Special Astrophys. Obs. 62, 5
Marzke R.O., Huchra J.P., Geller M.J. 1996, AJ 112, 1803
Mickaelian A.M., Gonçalves A.C., Véron-Cetty M.-P., Véron P. 1999, Astrophysics 42, 1 (Paper I)
Monet D., Bird A., Canzian B. et al. 1996, USNO-A2.0, U.S. Naval Observatory, Washington D.C.
Pennington R.L., Humphreys R.M., Odewahn S.C., Zumach W., Thurmes P.M. 1993, PASP 105, 521
Schwope A., Hasinger G., Lehman I. et al. 2000, AN 321, 1
Voges W., Aschenbach B., Boller T. et al. 1999, A&A 349, 389
Wei J.Y., Xu D.W., Dong X.Y., Hu J.Y. 1999, A&AS 139, 575
White R.L., Becker R.H., Gregg M.D. et al. 2000, ApJS 126, 133
Wisotzki L., Christlieb N., Bade N. et al. 2000, A&A 358, 77
Xu D.W., Wei J.Y., Dong X.Y., Hu J.Y. 1999, A&AS 134, 365
| []
|
[
"ON GENERALIZED JØRGENSEN INEQUALITY IN INFINITE DIMENSION",
"ON GENERALIZED JØRGENSEN INEQUALITY IN INFINITE DIMENSION"
]
| [
"Krishnendu Gongopadhyay "
]
| []
| []
| In [6], Li has obtained an analogue of the Jørgensen inequality in the infinite-dimensional Möbius group. We show that this inequality is strict. | null | [
"https://arxiv.org/pdf/1808.06756v1.pdf"
]
| 53,136,913 | 1808.06756 | 7e8434ab8688cba8715a18fd249c413c143c202e |
ON GENERALIZED JØRGENSEN INEQUALITY IN INFINITE DIMENSION
21 Aug 2018
Krishnendu Gongopadhyay
ON GENERALIZED JØRGENSEN INEQUALITY IN INFINITE DIMENSION
21 Aug 2018arXiv:1808.06756v1 [math.GT]
In [6], Li has obtained an analogue of the Jørgensen inequality in the infinite-dimensional Möbius group. We show that this inequality is strict.
Introduction
The Möbius group M(n) acts by isometries on the n-dimensional real hyperbolic space. The Jørgensen inequality is a pioneering result in the theory of discrete subgroups of Möbius groups. The classical Jørgensen inequality gives a necessary criterion to detect the discreteness of a two-generator subgroup in M(2) and M(3). There have been several generalizations of the Jørgensen inequality in higher dimensional Möbius groups, e.g. [3], [8], [9].
The Clifford algebraic formalism for Möbius groups was initiated by Ahlfors in [1]. In this approach the 2 × 2 matrices over a finite dimensional Clifford algebra act by linear fractional transformations on the n-sphere. Waterman used the Clifford algebraic formalism of Möbius groups to obtain some Jørgensen type inequalities in [9]. Frunză initiated a framework for the infinite dimensional Möbius group in [2]. This framework is an extension of the Clifford algebraic viewpoint of Ahlfors. In [6,7,5], Li has used this viewpoint further to obtain discreteness criteria in infinite dimension.
In [6], Li has obtained an analogue of Jørgensen inequality in the infinite-dimensional Möbius group. The aim of this note is to show that this inequality is strict. In Section 2, we briefly recall basic notions of the infinite dimensional theory and note down the Jørgensen type inequality of Li. In Section 3 we prove that Li's inequality is strict, see Theorem 3.1.
Preliminaries
2.1. Infinite dimensional Clifford group. The Clifford algebra C is the associative algebra over R generated by a countable family {i_k}_{k=0}^{∞} subject to the relations:
i_h i_k = −i_k i_h (h ≠ k), i_k² = −1,
and no others. Every element of C can be expressed as a = Σ_I a_I I, where I = i_{k_1} i_{k_2} ··· i_{k_p}, 1 ≤ k_1 < k_2 < ··· < k_p ≤ n, n is a fixed natural number depending upon a, a_I ∈ R, and Σ_I a_I² < ∞. If I = ∅, then a_I is the real part of a and the remaining part is the 'imaginary part' of a. In C the Euclidean norm is given as usual by |a| = (|Re(a)|² + |Im(a)|²)^{1/2}. As in the finite-dimensional Clifford algebra, C has three special involutions, defined by the following.
*: In a ∈ C as above, replace each I = i_{v_1} i_{v_2} ··· i_{v_k} by i_{v_k} ··· i_{v_1}; a → a* is an anti-automorphism.
′: Replace i_k by −i_k in a to obtain a′.
The conjugate ā of a is now defined as ā = (a*)′ = (a′)*.
Elements of the following type,
a = a_0 + a_1 i_1 + ··· + a_n i_n + ···,
are called vectors. The set of vectors is denoted by ℓ₂. Let ℓ̂₂ = ℓ₂ ∪ {∞}. For any x ∈ ℓ₂, we have x* = x and x̄ = x′. Every non-zero vector is invertible and x⁻¹ = x̄/|x|². The set of products of finitely many non-zero vectors is a multiplicative group, called the Clifford group, and denoted by Γ.
A Clifford matrix g = (a b; c d) over ℓ₂ is defined as follows:
(1) a, b, c, d ∈ Γ ∪ {0};
(2) ∆(g) = ad* − bc* = 1;
(3) ab*, d*b, cd*, c*a ∈ ℓ₂.
The set of all such matrices forms a group, denoted by SL(Γ). For g as above, g⁻¹ = (d* −b*; −c* a*). Note that gg⁻¹ = g⁻¹g = I. The group PSL(Γ) = SL(Γ)/{±I} acts on ℓ̂₂ by the following transformation:
g : x → (ax + b)(cx + d)⁻¹.
Classification of elements in SL(Γ). Let f be in SL(Γ). Then
• f is loxodromic if it is conjugate in SL(Γ) to (rλ 0; 0 r⁻¹λ′), where r ∈ R − {0}, |r| ≠ 1, λ ∈ Γ. If λ = ±1, then f is called hyperbolic.
• f is parabolic if it is conjugate in SL(Γ) to (a b; 0 a′), where a, b ∈ Γ, |a| = 1, b ≠ 0, and ab = ba′.
• Otherwise f is elliptic.
Definition 1. For g = (a b; c d), the trace of g is defined by tr(g) = a + d*.
A non-trivial element g ∈ SL(Γ) as above is called vectorial if b* = b, c* = c, and tr(g) ∈ R.
The real part of trace is a conjugacy invariant in SL(Γ).
Lemma 2.1. [4,6] If an element g in SL(Γ) is hyperbolic then tr(g) ∈ R, tr 2 (g) > 4.
Definition 2. A subgroup G of SL(Γ) is called elementary if it has a finite orbit in ℓ̂₂. Otherwise, G is called non-elementary. A subgroup G of SL(Γ) is discrete if, whenever a sequence f_i in G converges to some g, then f_i = g for all sufficiently large i. Otherwise G is not discrete.
2.3. Li-Jørgensen inequality. The following is the generalized Jørgensen inequality in infinite dimension that was given by Li in [6].
Theorem 2.2. [6, Theorem 3.1] Let f, g ∈ SL(Γ) be such that f is hyperbolic, and [f, g] = fgf⁻¹g⁻¹ is vectorial. Suppose that the two-generator group ⟨f, g⟩ is discrete and non-elementary. Then
|tr²(f) − 4| + |tr([f, g]) − 2| ≥ 1.   (2.1)
Li-Jørgensen Inequality is Strict
Theorem 3.1. Let f, g ∈ SL(Γ) be such that f is hyperbolic, and [f, g] = fgf⁻¹g⁻¹ is vectorial. Suppose that the two-generator group ⟨f, g⟩ is discrete and non-elementary. Then
|tr²(f) − 4| + |tr([f, g]) − 2| > 1,   (3.1)
where the above inequality is strict.
Proof. It follows from Theorem 2.2 that
|tr²(f) − 4| + |tr([f, g]) − 2| ≥ 1.
If possible, suppose that
|tr²(f) − 4| + |tr([f, g]) − 2| = 1.   (3.2)
Up to conjugacy, we assume f = (r 0; 0 r⁻¹), r > 1. Let g = (a b; c d), and let J(f, g) denote the left-hand side of (3.2). By computation it is easy to see that
tr([f, g]) − 2 = −(r − r⁻¹)² bc*,   tr²(f) − 4 = (r − r⁻¹)².
So J(f, g) = (r − r⁻¹)²(1 + |bc*|) = 1. Since [f, g] is vectorial, it follows from the above that bc* is a real number.
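For completeness, the computation behind these two identities is the following routine expansion (ours; it uses only ad* − bc* = 1, the rule (xy)* = y*x*, and the definition tr(g) = a + d*):

$$
fgf^{-1}=\begin{pmatrix} a & r^{2}b\\ r^{-2}c & d\end{pmatrix},\qquad
[f,g]=fgf^{-1}g^{-1}=\begin{pmatrix} ad^{*}-r^{2}bc^{*} & \ast\\ \ast & da^{*}-r^{-2}cb^{*}\end{pmatrix},
$$
$$
\operatorname{tr}([f,g])=ad^{*}-r^{2}bc^{*}+\bigl(da^{*}-r^{-2}cb^{*}\bigr)^{*}
=2ad^{*}-(r^{2}+r^{-2})bc^{*}=2-(r-r^{-1})^{2}bc^{*},
$$

since ad* = 1 + bc*, (da*)* = ad*, and (cb*)* = bc*; likewise tr²(f) − 4 = (r + r⁻¹)² − 4 = (r − r⁻¹)².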
Let g_0 = g, g_{m+1} = g_m f g_m⁻¹, and write g_m = (a_m b_m; c_m d_m). Let K = (r − r⁻¹)² and w_m = b_m c_m*. Then by the equality in (3.2) we have K(1 + |w_0|) = 1. This implies K < 1. Now note that
b_{m+1} c_{m+1}* = −K(1 + b_m c_m*) · b_m c_m*.   (3.3)
By induction, w_m = b_m c_m* is a sequence of real numbers. Also |w_{m+1}| ≤ K|w_m|(1 + |w_m|).
If possible, suppose α_m = K(1 + |w_m|) < 1 for some m. Then, using arguments similar to the proof of [6, Theorem 3.1], it can be shown that |b_{m+n} c_{m+n}*| ≤ α_m^n |b_m c_m*| and b_{m+n} c_{m+n}* → 0 as n → ∞, which would contradict the assumption that ⟨f, g⟩ is non-elementary. So we must have K(1 + |w_m|) ≥ 1 for all m.
Thus 1 ≤ K(1 + |w_m|) ≤ K(1 + |w_{m−1}|). It is given that K(1 + |w_0|) = 1. By induction, it follows that for all m,
K(1 + |w_m|) = 1, i.e. K|w_m| = 1 − K.   (3.4)
Consequently,
1 − K = K|w_{m+1}| ≤ K · K|w_m| · |1 + w_m| = (1 − K)K|1 + w_m| ≤ (1 − K)K(1 + |w_m|) = 1 − K,   (3.5)
so equality holds throughout; in particular K|1 + w_m| = 1 and, for m = 0,
|tr([f, g]) − 2 + 4 − tr²(f)| = K|1 + bc*| = 1 = |tr([f, g]) − 2| + |4 − tr²(f)|.
Since 4 − tr²(f) < 0, this implies w_0 > 0. Hence by induction from (3.4) and (3.5), w_m > 0 for all m. Thus, we have from (3.4), K = 1/(1 + w_m). In particular, w_m = w_{m+1}. Now, from (3.3), we have K(1 + w_m) = −1, i.e. K = −1/(1 + w_m). This is a contradiction. Hence the inequality must be strict.
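As an elementary numerical illustration, the trace identities used above can be tested in the special case of SL(2, C), where the involution * acts trivially on the complex entries; the following numpy snippet is our sketch, not part of the paper:

```python
# Sanity check of the trace identities in SL(2, C) (illustration only).
import numpy as np

r = 1.7
f = np.diag([r, 1 / r])
a, b, c = 0.3 + 0.2j, 1.1 - 0.4j, -0.8 + 0.5j
d = (1 + b * c) / a                        # enforce det g = ad - bc = 1
g = np.array([[a, b], [c, d]])

comm = f @ g @ np.linalg.inv(f) @ np.linalg.inv(g)
print(np.isclose(np.trace(comm) - 2, -(r - 1 / r) ** 2 * b * c))   # True
print(np.isclose(np.trace(f) ** 2 - 4, (r - 1 / r) ** 2))          # True
```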
Lars V. Ahlfors. Möbius transformations and Clifford numbers. In Differential geometry and complex analysis, pages 65-73. Springer, Berlin, 1985.
Monica Frunză. Möbius transformations in infinite dimension. Rev. Roumaine Math. Pures Appl., 36(7-8):369-376, 1991. Analyse complexe (Bucharest, 1989).
Sa'ar Hersonsky. A generalization of the Shimizu-Leutbecher and Jørgensen inequalities to Möbius transformations in R^N. Proc. Amer. Math. Soc., 121(1):209-215, 1994.
Liu Lan Li and Xian Tao Wang. Möbius transformations in infinite dimension. Heilongjiang Daxue Ziran Kexue Xuebao, 22(4):497-500, 2005.
Liulan Li. Ball-preserving Möbius transformations in infinite dimension. Complex Var. Elliptic Equ., 54(7):697-703, 2009.
Liulan Li. A generalization of Jørgensen's inequality to infinite dimension. New York J. Math., 17:41-49, 2011.
Liulan Li. Discreteness of Möbius groups in infinite dimension. Complex Var. Elliptic Equ., 58(1):109-112, 2013.
G. J. Martin. On discrete Möbius groups in all dimensions: a generalization of Jørgensen's inequality. Acta Math., 163(3-4):253-289, 1989.
P. L. Waterman. Möbius transformations in several dimensions. Adv. Math., 101(1):87-113, 1993.
Indian Institute of Science Education and Research (IISER) Mohali, Knowledge City, Sector 81, SAS Nagar, Punjab 140306, India. E-mail address: [email protected], [email protected]
| []
|
[
"Existence of Primitive Pairs with Prescribed Traces over Finite Fields",
"Existence of Primitive Pairs with Prescribed Traces over Finite Fields"
]
| [
"Hariom Sharma \nDepartment of Mathematics\nIndian Institute of Technology Delhi New Delhi\n110016India\n",
"R K Sharma \nDepartment of Mathematics\nIndian Institute of Technology Delhi New Delhi\n110016India\n"
]
| [
"Department of Mathematics\nIndian Institute of Technology Delhi New Delhi\n110016India",
"Department of Mathematics\nIndian Institute of Technology Delhi New Delhi\n110016India"
]
| []
| Let F = F q m , m > 6, n a positive integer, and f = p/q with p, q co-prime irreducible polynomials in F [x] and deg(p) + deg(q) = n. A sufficient condition has been obtained for the existence of primitive pairs (α, f (α)) in F such that for any prescribed a, b in E = F q , TrF/E(α) = a and TrF/E(α −1 ) = b. Further, for every positive integer n, such a pair definitely exists for large enough (q, m). The case n = 2 is dealt separately and proved that such a pair exists for all (q, m) apart from at most 64 choices. | 10.1080/00927872.2020.1852243 | [
"https://arxiv.org/pdf/2004.10719v1.pdf"
]
| 216,056,271 | 2301.02381 | 495e946d5c10bb4fb115b8da9cd7ee423466a28e |
Existence of Primitive Pairs with Prescribed Traces over Finite Fields
Apr 2020
Hariom Sharma
Department of Mathematics
Indian Institute of Technology Delhi New Delhi
110016India
R K Sharma
Department of Mathematics
Indian Institute of Technology Delhi New Delhi
110016India
Existence of Primitive Pairs with Prescribed Traces over Finite Fields
Apr 2020. Keywords: Finite Fields, Characters, Primitive element. 2010 Math Sub Classification: 12E20, 11T23.
Let F = F q m , m > 6, n a positive integer, and f = p/q with p, q co-prime irreducible polynomials in F [x] and deg(p) + deg(q) = n. A sufficient condition has been obtained for the existence of primitive pairs (α, f (α)) in F such that for any prescribed a, b in E = F q , TrF/E(α) = a and TrF/E(α −1 ) = b. Further, for every positive integer n, such a pair definitely exists for large enough (q, m). The case n = 2 is dealt separately and proved that such a pair exists for all (q, m) apart from at most 64 choices.
Introduction
Let F_q be a finite field with q = p^k elements, where p, k ∈ N and p is a prime number. The multiplicative cyclic group of nonzero elements is denoted by F*_q, and a generator of F*_q is called a primitive element of F_q. For a positive integer m (≥ 7), let F_{q^m} be the field extension of F_q of degree m. Indeed, an element α ∈ F_{q^m} is primitive if and only if it is a zero of a primitive polynomial of degree m over F_q; such an irreducible polynomial is known as a primitive polynomial. Also, the trace of α over F_q, denoted by Tr_{F_{q^m}/F_q}(α), is given by α + α^q + ··· + α^{q^{m−1}}.
Primitive elements are used as a fundamental tool in many cryptographic schemes (e.g., the Diffie-Hellman key exchange protocol). More precisely, they find several applications in cryptography and coding theory [9]. Therefore, the study of primitive elements and primitive polynomials is an active area of research. Another interesting problem related to primitive elements is the study of primitive pairs. For a rational function f(x) ∈ F_{q^m}(x) and α ∈ F_{q^m}, we call a pair (α, f(α)) ∈ F_{q^m} × F_{q^m} a primitive pair in F_{q^m} if both α and f(α) are primitive elements in F_{q^m}. In general, f(α) need not be a primitive element for a primitive α ∈ F_q; for example, if f(x) = x² + 2 ∈ F_5(x), then f(α) is not primitive for any primitive α ∈ F_5.
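This example is easily confirmed by brute force; the following short script (ours, purely illustrative) checks every primitive element of F_5:

```python
# Brute-force check of the F_5 example f(x) = x^2 + 2.
p = 5

def is_primitive(x, p):
    # x generates F_p* iff its multiplicative order is p - 1
    order, y = 1, x % p
    while y != 1:
        y = (y * x) % p
        order += 1
    return order == p - 1

for a in range(1, p):
    if is_primitive(a, p):
        fa = (a * a + 2) % p
        print(a, fa, is_primitive(fa, p))   # prints "2 1 False" and "3 1 False"
```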
It is both of theoretical importance and a natural challenge to establish the existence of primitive elements with some prescribed conditions. Many researchers have worked in this direction [6,10,1]. D. Jungnickel and S. A. Vanstone [7] proved the existence of a primitive element ω in F_{q^m} with prescribed Tr_{F_{q^m}/F_q}(ω) in F_q for all pairs (q, m) excluding finitely many. Cohen [4] extended the result and established it for each pair, except Tr_{F_{q^m}/F_q}(ω) = 0 if m = 2 and (4, 3). Chou and Cohen [2] resolved completely the question of whether there exists a primitive element α in F_{q^m} such that both α and α⁻¹ have trace zero over F_q.
First, in 1985, Cohen [3] studied the existence of primitive pairs (α, f(α)) over the finite field F_q where f(x) = x + a, a ∈ F_q. In 2014, Cao and Wang [1] considered the existence of primitive pairs with f(x) = (x² + 1)/x in the finite field F_{q^m} and showed that, when m ≥ 29, there are such elements with Tr_{F_{q^m}/F_q}(α) = a and Tr_{F_{q^m}/F_q}(α⁻¹) = b for any pair of prescribed a, b ∈ F*_q. For the same rational function, in 2018, Anju, Sharma and Cohen [6] obtained a sufficient condition for the existence of a primitive pair with Tr_{F_{q^m}/F_q}(α) = a for any prescribed a ∈ F_q and proved the existence of such elements for all pairs (q, m), m ≥ 5. Later, in 2019, Sharma and Gupta [10] generalized the rational function to λ_A(x), where λ_A(x) = (ax² + bx + c)/(dx + e), for any matrix A = (a b c; 0 d e) ∈ M_{2×3}(F_{q^m}) of rank 2 such that if λ_A(x) = βx or βx² for some β ∈ F_{q^m} then β = 1. Next, for m ≥ 7, they proved that for any such matrix A over a finite field F_{q^m} of characteristic 2 there exist primitive pairs (α, λ_A(α)) in F_{q^m} such that, for any prescribed µ, ν ∈ F_q, Tr_{F_{q^m}/F_q}(α) = µ and Tr_{F_{q^m}/F_q}(α⁻¹) = ν, except for at most 25 choices of (q, m).
In this paper, we take f(x) to be a more general rational function and F_q a finite field of any prime characteristic p. We propose the problem as follows. For a rational function f(x) = (a_{n_1} x^{n_1} + ··· + a_0)/(b_{n_2} x^{n_2} + ··· + b_0) ∈ F_{q^m}(x), we assume that p(x) = a_{n_1} x^{n_1} + ··· + a_0, q(x) = b_{n_2} x^{n_2} + ··· + b_0 and a_{n_1}, b_{n_2} ≠ 0. For n_1, n_2 ∈ N ∪ {0}, define a subset of F_{q^m}(x) by

R_{n_1,n_2} = { f(x) ∈ F_{q^m}(x) : p(x) and q(x) are co-prime irreducible polynomials over F_{q^m} with deg(p(x)) = n_1, deg(q(x)) = n_2 },

and a set of pairs

Q_{n_1,n_2} = { (q, m) : for each f(x) ∈ R_{n_1,n_2}, there exists a primitive pair (α, f(α)) in F_{q^m} such that, for any prescribed a and b in F_q, Tr_{F_{q^m}/F_q}(α) = a and Tr_{F_{q^m}/F_q}(α⁻¹) = b }.

Let R_n = ∪_{n_1+n_2=n} R_{n_1,n_2} and Q_n = ∩_{n_1+n_2=n} Q_{n_1,n_2}.

For each n ∈ N, we first establish a sufficient condition on q and m such that (q, m) ∈ Q_n. Further, using a sieving modification of this sufficient condition, we prove the following result.
Preliminaries
In this section, we collect some definitions and results which we shall need further in this article. If D denotes the set of divisors of q^m − 1, then for u ∈ D, an element w ∈ F*_{q^m} is called u-free if w = v^d, where v ∈ F_{q^m} and d | u, implies d = 1. Note that an element w ∈ F*_{q^m} is (q^m − 1)-free if and only if it is primitive. For more fundamentals on characters, primitive elements and finite fields, we refer the reader to [8].
As a special case of Lemma 10 of [11], we have the following interesting result.

Lemma 2.1. Let u ∈ D, ξ ∈ F*_{q^m}. We have

Σ_{d|u} (µ(d)/φ(d)) Σ_{χ_d} χ_d(ξ) = u/φ(u) if ξ is u-free, and 0 otherwise,

where µ(·) is the Möbius function, φ(·) is the Euler function, and χ_d runs through all the φ(d) multiplicative characters of F*_{q^m} of order d.

Therefore, for each u ∈ D,

ρ_u : α ↦ θ(u) Σ_{d|u} (µ(d)/φ(d)) Σ_{χ_d} χ_d(α),   (1)

gives a characteristic function for the subset of u-free elements of F*_{q^m}, where θ(u) = φ(u)/u. Also, for each a ∈ F_q,

τ_a : α ↦ (1/q) Σ_ψ ψ(Tr_{F_{q^m}/F_q}(α) − a),   (2)

where ψ runs over the additive characters of F_q, is a characteristic function for the subset of F_{q^m} consisting of elements with Tr_{F_{q^m}/F_q}(α) = a. We shall need the following results of L. Fu and D. Wan for our next theorem.
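Lemma 2.1 can be verified numerically in a small field; the following script (our illustration, with the multiplicative characters of F_7 realized through the primitive root 3) reproduces the dichotomy u/φ(u) versus 0 for u = q − 1 = 6:

```python
# Numerical sanity check of Lemma 2.1 in the small field F_7 (u = q - 1 = 6).
import cmath
from math import gcd

q, g, u = 7, 3, 6
mu = {1: 1, 2: -1, 3: -1, 6: 1}                 # Moebius function on divisors of 6
phi = {1: 1, 2: 1, 3: 2, 6: 2}
dlog = {pow(g, k, q): k for k in range(q - 1)}  # discrete log base 3

def char_sum(xi):
    total = 0
    for d in (1, 2, 3, 6):
        for j in range(q - 1):                  # chi_j has order (q-1)/gcd(j, q-1)
            if (q - 1) // gcd(j, q - 1) == d:
                total += mu[d] / phi[d] * cmath.exp(2j * cmath.pi * j * dlog[xi] / (q - 1))
    return total.real

for xi in range(1, q):                          # 3 and 5 are the primitive elements of F_7
    print(xi, round(char_sum(xi), 6))           # prints 3.0 (= u/phi(u)) iff xi is primitive
```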
Lemma 2.2. [5, Theorem 4.5] Let f(x) ∈ F_{q^d}(x) be a rational function. Write f(x) = Π_{j=1}^{k} f_j(x)^{n_j}, where the f_j(x) ∈ F_{q^d}[x] are irreducible polynomials and the n_j are non-zero integers. Let χ be a multiplicative character of F_{q^d}. Suppose that the rational function Π_{i=0}^{d−1} f(x^{q^i}) is not of the form h(x)^{ord(χ)} in F_{q^d}(x), where ord(χ) is the smallest positive integer r such that χ^r = 1. Then we have

|Σ_{α ∈ F_q, f(α) ≠ 0, f(α) ≠ ∞} χ(f(α))| ≤ (d Σ_{j=1}^{k} deg(f_j) − 1) q^{1/2}.

Lemma 2.3. [5, Theorem 4.6] Let f(x), g(x) ∈ F_{q^m}(x) be rational functions. Write f(x) = Π_{j=1}^{k} f_j(x)^{n_j}, where the f_j(x) ∈ F_{q^m}[x] are irreducible polynomials and the n_j are non-zero integers. Let D_1 = Σ_{j=1}^{k} deg(f_j), let D_2 = max(deg(g), 0), let D_3 be the degree of the denominator of g(x), and let D_4 be the sum of the degrees of those irreducible polynomials dividing the denominator of g but distinct from the f_j(x) (j = 1, 2, ···, k). Let χ be a multiplicative character of F_{q^m}, and let ψ be a non-trivial additive character of F_{q^m}. Suppose g(x) is not of the form r(x)^{q^m} − r(x) in F_{q^m}(x). Then we have the estimate

|Σ_{α ∈ F_{q^m}, f(α) ≠ 0, ∞, g(α) ≠ ∞} χ(f(α)) ψ(g(α))| ≤ (D_1 + D_2 + D_3 + D_4 − 1) q^{m/2}.
Sufficient condition
For each divisor l 1 , l 2 of q m − 1, f (x) ∈ R n and prescribed elements a, b of F q , suppose N f,n,a,b (l 1 , l 2 ) denotes the number of elements α ∈ F q m such that α is l 1 -free, f (α) is l 2 -free, Tr F q m /Fq (α) = a and Tr F q m /Fq (α −1 ) = b.
We now prove our sufficient condition as follows.
Theorem 3.1. Let m, n and q ∈ N be such that q is a prime power. Suppose

q^{m/2−2} > (n + 2) W(q^m − 1)².   (3)

Then (q, m) ∈ Q_n.
Proof. To prove the result, we need to show that N_{f,n,a,b}(q^m − 1, q^m − 1) > 0 for every f(x) ∈ R_n and prescribed a, b ∈ F_q. Let f(x) ∈ R_n be any rational function and a, b ∈ F_q. Let S_1 be the set of zeros and poles of f(x) in F_{q^m} and S = S_1 ∪ {0}. Let l_1, l_2 be divisors of q^m − 1. Then by definition we have

N_{f,n,a,b}(l_1, l_2) = Σ_{α ∈ F_{q^m}\S} ρ_{l_1}(α) ρ_{l_2}(f(α)) τ_a(α) τ_b(α⁻¹);

now using (1) and (2), we have

N_{f,n,a,b}(l_1, l_2) = (θ(l_1)θ(l_2)/q²) Σ_{d_1|l_1, d_2|l_2} (µ(d_1)/φ(d_1))(µ(d_2)/φ(d_2)) Σ_{χ_{d_1}, χ_{d_2}} χ_{f,a,b}(χ_{d_1}, χ_{d_2}),   (4)

where χ_{f,a,b}(χ_{d_1}, χ_{d_2}) = Σ_{u,v ∈ F_q} ψ_0(−au − bv) Σ_{α ∈ F_{q^m}\S} χ_{d_1}(α) χ_{d_2}(f(α)) ψ_0(uα + vα⁻¹).

From [Example 5.1, [8]] it follows that, for any given divisors d_1, d_2 of q^m − 1, there exist integers m_1, m_2 with 0 ≤ m_1, m_2 < q^m − 1 such that χ_{d_1}(x) = χ_{q^m−1}(x^{m_1}) and χ_{d_2}(x) = χ_{q^m−1}(x^{m_2}). Thus

χ_{f,a,b}(χ_{d_1}, χ_{d_2}) = Σ_{u,v ∈ F_q} ψ_0(−au − bv) Σ_{α ∈ F_{q^m}\S} χ_{q^m−1}(α^{m_1} f(α)^{m_2}) ψ_0(uα + vα⁻¹) = Σ_{u,v ∈ F_q} ψ_0(−au − bv) Σ_{α ∈ F_{q^m}\S} χ_{q^m−1}(F(α)) ψ_0(G(α)),

where F(x) = x^{m_1} f(x)^{m_2} ∈ F_{q^m}(x) and G(x) = ux + vx⁻¹ ∈ F_q(x). If G(x) ≠ h(x)^{q^m} − h(x) for every h(x) ∈ F_{q^m}(x), then by Lemma 2.3

|χ_{f,a,b}(χ_{d_1}, χ_{d_2})| ≤ (n + 2) q^{m/2+2}.   (5)
If G(x) = h(x)^{q^m} − h(x) for some h(x) ∈ F_{q^m}(x) then, following [9], this is only possible if u = v = 0. Hence, if F(x) ≠ h(x)^{q^m−1} for every h(x) ∈ F_{q^m}(x), by Lemma 2.2,

|χ_{f,a,b}(χ_{d_1}, χ_{d_2})| ≤ n q^{m/2+2}.   (6)
Now, let us consider the case when F(x) = g(x)^{q^m−1} for some g(x) ∈ F_{q^m}(x). If g(x) = g_1(x)/g_2(x) for g_1(x), g_2(x) ∈ F_{q^m}[x] with gcd(g_1(x), g_2(x)) = 1, then x^{m_1}(p(x)/q(x))^{m_2} = (g_1(x)/g_2(x))^{q^m−1}, that is,

x^{m_1} p(x)^{m_2} g_2(x)^{q^m−1} = g_1(x)^{q^m−1} q(x)^{m_2}.   (7)

We claim that (7) is possible only if m_1 = m_2 = 0. For this, first we prove that if m_1 is 0, then m_2 must also be 0. Suppose m_1 = 0; then equation (7) becomes

p(x)^{m_2} g_2(x)^{q^m−1} = g_1(x)^{q^m−1} q(x)^{m_2}.

If possible, let m_2 ≠ 0; then p(x) and q(x) being co-prime gives that p(x) divides g_1(x), which further gives

g_2(x)^{q^m−1} = g_1′(x)^{q^m−1} q(x)^{m_2} p(x)^{q^m−m_2−1}, where g_1′(x) = g_1(x)/p(x).

Since q^m − m_2 − 1 > 0, p(x) divides g_2(x). A contradiction. Hence, m_1 = 0 implies m_2 = 0.

Next, if possible, let m_1 ≠ 0. Then from (7), either x | g_1(x) or x | q(x). First suppose x divides g_1(x). We can restate equation (7) as

p(x)^{m_2} g_2(x)^{q^m−1} = g_1′(x)^{q^m−1} q(x)^{m_2} x^{q^m−m_1−1}, where g_1′(x) = g_1(x)/x.

Here gcd(g_1(x), g_2(x)) = 1 and q^m − m_1 − 1 > 0 force that x | p(x) and m_2 ≠ 0. But p(x) is irreducible, and hence p(x) = ax for some a ∈ F*_{q^m}. This gives that c x^{m_2} g_2(x)^{q^m−1} = g_1′(x)^{q^m−1} q(x)^{m_2} x^{q^m−m_1−1}, where c = a^{m_2}. Here we come up with three possibilities, discussed in the following cases.
Case 1. q^m − m_1 − 1 > m_2. This gives that x | g_2(x), which is not so.
Case 2. q^m − m_1 − 1 < m_2. As x cannot divide q(x), x divides g_1′(x), which implies c x^{m_2} g_2(x)^{q^m−1} = x^{q^m−1} g_1″(x)^{q^m−1} q(x)^{m_2} x^{q^m−m_1−1} with g_1″(x) = g_1′(x)/x, which is the same as c g_2(x)^{q^m−1} = g_1″(x)^{q^m−1} q(x)^{m_2} x^{2(q^m−1)−m_1−m_2}. Again q^m − 1 > m_1 and q^m − 1 > m_2 force that x | g_2(x), a contradiction.
Case 3. q^m − m_1 − 1 = m_2. This gives c g_2(x)^{q^m−1} = g_1′(x)^{q^m−1} q(x)^{m_2}, which is possible only if m_2 = 0, again a contradiction.
From the above discussion, it is clear that x does not divide g_1(x). Now, assume x | q(x) and x ∤ g_1(x). Then, due to irreducibility, q(x) = bx for some b ∈ F*_{q^m}. So by (7), we have

x^{m_1} p(x)^{m_2} g_2(x)^{q^m−1} = d g_1(x)^{q^m−1} x^{m_2},   (8)

where d = b^{m_2}. Again three possibilities may arise, namely m_1 > m_2, m_2 > m_1 and m_1 = m_2. We deal with each of them separately as follows.
Case 1. m_1 > m_2. Then (8) gives x | g_1(x), which is not possible.
Case 2. m_2 > m_1. Then by (8) we get either x | p(x) or x | g_2(x). But p(x) and q(x) are co-prime; therefore, x | g_2(x). By (8), p(x)^{m_2} g_2′(x)^{q^m−1} x^{q^m−1+m_1−m_2} = d g_1(x)^{q^m−1}, with g_2′(x) = g_2(x)/x, which implies x | g_1(x). A contradiction.
Case 3. m_1 = m_2. Here, (8) gives p(x)^{m_2} g_2(x)^{q^m−1} = d g_1(x)^{q^m−1}, which is possible only if m_2 = 0.
Thus, by the above discussion together with (5) and (6), we get that if (χ_{d_1}, χ_{d_2}, u, v) ≠ (χ_1, χ_1, 0, 0), then |χ_{f,a,b}(χ_{d_1}, χ_{d_2})| ≤ (n + 2) q^{m/2+2}. Using this and (4), we get

N_{f,n,a,b}(l_1, l_2) ≥ (θ(l_1)θ(l_2)/q²) (q^m − |S| − (n + 2) q^{m/2+2} (W(l_1)W(l_2) − 1))   (9)
≥ (θ(l_1)θ(l_2)/q²) (q^m − (n + 1) − (n + 2) q^{m/2+2} (W(l_1)W(l_2) − 1)).

Thus, if q^{m/2−2} > (n + 2) W(l_1) W(l_2), then N_{f,n,a,b}(l_1, l_2) > 0 for all f(x) ∈ R_n and prescribed a, b ∈ F_q. The result now follows by taking l_1 = l_2 = q^m − 1.
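To get a feel for condition (3), one may evaluate it directly; the following script (ours, purely illustrative, relying on sympy's factorization and taking n = 2) is a minimal sketch:

```python
# Evaluating condition (3): q^(m/2 - 2) > (n + 2) * W(q^m - 1)^2,
# where W(M) = 2^omega(M) is the number of square-free divisors of M.
from sympy import primefactors

def condition_3(q, m, n=2):
    W = 2 ** len(primefactors(q**m - 1))
    return q ** (m / 2 - 2) > (n + 2) * W ** 2

for q, m in [(2, 7), (3, 12), (5, 14), (7, 20)]:
    print(q, m, condition_3(q, m))   # prints True/False for each pair
```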
For further calculation work we shall need the following results; their proofs have been omitted as they follow the ideas of [6].
Lemma 3.2. For each M ∈ N, if ω(M) ≥ 473, then W(M) < M^{1/10}.
Theorem 3.3. Suppose m, n, q ∈ N are such that q is a prime power. Also let l | (q^m − 1), and let {p_1, ..., p_s} be the collection of all primes dividing q^m − 1 but not l.
Suppose δ = 1 − 2 Σ_{i=1}^{s} 1/p_i > 0 and ∆ = (2s − 1)/δ + 2. If q^{m/2−2} > (n + 2) ∆ W(l)², then (q, m) ∈ Q_n.
4 Computations for Q_2

By [5], for m ≤ 4, there does not exist any primitive element α such that Tr_{F_{q^m}/F_q}(α) = 0 and Tr_{F_{q^m}/F_q}(α⁻¹) = 0. The cases m = 5 and 6 demand an extensive computation and seem to call for a different approach. Consequently, we defer the study of these cases to another occasion. In this paper, we consider the cases m ≥ 7.

First we assume that ω(q^m − 1) ≥ 473. Then, using Lemma 3.2, W(q^m − 1)² < (q^m)^{1/5}, so condition (3) holds if q^{(3m−20)/10} > 4, that is, if q^m > 4^{10m/(3m−20)}, in which case (q, m) ∈ Q_2. But m ≥ 7 gives 10m/(3m−20) ≤ 70. Hence, if q^m > 4^70 then (q, m) ∈ Q_2, which is true for ω(q^m − 1) ≥ 473.

Therefore, we can assume ω(q^m − 1) ≤ 472. To make further progress we use the sieving Theorem 3.3 in place of Theorem 3.1. Let 31 ≤ ω(q^m − 1) ≤ 472. In Theorem 3.3, let l be the product of the least 31 primes dividing q^m − 1, i.e. W(l) = 2^31. Then s ≤ 441 and δ is at least its value when {p_1, p_2, ···, p_441} = {131, 137, ···, 3347}. This gives δ > 0.0008225 and ∆ < 1071081.2759510, hence 4∆W(l)² < 1.9758 × 10^25 = R (say). By Theorem 3.3, (q, m) ∈ Q_2 if q^{m/2−2} > R, that is, if q > R^{2/(m−4)} or q^m > R^{2m/(m−4)}. But m ≥ 7 implies 2m/(m−4) ≤ 14/3. Therefore, if q^m > R^{14/3}, i.e. q^m > 1.1138 × 10^118, then (q, m) ∈ Q_2. Hence, ω(q^m − 1) ≥ 62 gives (q, m) ∈ Q_2.
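The sieving constants quoted in this paragraph can be recomputed directly; the following sketch (ours, assuming as stated that p_1, ..., p_441 are the primes from 131 to 3347) reproduces δ, ∆ and R:

```python
# Recomputing the worst-case sieving constants of Section 4.
from sympy import primerange

ps = list(primerange(131, 3348))          # the primes 131, 137, ..., 3347
s = len(ps)                               # expect s = 441
delta = 1 - 2 * sum(1.0 / p for p in ps)  # expect delta just above 0.0008225
Delta = (2 * s - 1) / delta + 2           # expect Delta just below 1071081.28
R = 4 * Delta * (2.0 ** 31) ** 2          # expect 4*Delta*W(l)^2 < 1.9758e25
print(s, delta, Delta, R)
```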
We repeat the above process of Theorem 3.3 with the values in the first part of Table 1. Hence (q, m) ∈ Q_2 if q^m > (2749163)^{14/3}, i.e. q^m > 1.210 × 10^30, for m = 7, and q^m > (2749163)^4, i.e. q^m > 5.7122 × 10^25, for m ≥ 8 (since m ≥ 8 implies 2m/(m−4) ≤ 4). Therefore, for m ≥ 8, it is sufficient if ω(q^m − 1) ≥ 20. So, repeated use of Theorem 3.3 for the values in the second part of Table 1 provides (q, m) ∈ Q_2 if q^m > (969830)^4, i.e. q^m > 8.8468 × 10^23.

It follows that (q, m) ∈ Q_2 unless m = 7 and q < 19625, m = 8 and q < 985, m = 9 and q < 458, m = 10 and q < 249, m = 11 and q < 151, m = 12 and q < 99, m = 13 and q < 70, m = 14 and q < 52, m = 15 and q < 40, m = 16 and q < 32, m = 17 and q < 26, m = 18 and q < 22, m = 19 and q < 19, m = 20 and q < 16, m = 21 and q < 14, m = 22 and q < 13, m = 23 and q < 11, m = 24, 25 and q < 10, m = 26 and q < 9, m = 27, 28 and q < 8, 29 ≤ m ≤ 34 and q = 2, 3, 4, 5, 35 ≤ m ≤ 39 and q = 2, 3, 4, 40 ≤ m ≤ 50 and q = 2, 3, 51 ≤ m ≤ 79 and q = 2. For each of the above values we verify (3) and get a list of 494 possible exceptions (see Appendix 1). Finally, for these possible exceptions we see that Theorem 3.3 is satisfied for some choice of l, except for the values stated in Theorem 1.1 (see Appendix 2), which proves Theorem 1.1.

Note: In the cases q = 4 with m = 16, 20 and 24 and q = 8 with m = 20, equality occurs in (3), so we keep them among the exceptions for (3) and verify them using Theorem 3.3.

Using similar arguments, for each n ∈ N, one can get a subset of Q_n.

Table 1: Sr. No. | a ≤ ω(q^m − 1) ≤ b | W(l) | δ > | ∆ < | 4∆W(l)² <

Appendix 1.
For m = 7: 2, 4, 8, 16, 32, 64, 256, 512, 1024, 4096, 3, 9, 27, 81, 243, 729, 6561, 5, 25, 125, 625, 3125, 15625, 7, 49, 343, 2401, 11, 121, 1331, 14641, 13, 169, 2197, 19, 361, 23, 529, 29, 31, 37, 41, 1681, 43, 47, 53, 59, 3481, 61, 67, 4489, 71, 79, 6241, 83, 6889, 97, 9409, 101, 103, 107, 109, 127, 131, 17161, 139, 19321, 151, 157, 181, 191, 197, 199, 211, 223, 227, 229, 233, 239, 241, 269, 277, 281, 311, 331, 359, 367, 389, 397, 401, 409, 431, 439, 463, 491, 499, 509, 547, 571, 593, 601, 607, 613, 619, 631, 643, 661, 691, 727, 877, 919, 953, 967, 1021, 1051, 1063, 1093, 1123, 1151, 1171, 1181, 1231, 1283, 1301, 1303, 1321, 1381, 1399, 1453, 1481, 1483, 1499, 1523, 1531, 1597, 1607, 1693, 1741, 1951, 2003, 2141, 2161, 2281, 2311, 2381, 2591, 2713, 2731, 2791, 2887, 2971, 3041, 3083, 3191, 3221, 3229, 3271, 3301, 3307, 3313, 3499, 3547, 3571, 3739, 3851, 3911, 4013, 4219, 4243, 4327, 4957, 5419, 5923, 5981, 6067, 6211, 6491, 6577, 7159, 7759, 8009, 8053, 8191, 8807, 9103, 9403, 9421, 9463, 9719, 9767, 9871, 9901, 9967, 10949, 10957, 12959, 14323, 15313, 15511, 16381, 17431, 17491, 19483.
For m = 8: 2, 4, 8, 16, 32, 64, 128, 512, 3, 9, 27, 81, 243, 729, 5, 25, 125, 7, 49, 343, 11, 121, 13, 169, 17, 19, 361, 23, 529, 29, 841, 31, 961, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113, 127, 131, 137, 139, 149, 151, 157, 163, 167, 173, 179, 181, 191, 193, 197, 211, 223, 227, 229, 233, 239, 241, 251, 263, 269, 271, 277, 281, 283, 293, 307, 311, 313, 317, 331, 337, 347, 349, 353, 359, 367, 373, 379, 383, 389, 397, 401, 409, 419, 421, 433, 439, 443, 457, 461, 463, 467, 491, 499, 509, 521, 547, 557, 563, 571, 587, 593, 599, 601, 617, 619, 631, 647, 653, 659, 661, 683, 691, 701, 709, 727, 733, 739, 743, 757, 773, 787, 797, 809, 811, 823, 827, 829, 839, 853, 857, 859, 863, 881, 887, 911, 919, 929, 937, 941, 947, 953, 967, 971, 977, 983.
For m = 9: 2, 4, 8, 16, 32, 256, 3, 9, 27, 81, 5, 25, 125, 7, 49, 11, 121, 13, 169, 19, 23, 29, 31, 37, 43, 47, 53, 61, 79, 83, 137, 139, 211, 367, 379.
For m = 10: 2, 4, 8, 16, 32, 64, 3, 9, 27, 5, 25, 125, 7, 49, 11, 13, 169, 17, 19, 23, 29, 31, 37, 41, 53, 59, 61, 89, 101, 113, 137, 139, 149.
For m = 11: 2, 4, 16, 3, 9, 7, 13.
For m = 12: 2, 4, 8, 16, 32, 64, 3, 9, 27, 81, 5, 7, 49, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 89.
For m = 14: 2, 4, 3, 5.
For m = 15: 2, 4, 16, 3, 9, 5.
For m = 16: 2, 4, 8, 3, 5.
For m = 18: 2, 4, 3.
For m = 20: 2, 4, 8.
For m = 22: 2.
For m = 24: 2, 3.
For m = 28: 2.
For m = 30: 2.
For m = 36: 2.
Appendix 2.
Sr. No. | q | l | s
Xiwang Cao and Peipei Wang. Primitive elements with prescribed trace. Applicable Algebra in Engineering, Communication and Computing, 25(5):339-345, 2014.
Wun-Seng Chou and Stephen D. Cohen. Primitive elements with zero traces. Finite Fields and Their Applications, 7(1):125-141, 2001.
S. D. Cohen. Consecutive primitive roots in a finite field. Proc. Amer. Math. Soc., pages 189-197, 1985.
Stephen D. Cohen and Mateja Prešern. Primitive finite field elements with prescribed trace. Southeast Asian Bulletin of Mathematics, 29(2), 2005.
Lei Fu and Daqing Wan. A class of incomplete character sums. arXiv preprint arXiv:1303.3650, 2013.
A. Gupta, R. K. Sharma, and S. D. Cohen. Primitive element pairs with one prescribed trace over a finite field. Finite Fields Appl., 54:1-14, 2018.
Dieter Jungnickel and Scott A. Vanstone. On primitive polynomials over finite fields. Journal of Algebra, 124(2):337-353, 1989.
R. Lidl and H. Niederreiter. Finite fields, volume 20. Cambridge University Press, 1997.
C. Paar and J. Pelzl. Public-Key Cryptosystems Based on the Discrete Logarithm Problem, pages 205-238. Springer Berlin Heidelberg, Berlin, Heidelberg, 2010.
R. K. Sharma and A. Gupta. Pair of primitive elements with prescribed traces over finite fields. Comm. Alg., 47(3):1278-1286, 2019.
Fan Shuqin and Han Wenbao. Character sums over Galois rings and primitive polynomials over finite fields. Finite Fields and Their Applications, 10(1):36-52, 2004.
| []
|
[
"Simulating cyclotron-Bloch dynamics of a charged particle in a 2D lattice by means of cold atoms in driven quasi 1D optical lattices",
"Simulating cyclotron-Bloch dynamics of a charged particle in a 2D lattice by means of cold atoms in driven quasi 1D optical lattices"
]
| [
"Andrey R Kolovsky \nKirensky Institute of Physics\n660036KrasnoyarskRussia\n\nSiberian Federal University\n660036KrasnoyarskRussia\n"
]
| [
"Kirensky Institute of Physics\n660036KrasnoyarskRussia",
"Siberian Federal University\n660036KrasnoyarskRussia"
]
| []
| Quantum dynamics of a charged particle in a 2D lattice subject to magnetic and electric fields is a rather complicated interplay between cyclotron oscillations (the case of vanishing electric field) and Bloch oscillations (zero magnetic field), details of which have not yet been completely understood. In the present work we suggest to study this problem by using cold atoms in optical lattices. We introduce a 1D model which can be easily realized in laboratory experiments with quasi 1D optical lattices and show that this model captures many features of the cyclotron-Bloch dynamics of the quantum particle in 2D square lattices. | 10.1007/s11467-011-0202-3 | [
"https://arxiv.org/pdf/1106.0945v1.pdf"
]
| 58,918,325 | 1106.0945 | deb9b2793303e743cc71d8acb0f20b0ee32437d0 |
Simulating cyclotron-Bloch dynamics of a charged particle in a 2D lattice by means of cold atoms in driven quasi 1D optical lattices
6 Jun 2011
Andrey R Kolovsky
Kirensky Institute of Physics
660036KrasnoyarskRussia
Siberian Federal University
660036KrasnoyarskRussia
Simulating cyclotron-Bloch dynamics of a charged particle in a 2D lattice by means of cold atoms in driven quasi 1D optical lattices
6 Jun 2011(Dated: January 27, 2013)
Quantum dynamics of a charged particle in a 2D lattice subject to magnetic and electric fields is a rather complicated interplay between cyclotron oscillations (the case of vanishing electric field) and Bloch oscillations (zero magnetic field), details of which have not yet been completely understood. In the present work we suggest to study this problem by using cold atoms in optical lattices. We introduce a 1D model which can be easily realized in laboratory experiments with quasi 1D optical lattices and show that this model captures many features of the cyclotron-Bloch dynamics of the quantum particle in 2D square lattices.
I. INTRODUCTION
A charged particle in 2D periodic potentials, subjected to an in-plane electric field and a magnetic field normal to the plane, is a problem of lasting fundamental interest because of its relation to the quantum Hall effect. This problem was considered in several different approximations in the solid-state physics literature, with the main emphasis on the energy spectrum of the system or, more precisely, on the electron density of states, which is a measurable quantity. Among these approximations the most popular is the tight-binding approximation, which amounts to a truncation of the Hilbert space of the single-particle
Hamiltonian to the lowest Bloch band. In the case of zero electric field this approximation results in the celebrated Hofstadter butterfly spectrum [1], which is parametrized by the Peierls phase α, the number of magnetic flux quanta through the unit cell of the lattice [2]. In the opposite case of zero magnetic field the spectrum is the so-called Wannier-Stark ladder [3,4], which is parametrized by the angle θ between the electric field vector and the crystallographic axis of the lattice. The case when both fields are present is more subtle and, to the best of our knowledge, was analyzed for the first time only in 1995 [5].
Complementary to the spectral problem is the wave-packet dynamics of the particle. The main question one addresses here is whether a localized packet remains localized in the course of time or it spreads over the lattice. If for electrons in a solid crystal this problem is of pure academic interest, it appears to be of experimental relevence for cold atoms in optical lattices because in this system the wave-packet dynamics can be easily tracked by taking a picture of the atomic cloud after a given evolution time. A recent example is Ref.
[6] where the authors realized the Aubry-Andre model [7] (which coincides with Harper's Hamiltonian
[8] for the model parameter λ = 2) by loading cold atoms into the quasiperiodic 1D optical lattice. It was confirmed that the atomic cloud remains localized for large λ and spreads over the lattice for small λ.
Present research in cold atoms physics is also focused on the problem of generating synthetic magnetic fields, which could impart a Lorentz-like force to otherwise neutral atoms in motion (see, for example, the recent paper [9] and references therein). Since the electric field for cold atoms is easily mimicked by, for example, the gravitational force, this will open a perspective for studying the 2D wave-packet dynamics in the Hall configuration.
Theoretically, this problem was analyzed in much detail in our recent publications [10,11].
The main message of the present work is that many (although not all) theoretical predictions of Refs. [10,11] can be verified by using the driven 1D lattice instead of the 2D lattice with an artificial magnetic field. The proposed experiment is a modification of the laboratory setup [6], where one substitutes one of the stationary lattices by a moving lattice.
II. WAVE-PACKET DYNAMICS IN THE HALL CONFIGURATION
To make the paper self-consistent we summarize the results of Refs. [10,11]. Using the tight-binding approximation the 2D wave-packet dynamics in the Hall configuration is governed by the following Schrödinger equation:
iℏψ̇_{l,m} = −(J_x/2)(ψ_{l+1,m} + h.c.) − (J_y/2)(e^{i2παl} ψ_{l,m+1} + h.c.) + ea(F_x l + F_y m) ψ_{l,m}.   (1)
In this equation ψ l,m are the wave function probability amplitudes for the lattice site (l, m), J x and J y the hopping matrix elements along the x and y axis (in what follows we assume J x = J y ≡ J for simplicity), e the charge, a the lattice period, F x and F y components of the electric field vector F, α = eBa 2 /hc the Peierls phase, and we use the Landau gauge A = B(0, x, 0) for the vector potential.
The main conclusion of Ref. [10] is that the system (1) has two qualitatively different dynamical regimes, depending on the inequality relation between the electric field magnitude and the quantity F_cr = 2παJ/ea. In the strong field regime, F > F_cr, the time evolution of a localized wave-packet is defined by the Bloch dynamics. For vanishing magnetic field these would be Bloch oscillations, where the packet oscillates near its initial position with the Bloch frequencies ω_{x,y} = eaF_{x,y}/ℏ and amplitudes proportional to J_{x,y}/eaF_{x,y}. The obvious exception from this oscillatory behavior is the case where the vector F points along the x or y direction. Here the packet spreads ballistically in the direction orthogonal to the field, with the rate defined by the hopping matrix element. A finite magnetic field (nonzero α) 'generalizes' this exclusion to the cases where the vector F points along rational directions, i.e., F_x/F_y = r/q with r, q being co-prime numbers [11]. However, now the rate of ballistic spreading in the orthogonal direction is suppressed by a factor proportional to (J/eaF)^{r+q−1}. In practice this functional dependence of the suppression coefficient implies that the wave packet spreading can be detected only for rational directions with small prime numbers r and q.
In the opposite limit of weak electric fields, F < F_cr, the time evolution of a localized wave-packet is defined by the cyclotron dynamics. Namely, the packet moves in the direction orthogonal to F with the drift velocity v*,

v* = ea²F/hα = Fc/B,   (2)
in close analogy with a charged particle in free space under the effect of the crossing electric and magnetic fields. However, the presence of the lattice imposes a restriction: this regime occurs only for the subspace of initial conditions spanned by the family of transporting states.
For generic initial conditions the packet typically splits into several sub-packets moving in the orthogonal direction with different (both positive and negative) velocities.
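As a consistency check, the two expressions for v* in Eq. (2) agree once the definition of the Peierls phase α = eBa²/hc is inserted:

$$ v^{*} \;=\; \frac{e a^{2} F}{h\alpha} \;=\; \frac{e a^{2} F\, c}{e B a^{2}} \;=\; \frac{Fc}{B}. $$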
III. THE 1D APPROXIMATION
In this section we introduce the 1D approximation to the above 2D problem. First we use the unitary transformation ψ_{l,m}(t) → exp[−i(ω_x l + ω_y m)t] ψ_{l,m}(t), after which the electric field appears as a periodic driving of the system with the Bloch frequencies ω_x and ω_y:

iℏψ̇_{l,m} = −(J_x/2)(e^{−iω_x t} ψ_{l+1,m} + h.c.) − (J_y/2)(e^{i(2παl−ω_y t)} ψ_{l,m+1} + h.c.).   (3)
Let us now assume a situation where the wave function is uniform along the y axis, i.e., ψ_{l,m}(t) = L^{−1/2} b_l(t). This reduces (3) to the following 1D Schrödinger equation for the complex amplitude b_l:

iℏḃ_l = −(J_x/2)(e^{−iω_x t} b_{l+1} + h.c.) − J_y cos(2παl − ω_y t) b_l.   (4)
Although the above assumption is rather specific, it was shown in Ref. [10] that |b_l(t)|² well approximates the integrated probability P_l(t) = Σ_m |ψ_{l,m}(t)|² also in the case of a localized 2D wave packet, if its size exceeds the magnetic period d = a/α. Thus we can use the results of Refs. [10,11] to explain dynamical regimes of the system (4) and, vice versa, to verify theoretical predictions of the cited papers by studying the wave-packet dynamics of this one-dimensional system. Note that in terms of Eq. (4) the weak and strong field regimes correspond to slow driving, ω < ω_cr,

ω = (ω_x² + ω_y²)^{1/2},   ω_cr = 2παJ/ℏ,   (5)
and fast driving, ω > ω cr , respectively.
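Equation (4) is straightforward to integrate numerically; the following is a minimal sketch (ours, not the author's code) with ℏ = 1, periodic boundary conditions, and parameter values matching Fig. 1, which tracks the drift of the packet's first moment:

```python
# Minimal numerical propagation of Eq. (4) (illustration only; hbar = 1,
# periodic boundaries via np.roll, parameters as in Fig. 1).
import numpy as np
from scipy.integrate import solve_ivp

L, Jx, Jy, alpha = 256, 1.0, 1.0, 0.1
wx, wy = 0.0, 0.1                        # Bloch (driving) frequencies
l = np.arange(L) - L // 2

def rhs(t, b):
    hop = -0.5 * Jx * (np.exp(-1j * wx * t) * np.roll(b, -1)
                       + np.exp(1j * wx * t) * np.roll(b, 1))
    onsite = -Jy * np.cos(2 * np.pi * alpha * l - wy * t) * b
    return -1j * (hop + onsite)          # i db/dt = H(t) b

sigma = (2 * np.pi * alpha * Jy / Jx) ** -0.5
b0 = np.exp(-l**2 / (2 * sigma**2)).astype(complex)
b0 /= np.linalg.norm(b0)

sol = solve_ivp(rhs, (0, 200), b0, t_eval=np.linspace(0, 200, 5), rtol=1e-8)
for bt in sol.y.T:                       # the first moment should drift at
    print(np.sum(l * np.abs(bt)**2))     # roughly v = a*wy/(2*pi*alpha)
```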
The system (4) coincides with that studied in the laboratory experiment [6] with two minor modifications. First, now the secondary (in terminology of Ref.
[6]) lattice moves with the constant velocity relative to the primary lattice. This can be done by linearly chirping the frequencies of two counter-propagating waves which form the secondary lattice [12]. Second, the hopping term in Eq. (4) contains an oscillatory phase. This phase can be introduced by the same techniques which one uses to induce Bloch oscillations of cold atoms (for example, by employing the gravitational force for vertically oriented lattices).
IV. THE REGIME OF SLOW DRIVING
A way to check that the 1D system (4) correctly captures the features of the 2D system (1) in the weak field regime (which now assumes ω < ω cr ) is to propagate the transporting state. In the 2D lattice it moves with the drift velocity (2) in the orthogonal to F direction.
For the considered system (4) this means that one can construct a coherent wave packet which will travel across the lattice with the constant velocity

v = aω_y/2πα.   (6)

Figure 1(a) shows the result of numerical simulation for the initial Gaussian wave packet, b_l(t = 0) ∼ exp(−l²/2σ²), with the width σ = (2πα J_y/J_x)^{−1/2}. This packet approximates the ground Wannier state for the potential V(l) = −J_y cos(2παl) and is a 1D analogue of the 2D transporting state with the same dispersion in the orthogonal direction [13]. It is seen in Fig. 1(a) that the packet indeed moves with the velocity given in Eq. (6).
Creating a narrow coherent atomic wave packet might be a problem in a laboratory experiment. For this reason, from now on we focus on the case of a thermal atomic cloud, which corresponds to a wide incoherent Gaussian wave packet. We simulate the dynamics of this incoherent packet by assigning random phases to the probability amplitudes of the initial Gaussian packet and averaging the result over different realizations of these random phases.
The right panel in Fig. 1 shows the typical evolution of a wide incoherent packet. Unlike the case of a narrow coherent packet, now the first moment M_1 = Σ_l l |b_l|² remains constant, while the dispersion σ(t) = (M_2 − M_1²)^{1/2} grows linearly in the limit of large times.
V. THE REGIME OF FAST DRIVING
The characteristic feature of the cyclotron-Bloch dynamics for F > F cr is the strong dependence of the rate of spreading on the field direction or, what is the same, on the ratio between the two Bloch frequencies. The 1D system (4) fairly reproduces this dependence.
As an example, in Fig. 2 we depict the coefficient A for the asymptotic linear growth of the wave-packet dispersion, σ(t) ≈ At, as a function of the frequency ω for ω_x/ω_y = 0 and ω_x/ω_y = 1. In the former case, the rate of ballistic spreading is seen to approach the constant value A = J_x/√2ℏ, while in the latter case it decreases as

A = (J_x/2ℏ)(J_y/ℏω)^ν,   (7)
where ν = 1. In the general case of arbitrary rational ratio ω x /ω y = r/q the exponent is ν = r + q − 1 and one has to evolve the system for algebraically large times to reach the asymptotic regime. Finally, for irrational ω x /ω y there is no asymptotic linear growth in σ(t) but oscillations in time (see Fig. 3).
VI. FINITE EVOLUTION TIMES
In the preceding sections we discussed the asymptotic dynamics of the system. Clearly, in a laboratory experiment the system evolution is restricted to some maximal time interval, which may not be large enough to speak about the asymptotic regime. Nevertheless, all effects mentioned above are well observed also for short evolution times. To support this statement, Fig. 4 shows the wave-packet dispersion at t = 30T_J as the function of ω for three different ratios between the driving frequencies. Two dynamical regimes, which are separated by the critical frequency (5), can be easily identified, and for ω > ω_cr one clearly sees the difference between rational and irrational ω_x/ω_y. In addition to Fig. 4, the panels (b-d) in Fig. 5 depict populations of the lattice sites at the end of numerical simulations for ω = 1. (Tiny wiggling of the curves is an artifact due to the Monte-Carlo method of simulating the dynamics of an incoherent packet.) We note that particular shapes of the packets seen in the figure are rather sensitive to variations of the system parameters and the evolution time. This sensitivity, however, disappears for integrated characteristics like the wave-packet dispersion.
VII. CONCLUSION
We have studied numerically the dynamics of non-interacting cold atoms in a driven 1D optical lattice with a particular driving. This driving assumes the presence of a static force, which we characterize by the parameter ω_x (the Bloch frequency), and a shallow secondary lattice with a larger period d = a/α, which moves at constant velocity aω_y relative to the deep primary lattice. It is shown that this system well reproduces many features of the cyclotron-Bloch dynamics of a quantum particle in a 2D lattice. In particular, we find two qualitatively different dynamical regimes of the 1D system, depending on the driving frequencies.
In the first regime (slow driving, ω ≡ (ω_x² + ω_y²)^{1/2} < ω_cr) a cloud of non-condensed atoms spreads ballistically across the lattice with a rate proportional to the frequency ω_y.
In the second regime (fast driving, ω > ω_cr) the size of the atomic cloud oscillates in time if ω_x/ω_y is an irrational number, while for rational ω_x/ω_y = r/q these oscillations are accompanied by slow ballistic spreading with a rate inversely proportional to the frequency ω raised to the power ν = r + q − 1.
We conclude the paper with a remark concerning the commensurability condition between the lattice periods, i.e., the rationality condition on the parameter α. This condition is known to play a crucial role in the case of stationary lattices [6,7]. Unlike this situation, for driven lattices we did not find the commensurability condition on the lattice periods to be of any importance, although we do not exclude the possibility that it might be important in some particular situations.
FIG. 1: Space-time plot of the dynamics of the narrow coherent (left panel) and a wide incoherent (right panel) Gaussian wave-packets. Parameters are J_x = J_y = 1, α = 1/10, ω = 0.1, and ω_x/ω_y = 0. The time is measured in units of the tunneling period T_J = h/J.

FIG. 2: The proportionality coefficient A = A(ω) for the asymptotic linear growth of the wave-packet dispersion, σ(t) ≈ At/T_J, for ω_x/ω_y = 0 and ω_x/ω_y = 1. The dashed lines are analytical estimates for large ω according to Ref. [11].

FIG. 3: The wave-packet dispersion as the function of time for ω_x/ω_y = 1 (dash-dotted line), ω_x/ω_y = 18/19 (solid line), and ω_x/ω_y = (√5 − 1)/4 (dashed line). The other parameters are the same as in Fig. 1(b), yet ω = 1.

FIG. 4: The wave-packet dispersion at t = 30T_J as the function of ω for ω_x/ω_y = 1 (d), ω_x/ω_y = ...

FIG. 5: Population of the lattice sites at the end of numerical simulation for ω = 1. Panels (b-d) correspond to the three cases considered in Fig. 4; the panel (a) shows the initial packet.
Acknowledgments
This work was partially supported by the Russian Foundation for Basic Research, grant RFBR-10-02-00171-a.
D. R. Hofstadter, Phys. Rev. B 14, 2239 (1976).
R. E. Peierls, Z. Phys. 80, 763 (1933).
T. Nakanishi, T. Ohtsuki, and M. Saitoh, J. Phys. Soc. Japan 62, 2773 (1993).
M. Glück, F. Keck, A. R. Kolovsky, and H. J. Korsch, Phys. Rev. Lett. 86, 3116 (2001).
T. Nakanishi, T. Ohtsuki, and M. Saitoh, J. Phys. Soc. Japan 64, 2092 (1995).
A. R. Kolovsky, Europhys. Lett. 93, 20003 (2011).
A. R. Kolovsky and G. Mantica, Phys. Rev. E 83, 041123 (2011).
I. Chesnokov, A. R. Kolovsky and G. Mantica, in preparation.
| []
|
[
"Safe Self-Refinement for Transformer-based Domain Adaptation",
"Safe Self-Refinement for Transformer-based Domain Adaptation"
]
| [
"Tao Sun \nStony Brook University\n\n",
"Cheng Lu \nXPeng Motors\n\n",
"Tianshuo Zhang \nXPeng Motors\n\n",
"Haibin Ling [email protected] \nStony Brook University\n\n"
]
| [
"Stony Brook University\n",
"XPeng Motors\n",
"XPeng Motors\n",
"Stony Brook University\n"
]
| []
| Unsupervised Domain Adaptation (UDA) aims to leverage a label-rich source domain to solve tasks on a related unlabeled target domain. It is a challenging problem especially when a large domain gap lies between the source and target domains. In this paper we propose a novel solution named SSRT (Safe Self-Refinement for Transformer-based domain adaptation), which brings improvement from two aspects. First, encouraged by the success of vision transformers in various vision tasks, we arm SSRT with a transformer backbone. We find that the combination of vision transformer with simple adversarial adaptation surpasses best reported Convolutional Neural Network (CNN)-based results on the challenging DomainNet benchmark, showing its strong transferable feature representation. Second, to reduce the risk of model collapse and improve the effectiveness of knowledge transfer between domains with large gaps, we propose a Safe Self-Refinement strategy. Specifically, SSRT utilizes predictions of perturbed target domain data to refine the model. Since the model capacity of vision transformer is large and predictions in such challenging tasks can be noisy, a safe training mechanism is designed to adaptively adjust learning configuration. Extensive evaluations are conducted on several widely tested UDA benchmarks and SSRT achieves consistently the best performances, including 85.43% on Office-Home, 88.76% on VisDA-2017 and 45.2% on DomainNet. | 10.1109/cvpr52688.2022.00705 | [
"https://arxiv.org/pdf/2204.07683v1.pdf"
]
| 247,935,498 | 2204.07683 | 7f1110ab4270bcf53b685bed0d024aec38a655c2 |
Safe Self-Refinement for Transformer-based Domain Adaptation
Tao Sun
Stony Brook University
Cheng Lu
XPeng Motors
Tianshuo Zhang
XPeng Motors
Haibin Ling [email protected]
Stony Brook University
Safe Self-Refinement for Transformer-based Domain Adaptation
Unsupervised Domain Adaptation (UDA) aims to leverage a label-rich source domain to solve tasks on a related unlabeled target domain. It is a challenging problem especially when a large domain gap lies between the source and target domains. In this paper we propose a novel solution named SSRT (Safe Self-Refinement for Transformer-based domain adaptation), which brings improvement from two aspects. First, encouraged by the success of vision transformers in various vision tasks, we arm SSRT with a transformer backbone. We find that the combination of vision transformer with simple adversarial adaptation surpasses best reported Convolutional Neural Network (CNN)-based results on the challenging DomainNet benchmark, showing its strong transferable feature representation. Second, to reduce the risk of model collapse and improve the effectiveness of knowledge transfer between domains with large gaps, we propose a Safe Self-Refinement strategy. Specifically, SSRT utilizes predictions of perturbed target domain data to refine the model. Since the model capacity of vision transformer is large and predictions in such challenging tasks can be noisy, a safe training mechanism is designed to adaptively adjust learning configuration. Extensive evaluations are conducted on several widely tested UDA benchmarks and SSRT achieves consistently the best performances, including 85.43% on Office-Home, 88.76% on VisDA-2017 and 45.2% on DomainNet.
Introduction
Deep neural networks have achieved impressive performance in a variety of machine learning tasks. However, this success often relies on a large amount of labeled training data, which can be costly or impractical to obtain. Unsupervised Domain Adaptation (UDA) [36] handles this issue by transferring knowledge from a label-rich source domain to a different unlabeled target domain. Over the past years, many UDA methods have been proposed [4,12,14,24,45]. Among them, adversarial adaptation [4,14,45], which learns domain-invariant feature representations using the idea of adversarial learning, has been a prevailing paradigm. Deep UDA methods are usually applied in conjunction with a pretrained Convolutional Neural Network (CNN, e.g., ResNet [8]) backbone in vision tasks. On medium-sized classification benchmarks such as Office-Home [33] and VisDA [20], the reported state-of-the-art results are very impressive [12]. However, on large-scale datasets like DomainNet [19], the most recent results in the literature at the time of our submission report a best average accuracy of 33.3% [10], which is far from satisfactory.
With the above observations, we focus our investigation on challenging cases from two aspects:
• First, from the representation aspect, it is desirable to use a more powerful backbone network. This directs our attention to the recently popularized vision transformers, which have been successfully applied to various vision tasks [2,3,43]. A vision transformer processes an image as a sequence of tokens and uses global self-attention to refine this representation. With its long-range dependencies and large-scale pre-training, the vision transformer obtains a strong feature representation that is ready for downstream tasks. Despite this, its application in UDA is still under-explored. Hence we propose to integrate the vision transformer into UDA. We find that simply combining ViT-B/16 [3] with adversarial adaptation achieves 38.5% average accuracy on DomainNet, better than the current state of the art using ResNet-101 [8,10]. This shows that the feature representation of the vision transformer is discriminative as well as transferable across domains.
• Second, from the domain adaptation aspect, a more reliable strategy is needed to protect the learning process from collapse due to large domain gaps. Since strong backbones with large capacity like the vision transformer increase the chance of overfitting to source domain data, regularization from target domain data is desired. A common practice in UDA is to utilize model predictions for self-training or to enforce a clustering structure on target domain data [12,24,44]. While this helps in general, the supervision can be noisy when the domain gap is large. Therefore, an adaptation method is expected to be Safe [11] enough to avoid model collapse.
Motivated by the above discussions, in this paper we propose a novel UDA solution named SSRT (Safe Self-Refinement for Transformer-based domain adaptation). SSRT takes a vision transformer as the backbone network and utilizes predictions on perturbed target domain data to refine the adapted model. Specifically, we add random offsets to the latent token sequences of target domain data, and minimize the discrepancy between the model's predicted probabilities for the original and perturbed versions using the Kullback-Leibler (KL) divergence. In effect, this imposes a regularization on the corresponding transformer layers. Moreover, SSRT has several important components that contribute to its excellent performance, including multi-layer perturbation and bi-directional supervision.
To protect the learning process from collapse, we propose a novel Safe Training mechanism. As UDA tasks vary widely even when they are drawn from the same dataset, a specific learning configuration (e.g., hyper-parameters) that works on most tasks may fail on some particular ones. The learning configuration is thus desired to be automatically adjustable. For example, for perturbation-based methods [17,25], a small perturbation may under-exploit their benefits while a large one may result in collapse. Recent works [1,29] apply a manually defined ramp-up period at the beginning of training. However, this cannot solve the issue when its maximum value is improper for the current task. In contrast, we propose to monitor the whole training process and adjust learning configuration adaptively. We use a diversity measure of model predictions on the target domain data to detect model collapse. Once it occurs, the model is restored to a previously achieved state and the configuration is reset. With this safe training strategy, our SSRT avoids significant performance deterioration on adaptation tasks with large domain gaps. The code is available at https://github.com/tsun/SSRT.
In summary, we make the following contributions:
• We develop a novel UDA solution SSRT, which adopts a vision transformer backbone for its strong transferable feature representation, and utilizes the predictions on perturbed target domain data for model refinement.
• We propose a safe training strategy to protect the learning process from collapse due to large domain gaps. It adaptively adjusts the learning configuration during the training process with a diversity measure of model predictions on target domain data.
• SSRT is among the first to explore vision transformers for domain adaptation. Vision transformer-based UDA has shown promising results, especially on large-scale datasets like DomainNet.
• Extensive experiments are conducted on widely tested benchmarks. Our SSRT achieves the best performances, including 85.43% on Office-Home, 88.76% on VisDA-2017 and 45.2% on DomainNet.
Related Work
Unsupervised Domain Adaptation. There are several prevailing categories of UDA methods. Discrepancy-based methods minimize the distribution divergence between the source and target domains with discrepancy measures [15,28,32]. Adversarial adaptation methods learn domain-invariant representations by playing a two-player min-max game between the feature extractor and a domain discriminator [4,28,31,32]. Recently, many works exploit self-training for domain adaptation [16,46,47]: they generate pseudo labels for target domain data and take them as labeled data to refine the model.
Transformer in Vision. Vision Transformer (ViT) [3] is a pioneering work that applies a convolution-free transformer structure to image classification. Following that, many ViT variants have been proposed [7,13,30,41]. Transformers have been applied successfully to various vision tasks including image classification [3,30], object detection [2], and semantic segmentation [27].
The application of vision transformers in domain adaptation, however, is still very scarce. Notably, two concurrent explorations [39,40] have recently been reported on arXiv. Specifically, CDTrans [39] is a pure transformer solution for UDA that applies cross-attention on source-target image pairs. TVT [40] proposes a transferable multi-head self-attention module and combines it with adversarial adaptation. Our method is different in that it uses pairs of target domain data and their perturbed versions to refine the model, which guarantees the same semantic class. Besides, we carefully design the components of our model and the training strategy to avoid collapse on challenging tasks.
Consistency Regularization. Consistency regularization is an important technique in semi-supervised learning that achieves state-of-the-art results [25]. It leverages the idea that model predictions should be similar for semantically identical data. Some methods create perturbed inputs with adversarial training [17], while others use standard data augmentations [1,25,37]. These works mostly manipulate raw input images. In contrast, our study focuses on the latent token sequence representation of vision transformer.
Proposed Method
Problem Formulation
In Unsupervised Domain Adaptation, there is a source domain with labeled data D_s = {(x_i^s, y_i^s)}_{i=1}^{n_s} from X × Y and a target domain with unlabeled data D_t = {x_i^t}_{i=1}^{n_t} from X, where X is the input space and Y is the label space. UDA aims to learn a classifier h = g ∘ f, where f(·; θ_f): X → Z denotes the feature extractor, g(·; θ_g): Z → Y denotes the class predictor, and Z is the latent space. Adversarial adaptation learns domain-invariant features via a binary domain discriminator d(·; θ_d): Z → [0, 1] that maps features to domain labels. The objective is formulated as

min_{f,g} max_d  L = L_CE − L_d + β L_tgt,   (1)

where L_CE is the standard cross-entropy loss on source domain data and L_d is the domain adversarial loss, defined as

L_d = −E_{x∼D_s} log d(f(x)) − E_{x∼D_t} log(1 − d(f(x))).

β is a trade-off parameter, and L_tgt is a loss on target domain data. A common choice of L_tgt is the Mutual Information Maximization loss [6,23]. In our method, we instantiate it as the self-refinement loss L_SR, introduced in Sec. 3.4.
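To make Eq. (1) concrete, the following is a minimal PyTorch sketch of the objective. It is illustrative only: the modules f, g, d and the target loss l_tgt are placeholders for the components described above, and the gradient-reversal trick with a logits-based binary cross-entropy is one standard way (not necessarily the authors') to realize the min-max game.

import torch
import torch.nn.functional as F
from torch.autograd import Function

class GradReverse(Function):
    """Gradient reversal: identity in the forward pass, negated gradient in the
    backward pass. This folds max_d of Eq. (1) into a single minimization."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

def uda_objective(f, g, d, x_s, y_s, x_t, l_tgt, beta=0.2):
    z_s, z_t = f(x_s), f(x_t)                     # source / target features
    loss_ce = F.cross_entropy(g(z_s), y_s)        # L_CE on labeled source data
    # Domain adversarial loss L_d; the reversal layer flips its gradient sign
    # for f, so minimizing the sum below plays the adversarial game of Eq. (1).
    logit_s = d(GradReverse.apply(z_s))
    logit_t = d(GradReverse.apply(z_t))
    loss_d = F.binary_cross_entropy_with_logits(logit_s, torch.ones_like(logit_s)) \
           + F.binary_cross_entropy_with_logits(logit_t, torch.zeros_like(logit_t))
    return loss_ce + loss_d + beta * l_tgt(z_t)   # total loss to minimize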
Method Framework
We aim to regularize the latent feature spaces of the transformer backbone by refining the model with perturbed target domain data. Figure 1 illustrates the framework of our proposed SSRT; only target domain data are shown, and the domain discriminator is not plotted. The network consists of a vision transformer backbone and a classifier head. For each target domain image, the Patch Embedding layer transforms it into a token sequence comprising a special class token and image tokens. The sequence is then refined by a series of Transformer Blocks. The classifier head takes the class token and outputs the label prediction. We randomly choose one transformer block and add a random offset to its input token sequence. The corresponding predicted class probabilities of the original and perturbed versions are then used for bi-directional self-refinement. To avoid noisy supervision, only reliable predictions are used via a Confidence Filter. To reduce the risk of model collapse, we use a safe training mechanism to learn the model.
Multi-layer Perturbation for Transformer
While many works manipulate the raw input images [1,17,25], it may be better to do so at hidden layers [34]. The vision transformer has some particular properties due to its special architecture. Since the Patch Embedding layer is merely a convolutional layer plus the position embedding, a linear operation on the raw input can be shifted equivalently to the first transformer block. Besides, due to residual connections within transformer blocks, the token sequences at adjacent blocks are highly correlated. The best layer at which to add perturbation, however, varies across tasks. Empirically, perturbing relatively deep layers performs better but carries a higher risk of model collapse. Therefore, we randomly choose one from multiple layers, which proves more robust than perturbing any single one of them. In effect, this imposes a regularization on multiple layers simultaneously, making the learning process safer.
Given a target domain image x, let b_x^l be its input token sequence of the l-th transformer block. b_x^l can be viewed as a latent representation of x in a hidden space. Since its dimension is high while the support of target domain data in the space is limited, it is inefficient to perturb b_x^l arbitrarily. Instead, we utilize the token sequence b_{x_r}^l of another randomly chosen target domain image x_r to add an offset. The perturbed token sequence of b_x^l is obtained as

b̃_x^l = b_x^l + α [b_{x_r}^l − b_x^l]_×,   (2)

where α is a scalar and [·]_× means no gradient back-propagation. Note that although gradients cannot back-propagate through the offset, they can pass through b_x^l. The importance of this is elaborated in the following section.
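As a concrete illustration of Eq. (2), the sketch below perturbs a batch of token sequences at once; the tensor layout (batch, tokens, dim) and the function name are assumptions made for the example, not the released implementation.

import torch

def perturb_tokens(b: torch.Tensor, alpha: float = 0.3) -> torch.Tensor:
    """Offset each sequence b_x^l toward the sequence of another randomly
    chosen target image x_r (Eq. 2); the offset itself carries no gradient."""
    perm = torch.randperm(b.size(0), device=b.device)  # pick an x_r per sample
    offset = (b[perm] - b).detach()                    # [.]_x : stop-gradient
    return b + alpha * offset                          # gradients still flow via b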
In addition to the manually injected perturbation, the Dropout layer in the classifier head also works randomly for the two branches. This creates another source of discrepancy for the self-refinement loss.
Bi-directional Self-Refinement
Now we are ready to define the loss function used for self-refinement. Let p_x and p̃_x be the predicted probability vectors corresponding to b_x^l and b̃_x^l, respectively. To measure their distance, the KL divergence is commonly used:

D_KL(p_t ∥ p_s) = Σ_i p_t[i] log (p_t[i] / p_s[i]),   (3)

where p_t is the teacher probability (a.k.a. target probability) and p_s is the student probability. Note that the KL divergence is asymmetric in p_t and p_s. While it is natural to take p_x as the teacher probability since it corresponds to the original data, we find the reverse also works. Moreover, as shown in Sec. 4.3, it is more robust to combine them. Our bi-directional self-refinement loss is defined as

L_SR = E_{B_t∼D_t} [ ω E_{x∼F[B_t; p]} D_KL(p_x ∥ p̃_x) + (1 − ω) E_{x∼F[B_t; p̃]} D_KL(p̃_x ∥ p_x) ],   (4)

where ω is a random variable drawn from a Bernoulli distribution B(0.5), and F is a Confidence Filter defined as

F[D; p] = {x ∈ D | max(p_x) > ϵ},   (5)

where ϵ is a predefined threshold. L_SR refines the model with confident predictions and regularizes it to predict smoothly in the latent feature spaces. Typically, the loss gradient is only back-propagated through the student probability (i.e., p_s in Eq. 3) [1,17,18]. We find, however, that it is better to back-propagate gradients through both teacher and student probabilities in our framework. Recall that ∂L_SR/∂b̃_x^l is propagated to b_x^l identically in Eq. 2. Each model parameter is therefore updated based on the joint effects from p_x and p̃_x. This avoids excessively large gradients from any single probability. We observe degraded performance when either the gradients of the teacher probabilities in the KL divergence or those of b_x^l are blocked.
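A minimal PyTorch sketch of Eqs. (3)-(5), assuming the per-batch probabilities of the original branch (p) and the perturbed branch (p_tilde) are already computed; the clamping constant and tensor handling are illustrative choices, and gradients flow through both teacher and student as described above.

import torch

def self_refinement_loss(p, p_tilde, eps: float = 0.4):
    # omega ~ Bernoulli(0.5): pick one direction of the KL per batch (Eq. 4).
    omega = torch.rand(()) < 0.5
    teacher, student = (p, p_tilde) if omega else (p_tilde, p)
    # Confidence Filter F[.; teacher]: keep confident samples only (Eq. 5).
    keep = teacher.max(dim=1).values > eps
    if keep.sum() == 0:
        return p.new_zeros(())
    t, s = teacher[keep], student[keep]
    # D_KL(t || s) of Eq. (3); teacher gradients are intentionally NOT detached.
    return (t * (t.clamp_min(1e-8).log() - s.clamp_min(1e-8).log())).sum(1).mean()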
Safe Training via Adaptive Adjustment
In the proposed self-refinement strategy, setting proper values for the perturbation scalar α and the self-refinement loss weight β is critical. Excessively large perturbations lead to a collapse of the predicted class distribution, while small ones may under-exploit their benefit. Since the target domain is fully unlabeled and domain adaptation tasks vary widely even within the same dataset, it is desirable to adjust these values adaptively. Some works [1,29] apply a ramp-up period at the beginning of training. While this alleviates the tendency to collapse during this period, it cannot solve the issue when the maximum value is improper for the current adaptation task.
Algorithm 1 Safe Training Mechanism.
Initialization: last_restore = 0, save snapshot of M
 1: procedure CHECKDIVDROP(div, L, T, iter)
 2:     for l = 1 to L do                        ▷ check at multiple scales
 3:         divs = div(iter − T, . . . , iter)   ▷ get diversity history
 4:         divs = split(divs, 2^l)              ▷ split into even sub-intervals
 5:         for i = 0 to len(divs) − 1 do
            . . .
    procedure SAFETRAINING(M, div, T, L, iter)
15:     if iter % T == 0 and iter ≥ T then
16:         if CHECKDIVDROP(div, L, T, iter) then
17:             Restore M to last snapshot, t_r = iter
18:             if iter − last_restore ≤ T_r then
19:                 T_r = T_r × 2                ▷ avoid oscillation between collapse and restoration
        . . .
        return M, T_r, t_r
26: end procedure

Algorithm 2 SSRT algorithm.
Input: Model M, source data D_s, target data D_t, confidence threshold ϵ, self-refinement loss weight β, perturbation scalar α, Safe Training parameters T and L, diversity measure div(·).
Initialization: T_r = T, t_r = 0
 1: for iter = 0 to max_iter do
 2:     Sample a batch from the source data and the target data
 3:     Obtain r via Eq. 6, α_r = rα, β_r = rβ
 4:     Randomly choose l ∈ {0, 4, 8}, add perturbation via Eq. 2 using α_r, obtain L_SR via Eq. 4
 5:     Update model parameters via Eq. 1 using β_r
 6:     M, T_r, t_r ← SAFETRAINING(M, div, T, L, iter)
 7: end for
We propose a Safe Training mechanism. The observation is that whenever the model begins to collapse, the diversity of model predictions decreases simultaneously. Our goal is to detect such events while monitoring the training process. Once one occurs, the learning configuration is reset and the model is restored to a previously achieved state. Specifically, an adaptive scalar r ∈ [0, 1] is adopted to modulate α and β, i.e., α_r = rα and β_r = rβ. We define a fixed period T and divide the training process into consecutive intervals. A model snapshot is saved at the end of each interval. Then r is defined as

r(t) = sin(π (t − t_r) / (2 T_r))  if  t − t_r < T_r,  and  1.0  otherwise,   (6)

where t is the current training step. Initially, T_r = T and t_r = 0. It hence takes T steps for r to ramp up to 1.0. At the end of each interval, the diversity of model predictions within the interval is checked for abrupt drops. If none is found, the formulation of r remains unchanged. Otherwise, t_r is reset to the current training step t, and the model is restored to the last snapshot. To avoid oscillation between collapse and restoration, T_r is doubled if the last restoration occurred within T_r steps. Figure 1 illustrates the training process with adaptive adjustment. Two events of diversity dropping are identified (marked with pink areas), leading to two model restorations and resets of r. The remaining issue is which diversity measure to use and how to detect diversity dropping. We find that the number of unique model-predicted labels on each target training batch B_t works well. We hence define the following diversity measure:

div(t; B_t) = unique_labels(h(B_t)).   (7)

To detect diversity dropping, we split the interval into sub-intervals and check whether the average diversity value drops across each sub-interval. We implement this at multiple scales to improve the sensitivity of detection: every set of consecutive sub-intervals of T/2^1, ..., T/2^L steps is checked for a given integer L. Details are listed in Alg. 1 and Alg. 2.
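The following Python sketch mirrors the two ingredients above: the adaptive scalar of Eq. (6) and the multi-scale drop check of Alg. 1. Since the extracted pseudocode omits the exact drop criterion, the 0.5 drop threshold below is an assumed value, not the paper's.

import math

def ramp_scalar(t, t_r, T_r):
    """Adaptive scalar r(t) of Eq. (6): sine ramp-up over T_r steps after a reset."""
    return math.sin(math.pi * (t - t_r) / (2 * T_r)) if t - t_r < T_r else 1.0

def diversity_dropped(div, T, L):
    """Multi-scale check over the last T steps of per-batch diversity values
    (number of unique predicted labels, Eq. 7): does the mean diversity fall
    sharply between any pair of consecutive sub-intervals of length T/2^l?"""
    window = div[-T:]
    for l in range(1, L + 1):
        k = max(T // (2 ** l), 1)
        means = [sum(window[i:i + k]) / k for i in range(0, T, k)]
        if any(means[i + 1] < 0.5 * means[i] for i in range(len(means) - 1)):
            return True   # the 0.5 factor is an assumption for this sketch
    return False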
Experiments
We evaluate our method on four popular UDA benchmarks. Office-31 [22] contains 4,652 images of 31 classes from three domains: Amazon (A), DSLR (D) and Webcam (W). Office-Home [33] consists of 15,500 images of 65 classes from four domains: Artistic (Ar), Clip Art (Cl), Product (Pr), and Real-world (Rw) images. VisDA-2017 [20] is a Synthetic-to-Real dataset with about 0.2 million images in 12 classes. DomainNet [19] is the largest DA dataset, containing about 0.6 million images of 345 classes in 6 domains: Clipart (clp), Infograph (inf), Painting (pnt), Quickdraw (qdr), Real (rel), Sketch (skt). We use ViT-base and ViT-small with 16×16 patch size [3,26], pre-trained on ImageNet [21], as the vision transformer backbones. For all tasks, we use an identical set of hyper-parameters (α = 0.3, β = 0.2, ϵ = 0.4, T = 1000, L = 4). Ablation studies on them are provided in Sec. 4.6. More details can be found in the supplementary material.
Our comparison methods include DANN [4], CDAN [14], CDAN+E [14], SAFN [38], SAFN+ENT [38], CDAN+TN [35], SHOT [12], DCAN+SCDA [10], MDD+SCDA [10], SWD [9], MIMTFEL [5], TVT [40] and CDTrans [39]. "Baseline" is ViT with adversarial adaptation (see Sec. 3.1). We also include its combination with Mutual Information (MI) loss [6,23] in comparison.
Results on Benchmarks
Tables 1-4 present evaluation results on the four benchmarks. We use "-S/B" to indicate results using ViT-small/base backbones, respectively. For Office-Home and Office-31, CNN-based methods use ResNet-50 as their backbones, whereas for DomainNet and VisDA they use ResNet-101. Generally, the transformer-based results are much better, which is attributed to the strong transferable feature representations. ViT-base is better than ViT-small due to its higher model capacity. Apparently, the Baselines improve over source-only training, and integrating the Mutual Information (MI) loss further improves. Compared with other methods, SSRT-B performs the best on Office-Home, DomainNet and VisDA. It improves 4.38% on Office-Home, 3.53% on VisDA-2017 and 6.7% on DomainNet over Baseline-B, despite Baseline-B already being very strong. In particular, on the challenging DomainNet dataset, SSRT-B achieves an impressive 45.2% average accuracy. It is worth mentioning that in DomainNet some domains have large gaps from the others, such as inf and qdr; transferring between these domains and the others is very difficult. It is thus desired to transfer safely and not deteriorate the performance significantly. Looking at tasks with qdr as the target domain, SSRT-B obtains 29.3% average accuracy, while many other methods perform poorly. We illustrate the effects of some important components that contribute to this performance in the following sections.

Effects of Multi-layer Perturbation

Table 5 verifies that applying perturbation to the latent token sequences performs better than applying it to the raw input images on Office-Home (OH) and DomainNet (DN). Fig. 5a compares performances when adding the same amount of perturbation to each layer while not using safe training. As can be seen, the best layer to apply perturbation varies across tasks; besides, a layer that works for one task may fail on others. In our experiments, we uniformly choose one layer from {0,4,8}. As a comparison, perturbing any single layer from this set decreases the average accuracy on DomainNet by -1.0%, -1.5% and -1.5%, respectively.
Effects of Bi-directional Self-Refinement
Our method adopts bi-directional supervision for self-refinement in Eq. 4, mainly to improve the method's safeness. Figure 3 compares it with uni-directional self-refinement obtained by fixing ω to 0 or 1. In the upper two figures, their performance drops for relatively large confidence thresholds ϵ. In the lower two figures, model collapse occurs after training for some steps. In contrast, bi-directional self-refinement is more robust, as it combines the two losses and thus reduces the negative effect of either one. Table 6 presents some quantitative results. On Office-Home, all losses perform similarly well. On DomainNet, bi-directional self-refinement works better. However, they all fail on challenging tasks when the target domain is qdr; this is solved with Safe Training.
Another important issue is when to back-propagate gradients. Table 7 shows that the performance degrades when either the gradient of b_x^l in Eq. 2 or that of the teacher probability in the KL divergence of Eq. 4 is blocked. An interesting finding is that bi-directional self-refinement appears to be more robust even when the gradients are blocked. We believe this is because the two losses are complementary.
Effects of Safe Training
As observed previously, the vanilla training strategy may fail on some tasks. The reason is that the predicted class distribution on target domain data collapses due to excessive perturbation or a too-large loss weight, even if these settings work well on other tasks. Safe Training adjusts their values adaptively to avoid such situations. Figure 2 presents detailed training histories on two representative tasks to show how it works. For qdr→clp, the adaptive scalar r quickly converges to 1.0 and the diversity stabilizes at a relatively high value; training the model with or without Safe Training performs similarly. For clp→qdr, the diversity drops after some steps, and r resets to smaller values. A clear correlation between diversity and accuracy can be observed: for example, at step 10k, the accuracy drops abruptly and the diversity drops concurrently. Without Safe Training, the model collapses after about 10k iterations. With Safe Training, the model trains normally and finally surpasses the baseline. It should be noted that model collapse mainly affects target domain data: for clp→qdr without Safe Training, the final accuracy on the source domain is 96.9% while that on the target domain is only 0.3%.
Visualization of Perturbation
To visualize the perturbed version of a target domain image x, we initialize a trainable variable x_vis as x, and optimize x_vis to minimize ∥b̃_x^l − b_{x_vis}^l∥₂, where b̃_x^l is the perturbed token sequence of x and b_{x_vis}^l is the corresponding token sequence of x_vis. Then x_vis gives us an idea of how the perturbation in the latent space reflects on the raw input images. Figure 4 visualizes the perturbed versions of two images when adding perturbation to different transformer blocks. For shallow layers, an effect of blending with the other image can be observed. However, for deep layers, this effect is less noticeable due to the highly non-linear transformation of the network. This also indicates the complementarity of multi-layer perturbation.
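A sketch of this inversion step under the stated objective; tokens_at is a hypothetical helper that returns the token sequence of an image at block l, and the optimizer settings are illustrative rather than the paper's.

import torch

def visualize_perturbation(model, x, b_tilde, l, tokens_at, steps=200, lr=0.1):
    """Optimize x_vis (initialized as x) so that its block-l token sequence
    matches the perturbed sequence b_tilde of the original image x."""
    x_vis = x.clone().requires_grad_(True)
    opt = torch.optim.Adam([x_vis], lr=lr)
    for _ in range(steps):
        loss = (tokens_at(model, x_vis, l) - b_tilde).pow(2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x_vis.detach()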
Ablation Studies

Figure 5 presents ablation studies on hyper-parameters. Figure 5a plots results of perturbing different layers. Figure 5b plots Safe Training with different parameters T and L, which affect its granularity: a smaller T implies a quicker response, while a larger L increases sensitivity but at the risk of more false-positive detections. Many combinations of T and L work well in our method. Figures 5c and 5d plot accuracy curves vs. the perturbation scalar α and the self-refinement loss weight β. Even for obviously unreasonable values like α = 0.5, Safe Training can still adjust them adaptively to avoid model collapse. When α = 0, our method still has some gain over the baseline, which is due to the random dropout operations in the classifier head.
Conclusion
In this paper, we propose a novel UDA method named SSRT. It leverages a vision transformer backbone, and uses perturbed target domain data to refine the model. A safe training strategy is developed to avoid model collapse. Experiments on benchmarks show its best performance.
Limitation. Although we advance the average accuracy on DomainNet to 45.2%, it is far from saturated. One way is to combine multiple source domains. Another way is to incorporate some meta knowledge about target domains. We plan to extend our study in these directions in the future.
A. More Model and Training Details
Our implementation is based on the timm library¹. We use ViT-B/16 [3] (vit_base_patch16_224 in timm) and ViT-S/16 [3] (vit_small_patch16_224 in timm) as the vision transformer backbones in the paper. Transformer weights are restored from the checkpoints released by the official Google JAX implementation², which are obtained by first training on ImageNet-21k [21] and then fine-tuning on ImageNet-1k [21,26]. The classifier head consists of a bottleneck module (Linear → BatchNorm1d → ReLU → Dropout(0.5)) and a class predictor (Linear → ReLU → Dropout(0.5) → Linear). The domain discriminator has the same network structure as the class predictor except for having only one output.
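A sketch of this model construction using timm; the bottleneck width and class count below are illustrative values, not the released configuration.

import timm
import torch.nn as nn

# num_classes=0 makes timm return pooled features instead of classification logits.
backbone = timm.create_model('vit_base_patch16_224', pretrained=True, num_classes=0)
feat_dim, bottleneck_dim, num_classes = 768, 256, 65   # e.g., 65 classes for Office-Home
bottleneck = nn.Sequential(nn.Linear(feat_dim, bottleneck_dim),
                           nn.BatchNorm1d(bottleneck_dim), nn.ReLU(), nn.Dropout(0.5))
class_predictor = nn.Sequential(nn.Linear(bottleneck_dim, bottleneck_dim), nn.ReLU(),
                                nn.Dropout(0.5), nn.Linear(bottleneck_dim, num_classes))
classifier_head = nn.Sequential(bottleneck, class_predictor)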
During the training procedure, images are first resized to 256 × 256 pixels, randomly flipped horizontally, and then randomly cropped and resized to 224 × 224 pixels. The only exception is VisDA-2017 [20], where center-cropping of size 224 × 224 is used. During the test procedure, images are first resized to 256 × 256 pixels and then center-cropped to 224 × 224 pixels. To train the model, we adopt mini-batch Stochastic Gradient Descent (SGD) with a momentum of 0.9. The learning rate is scheduled as lr = lr_0 · (1 + 10^{-3} · i)^{-0.75}, where lr_0 is the initial learning rate and i is the training step. The learning rate of the vision transformer backbone parameters is set to 1/10 of lr.
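A sketch of the optimizer and learning-rate schedule just described, reusing the backbone and classifier_head modules from the sketch above; the initial learning rate lr0 is an illustrative value, since the paper does not state it here.

import torch

lr0 = 0.01   # assumed initial learning rate for the example
optimizer = torch.optim.SGD([{'params': backbone.parameters(), 'lr': lr0 / 10},
                             {'params': classifier_head.parameters(), 'lr': lr0}],
                            momentum=0.9)
# lr = lr0 * (1 + 1e-3 * i)^(-0.75); call scheduler.step() once per training step i.
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda i: (1 + 1e-3 * i) ** -0.75)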
B. More Analysis on Bi-directional Self-Refinement

Table A.1 provides additional results when blocking gradient back-propagation for different variables. Similar to the results listed in the paper (see Tab. 7), allowing gradient back-propagation of the teacher probabilities in the KL divergence and of b_x^l works better than the other variants.
C. More Analysis on Safe Training
In our method, we adopt a Confidence Filter to remove noisy supervision. If it is not used (i.e., ϵ = 0), the performance may deteriorate. Table A.6 shows that using Safe Training can avoid significant performance drops, making the method much safer.
D. More Analysis on Multi-layer Perturbation
Figure A.1 provides additional results when adding the same amount of perturbation to each layer while not using safe training. As can be seen in the left figure, the best layer to apply perturbation varies across tasks; besides, a layer that works for one task may fail on others. To see the importance of allowing gradient back-propagation for b_x^l (see Sec. 3.3 and Sec. 3.4 in the paper), the right figure shows that the model collapses when adding perturbation to relatively deep layers while blocking the gradients of b_x^l. Table A.2 includes comparison results when adding the perturbation to the raw input or a single layer ({0}, {4} or {8}) in our proposed SSRT method. As can be seen, perturbing the raw input performs similarly to perturbing the 0-th transformer block. Besides, perturbing any single layer degrades the performance on some adaptation tasks. In contrast, multi-layer perturbation combines their merits and obtains the best results.
E. Analysis on Model's Robustness
In our proposed SSRT, we use perturbed target domain data to refine the model during training. In this section, we analyze the model's robustness against perturbation at test time. For each test target domain sample, we follow the same procedure as described in the paper to add a random offset to its latent token sequence, and use the perturbed token sequence to make the prediction. To analyze the model's robustness against perturbation at different layers, we add perturbation to different transformer blocks as well as to the raw input. The perturbation magnitude is controlled by a scalar α as used in the paper. Figure A.3 shows results (averaged over 6 random runs) on Pr → Ar and clp → pnt. As can be seen, our method is more robust than the Baseline: even when adding a larger amount of perturbation (α = 0.4) than seen during training, SSRT incurs less accuracy decrease.
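A sketch of this test-time probe; predict_from_tokens is a hypothetical hook that runs the remainder of the model from the token sequence of block l, and the offset follows the same form as Eq. (2).

import torch

@torch.no_grad()
def perturbed_accuracy(model, b, labels, l, predict_from_tokens, alpha=0.4):
    """Accuracy when a random Eq.(2)-style offset is added to the block-l
    token sequences b of a test batch."""
    perm = torch.randperm(b.size(0), device=b.device)
    preds = predict_from_tokens(model, b + alpha * (b[perm] - b), layer=l).argmax(1)
    return (preds == labels).float().mean().item()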
F. Comparison with SSL methods
Since Unsupervised Domain Adaptation (UDA) is closely related to Semi-Supervised Learning (SSL), in this section we compare our method with two representative SSL techniques, i.e., Mixup [42] and VAT [17]. Mixup regularizes the model to predict linearly between samples. Specifically, let x_1 and x_2 be two target domain data points and p_1 = h(x_1), p_2 = h(x_2) the corresponding model predictions; Mixup first interpolates between the two samples with λ ∼ Beta(α_λ, α_λ):

x′ = λ x_1 + (1 − λ) x_2,   (9)
p′ = λ p_1 + (1 − λ) p_2.   (10)

Its loss function is

L_mixup = E_{x_1, x_2 ∼ D_t} ∥h(x′) − p′∥².   (11)

VAT enforces the model to predict consistently within the norm-ball neighborhood of each target sample x. Its loss function is

L_VAT = E_{x∼D_t} max_{∥r∥≤ρ} D_KL(h(x) ∥ h(x + r)).   (12)

We use L_mixup and L_VAT as the L_tgt in our objective function. The trade-off parameter β is set to 0.2 for both, the same as used in our method. For Mixup, α_λ is set to 0.5. We linearly ramp up β to its maximum value over 1/4 of all training steps as in [1,29]. Instead of interpolating probabilities, we interpolate unnormalized logits, as this is shown to perform slightly better. For VAT, ρ is set to 100. Both techniques are applied to the raw input images. Table A.5 presents results on three benchmarks using the ViT-base backbone; detailed numbers can be found in Tables A.2-A.4. On Office-Home [33] and VisDA-2017 [20], Mixup and VAT perform better than Baseline-B and slightly worse than ours. On DomainNet [19], VAT still works; for Mixup, however, although we tried different hyper-parameters, it is still inferior to Baseline-B. Figure A.2 shows two adaptation tasks where Mixup fails.
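A minimal sketch of the Mixup consistency loss of Eqs. (9)-(11), interpolating unnormalized logits as described above; treating the interpolated target as fixed (detached) is an assumption of this sketch, not stated in the paper.

import torch
import torch.nn.functional as F

def mixup_loss(h, x1, x2, alpha_lam=0.5):
    """Consistency between h applied to a mixed input and the mixed logits."""
    lam = torch.distributions.Beta(alpha_lam, alpha_lam).sample().item()
    logits1, logits2 = h(x1), h(x2)
    mixed_logits = h(lam * x1 + (1 - lam) * x2)       # h(x') of Eq. (9)
    target = lam * logits1 + (1 - lam) * logits2      # logit analogue of Eq. (10)
    return F.mse_loss(mixed_logits, target.detach())  # ||h(x') - p'||^2 of Eq. (11)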
G. Results with ViT-small Backbone
ViT-small is a smaller version of ViT-base, halving the number of self-attention heads and the token embedding dimension. It has fewer parameters (∼22M) than ResNet-101 (∼45M). We empirically found that it converges much more slowly than ViT-base, so we double the maximum number of training iterations. An alternative is to pretrain the model on the source data first and then adapt it to the target data. As can be seen from Tab. A.2, our proposed SSRT-S achieves +5.1% higher accuracy than MDD+SCDA (ResNet-101 backbone) on DomainNet, despite ViT-small having fewer parameters than ResNet-101.
Figure 1. Overview of SSRT. (Left) Illustration of Self-Refinement for our transformer-based model. The two branches share parameters. Random offsets are added to the input token sequences of transformer (TF) blocks. The model is refined using its predictions of the original and perturbed versions supervised by KL divergence. (Right) Illustration of the Safe Training mechanism. See text for details.
Figure 2. Representative training histories using Safe Training (ST) on DomainNet clp→qdr and qdr→clp. (Left) Plots of the diversity of model predictions on target domain data and the adaptive scalar r. For better visualization, both original values (light color) and smoothed values (dark color) of the diversity are shown. (Right) Plots of comparison test accuracies on target domain data.
Figure 3. Comparison of self-refinement losses. (Upper) Varying confidence threshold ϵ. (Lower) Test accuracies on target domain data. (Safe Training not applied.)
Figure 4. Visualization of perturbation at different layers (blocks 0, 2, 4 and 8).
Figure 5. Plots of ablation studies. Horizontal dashed lines indicate baseline accuracies. († Safe Training not applied.)
Figure A.1. Perturbation at different layers. († No gradient back-propagation for b_x^l.)
Figure A.2. Mixup with different hyper-parameters. The legend for Mixup is formed as Mixup(β, α_λ).
Figure A.3. Analysis of the model's robustness. The dashed lines indicate the true test accuracy on the target domain data. The bars show the decrease in accuracy when adding perturbations to different layers during the test procedure.
Table 1. Accuracies (%) on Office-Home. * CDTrans uses a DeiT-base backbone. • TVT uses a ViT-base backbone. "-S/B" indicates ViT-small/base backbones, respectively.

Method | Ar→Cl | Ar→Pr | Ar→Rw | Cl→Ar | Cl→Pr | Cl→Rw | Pr→Ar | Pr→Cl | Pr→Rw | Rw→Ar | Rw→Cl | Rw→Pr | Avg.
ResNet-50 [8] | 34.9 | 50.0 | 58.0 | 37.4 | 41.9 | 46.2 | 38.5 | 31.2 | 60.4 | 53.9 | 41.2 | 59.9 | 46.1
CDAN+E [14] | 50.7 | 70.6 | 76.0 | 57.6 | 70.0 | 70.0 | 57.4 | 50.9 | 77.3 | 70.9 | 56.7 | 81.6 | 65.8
SAFN [38] | 52.0 | 71.7 | 76.3 | 64.2 | 69.9 | 71.9 | 63.7 | 51.4 | 77.1 | 70.9 | 57.1 | 81.5 | 67.3
CDAN+TN [35] | 50.2 | 71.4 | 77.4 | 59.3 | 72.7 | 73.1 | 61.0 | 53.1 | 79.5 | 71.9 | 59.0 | 82.9 | 67.6
SHOT [12] | 57.1 | 78.1 | 81.5 | 68.0 | 78.2 | 78.1 | 67.4 | 54.9 | 82.2 | 73.3 | 58.8 | 84.3 | 71.8
DCAN+SCDA [10] | 60.7 | 76.4 | 82.8 | 69.8 | 77.5 | 78.4 | 68.9 | 59.0 | 82.7 | 74.9 | 61.8 | 84.5 | 73.1
CDTrans* [39] | 68.8 | 85.0 | 86.9 | 81.5 | 87.1 | 87.3 | 79.6 | 63.3 | 88.2 | 82.0 | 66.0 | 90.6 | 80.5
TVT• [40] | 74.89 | 86.82 | 89.47 | 82.78 | 87.95 | 88.27 | 79.81 | 71.94 | 90.13 | 85.46 | 74.62 | 90.56 | 83.56
ViT-S [3] | 47.01 | 76.98 | 83.54 | 69.84 | 77.11 | 80.42 | 68.15 | 44.08 | 82.86 | 74.78 | 47.97 | 84.66 | 69.78
Baseline-S | 59.59 | 80.11 | 84.67 | 73.84 | 78.49 | 81.36 | 74.41 | 59.82 | 86.27 | 80.10 | 62.59 | 87.23 | 75.71
SSRT-S (ours) | 67.03 | 84.21 | 88.32 | 79.85 | 84.28 | 87.58 | 80.72 | 66.03 | 88.27 | 82.04 | 69.44 | 89.86 | 80.64
ViT-B [3] | 54.68 | 83.04 | 87.15 | 77.30 | 83.42 | 85.54 | 74.41 | 50.90 | 87.22 | 79.56 | 53.79 | 88.80 | 75.48
Baseline-B | 66.96 | 85.74 | 88.07 | 80.06 | 84.12 | 86.67 | 79.52 | 67.03 | 89.44 | 83.64 | 70.15 | 91.17 | 81.05
Baseline-B+MI | 70.63 | 88.62 | 89.99 | 82.08 | 87.84 | 89.28 | 81.01 | 68.82 | 91.26 | 85.17 | 71.66 | 92.45 | 83.23
SSRT-B (ours) | 75.17 | 88.98 | 91.09 | 85.13 | 88.29 | 89.95 | 85.04 | 74.23 | 91.26 | 85.70 | 78.58 | 91.78 | 85.43
Table 2. Accuracies (%) on DomainNet. In each sub-table, the column-wise means source domain and the row-wise means target domain.

ResNet-101 [8] | clp | inf | pnt | qdr | rel | skt | Avg.
clp | - | 19.3 | 37.5 | 11.1 | 52.2 | 41.0 | 32.2
inf | 30.2 | - | 31.2 | 3.6 | 44.0 | 27.9 | 27.4
pnt | 39.6 | 18.7 | - | 4.9 | 54.5 | 36.3 | 30.8
qdr | 7.0 | 0.9 | 1.4 | - | 4.1 | 8.3 | 4.3
rel | 48.4 | 22.2 | 49.4 | 6.4 | - | 38.8 | 33.0
skt | 46.9 | 15.4 | 37.0 | 10.9 | 47.0 | - | 31.4
Avg. | 34.4 | 15.3 | 31.3 | 7.4 | 40.4 | 30.5 | 26.6

MIMTFL [5] | clp | inf | pnt | qdr | rel | skt | Avg.
clp | - | 15.1 | 35.6 | 10.7 | 51.5 | 43.1 | 31.2
inf | 32.1 | - | 31.0 | 2.9 | 48.5 | 31.0 | 29.1
pnt | 40.1 | 14.7 | - | 4.2 | 55.4 | 36.8 | 30.2
qdr | 18.8 | 3.1 | 5.0 | - | 16.0 | 13.8 | 11.3
rel | 48.5 | 19.0 | 47.6 | 5.8 | - | 39.4 | 32.1
skt | 51.7 | 16.5 | 40.3 | 12.3 | 53.5 | - | 34.9
Avg. | 38.2 | 13.7 | 31.9 | 7.2 | 45.0 | 32.8 | 28.1

CDAN [14] | clp | inf | pnt | qdr | rel | skt | Avg.
clp | - | 20.4 | 36.6 | 9.0 | 50.7 | 42.3 | 31.8
inf | 27.5 | - | 25.7 | 1.8 | 34.7 | 20.1 | 22.0
pnt | 42.6 | 20.0 | - | 2.5 | 55.6 | 38.5 | 31.8
qdr | 21.0 | 4.5 | 8.1 | - | 14.3 | 15.7 | 12.7
rel | 51.9 | 23.3 | 50.4 | 5.4 | - | 41.4 | 34.5
skt | 50.8 | 20.3 | 43.0 | 2.9 | 50.8 | - | 33.6
Avg. | 38.8 | 17.7 | 32.8 | 4.3 | 41.2 | 31.6 | 27.7

MDD+SCDA [10] | clp | inf | pnt | qdr | rel | skt | Avg.
clp | - | 20.4 | 43.3 | 15.2 | 59.3 | 46.5 | 36.9
inf | 32.7 | - | 34.5 | 6.3 | 47.6 | 29.2 | 30.1
pnt | 46.4 | 19.9 | - | 8.1 | 58.8 | 42.9 | 35.2
qdr | 31.1 | 6.6 | 18.0 | - | 28.8 | 22.0 | 21.3
rel | 55.5 | 23.7 | 52.9 | 9.5 | - | 45.2 | 37.4
skt | 55.8 | 20.1 | 46.5 | 15.0 | 56.7 | - | 38.8
Avg. | 44.3 | 18.1 | 39.0 | 10.8 | 50.2 | 37.2 | 33.3

CDTrans* [39] | clp | inf | pnt | qdr | rel | skt | Avg.
clp | - | 27.9 | 57.6 | 27.9 | 73.0 | 58.8 | 49.0
inf | 58.6 | - | 53.4 | 9.6 | 71.1 | 47.6 | 48.1
pnt | 60.7 | 24.0 | - | 13.0 | 69.8 | 49.6 | 43.4
qdr | 2.9 | 0.4 | 0.3 | - | 0.7 | 4.7 | 1.8
rel | 49.3 | 18.7 | 47.8 | 9.4 | - | 33.5 | 31.7
skt | 66.8 | 23.7 | 54.6 | 27.5 | 68.0 | - | 48.1
Avg. | 47.7 | 18.9 | 42.7 | 17.5 | 56.5 | 38.8 | 37.0

ViT-B [3] | clp | inf | pnt | qdr | rel | skt | Avg.
clp | - | 27.2 | 53.1 | 13.2 | 71.2 | 53.3 | 43.6
inf | 51.4 | - | 49.3 | 4.0 | 66.3 | 41.1 | 42.4
pnt | 53.1 | 25.6 | - | 4.8 | 70.0 | 41.8 | 39.1
qdr | 30.5 | 4.5 | 16.0 | - | 27.0 | 19.3 | 19.5
rel | 58.4 | 29.0 | 60.0 | 6.0 | - | 45.8 | 39.9
skt | 63.9 | 23.8 | 52.3 | 14.4 | 67.4 | - | 44.4
Avg. | 51.5 | 22.0 | 46.1 | 8.5 | 60.4 | 40.3 | 38.1

Baseline-B | clp | inf | pnt | qdr | rel | skt | Avg.
clp | - | 30.9 | 53.3 | 16.3 | 72.7 | 55.4 | 45.7
inf | 43.0 | - | 40.8 | 7.8 | 56.4 | 35.9 | 36.8
pnt | 55.7 | 28.6 | - | 7.4 | 70.5 | 48.3 | 42.1
qdr | 25.5 | 5.2 | 9.7 | - | 15.5 | 17.1 | 14.6
rel | 62.3 | 32.5 | 62.5 | 8.2 | - | 50.7 | 43.2
skt | 66.4 | 30.6 | 58.0 | 18.1 | 70.1 | - | 48.6
Avg. | 50.6 | 25.6 | 44.9 | 11.6 | 57.0 | 41.5 | 38.5

Baseline-B+MI | clp | inf | pnt | qdr | rel | skt | Avg.
clp | - | 30.5 | 55.8 | 18.1 | 74.7 | 57.5 | 47.3
inf | 53.2 | - | 52.8 | 9.2 | 68.3 | 45.3 | 45.8
pnt | 56.8 | 27.6 | - | 7.3 | 70.8 | 49.3 | 42.4
qdr | 31.6 | 5.1 | 13.3 | - | 25.3 | 23.0 | 19.6
rel | 65.7 | 32.4 | 63.9 | 6.9 | - | 51.7 | 44.1
skt | 68.9 | 30.6 | 61.0 | 19.3 | 72.9 | - | 50.5
Avg. | 55.2 | 25.2 | 49.4 | 12.2 | 62.4 | 45.3 | 41.6

SSRT-B (ours) | clp | inf | pnt | qdr | rel | skt | Avg.
clp | - | 33.8 | 60.2 | 19.4 | 75.8 | 59.8 | 49.8
inf | 55.5 | - | 54.0 | 9.0 | 68.2 | 44.7 | 46.3
pnt | 61.7 | 28.5 | - | 8.4 | 71.4 | 55.2 | 45.0
qdr | 42.5 | 8.8 | 24.2 | - | 37.6 | 33.6 | 29.3
rel | 69.9 | 37.1 | 66.0 | 10.1 | - | 58.9 | 48.4
skt | 70.6 | 32.8 | 62.2 | 21.7 | 73.2 | - | 52.1
Avg. | 60.0 | 28.2 | 53.3 | 13.7 | 65.3 | 50.4 | 45.2
Table 3. Accuracies (%) on VisDA-2017.

Method | plane | bcycl | bus | car | horse | knife | mcycl | person | plant | sktbrd | train | truck | Avg.
ResNet-101 [8] | 55.1 | 53.3 | 61.9 | 59.1 | 80.6 | 17.9 | 79.7 | 31.2 | 81.0 | 26.5 | 73.5 | 8.5 | 52.4
DANN [4] | 81.9 | 77.7 | 82.8 | 44.3 | 81.2 | 29.5 | 65.1 | 28.6 | 51.9 | 54.6 | 82.8 | 7.8 | 57.4
CDAN [14] | 85.2 | 66.9 | 83.0 | 50.8 | 84.2 | 74.9 | 88.1 | 74.5 | 83.4 | 76.0 | 81.9 | 38.0 | 73.9
SAFN [38] | 93.6 | 61.3 | 84.1 | 70.6 | 94.1 | 79.0 | 91.8 | 79.6 | 89.9 | 55.6 | 89.0 | 24.4 | 76.1
SWD [9] | 90.8 | 82.5 | 81.7 | 70.5 | 91.7 | 69.5 | 86.3 | 77.5 | 87.4 | 63.6 | 85.6 | 29.2 | 76.4
SHOT [12] | 94.3 | 88.5 | 80.1 | 57.3 | 93.1 | 94.9 | 80.7 | 80.3 | 91.5 | 89.1 | 86.3 | 58.2 | 82.9
CDTrans* [39] | 97.1 | 90.5 | 82.4 | 77.5 | 96.6 | 96.1 | 93.6 | 88.6 | 97.9 | 86.9 | 90.3 | 62.8 | 88.4
TVT• [40] | 92.92 | 85.58 | 77.51 | 60.48 | 93.60 | 98.17 | 89.35 | 76.40 | 93.56 | 92.02 | 91.69 | 55.73 | 83.92
ViT-B [3] | 99.09 | 60.66 | 70.55 | 82.66 | 96.50 | 73.06 | 97.14 | 19.73 | 64.48 | 94.74 | 97.21 | 15.36 | 72.60
Baseline-B | 98.55 | 82.59 | 85.97 | 57.07 | 94.93 | 97.20 | 94.58 | 76.68 | 92.11 | 96.54 | 94.31 | 52.24 | 85.23
Baseline-B+MI | 98.63 | 90.79 | 81.83 | 47.28 | 96.29 | 98.36 | 84.68 | 70.70 | 93.30 | 97.54 | 94.55 | 55.03 | 84.08
SSRT-B (ours) | 98.93 | 87.60 | 89.10 | 84.77 | 98.34 | 98.70 | 96.27 | 81.08 | 94.86 | 97.90 | 94.50 | 43.13 | 88.76
Table 4. Accuracies (%) on Office-31. * CDTrans uses a DeiT-base backbone. • TVT uses a ViT-base backbone.

Method | A→W | D→W | W→D | A→D | D→A | W→A | Avg.
ResNet-50 [8] | 68.4 | 96.7 | 99.3 | 68.9 | 62.5 | 60.7 | 76.1
DANN [4] | 82.0 | 96.9 | 99.1 | 79.7 | 68.2 | 67.4 | 82.2
SAFN+ENT [38] | 90.1 | 98.6 | 99.8 | 90.7 | 73.0 | 70.2 | 87.1
CDAN+TN [35] | 95.7 | 98.7 | 100. | 94.0 | 73.4 | 74.2 | 89.3
SHOT [12] | 90.1 | 98.4 | 99.9 | 94.0 | 74.7 | 74.3 | 88.6
MDD+SCDA [10] | 95.3 | 99.0 | 100. | 95.4 | 77.2 | 75.9 | 90.5
CDTrans* [39] | 96.7 | 99.0 | 100. | 97.0 | 81.1 | 81.9 | 92.6
TVT• [40] | 96.4 | 99.4 | 100. | 96.4 | 84.9 | 86.1 | 93.8
ViT-S [3] | 86.9 | 98.6 | 100. | 88.6 | 76.0 | 75.9 | 87.7
Baseline-S | 91.9 | 99.1 | 100. | 89.2 | 78.4 | 77.9 | 89.4
SSRT-S (ours) | 95.7 | 99.2 | 100. | 95.8 | 79.2 | 79.9 | 91.6
ViT-B [3] | 91.2 | 99.2 | 100. | 90.4 | 81.1 | 80.6 | 90.4
Baseline-B | 92.5 | 99.2 | 100. | 93.6 | 80.7 | 80.7 | 91.1
SSRT-B (ours) | 97.7 | 99.2 | 100. | 98.6 | 83.5 | 82.2 | 93.5

Table 5. Accuracies (%) compared with perturbing raw inputs. X† means averaged over all 5 tasks with X being the target domain.

Method | OH | DN | clp† | inf† | pnt† | qdr† | rel† | skt†
Baseline-B | 81.1 | 38.5 | 50.6 | 25.6 | 44.9 | 11.6 | 57.0 | 41.5
SSRT-B (raw) | 85.0 | 44.2 | 58.6 | 26.7 | 51.7 | 13.7 | 63.9 | 50.8
SSRT-B | 85.4 | 45.2 | 60.0 | 28.2 | 53.3 | 13.7 | 65.3 | 50.4
Table 6. Accuracies (%) using comparison losses. All results are reported at the training step of 20k. X† means averaged over all 5 tasks with X being the target domain. ‡ Using Safe Training.

Method | OH | DN | clp† | inf† | pnt† | qdr† | rel† | skt†
Baseline-B | 81.1 | 38.9 | 50.7 | 25.5 | 46.1 | 11.9 | 57.4 | 42.0
ω = 0 | 85.5 | 41.1 | 57.3 | 22.0 | 52.2 | 1.8 | 63.4 | 49.9
ω = 1 | 85.7 | 40.1 | 56.6 | 23.4 | 48.1 | 0.3 | 63.3 | 49.0
ω ∼ B(0.5) | 85.4 | 41.8 | 57.0 | 26.6 | 53.0 | 1.8 | 63.2 | 49.5
ω ∼ B(0.5)‡ | 85.4 | 43.4 | 57.0 | 28.2 | 51.8 | 13.0 | 62.9 | 47.4
Table 7. Blocking gradient back-propagation for different variables. A × in the b_x^l, p_x or p̃_x column marks a blocked gradient. Note that p_x and p̃_x in the table only refer to the teacher probability in the KL divergence. (Safe Training not applied.)

Loss | b_x^l | p_x | p̃_x | Pr→Ar | Pr→Cl | Pr→Rw
ω = 0 | | | × | 4.70 | 2.66 | 16.39
ω = 1 | | × | | 79.15 | 44.38 | 89.14
ω ∼ B(0.5) | | × | × | 84.10 | 71.32 | 90.75
ω ∼ B(0.5) | × | | | 84.38 | 72.60 | 90.87
ω ∼ B(0.5) | | | | 85.74 | 74.98 | 91.16
Table A.1. Blocking gradient back-propagation for different variables. A × in the b_x^l, p_x or p̃_x column marks a blocked gradient. Note that p_x and p̃_x in the table only refer to the teacher probability in the KL divergence. (Safe Training not applied.)

Loss | b_x^l | p_x | p̃_x | Cl→Ar | Cl→Pr | Cl→Rw
ω = 0 | | | × | 1.61 | 12.71 | 6.08
ω = 1 | | × | | 81.17 | 85.00 | 87.28
ω ∼ B(0.5) | | × | × | 83.68 | 85.69 | 88.04
ω ∼ B(0.5) | × | | | 84.55 | 87.27 | 89.49
ω ∼ B(0.5) | | | | 85.21 | 87.88 | 89.58
Table A.2. Accuracies (%) on DomainNet. In each sub-table, the column-wise means source domain and the row-wise means target domain. "-S/B" indicates ViT-small/base backbones, respectively.

MDD+SCDA [10] | clp | inf | pnt | qdr | rel | skt | Avg.
clp | - | 20.4 | 43.3 | 15.2 | 59.3 | 46.5 | 36.9
inf | 32.7 | - | 34.5 | 6.3 | 47.6 | 29.2 | 30.1
pnt | 46.4 | 19.9 | - | 8.1 | 58.8 | 42.9 | 35.2
qdr | 31.1 | 6.6 | 18.0 | - | 28.8 | 22.0 | 21.3
rel | 55.5 | 23.7 | 52.9 | 9.5 | - | 45.2 | 37.4
skt | 55.8 | 20.1 | 46.5 | 15.0 | 56.7 | - | 38.8
Avg. | 44.3 | 18.1 | 39.0 | 10.8 | 50.2 | 37.2 | 33.3

ViT-B [3] | clp | inf | pnt | qdr | rel | skt | Avg.
clp | - | 27.2 | 53.1 | 13.2 | 71.2 | 53.3 | 43.6
inf | 51.4 | - | 49.3 | 4.0 | 66.3 | 41.1 | 42.4
pnt | 53.1 | 25.6 | - | 4.8 | 70.0 | 41.8 | 39.1
qdr | 30.5 | 4.5 | 16.0 | - | 27.0 | 19.3 | 19.5
rel | 58.4 | 29.0 | 60.0 | 6.0 | - | 45.8 | 39.9
skt | 63.9 | 23.8 | 52.3 | 14.4 | 67.4 | - | 44.4
Avg. | 51.5 | 22.0 | 46.1 | 8.5 | 60.4 | 40.3 | 38.1

Baseline-B | clp | inf | pnt | qdr | rel | skt | Avg.
clp | - | 30.9 | 53.3 | 16.3 | 72.7 | 55.4 | 45.7
inf | 43.0 | - | 40.8 | 7.8 | 56.4 | 35.9 | 36.8
pnt | 55.7 | 28.6 | - | 7.4 | 70.5 | 48.3 | 42.1
qdr | 25.5 | 5.2 | 9.7 | - | 15.5 | 17.1 | 14.6
rel | 62.3 | 32.5 | 62.5 | 8.2 | - | 50.7 | 43.2
skt | 66.4 | 30.6 | 58.0 | 18.1 | 70.1 | - | 48.6
Avg. | 50.6 | 25.6 | 44.9 | 11.6 | 57.0 | 41.5 | 38.5

VAT-B [17] | clp | inf | pnt | qdr | rel | skt | Avg.
clp | - | 33.1 | 57.1 | 19.5 | 75.8 | 59.8 | 49.0
inf | 48.3 | - | 45.2 | 9.8 | 55.0 | 37.4 | 39.2
pnt | 60.0 | 30.9 | - | 7.9 | 71.1 | 52.6 | 44.5
qdr | 26.7 | 5.4 | 9.2 | - | 18.1 | 18.3 | 15.5
rel | 68.7 | 35.3 | 65.0 | 7.8 | - | 56.8 | 46.7
skt | 70.2 | 33.3 | 65.0 | 17.6 | 72.2 | - | 51.7
Avg. | 54.8 | 27.6 | 48.3 | 12.5 | 58.4 | 45.0 | 41.1

SSRT-B (raw input) | clp | inf | pnt | qdr | rel | skt | Avg.
clp | - | 32.7 | 60.0 | 19.0 | 75.3 | 59.8 | 49.3
inf | 55.0 | - | 54.0 | 8.9 | 67.8 | 48.1 | 46.8
pnt | 61.6 | 28.6 | - | 8.2 | 71.3 | 55.4 | 45.0
qdr | 36.3 | 6.2 | 16.1 | - | 32.1 | 31.2 | 24.4
rel | 69.8 | 35.6 | 66.1 | 12.4 | - | 59.2 | 48.6
skt | 70.3 | 30.5 | 62.3 | 20.0 | 73.2 | - | 51.3
Avg. | 58.6 | 26.7 | 51.7 | 13.7 | 63.9 | 50.8 | 44.2

SSRT-B {0} | clp | inf | pnt | qdr | rel | skt | Avg.
clp | - | 33.2 | 59.7 | 19.6 | 75.3 | 58.7 | 49.3
inf | 54.8 | - | 53.5 | 9.3 | 67.7 | 46.1 | 46.3
pnt | 61.2 | 29.0 | - | 7.1 | 71.2 | 55.0 | 44.7
qdr | 40.8 | 7.0 | 13.2 | - | 35.4 | 31.1 | 25.5
rel | 69.6 | 35.7 | 65.7 | 10.7 | - | 58.7 | 48.1
skt | 69.7 | 32.1 | 62.0 | 19.0 | 72.8 | - | 51.1
Avg. | 59.2 | 27.4 | 50.8 | 13.1 | 64.5 | 49.9 | 44.2

SSRT-B {4} | clp | inf | pnt | qdr | rel | skt | Avg.
clp | - | 31.8 | 58.9 | 17.8 | 75.7 | 59.4 | 48.7
inf | 53.5 | - | 50.5 | 8.6 | 67.8 | 47.5 | 45.6
pnt | 61.3 | 29.2 | - | 8.1 | 71.3 | 54.3 | 44.8
qdr | 42.5 | 7.7 | 17.0 | - | 23.3 | 33.4 | 24.8
rel | 68.7 | 36.1 | 65.5 | 8.2 | - | 57.6 | 47.2
skt | 70.1 | 31.8 | 62.2 | 17.7 | 73.1 | - | 51.0
Avg. | 59.2 | 27.3 | 50.8 | 12.1 | 62.2 | 50.4 | 43.7

SSRT-B {8} | clp | inf | pnt | qdr | rel | skt | Avg.
clp | - | 32.4 | 59.0 | 18.6 | 75.6 | 59.9 | 49.1
inf | 55.9 | - | 54.8 | 7.6 | 68.5 | 48.2 | 47.0
pnt | 61.5 | 27.4 | - | 8.5 | 71.4 | 54.6 | 44.7
qdr | 33.6 | 5.7 | 11.3 | - | 31.4 | 31.8 | 22.7
rel | 69.6 | 36.2 | 65.9 | 6.9 | - | 58.1 | 47.3
skt | 69.9 | 30.9 | 62.3 | 19.8 | 73.3 | - | 51.2
Avg. | 58.1 | 26.5 | 50.6 | 12.3 | 64.0 | 50.5 | 43.7

SSRT-B {0,4,8} | clp | inf | pnt | qdr | rel | skt | Avg.
clp | - | 33.8 | 60.2 | 19.4 | 75.8 | 59.8 | 49.8
inf | 55.5 | - | 54.0 | 9.0 | 68.2 | 44.7 | 46.3
pnt | 61.7 | 28.5 | - | 8.4 | 71.4 | 55.2 | 45.0
qdr | 42.5 | 8.8 | 24.2 | - | 37.6 | 33.6 | 29.3
rel | 69.9 | 37.1 | 66.0 | 10.1 | - | 58.9 | 48.4
skt | 70.6 | 32.8 | 62.2 | 21.7 | 73.2 | - | 52.1
Avg. | 60.0 | 28.2 | 53.3 | 13.7 | 65.3 | 50.4 | 45.2

ViT-S | clp | inf | pnt | qdr | rel | skt | Avg.
clp | - | 23.0 | 46.2 | 11.9 | 66.3 | 46.2 | 38.7
inf | 42.9 | - | 42.8 | 3.8 | 62.3 | 33.9 | 37.1
pnt | 45.2 | 22.2 | - | 3.5 | 66.5 | 35.7 | 34.6
qdr | 19.7 | 3.3 | 7.8 | - | 14.6 | 12.7 | 11.6
rel | 50.8 | 24.2 | 54.2 | 4.6 | - | 37.3 | 34.2
skt | 57.2 | 19.5 | 47.1 | 13.9 | 62.5 | - | 40.0
Avg. | 43.1 | 18.5 | 39.6 | 7.5 | 54.4 | 33.2 | 32.7

Baseline-S | clp | inf | pnt | qdr | rel | skt | Avg.
clp | - | 27.0 | 49.0 | 12.8 | 68.2 | 49.1 | 41.2
inf | 41.8 | - | 43.1 | 2.7 | 63.0 | 33.0 | 36.7
pnt | 48.8 | 25.7 | - | 3.1 | 67.0 | 40.8 | 37.1
qdr | 21.8 | 5.8 | 9.6 | - | 15.3 | 15.2 | 13.5
rel | 54.6 | 28.7 | 57.5 | 3.6 | - | 41.3 | 37.1
skt | 60.9 | 26.2 | 53.9 | 10.6 | 67.5 | - | 43.8
Avg. | 45.6 | 22.7 | 42.6 | 6.5 | 56.2 | 35.9 | 34.9

SSRT-S | clp | inf | pnt | qdr | rel | skt | Avg.
clp | - | 28.5 | 53.1 | 12.1 | 69.9 | 52.1 | 43.1
inf | 47.5 | - | 49.8 | 1.5 | 64.9 | 39.7 | 40.7
pnt | 53.0 | 26.5 | - | 4.4 | 67.3 | 46.7 | 39.6
qdr | 31.3 | 6.9 | 13.0 | - | 24.4 | 24.0 | 19.9
rel | 60.0 | 31.2 | 60.5 | 4.6 | - | 48.5 | 41.0
skt | 63.8 | 28.6 | 57.0 | 13.7 | 68.7 | - | 46.4
Avg. | 51.1 | 24.4 | 46.7 | 7.3 | 59.0 | 42.2 | 38.4
Table A.3. Accuracies (%) on Office-Home.

Method | Ar→Cl | Ar→Pr | Ar→Rw | Cl→Ar | Cl→Pr | Cl→Rw | Pr→Ar | Pr→Cl | Pr→Rw | Rw→Ar | Rw→Cl | Rw→Pr | Avg.
Baseline-B | 66.96 | 85.74 | 88.07 | 80.06 | 84.12 | 86.67 | 79.52 | 67.03 | 89.44 | 83.64 | 70.15 | 91.17 | 81.05
Mixup-B [42] | 71.32 | 86.66 | 88.82 | 82.45 | 84.79 | 87.58 | 82.90 | 71.68 | 90.77 | 85.46 | 74.36 | 91.37 | 83.18
VAT-B [17] | 71.52 | 89.39 | 90.48 | 86.11 | 88.53 | 89.33 | 84.59 | 72.23 | 90.84 | 86.61 | 72.83 | 92.48 | 84.58
SSRT-B (ours) | 75.17 | 88.98 | 91.09 | 85.13 | 88.29 | 89.95 | 85.04 | 74.23 | 91.26 | 85.70 | 78.58 | 91.78 | 85.43

Table A.4. Accuracies (%) on VisDA-2017.

Method | plane | bcycl | bus | car | horse | knife | mcycl | person | plant | sktbrd | train | truck | Avg.
Baseline-B | 98.55 | 82.59 | 85.97 | 57.07 | 94.93 | 97.20 | 94.58 | 76.68 | 92.11 | 96.54 | 94.31 | 52.24 | 85.23
Mixup-B [42] | 98.88 | 86.56 | 88.64 | 72.32 | 98.06 | 98.07 | 95.91 | 83.00 | 94.09 | 98.07 | 94.55 | 50.36 | 88.21
VAT-B [17] | 99.15 | 87.71 | 90.85 | 67.81 | 98.81 | 98.17 | 97.57 | 76.65 | 92.88 | 98.73 | 96.27 | 57.37 | 88.50
SSRT-B (ours) | 98.93 | 87.60 | 89.10 | 84.77 | 98.34 | 98.70 | 96.27 | 81.08 | 94.86 | 97.90 | 94.50 | 43.13 | 88.76

Table A.5. Comparisons with SSL methods. X† means averaged over all 5 tasks with X being the target domain.

Method | Office-Home | VisDA | DomainNet | clp† | inf† | pnt† | qdr† | rel† | skt†
Baseline-B | 81.1 | 85.2 | 38.5 | 50.6 | 25.6 | 44.9 | 11.6 | 57.0 | 41.5
Mixup-B | 83.2 | 88.2 | - | - | - | - | - | - | -
VAT-B | 84.1 | 88.5 | 41.1 | 54.8 | 27.6 | 48.3 | 12.5 | 58.4 | 45.0
SSRT-B | 85.4 | 88.8 | 45.2 | 60.0 | 28.2 | 53.3 | 13.7 | 65.3 | 50.4

Table A.6. Accuracies (%) without Confidence Filter. († Safe Training not applied.)

Method | Cl→Ar | Cl→Pr | Cl→Rw | Pr→Ar | Pr→Cl | Pr→Rw
Baseline-B | 80.06 | 84.12 | 86.67 | 79.52 | 67.03 | 89.44
SSRT-B† | 59.33 | 86.98 | 89.74 | 73.92 | 20.30 | 90.59
SSRT-B | 84.51 | 86.98 | 89.30 | 82.65 | 67.79 | 91.16
1 https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py
2 https://github.com/google-research/vision_transformer
[1] David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin Raffel. Mixmatch: A holistic approach to semi-supervised learning. In NeurIPS, 2019.
[2] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In ECCV, pages 213-229, 2020.
[3] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021.
[4] Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In ICML, pages 1180-1189, 2015.
[5] Jian Gao, Yang Hua, Guosheng Hu, Chi Wang, and Neil M. Robertson. Reducing distributional uncertainty by mutual information maximisation and transferable feature learning. In ECCV, pages 587-605, 2020.
[6] Ryan Gomes, Andreas Krause, and Pietro Perona. Discriminative clustering by regularized information maximization. In NeurIPS, 2010.
[7] Kai Han, An Xiao, Enhua Wu, Jianyuan Guo, Chunjing Xu, and Yunhe Wang. Transformer in transformer. arXiv preprint arXiv:2103.00112, 2021.
[8] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770-778, 2016.
[9] Chen-Yu Lee, Tanmay Batra, Mohammad Haris Baig, and Daniel Ulbricht. Sliced wasserstein discrepancy for unsupervised domain adaptation. In CVPR, pages 10285-10295, 2019.
[10] Shuang Li, Mixue Xie, Fangrui Lv, Chi Harold Liu, Jian Liang, Chen Qin, and Wei Li. Semantic concentration for domain adaptation. In ICCV, pages 9102-9111, 2021.
[11] Yu-Feng Li and Zhi-Hua Zhou. Towards making unlabeled data never hurt. TPAMI, pages 175-188, 2014.
[12] Jian Liang, Dapeng Hu, and Jiashi Feng. Do we really need to access the source data? Source hypothesis transfer for unsupervised domain adaptation. In ICML, pages 6028-6039, 2020.
[13] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In ICCV, 2021.
[14] Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I. Jordan. Conditional adversarial domain adaptation. In NeurIPS, pages 1645-1655, 2018.
[15] Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I. Jordan. Deep transfer learning with joint adaptation networks. In ICML, pages 2208-2217, 2017.
[16] Ke Mei, Chuang Zhu, Jiaqi Zou, and Shanghang Zhang. Instance adaptive self-training for unsupervised domain adaptation. In ECCV, pages 415-430, 2020.
[17] Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. TPAMI, 41(8):1979-1993, 2018.
[18] Avital Oliver, Augustus Odena, Colin Raffel, Ekin D. Cubuk, and Ian J. Goodfellow. Realistic evaluation of deep semi-supervised learning algorithms. In NeurIPS, 2018.
[19] Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In ICCV, pages 1406-1415, 2019.
[20] Xingchao Peng, Ben Usman, Neela Kaushik, Judy Hoffman, Dequan Wang, and Kate Saenko. Visda: The visual domain adaptation challenge. arXiv preprint arXiv:1710.06924, 2017.
[21] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet large scale visual recognition challenge. IJCV, 115(3):211-252, 2015.
[22] Kate Saenko, Brian Kulis, Mario Fritz, and Trevor Darrell. Adapting visual category models to new domains. In ECCV, pages 213-226, 2010.
[23] Yuan Shi and Fei Sha. Information-theoretical learning of discriminative clusters for unsupervised domain adaptation. In ICML, 2012.
[24] Rui Shu, Hung H. Bui, Hirokazu Narui, and Stefano Ermon. A dirt-t approach to unsupervised domain adaptation. In ICLR, 2018.
[25] Kihyuk Sohn, David Berthelot, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Han Zhang, and Colin Raffel. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. In NeurIPS, 2020.
[26] Andreas Steiner, Alexander Kolesnikov, Xiaohua Zhai, Ross Wightman, Jakob Uszkoreit, and Lucas Beyer. How to train your vit? Data, augmentation, and regularization in vision transformers. arXiv preprint arXiv:2106.10270, 2021.
[27] Robin Strudel, Ricardo Garcia, Ivan Laptev, and Cordelia Schmid. Segmenter: Transformer for semantic segmentation. In ICCV, 2021.
[28] Baochen Sun and Kate Saenko. Deep coral: Correlation alignment for deep domain adaptation. In ECCV, pages 443-450, 2016.
[29] Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In NeurIPS, 2017.
[30] Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. In ICML, pages 10347-10357, 2021.
[31] Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative domain adaptation. In CVPR, pages 7167-7176, 2017.
Deep domain confusion: Maximizing for domain invariance. Eric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, Trevor Darrell, arXiv:1412.3474arXiv preprintEric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, and Trevor Darrell. Deep domain confusion: Maximizing for domain invariance. arXiv preprint arXiv:1412.3474, 2014. 2
Deep hashing network for unsupervised domain adaptation. Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, Sethuraman Panchanathan, CVPR. 611Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. Deep hashing network for unsupervised domain adaptation. In CVPR, pages 5018- 5027, 2017. 1, 6, 11
Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, David Lopez-Paz, Yoshua Bengio, Manifold mixup: Better representations by interpolating hidden states. In ICML. Vikas Verma, Alex Lamb, Christopher Beckham, Amir Na- jafi, Ioannis Mitliagkas, David Lopez-Paz, and Yoshua Ben- gio. Manifold mixup: Better representations by interpolating hidden states. In ICML, pages 6438-6447, 2019. 3
Transferable normalization: Towards improving transferability of deep neural networks. Ximei Wang, Ying Jin, Mingsheng Long, Jianmin Wang, Michael Jordan, NeurIPS. 67Ximei Wang, Ying Jin, Mingsheng Long, Jianmin Wang, and Michael Jordan. Transferable normalization: Towards im- proving transferability of deep neural networks. In NeurIPS, 2019. 5, 6, 7
A survey of unsupervised deep domain adaptation. Garrett Wilson, Diane J Cook, ACM TIST11Garrett Wilson and Diane J Cook. A survey of unsupervised deep domain adaptation. ACM TIST, 11(5):1-46, 2020. 1
Unsupervised data augmentation for consistency training. Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, Quoc V Le, NeurIPS. 2020Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, and Quoc V Le. Unsupervised data augmentation for consis- tency training. In NeurIPS, 2020. 2
Larger norm more transferable: An adaptive feature norm approach for unsupervised domain adaptation. Ruijia Xu, Guanbin Li, Jihan Yang, Liang Lin, ICCV. 67Ruijia Xu, Guanbin Li, Jihan Yang, and Liang Lin. Larger norm more transferable: An adaptive feature norm approach for unsupervised domain adaptation. In ICCV, pages 1426- 1435, 2019. 5, 6, 7
Cdtrans: Cross-domain transformer for unsupervised domain adaptation. Tongkun Xu, Weihua Chen, Pichao Wang, Fan Wang, Hao Li, Rong Jin, arXiv:2109.06165v167arXiv preprintTongkun Xu, Weihua Chen, Pichao Wang, Fan Wang, Hao Li, and Rong Jin. Cdtrans: Cross-domain trans- former for unsupervised domain adaptation. arXiv preprint arXiv:2109.06165v1, 2021. 2, 5, 6, 7
Tvt: Transferable vision transformer for unsupervised domain adaptation. Jinyu Yang, Jingjing Liu, Ning Xu, Junzhou Huang, arXiv:2108.0598867arXiv preprintJinyu Yang, Jingjing Liu, Ning Xu, and Junzhou Huang. Tvt: Transferable vision transformer for unsupervised do- main adaptation. arXiv preprint arXiv:2108.05988, 2021. 2, 5, 6, 7
Disentangled non-local neural networks. Minghao Yin, Zhuliang Yao, Yue Cao, Xiu Li, Zheng Zhang, Stephen Lin, Han Hu, ECCV. 2020Minghao Yin, Zhuliang Yao, Yue Cao, Xiu Li, Zheng Zhang, Stephen Lin, and Han Hu. Disentangled non-local neural networks. In ECCV, pages 191-207, 2020. 2
Mixup: beyond empirical risk minimization. Hongyi Zhang, Moustapha Cisse, David Yann N Dauphin, Lopez-Paz, In ICLR. 11Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. Mixup: beyond empirical risk minimiza- tion. In ICLR, 2018. 11
Multi-scale vision longformer: A new vision transformer for high-resolution image encoding. Pengchuan Zhang, Xiyang Dai, Jianwei Yang, Bin Xiao, Lu Yuan, Lei Zhang, Jianfeng Gao, ICCV. Pengchuan Zhang, Xiyang Dai, Jianwei Yang, Bin Xiao, Lu Yuan, Lei Zhang, and Jianfeng Gao. Multi-scale vision long- former: A new vision transformer for high-resolution image encoding. In ICCV, 2021. 1
Label propagation with augmented anchors: a simple semi-supervised learning baseline for unsupervised domain adaptation. Yabin Zhang, Bin Deng, Kui Jia, Lei Zhang, ECCV. Yabin Zhang, Bin Deng, Kui Jia, and Lei Zhang. Label prop- agation with augmented anchors: a simple semi-supervised learning baseline for unsupervised domain adaptation. In ECCV, pages 781-797, 2020. 1
Bridging theory and algorithm for domain adaptation. Yuchen Zhang, Tianle Liu, Mingsheng Long, Michael I Jordan , ICML. Yuchen Zhang, Tianle Liu, Mingsheng Long, and Michael I Jordan. Bridging theory and algorithm for domain adapta- tion. In ICML, pages 7404-7413, 2019. 1
Unsupervised domain adaptation for semantic segmentation via class-balanced self-training. Yang Zou, Zhiding Yu, Jinsong Kumar, Wang, ECCV. Yang Zou, Zhiding Yu, BVK Kumar, and Jinsong Wang. Un- supervised domain adaptation for semantic segmentation via class-balanced self-training. In ECCV, pages 289-305, 2018. 2
Confidence regularized self-training. Yang Zou, Zhiding Yu, Xiaofeng Liu, Jinsong Kumar, Wang, ICCV. Yang Zou, Zhiding Yu, Xiaofeng Liu, BVK Kumar, and Jin- song Wang. Confidence regularized self-training. In ICCV, pages 5982-5991, 2019. 2
| [
"https://github.com/tsun/SSRT.",
"https://github.com/rwightman/pytorch-imagemodels/blob/master/timm/models/vision",
"https://github.com/google-research/vision"
]
|
[
"A Conditional Linear Combination Test with Many Weak Instruments *",
"A Conditional Linear Combination Test with Many Weak Instruments *"
]
| [
"Dennis Lim ",
"Wenjie Wang ",
"Yichong Zhang "
]
| []
| []
| We consider a linear combination of jackknife Anderson-Rubin (AR), jackknife Lagrangian multiplier (LM), and orthogonalized jackknife LM tests for inference in IV regressions with many weak instruments and heteroskedasticity. We choose the weights in the linear combination based on a decision-theoretic rule that is adaptive to the identification strength. Under both weak and strong identification, the proposed linear combination test controls asymptotic size and is admissible among certain class of tests. Under strong identification, we further show that our linear combination test has optimal power against local alternatives. Simulations and an empirical application to Angrist and Krueger's (1991) dataset confirm the good power properties of our test. most powerful test mentioned above under strong identification against local alternatives, and (4) it has asymptotic power equal to 1 under strong identification against fixed alternatives. Simulations based on the limit experiment as well as calibrated data confirm the good power properties of our test. Then, we apply the new jackknife CLC test toAngrist and Krueger's (1991)dataset with the specifications of 180 and 1,530 instruments. We find that in both specifications, our confidence intervals (CIs) are the shortest among those constructed by weak identification robust tests, namely, the jackknife AR, LM, and CLC tests, and the two-step procedure. Furthermore, our CIs are found to be even shorter than the non-robust Wald test CIs based on the jackknife IV estimator (JIVE)proposed byAngrist, Imbens, and Krueger (1999), which is in line with the theoretical result that the jackknife CLC test is adaptive to the identification strength and is efficient under strong identification. Relation to the literature. The contributions in the present paper relate to two strands of literature. First, it is related to the literature on many instruments; see, for example, Kunitomo and Sun (2022), among others. For implementing inferences in the context of many instruments and heteroskedasticity, Chao et al. (2012) and Hausman et al. (2012) provide standard errors for Wald-type inferences that are based on JIVE and a jackknifed version of the limited information maximum likelihood (LIML) estimator and the Fuller's ( 1977) estimator. These estimators are more robust to many instruments than the commonly used two-stage least squares (TSLS) estimator as they are able to correct the bias due to the high dimension of IVs. In simulations derived from the data inAngrist and Krueger (1991), which is representative of empirical labor studies with a many-instrument concern, Angrist and Frandsen (2022, Section IV) show that such bias-corrected estimators outperform the TSLS that is based on the instruments selected by the least absolute shrinkage and selection operator (LASSO) introduced in Belloni et al. (2012) or the random forest-fitted first stage introduced in Athey, Tibshirani, and Wager (2019).However, the Wald inference methods are not valid under weak identification, a situation in which the ratio of the so-called concentration parameter, a measure of the overall instrument strength, over the square root of the number of instruments stays bounded as the sample size diverges to infinity. In this case, even the aforementioned bias-corrected estimators are inconsistent, | null | [
"https://export.arxiv.org/pdf/2207.11137v2.pdf"
]
| 251,018,614 | 2207.11137 | 496871bccaf48078b17bfb552bc79ac88a89a5b9 |
A Conditional Linear Combination Test with Many Weak Instruments *
Dennis Lim
Wenjie Wang
Yichong Zhang
Keywords: many instruments, power, size, weak identification. JEL codes: C12, C36, C55.
Abstract

We consider a linear combination of jackknife Anderson-Rubin (AR), jackknife Lagrangian multiplier (LM), and orthogonalized jackknife LM tests for inference in IV regressions with many weak instruments and heteroskedasticity. We choose the weights in the linear combination based on a decision-theoretic rule that is adaptive to the identification strength. Under both weak and strong identification, the proposed linear combination test controls asymptotic size and is admissible among a certain class of tests. Under strong identification, we further show that our linear combination test has optimal power against local alternatives. Simulations and an empirical application to Angrist and Krueger's (1991) dataset confirm the good power properties of our test.
Introduction
Various recent surveys in leading economics journals suggest that weak instruments remain important concerns for empirical practice. For instance, I. Andrews, Stock, and Sun (2019) survey 230 instrumental variable (IV) regressions from 17 papers published in the American Economic Review (AER). They find that many of the first-stage F-statistics (and non-homoskedastic generalizations) are in a range that raises such concerns, and virtually all of these papers report at least one first-stage F with a value smaller than 10. Similarly, in Lee, McCrary, Moreira, and Porter's (2022) survey of 123 AER articles involving IV regressions, 105 out of 847 specifications have first-stage Fs smaller than 10. Moreover, many IV applications involve a large number of instruments. For example, in their seminal paper, Angrist and Krueger (1991) study the effect of schooling on wages by interacting three base instruments (dummies for the quarter of birth) with state and year of birth, resulting in 180 instruments. Hansen, Hausman, and Newey (2008) show that using the 180 instruments gives tighter confidence intervals than using the base instruments even after adjusting for the effect of many instruments. In addition, as pointed out by Mikusheva and Sun (2022), in empirical papers that employ the "judge design" (e.g., see Maestas, Mullen, and Strand (2013), Sampat and Williams (2019), and Dobbie, Goldin, and Yang (2018)), the number of instruments (the number of judges) is typically proportional to the sample size, and the famous Fama-MacBeth two-pass regression in empirical asset pricing (e.g., see Fama and MacBeth (1973), Shanken (1992), and Anatolyev and Mikusheva (2022)) is equivalent to IV estimation with the number of instruments proportional to the number of assets. Similarly, Belloni, Chen, Chernozhukov, and Hansen (2012) consider an IV application involving more than one hundred instruments for the study of the effect of judicial eminent domain decisions on economic outcomes. Carrasco and Tchuente (2015) used many instruments in the estimation of the elasticity of intertemporal substitution in consumption. Furthermore, as pointed out by Goldsmith-Pinkham, Sorkin, and Swift (2020), the shift-share or Bartik instrument (e.g., see Bartik (1991) and Blanchard, Katz, Hall, and Eichengreen (1992)), which has been widely applied in many fields such as labor, public, development, macroeconomics, international trade, and finance, can be considered as a particular way of combining many instruments. For example, in the canonical setting of estimating the labor supply elasticity, the corresponding number of instruments is equal to the number of industries, which is also typically proportional to the sample size.
In this paper, we propose a jackknife conditional linear combination (CLC) test, which is robust to weak identification, many instruments, and heteroskedasticity. The proposed test also achieves efficiency under strong identification against local alternatives. The starting point of our analysis is an observation that, under strong identification, an orthogonalized jackknife Lagrangian multiplier (LM) test is the uniformly most powerful test against local alternatives among the class of tests that are invariant to sign changes and constructed based on jackknife LM and Anderson-Rubin (AR) tests only. However, the orthogonalized LM test may not have good power under weak identification or against certain fixed alternatives. We therefore consider a linear combination of jackknife AR, jackknife LM, and orthogonalized LM tests. Specifically, we follow I. Andrews (2016) and determine the linear combination weights by minimizing the maximum power loss, which can be viewed as a maximum regret and is further calibrated based on the limit experiment of interest and a sufficient statistic for the identification strength under many instruments. We show such a jackknife CLC test is adaptive to the identification strength in the sense that (1) it achieves exact asymptotic size under both weak and strong identification, (2) it is asymptotically and conditionally admissible under weak identification among some class of tests, (3) it converges to the uniformly most powerful test mentioned above under strong identification against local alternatives, and (4) it has asymptotic power equal to 1 under strong identification against fixed alternatives. Simulations based on the limit experiment as well as calibrated data confirm the good power properties of our test. Then, we apply the new jackknife CLC test to Angrist and Krueger's (1991) dataset with the specifications of 180 and 1,530 instruments. We find that in both specifications, our confidence intervals (CIs) are the shortest among those constructed by weak identification robust tests, namely, the jackknife AR, LM, and CLC tests, and the two-step procedure. Furthermore, our CIs are found to be even shorter than the non-robust Wald test CIs based on the jackknife IV estimator (JIVE) proposed by Angrist, Imbens, and Krueger (1999), which is in line with the theoretical result that the jackknife CLC test is adaptive to the identification strength and is efficient under strong identification.

Relation to the literature. The contributions in the present paper relate to two strands of literature. First, it is related to the literature on many instruments; see, for example, Kunitomo and Sun (2022), among others. For implementing inferences in the context of many instruments and heteroskedasticity, Chao et al. (2012) and Hausman et al. (2012) provide standard errors for Wald-type inferences that are based on JIVE and a jackknifed version of the limited information maximum likelihood (LIML) estimator and Fuller's (1977) estimator. These estimators are more robust to many instruments than the commonly used two-stage least squares (TSLS) estimator as they are able to correct the bias due to the high dimension of IVs. In simulations derived from the data in Angrist and Krueger (1991), which is representative of empirical labor studies with a many-instrument concern, Angrist and Frandsen (2022, Section IV) show that such bias-corrected estimators outperform the TSLS that is based on the instruments selected by the least absolute shrinkage and selection operator (LASSO) introduced in Belloni et al. (2012) or the random forest-fitted first stage introduced in Athey, Tibshirani, and Wager (2019).

However, the Wald inference methods are not valid under weak identification, a situation in which the ratio of the so-called concentration parameter, a measure of the overall instrument strength, over the square root of the number of instruments stays bounded as the sample size diverges to infinity. In this case, even the aforementioned bias-corrected estimators are inconsistent, and there is no consistent test for the structural parameter of interest (see the discussions in Section 3 of Mikusheva and Sun (2022)). For weak identification robust inference under many instruments, D. Andrews and Stock (2007) consider the AR test, the score test introduced in Kleibergen (2002), and the conditional likelihood ratio test introduced in Moreira (2003). Their IV model is homoskedastic and requires the number of instruments to diverge slower than the cube root of the sample size ($K^3/n \to 0$, where $K$ and $n$ denote the number of instruments and the sample size, respectively). Anatolyev and Gospodinov (2011) propose a modified AR test, which allows for the number of instruments to be proportional to the sample size but also requires homoskedastic errors.
Recently, Crudu et al. (2021) and Mikusheva and Sun (2022) propose jackknifed versions of the AR test in a model with many instruments and heteroskedasticity. Both tests are robust to weak identification, whereas Mikusheva and Sun's (2022) jackknife AR test has better power properties because of the usage of a cross-fit variance estimator. However, the jackknife AR tests may be inefficient under strong identification. Mikusheva and Sun (2022) also propose a new pre-test for weak identification under many instruments and apply it to form a two-stage testing procedure with a Wald test based on the JIVE introduced in Angrist et al. (1999). The JIVE-Wald test is more efficient than the jackknife AR under strong identification. An empirical researcher can therefore employ the jackknife AR if the pre-test suggests weak identification or the JIVE-Wald if the pre-test suggests strong identification. Furthermore, Matsushita and Otsu (2020) propose a jackknife LM test, which is also robust to weak identification, many instruments, and heteroskedastic errors. Under strong identification and local alternatives, our jackknife CLC test proves to be more efficient than the jackknife AR, the jackknife LM, and the two-step test.
Second, our paper is related to the literature on weak identification under the framework of a fixed number of instruments or moment conditions, in which various robust inference methods are available for non-homoskedastic errors; see, for example, Stock and Wright (2000), Kleibergen (2005), D. Andrews and Cheng (2012), I. Andrews (2016), I. Andrews and Mikusheva (2016), I. Andrews (2018), Moreira and Moreira (2019), D. Andrews and Guggenberger (2019), and Lee et al. (2022). In particular, our jackknife CLC test extends I. Andrews (2016) to the framework with many weak instruments. I. Andrews (2016) considers the convex combination between the generalized AR statistic (S statistic) introduced by Stock and Wright (2000) and the score statistic (K statistic) introduced by Kleibergen (2005). We find that under many weak instruments, the orthogonalized jackknife LM statistic plays a role similar to the K statistic. However, the trade-off between the jackknife AR and orthogonalized LM statistics turns out to be rather different from that between the S and K statistics. As pointed out by I. Andrews (2016), in the case with a fixed number of weak instruments (or moment conditions), the K statistic picks out a particular (random) direction corresponding to the span of a conditioning statistic that measures the identification strength and restricts attention to deviations from the null along this specific direction. In contrast to the K statistic, the S statistic treats all deviations from the null equally. Therefore, the trade-off between the K and S statistics is mainly from the difference in attention to deviation directions.
We find that with many weak instruments, the jackknife AR and orthogonalized LM tests do not have such a difference in deviation directions. Instead, their trade-off is mostly between local and non-local alternatives. Furthermore, although the standard LM test (without orthogonalization) is not weak identification robust under I. Andrews's (2016) framework, the jackknife LM test is robust under many instruments. Therefore, we consider a linear combination of jackknife AR, jackknife LM, and orthogonalized jackknife LM tests, and we find that the resulting CLC test has good power properties in a variety of scenarios.
Notation. We denote $Z(\mu)$ as the normal random variable with unit variance and expectation $\mu$ and $[n] = \{1, 2, \cdots, n\}$. We further simplify $Z(0)$ as $Z$, which is just a standard normal random variable. We denote $z_\alpha$ as the $(1-\alpha)$ quantile of a standard normal random variable and $C_\alpha(a_1, a_2; \rho)$ as the $(1-\alpha)$ quantile of the random variable $a_1 Z_1^2 + a_2(\rho Z_1 + (1-\rho^2)^{1/2} Z_2)^2 + (1 - a_1 - a_2) Z_2^2$, where $Z_1$ and $Z_2$ are two independent standard normal random variables. Furthermore, we let $C_\alpha = z_{\alpha/2}^2$ and $C_{\alpha,\max}(\rho) = \sup_{(a_1, a_2) \in A_0} C_\alpha(a_1, a_2; \rho)$, where $A_0 = \{(a_1, a_2) \in [0,1] \times [0,1] : a_1 + a_2 \leq \bar{a}\}$ for some $\bar{a} < 1$. The operators $E^*$ and $P^*$ are expectation and probability taken conditionally on data, respectively. For example, $E^* 1\{Z^2(\hat{\mu}) \geq C_\alpha\}$, in which $\hat{\mu}$ is some estimator of the expectation $\mu$ based on data, means the expectation is taken over the normal random variable by treating $\hat{\mu}$ as deterministic. We use $\rightsquigarrow$ to denote convergence in distribution and $U \stackrel{d}{=} V$ to denote that $U$ and $V$ share the same distribution.
Setup and Limit Problems
We consider the linear IV regression with a scalar outcome $Y_i$, a scalar endogenous variable $X_i$, and a $K \times 1$ vector of instruments $Z_i$ such that

$$Y_i = X_i \beta + e_i, \qquad X_i = \Pi_i + V_i, \qquad \forall i \in [n], \tag{2.1}$$

where $\Pi_i = E(X_i | Z_i)$. We focus on the model with a single endogenous variable, which is prevalent in empirical research. We let $K$ diverge with the sample size $n$, allowing for the case that $K$ is of the same order of magnitude as $n$. For the rest of the paper, we follow the many-instrument literature and treat $\{Z_i\}_{i \in [n]}$ as fixed so that $\Pi_i$ can also be written as $E X_i$, which is non-random, $E V_i = 0$ by construction, and $E e_i = 0$ by IV exogeneity. We allow $(e_i, V_i)$ to be heteroskedastic across $i$. Also, following the literature on many instruments, we assume without loss of generality that there are no controls included in our model as they can be partialled out from $(Y_i, X_i, Z_i)$. We are interested in testing $\beta = \beta_0$. Let $e_i(\beta_0) = Y_i - X_i \beta_0 = e_i + X_i \Delta$, where $\Delta = \beta - \beta_0$.

We collect the transpose of $Z_i$ in each row of $Z$, an $n \times K$ matrix of instruments, and denote $P = Z(Z'Z)^{-1}Z'$. In addition, let $Q_{ab} = \frac{\sum_{i \in [n]} \sum_{j \neq i} a_i P_{ij} b_j}{\sqrt{K}}$ and $C = Q_{\Pi\Pi}$. Then, as pointed out by Mikusheva and Sun (2022), the (rescaled) $C$ is the concentration parameter that measures the strength of identification in the heteroskedastic IV model with many instruments. Specifically, the parameter $\beta$ is weakly identified if $C$ is bounded and strongly identified if $|C| \to \infty$. We consider drifting sequence asymptotics so that all quantities are indexed by the sample size $n$. We omit such dependence for notation simplicity.
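To make the construction concrete, the following sketch computes the hat matrix $P$ and the jackknife quadratic form $Q_{ab}$ exactly as defined above; the function names are ours, and the snippet illustrates the definitions rather than reproducing the authors' code.

```python
import numpy as np

def projection_matrix(Z):
    """Hat matrix P = Z (Z'Z)^{-1} Z' for the n x K instrument matrix Z."""
    return Z @ np.linalg.solve(Z.T @ Z, Z.T)

def Q(a, b, P, K):
    """Jackknife quadratic form Q_ab = sum_i sum_{j != i} a_i P_ij b_j / sqrt(K).
    Deleting the diagonal removes the own-observation terms, which is what
    makes the statistic robust to many instruments."""
    P_off = P - np.diag(np.diag(P))  # zero out P_ii
    return a @ P_off @ b / np.sqrt(K)
```

With these, $C = Q_{\Pi\Pi}$ corresponds to Q(Pi, Pi, P, K) evaluated at the (infeasible) vector of conditional means.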
Throughout the paper, we consider three scenarios: (1) weak identification and fixed alternatives, in which both $C$ and $\Delta$ are fixed and bounded; (2) strong identification and local alternatives, in which $C = \bar{C}/d_n$, $\Delta = \bar{\Delta} d_n$, $\bar{C}$ and $\bar{\Delta}$ are bounded constants independent of $n$, and $d_n \to 0$ is a deterministic sequence; and (3) strong identification and fixed alternatives, in which $C = \bar{C}/d_n$ and $\Delta$ is fixed and bounded. All the weak identification robust tests proposed in the literature (namely, the jackknife AR tests in Crudu et al. (2021) and Mikusheva and Sun (2022) and the jackknife LM test in Matsushita and Otsu (2020)) depend on a subset of the following three quantities: $(Q_{e(\beta_0)e(\beta_0)}, Q_{Xe(\beta_0)}, Q_{XX})$. Throughout the paper, we maintain the following high-level assumption.

Assumption 1. Under both weak and strong identification, the following weak convergence holds:

$$\begin{pmatrix} Q_{ee} \\ Q_{Xe} \\ Q_{XX} - C \end{pmatrix} \rightsquigarrow N\left( \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} \Phi_1 & \Phi_{12} & \Phi_{13} \\ \Phi_{12} & \Psi & \tau \\ \Phi_{13} & \tau & \Upsilon \end{pmatrix} \right), \tag{2.2}$$

for some $(\Phi_1, \Phi_{12}, \Phi_{13}, \Psi, \tau, \Upsilon)$.
Assumption 1 has already been verified by Chao et al. (2012) and Mikusheva and Sun (2022) under regularity conditions. It implies that, under both strong and weak identification,

$$\begin{pmatrix} Q_{e(\beta_0)e(\beta_0)} - \Delta^2 C \\ Q_{Xe(\beta_0)} - \Delta C \\ Q_{XX} - C \end{pmatrix} \stackrel{d}{=} N\left( \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} \Phi_1(\beta_0) & \Phi_{12}(\beta_0) & \Phi_{13}(\beta_0) \\ \Phi_{12}(\beta_0) & \Psi(\beta_0) & \tau(\beta_0) \\ \Phi_{13}(\beta_0) & \tau(\beta_0) & \Upsilon \end{pmatrix} \right) + o_p(1), \tag{2.3}$$

where

$$\begin{aligned} \Phi_1(\beta_0) &= \Delta^4 \Upsilon + 4\Delta^3 \tau + \Delta^2 (4\Psi + 2\Phi_{13}) + 4\Delta \Phi_{12} + \Phi_1, \\ \Phi_{12}(\beta_0) &= \Delta^3 \Upsilon + 3\Delta^2 \tau + \Delta(2\Psi + \Phi_{13}) + \Phi_{12}, \\ \Phi_{13}(\beta_0) &= \Delta^2 \Upsilon + 2\Delta \tau + \Phi_{13}, \\ \Psi(\beta_0) &= \Delta^2 \Upsilon + 2\Delta \tau + \Psi, \\ \tau(\beta_0) &= \Delta \Upsilon + \tau. \end{aligned} \tag{2.4}$$

Specifically, under strong identification, we have $Q_{XX} d_n \xrightarrow{p} \bar{C}$, which has a degenerate distribution. Also, under local alternatives, we have $\Delta = o(1)$ so that $(\Phi_1(\beta_0), \Phi_{12}(\beta_0), \Phi_{13}(\beta_0), \Psi(\beta_0), \tau(\beta_0)) \to (\Phi_1, \Phi_{12}, \Phi_{13}, \Psi, \tau)$.
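Since the simulations in Section 5.1 construct $\gamma(\beta_0)$ from (2.4), a direct transcription of that map may be helpful; this is a sketch under our own naming:

```python
def gamma_beta0(delta, Phi1, Phi12, Phi13, Psi, tau, Upsilon):
    """Map (2.4): variance components at beta0 as functions of
    Delta = beta - beta0 and the components at the true beta."""
    d = delta
    Phi1_b0 = d**4 * Upsilon + 4 * d**3 * tau + d**2 * (4 * Psi + 2 * Phi13) \
              + 4 * d * Phi12 + Phi1
    Phi12_b0 = d**3 * Upsilon + 3 * d**2 * tau + d * (2 * Psi + Phi13) + Phi12
    Phi13_b0 = d**2 * Upsilon + 2 * d * tau + Phi13
    Psi_b0 = d**2 * Upsilon + 2 * d * tau + Psi
    tau_b0 = d * Upsilon + tau
    return Phi1_b0, Phi12_b0, Phi13_b0, Psi_b0, tau_b0
```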
To describe a feasible version of the test, we assume we have consistent estimates for all the variance components.
Assumption 2. Let $\rho(\beta_0) = \frac{\Phi_{12}(\beta_0)}{\sqrt{\Phi_1(\beta_0) \Psi(\beta_0)}}$, $\hat{\gamma}(\beta_0) = (\hat{\Phi}_1(\beta_0), \hat{\Phi}_{12}(\beta_0), \hat{\Phi}_{13}(\beta_0), \hat{\Psi}(\beta_0), \hat{\tau}(\beta_0), \hat{\Upsilon}, \hat{\rho}(\beta_0))$ be an estimator, and $B \subset \mathbb{R}$ be a compact parameter space. Then, we have $\inf_{\beta_0 \in B} \Phi_1(\beta_0) > 0$, $\inf_{\beta_0 \in B} \Psi(\beta_0) > 0$, $\Upsilon > 0$, and for $\beta_0 \in B$, $\|\hat{\gamma}(\beta_0) - \gamma(\beta_0)\|_2 = o_p(1)$, where $\gamma(\beta_0) \equiv (\Phi_1(\beta_0), \Phi_{12}(\beta_0), \Phi_{13}(\beta_0), \Psi(\beta_0), \tau(\beta_0), \Upsilon, \rho(\beta_0))$.

Several remarks on Assumption 2 are in order. First, Chao et al. (2012) propose a consistent estimator of $\Psi$ under strong identification and many instruments. It is possible to compute $\hat{\gamma}(\beta_0)$ based on Chao et al.'s (2012) argument with their JIVE-based residuals $\hat{e}_i$ from the structural equation replaced by $e_i(\beta_0)$. Under weak identification and $\beta_0 = \beta$, Crudu et al. (2021) and Matsushita and Otsu (2021) establish the consistency of such estimators for $\Phi_1(\beta_0)$ and $\Psi(\beta_0)$, respectively. Similar arguments can be used to show the consistency of the rest of the elements in $\hat{\gamma}(\beta_0)$ under both weak and strong identification. In addition, the consistency can be established under both local and fixed alternatives. We provide more details in Section A.1 in the Online Supplement. Second, motivated by Kline, Saggio, and Sølvsten (2020), Mikusheva and Sun (2022) propose cross-fit estimators $\hat{\Phi}_1(\beta_0)$ and $\hat{\Upsilon}$, which are consistent under both weak and strong identification and lead to better power properties. Following their lead, one can write down the cross-fit estimators for the rest of the elements in $\hat{\gamma}(\beta_0)$ and show they are consistent.² We provide more details in Section A.2 in the Online Supplement. Note that both Crudu et al.'s (2021) and Mikusheva and Sun's (2022) estimators are consistent under heteroskedasticity and allow for $K$ to be of the same order as $n$. Third, in order for our jackknife CLC test proposed below to control size under both weak and strong identification, it suffices to require $\hat{\gamma}(\beta_0)$ to be consistent under the null only. Fourth, in the following, we study the power properties of jackknife test statistics against local or fixed alternatives under different identification scenarios. The power analysis in Lemmas 2.1 and 2.4 below, and subsequently, Theorems 4.1 and 4.2, only requires the consistency of $\hat{\gamma}(\beta_0)$ under strong identification with local alternatives and weak identification with fixed alternatives, respectively.

² For example, Mikusheva and Sun (2022, p.22) establish the limit of their cross-fit estimator $\hat{\Psi}$ under weak identification and many instruments when the residual $\hat{e}_i$ from the structural equation is computed based on the JIVE estimator. We can construct $\hat{\Psi}(\beta_0)$ by replacing $\hat{e}_i$ by $e_i(\beta_0)$. Then, the same argument as theirs, with $-Q_{Xe}/Q_{XX}$ replaced by $\Delta$, establishes that $\hat{\Psi}(\beta_0) \xrightarrow{p} \Psi(\beta_0)$.
Under this framework, Crudu et al. (2021) and Mikusheva and Sun (2022) consider the jackknife AR test

$$1\{AR(\beta_0) \geq z_\alpha\}, \qquad AR(\beta_0) = \frac{Q_{e(\beta_0)e(\beta_0)}}{\hat{\Phi}_1^{1/2}(\beta_0)}, \tag{2.5}$$

and Matsushita and Otsu (2020) consider the jackknife LM test

$$1\{LM^2(\beta_0) \geq C_\alpha\}, \qquad LM(\beta_0) = \frac{Q_{Xe(\beta_0)}}{\hat{\Psi}^{1/2}(\beta_0)}. \tag{2.6}$$

Both tests are robust to weak identification, many instruments, and heteroskedasticity. Lemma 2.1 below characterizes the joint limit distribution of $(AR(\beta_0), LM(\beta_0))$ under strong identification and local alternatives.
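In code, the two test statistics and their critical values can be sketched as follows, using SciPy quantiles; the inputs are the jackknife quadratic forms and consistent variance estimates from Assumption 2, and the function names are ours:

```python
import numpy as np
from scipy.stats import chi2, norm

def jackknife_AR(Q_ee_b0, Phi1_hat):
    """AR(beta0) = Q_{e(b0)e(b0)} / Phi1_hat^{1/2}; reject when AR >= z_alpha."""
    return Q_ee_b0 / np.sqrt(Phi1_hat)

def jackknife_LM(Q_Xe_b0, Psi_hat):
    """LM(beta0) = Q_{Xe(b0)} / Psi_hat^{1/2}; reject when LM^2 >= C_alpha."""
    return Q_Xe_b0 / np.sqrt(Psi_hat)

alpha = 0.05
z_alpha = norm.ppf(1 - alpha)        # one-sided critical value in (2.5)
C_alpha = chi2.ppf(1 - alpha, df=1)  # equals norm.ppf(1 - alpha / 2) ** 2
```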
Lemma 2.1. Suppose Assumptions 1 and 2 hold and we are under strong identification with local alternatives, that is, there exists a deterministic sequence $d_n \to 0$ such that $C = \bar{C}/d_n$ and $\Delta = \bar{\Delta} d_n$, where $\bar{C}$ and $\bar{\Delta}$ are bounded constants independent of $n$. Then, we have

$$\begin{pmatrix} AR(\beta_0) \\ LM(\beta_0) \end{pmatrix} \rightsquigarrow \begin{pmatrix} N_1 \\ N_2 \end{pmatrix} \stackrel{d}{=} N\left( \begin{pmatrix} 0 \\ \frac{\bar{\Delta}\bar{C}}{\Psi^{1/2}} \end{pmatrix}, \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix} \right), \qquad \text{where } \rho = \Phi_{12}/\sqrt{\Phi_1 \Psi}.$$

Two remarks are in order. First, under strong identification, we consider local alternatives so that $\beta - \beta_0 \to 0$. This is why $(\Psi(\beta_0), \Phi_1(\beta_0), \Phi_{12}(\beta_0))$ converge to $(\Psi, \Phi_1, \Phi_{12})$, which are just the counterparts of $(\Psi(\beta_0), \Phi_1(\beta_0), \Phi_{12}(\beta_0))$ when $\beta_0$ is replaced by $\beta$. Second, although $AR(\beta_0)$ has zero mean, and hence, no power, it is correlated with $LM(\beta_0)$ in the current context of many instruments. It is therefore possible to use $AR(\beta_0)$ to reduce the variance of $LM(\beta_0)$ and obtain a test that is more powerful than the LM test.
Lemma 2.2. Consider the limit experiment in which researchers observe $(N_1, N_2)$ with

$$\begin{pmatrix} N_1 \\ N_2 \end{pmatrix} \stackrel{d}{=} N\left( \begin{pmatrix} 0 \\ \theta \end{pmatrix}, \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix} \right),$$

know the value of $\rho$ and that $EN_1 = 0$, and want to test for $\theta = 0$ versus the two-sided alternative. In this case, the uniformly most powerful level-$\alpha$ test that is invariant to sign changes is $1\{N_2^{*2} \geq C_\alpha\}$, where

$$N_2^* = (1 - \rho^2)^{-1/2}(N_2 - \rho N_1)$$

is the normalized residual from the projection of $N_2$ on $N_1$.
Let the orthogonalized jackknife LM statistic be $LM^*(\beta_0) = (1 - \hat{\rho}(\beta_0)^2)^{-1/2}(LM(\beta_0) - \hat{\rho}(\beta_0) AR(\beta_0))$. Then, Lemma 2.1 implies, under strong identification and local alternatives,

$$\begin{pmatrix} AR(\beta_0) \\ LM^*(\beta_0) \end{pmatrix} \rightsquigarrow \begin{pmatrix} N_1 \\ N_2^* \end{pmatrix} \stackrel{d}{=} N\left( \begin{pmatrix} 0 \\ \frac{\bar{\Delta}\bar{C}}{[(1-\rho^2)\Psi]^{1/2}} \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \right). \tag{2.7}$$

Lemma 2.2 with $\theta = \bar{\Delta}\bar{C}\Psi^{-1/2}$ implies, in this case, that the test $1\{LM^{*2}(\beta_0) \geq C_\alpha\}$ is asymptotically strictly more powerful than the jackknife AR and LM tests based on $AR(\beta_0)$ and $LM(\beta_0)$ against local alternatives as long as $\rho \neq 0$. In addition, under strong identification and local alternatives, Mikusheva and Sun's (2022) two-step test statistic is asymptotically equivalent to $LM(\beta_0)$, and thus, is less powerful than $LM^*(\beta_0)$ too.
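The orthogonalization itself is a one-line projection; a minimal sketch (the function name is ours):

```python
import numpy as np

def orthogonalized_LM(AR, LM, rho_hat):
    """LM*(beta0) = (1 - rho_hat^2)^{-1/2} (LM(beta0) - rho_hat * AR(beta0))."""
    return (LM - rho_hat * AR) / np.sqrt(1 - rho_hat**2)
```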
Next, we compare the behaviors of $AR(\beta_0)$, $LM(\beta_0)$, and $LM^*(\beta_0)$ under strong identification and fixed alternatives.

Lemma 2.3. Suppose Assumption 2 holds, $(Q_{e(\beta_0)e(\beta_0)} - \Delta^2 C, Q_{Xe(\beta_0)} - \Delta C, Q_{XX} - C) = O_p(1)$, and we are under strong identification so that $d_n C \to \bar{C}$ for some $d_n \to 0$. Then, we have, for any fixed $\Delta \neq 0$,

$$d_n^2 \begin{pmatrix} AR^2(\beta_0) \\ LM^2(\beta_0) \\ LM^{*2}(\beta_0) \end{pmatrix} \xrightarrow{p} \begin{pmatrix} \Phi_1^{-1}(\beta_0) \Delta^4 \bar{C}^2 \\ \Psi^{-1}(\beta_0) \Delta^2 \bar{C}^2 \\ (1 - \rho^2(\beta_0))^{-1} (\Psi^{-1/2}(\beta_0) - \rho(\beta_0) \Phi_1^{-1/2}(\beta_0) \Delta)^2 \Delta^2 \bar{C}^2 \end{pmatrix}.$$

Given $d_n \to 0$ and $\Phi_1^{-1}(\beta_0) \Delta^4 \bar{C}^2 > 0$, $AR^2(\beta_0)$ has power 1 against fixed alternatives asymptotically. By contrast, $LM^{*2}(\beta_0)$ may not have power if $\Delta = \Delta_*(\beta_0) \equiv \Phi_1^{1/2}(\beta_0) \Psi^{-1/2}(\beta_0) \rho^{-1}(\beta_0)$. Next, we compare the performance of $AR(\beta_0)$ and $LM^*(\beta_0)$ under weak identification and fixed alternatives.
Lemma 2.4. Suppose Assumptions 1 and 2 hold and we are under weak identification so that $C$ is fixed. Then, we have, for any fixed $\Delta \neq 0$,

$$\begin{pmatrix} AR(\beta_0) \\ LM^*(\beta_0) \end{pmatrix} \rightsquigarrow \begin{pmatrix} N_1 \\ N_2^* \end{pmatrix} \stackrel{d}{=} N\left( \begin{pmatrix} m_1(\Delta) \\ m_2(\Delta) \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \right), \tag{2.8}$$

where $\rho(\beta_0) = \frac{\Phi_{12}(\beta_0)}{\sqrt{\Psi(\beta_0) \Phi_1(\beta_0)}}$ and

$$\begin{pmatrix} m_1(\Delta) \\ m_2(\Delta) \end{pmatrix} = \begin{pmatrix} \Phi_1^{-1/2}(\beta_0) \Delta^2 C \\ (1 - \rho^2(\beta_0))^{-1/2} \Psi^{-1/2}(\beta_0) \Delta C - \rho(\beta_0)(1 - \rho^2(\beta_0))^{-1/2} \Phi_1^{-1/2}(\beta_0) \Delta^2 C \end{pmatrix}.$$

In particular, as $\Delta \to \infty$, we have

$$m_1(\Delta) \to \frac{C}{\Upsilon^{1/2}} \quad \text{and} \quad m_2(\Delta) \to \frac{C}{\Upsilon^{1/2}} \frac{\rho_{23}}{(1 - \rho_{23}^2)^{1/2}},$$

where $\rho_{23} = \frac{\tau}{(\Psi \Upsilon)^{1/2}}$ is the correlation between $Q_{Xe}$ and $Q_{XX}$.
By comparing the means of the normal limit distribution in (2.8), we notice that under weak identification and fixed alternatives, neither $LM^*(\beta_0)$ dominates $AR(\beta_0)$ nor vice versa. We also notice from Lemma 2.4 that for testing distant alternatives, the power of $LM^*(\beta_0)$ differs from that of $AR(\beta_0)$ by a factor of $\rho_{23}/\sqrt{1 - \rho_{23}^2}$, so that it will be lower when $|\rho_{23}| \leq 1/\sqrt{2}$. Under weak identification and homoskedasticity, we have $\rho_{23} = \rho = \Phi_{12}/\sqrt{\Psi \Phi_1}$. Therefore, although the test $1\{LM^{*2}(\beta_0) \geq C_\alpha\}$ is the most powerful invariant test against local alternatives under strong identification, it may not have good power under weak identification or against certain fixed alternatives.

To achieve the advantages of $AR(\beta_0)$, $LM(\beta_0)$, and $LM^*(\beta_0)$ in all three scenarios above, we need to combine them in a way that is adaptive to the identification strength. Following I. Andrews (2016), we consider the linear combination of $AR^2(\beta_0)$, $LM^2(\beta_0)$, and $LM^{*2}(\beta_0)$. Recall that $(N_1, N_2^*)$ are the limits of $(AR(\beta_0), LM^*(\beta_0))$ in either strong or weak identification. See (2.7) and (2.8) for their expressions in these two cases. Then, in the limit experiment, the linear combination test can be written as

$$\phi_{a_1, a_2, \infty} = 1\{a_1 N_1^2 + a_2(\tilde{\rho} N_1 + (1 - \tilde{\rho}^2)^{1/2} N_2^*)^2 + (1 - a_1 - a_2) N_2^{*2} \geq C_\alpha(a_1, a_2; \tilde{\rho})\}, \tag{2.9}$$
where $(a_1, a_2) \in A_0$ are the combination weights, $N_1 \sim Z(\theta_1)$, and $N_2^* \sim Z(\theta_2)$; the mean parameters $\theta_1$ and $\theta_2$ are defined in Lemmas 2.1 and 2.4 for strong and weak identification, respectively; and $\tilde{\rho}$ is the limit of $\hat{\rho}(\beta_0)$. Let the eigenvalue decomposition of the matrix

$$\begin{pmatrix} a_1 + a_2 \tilde{\rho}^2 & a_2 \tilde{\rho}(1 - \tilde{\rho}^2)^{1/2} \\ a_2 \tilde{\rho}(1 - \tilde{\rho}^2)^{1/2} & 1 - a_1 - a_2 \tilde{\rho}^2 \end{pmatrix}$$

be

$$\begin{pmatrix} a_1 + a_2 \tilde{\rho}^2 & a_2 \tilde{\rho}(1 - \tilde{\rho}^2)^{1/2} \\ a_2 \tilde{\rho}(1 - \tilde{\rho}^2)^{1/2} & 1 - a_1 - a_2 \tilde{\rho}^2 \end{pmatrix} = U \begin{pmatrix} s_1(a_1, a_2) & 0 \\ 0 & s_2(a_1, a_2) \end{pmatrix} U', \tag{2.10}$$

where, by construction, $s_1(a_1, a_2) \geq s_2(a_1, a_2) \geq 0$ and $U$ is a $2 \times 2$ unitary matrix. We highlight the dependence of the eigenvalues $(s_1, s_2)$ on the weights $(a_1, a_2)$. The dependence of $U$ on $(a_1, a_2)$ is suppressed for notation simplicity. Then, we have

$$a_1 N_1^2 + a_2(\tilde{\rho} N_1 + (1 - \tilde{\rho}^2)^{1/2} N_2^*)^2 + (1 - a_1 - a_2) N_2^{*2} = s_1(a_1, a_2) \tilde{N}_1^2 + s_2(a_1, a_2) \tilde{N}_2^2$$

and $\phi_{a_1, a_2, \infty} = 1\{s_1(a_1, a_2) \tilde{N}_1^2 + s_2(a_1, a_2) \tilde{N}_2^2 \geq C_\alpha(a_1, a_2; \tilde{\rho})\}$, where

$$\begin{pmatrix} \tilde{N}_1 \\ \tilde{N}_2 \end{pmatrix} = U' \begin{pmatrix} N_1 \\ N_2^* \end{pmatrix}$$

and $\tilde{N}_1$ and $\tilde{N}_2$ are independent normal random variables with unit variance. This implies that $\phi_{a_1, a_2, \infty}$ can be viewed as a linear combination test of two independent chi-square random variables with one degree of freedom, and those two chi-square random variables are obtained by properly rotating $N_1$ and $N_2^*$ (i.e., the limits of $AR(\beta_0)$ and $LM^*(\beta_0)$). Theorem 2.1 states the key properties of $\phi_{a_1, a_2, \infty}$ under the limit experiment. Theorems 4.1-4.3 further establish that these properties hold asymptotically for our linear combination test.
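Both the eigenvalue decomposition in (2.10) and the critical value $C_\alpha(a_1, a_2; \tilde{\rho})$ are easy to compute numerically; the sketch below uses a simple Monte Carlo approximation for the quantile, since the linear combination of chi-squares has no convenient closed form. All names are ours:

```python
import numpy as np

def combination_matrix(a1, a2, rho):
    """Matrix in (2.10); its eigenvalues (s1, s2) give the chi-square weights."""
    off = a2 * rho * np.sqrt(1.0 - rho**2)
    return np.array([[a1 + a2 * rho**2, off],
                     [off, 1.0 - a1 - a2 * rho**2]])

def eigen_weights(a1, a2, rho):
    """Eigenvalues of (2.10) sorted decreasingly, plus the rotation U."""
    vals, U = np.linalg.eigh(combination_matrix(a1, a2, rho))
    order = np.argsort(vals)[::-1]
    return vals[order], U[:, order]

def critical_value(a1, a2, rho, alpha=0.05, n_draws=200_000, seed=0):
    """Monte Carlo C_alpha(a1, a2; rho): the (1 - alpha) quantile of
    a1 Z1^2 + a2 (rho Z1 + sqrt(1 - rho^2) Z2)^2 + (1 - a1 - a2) Z2^2."""
    rng = np.random.default_rng(seed)
    z1, z2 = rng.standard_normal((2, n_draws))
    stat = (a1 * z1**2
            + a2 * (rho * z1 + np.sqrt(1 - rho**2) * z2)**2
            + (1 - a1 - a2) * z2**2)
    return np.quantile(stat, 1 - alpha)
```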
Theorem 2.1. (i) Under weak identification and fixed alternatives, $N_1 \sim Z(\theta_1)$, $N_2^* \sim Z(\theta_2)$, and they are independent, where $\theta_1 = m_1(\Delta)$ and $\theta_2 = m_2(\Delta)$ as in (2.8). We consider the test of $H_0: \theta_1 = \theta_2 = 0$ against $H_1: \theta_1 \neq 0$ or $\theta_2 \neq 0$. Then, for any $(a_1, a_2) \in A_0$, $\phi_{a_1, a_2, \infty}$ defined in (2.9) is admissible among the level-$\alpha$ tests based on test statistics $\bar{s}_1 \tilde{N}_1^2 + \bar{s}_2 \tilde{N}_2^2$ for $(\bar{s}_1, \bar{s}_2) \in \mathbb{R}^+ \times \mathbb{R}^+$.

(ii) Under strong identification and local alternatives, $N_1^2 \sim Z^2$ and $N_2^{*2} \sim Z^2(\theta)$, where $\theta = \frac{\bar{\Delta}\bar{C}}{[(1-\rho^2)\Psi]^{1/2}}$ as in (2.7). We consider the test of $H_0: \theta = 0$ against $H_1: \theta \neq 0$. Then, $\phi_{a_1, a_2, \infty}$ defined in (2.9) is the uniformly most powerful test in the class of tests that depend on $(N_1, N_2^*)$ and are invariant to sign changes if and only if $a_1 = 0$ and $a_2 \rho = 0$.

(iii) Suppose Assumption 2 holds, $(Q_{e(\beta_0)e(\beta_0)} - \Delta^2 C, Q_{Xe(\beta_0)} - \Delta C, Q_{XX} - C) = O_p(1)$, and we are under strong identification with fixed alternatives. If $1 \geq a_{1,n} \geq \frac{\bar{q} \Phi_1(\beta_0)}{C^2 \Delta_*^4(\beta_0)}$ for some constant $\bar{q} > C_{\alpha,\max}(\rho(\beta_0))$ and $(a_{1,n}, a_{2,n}) \in A_0$, where $\Delta_*(\beta_0) = \Phi_1^{1/2}(\beta_0) \Psi^{-1/2}(\beta_0) \rho^{-1}(\beta_0)$, then

$$1\{a_{1,n} AR^2(\beta_0) + a_{2,n} LM^2(\beta_0) + (1 - a_{1,n} - a_{2,n}) LM^{*2}(\beta_0) \geq C_\alpha(a_{1,n}, a_{2,n}; \rho(\beta_0))\} \xrightarrow{p} 1.$$
Five remarks are in order. First, in the case with a fixed number of weak instruments (or moment conditions), I. Andrews (2016) considers the linear combination of K and S statistics. The trade-off between the K and S statistics comes from the difference in attention to deviation directions (see the discussions in Section 3 of I. Andrews (2016)). We notice from Theorem 2.1 that $\phi_{a_1, a_2, \infty}$ is constructed based on a quadratic function of $AR(\beta_0)$ and $LM^*(\beta_0)$, which play roles similar to S and K, respectively. However, $AR(\beta_0)$ and $LM^*(\beta_0)$ do not have such a difference in deviation directions. Instead, the trade-off between $AR(\beta_0)$ and $LM^*(\beta_0)$ is between local and non-local alternatives. Additionally, although the standard score test is not weak identification robust under a fixed number of instruments, $LM(\beta_0)$ is robust under many instruments. Therefore, we consider the linear combination of $AR(\beta_0)$, $LM(\beta_0)$, and $LM^*(\beta_0)$ to take advantage of the power properties of all three tests.

Second, unlike the one-sided jackknife AR test proposed by Mikusheva and Sun (2022), we construct the jackknife CLC test based on $AR^2(\beta_0)$ for several reasons. First, under weak identification, when the concentration parameter $C$, and thus $m_1(\Delta)$ defined in Lemma 2.4, is nonnegative, the one-sided test has good power. However, even in this case, the power curve simulations in Section 5.1 show that our jackknife CLC test is more powerful than the one-sided AR test in most scenarios. Second, our jackknife CLC test will have good power even when $C$ is negative.⁶ Third, we show below that under strong identification and local alternatives, our jackknife CLC test converges to the uniformly most powerful test $1\{N_2^{*2} > C_\alpha\}$, whereas both the one- and two-sided tests based on $AR(\beta_0)$ have no power, as shown in Lemma 2.1. Fourth, under strong identification and fixed alternatives, our jackknife CLC test has asymptotic power equal to 1, as shown in Lemma 2.3 and Theorem 4.3 below. In this case, using the one-sided jackknife AR test cannot further improve the power. Fifth, combining $LM^{*2}(\beta_0)$ with $AR^2(\beta_0)$ (and $LM^2(\beta_0)$), rather than $AR(\beta_0)$, can substantially mitigate the impact of the power loss of $LM^*(\beta_0)$ at $\Delta_*(\beta_0)$, as shown in the numerical investigation in Section 5.

Third, Theorem 2.1(i) implies that $\phi_{a_1, a_2, \infty}$ is admissible among tests that are also quadratic functions of $N_1$ and $N_2^*$ with the same rotation $U$ but different eigenvalues $(\bar{s}_1, \bar{s}_2)$; that is,

$$(N_1, N_2^*) U \begin{pmatrix} \bar{s}_1 & 0 \\ 0 & \bar{s}_2 \end{pmatrix} U' \begin{pmatrix} N_1 \\ N_2^* \end{pmatrix}.$$

Specifically, in the special case with $a_2 = 0$, the rotation matrix $U = I_2$ and $\phi_{a_1, 0, \infty}$ is admissible among level-$\alpha$ tests based on test statistics of the form $a_1 N_1^2 + (1 - a_1) N_2^{*2}$ for $a_1 \in [0, 1]$, which is similar to the result for the linear combination of S and K statistics in I. Andrews (2016).

Fourth, under strong identification and local alternatives, $a_1 = 0$ and $a_2 \rho = 0$ imply that $\phi_{a_1, a_2, \infty} = 1\{N_2^{*2} \geq C_\alpha\}$, which is the uniformly most powerful invariant test. When $\rho = 0$ and under local alternatives, $a_2 N_2^{*2}$ in the second and third terms of $\phi_{a_1, a_2, \infty}$ cancels out, which implies that $\phi_{a_1, a_2, \infty} = 1\{N_2^{*2} \geq C_\alpha\}$ as long as $a_1 = 0$.

Fifth, we note that both the rotation matrix $U$ and the eigenvalues $s_1$ and $s_2$ in (2.10) are functions of $(a_1, a_2)$. We choose this specific parametrization so that $\phi_{a_1, a_2, \infty}$ can be written as a linear combination of $AR^2(\beta_0)$, $LM^2(\beta_0)$, and $LM^{*2}(\beta_0)$. It is possible to use other parametrizations to combine $AR(\beta_0)$ and $LM^*(\beta_0)$. For example, let

$$O(\zeta) = \begin{pmatrix} \cos(\zeta) & -\sin(\zeta) \\ \sin(\zeta) & \cos(\zeta) \end{pmatrix}$$

be a rotation matrix with angle $\zeta$ and

$$\begin{pmatrix} AR^\dagger(\beta_0, \zeta) \\ LM^\dagger(\beta_0, \zeta) \end{pmatrix} = O(\zeta) \begin{pmatrix} AR(\beta_0) \\ LM^*(\beta_0) \end{pmatrix}.$$

Then, in the limit experiment, the linear combination test statistic can be written as

$$a N_1^{\dagger 2} + (1 - a) N_2^{\dagger 2}, \tag{2.11}$$

where $(N_1^\dagger, N_2^\dagger)$ are the limits of $(AR^\dagger(\beta_0, \zeta), LM^\dagger(\beta_0, \zeta))$ under either weak or strong identification. In the following, we use a minimax procedure to select the weights $(a_1, a_2)$ in our jackknife CLC test $\phi_{a_1, a_2, \infty}$. We can do the same to select $a$ and $\zeta$ for the new parametrization in (2.11). Under strong identification and local alternatives, Lemma 2.2 shows that $1\{LM^{*2}(\beta_0) \geq C_\alpha\}$ is the most powerful test against local alternatives, which is achieved by our jackknife CLC test $\phi_{a_1, a_2, \infty}$ with $a_1 = 0$ and $a_2 \rho = 0$. In this setting, the new parametrization does not bring any additional power.

⁶ We note that $C = \frac{\sum_{i \in [n]} \sum_{j \neq i} \Pi_i P_{ij} \Pi_j}{\sqrt{K}} = \frac{\sum_{i \in [n]} (1 - P_{ii}) \Pi_i^2 - \Pi' M \Pi}{\sqrt{K}}$, where $M = I - P$. If $\Pi' M \Pi$ and $\sum_{i \in [n]} P_{ii} \Pi_i^2$ are sufficiently large, $C$ can be negative. Mikusheva and Sun (2022) further assume that $\Pi' M \Pi \leq \frac{C' \Pi' \Pi}{K}$ for some constant $C' > 0$, which implies that $C > 0$.
A Conditional Linear Combination Test
In this section, we determine the weights $(a_1, a_2)$ in the jackknife CLC test via a minimax procedure. Under weak identification, the limit power of the jackknife CLC test with weights $(a_1, a_2)$ is

$$E\phi_{a_1, a_2, \infty} = E\,1\big\{a_1 Z_1^2(m_1(\Delta)) + a_2\big(\rho(\beta_0) Z_1(m_1(\Delta)) + (1 - \rho^2(\beta_0))^{1/2} Z_2(m_2(\Delta))\big)^2 + (1 - a_1 - a_2) Z_2^2(m_2(\Delta)) \geq C_\alpha(a_1, a_2; \rho(\beta_0))\big\},$$

where $m_1(\Delta)$ and $m_2(\Delta)$ are defined in Lemma 2.4, and $Z_1(\cdot)$ and $Z_2(\cdot)$ are independent. In this case, we can be explicit and write $\phi_{a_1, a_2, \infty} = \phi_{a_1, a_2, \infty}(\Delta)$. However, the limit power of the jackknife CLC test will typically remain unknown as the true parameter $\beta$ (and hence $\Delta$) is unknown. To overcome this issue, we follow I. Andrews (2016) and calibrate the power $E\phi_{a_1, a_2, \infty}(\delta)$, where $\delta$ ranges over all possible values that $\Delta$ can potentially take; we define $\phi_{a_1, a_2, \infty}(\delta)$ as well as the range of potential values of $\Delta$ below.
Let

$$D = Q_{XX} - (Q_{e(\beta_0)e(\beta_0)}, Q_{Xe(\beta_0)}) \begin{pmatrix} \Phi_1(\beta_0) & \Phi_{12}(\beta_0) \\ \Phi_{12}(\beta_0) & \Psi(\beta_0) \end{pmatrix}^{-1} \begin{pmatrix} \Phi_{13}(\beta_0) \\ \tau(\beta_0) \end{pmatrix}$$

be the residual from the projection of $Q_{XX}$ on $(Q_{e(\beta_0)e(\beta_0)}, Q_{Xe(\beta_0)})$. By (2.3), under weak identification, $D = \tilde{D} + o_p(1)$, $\tilde{D} \stackrel{d}{=} N(\mu_D, \sigma_D^2)$, where

$$\mu_D = C\left[1 - (\Delta^2, \Delta) \begin{pmatrix} \Phi_1(\beta_0) & \Phi_{12}(\beta_0) \\ \Phi_{12}(\beta_0) & \Psi(\beta_0) \end{pmatrix}^{-1} \begin{pmatrix} \Phi_{13}(\beta_0) \\ \tau(\beta_0) \end{pmatrix}\right]$$

and

$$\sigma_D^2 = \Upsilon - (\Phi_{13}(\beta_0), \tau(\beta_0)) \begin{pmatrix} \Phi_1(\beta_0) & \Phi_{12}(\beta_0) \\ \Phi_{12}(\beta_0) & \Psi(\beta_0) \end{pmatrix}^{-1} \begin{pmatrix} \Phi_{13}(\beta_0) \\ \tau(\beta_0) \end{pmatrix}.$$

We note that $\tilde{D}$ is a sufficient statistic for $\mu_D$, which contains information about the concentration parameter $C$ and is asymptotically independent of $AR(\beta_0)$, $LM(\beta_0)$, and hence $LM^*(\beta_0)$.
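Given the jackknife quadratic forms and the (estimated) variance components, $D$ is a single projection residual; a minimal sketch, with our own naming:

```python
import numpy as np

def residual_D(Q_XX, Q_ee_b0, Q_Xe_b0, Phi1, Phi12, Psi, Phi13, tau):
    """D = Q_XX minus its projection on (Q_{e(b0)e(b0)}, Q_{Xe(b0)}),
    with projection coefficients implied by the covariance matrix in (2.3)."""
    V = np.array([[Phi1, Phi12], [Phi12, Psi]])
    coef = np.linalg.solve(V, np.array([Phi13, tau]))
    return Q_XX - np.array([Q_ee_b0, Q_Xe_b0]) @ coef
```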
Under weak identification, we observe that $m_1(\Delta)$ and $m_2(\Delta)$ in Lemma 2.4 can be written as

$$\begin{pmatrix} m_1(\Delta) \\ m_2(\Delta) \end{pmatrix} = \begin{pmatrix} C_1(\Delta) \\ C_2(\Delta) \end{pmatrix} \mu_D, \tag{3.1}$$

where

$$\begin{pmatrix} C_1(\Delta) \\ C_2(\Delta) \end{pmatrix} \equiv \begin{pmatrix} \Phi_1^{-1/2}(\beta_0) \Delta^2 \\ (1 - \rho^2(\beta_0))^{-1/2}(\Psi^{-1/2}(\beta_0) \Delta - \rho(\beta_0) \Phi_1^{-1/2}(\beta_0) \Delta^2) \end{pmatrix} \times \left[1 - (\Delta^2, \Delta) \begin{pmatrix} \Phi_1(\beta_0) & \Phi_{12}(\beta_0) \\ \Phi_{12}(\beta_0) & \Psi(\beta_0) \end{pmatrix}^{-1} \begin{pmatrix} \Phi_{13}(\beta_0) \\ \tau(\beta_0) \end{pmatrix}\right]^{-1}. \tag{3.2}$$

By (3.1), we see that $\phi_{a_1, a_2, \infty} = \phi_{a_1, a_2, \infty}(\Delta)$ defined in (2.9) can be written as

$$1\big\{a_1 Z_1^2(C_1(\Delta)\mu_D) + a_2\big(\rho(\beta_0) Z_1(C_1(\Delta)\mu_D) + (1 - \rho^2(\beta_0))^{1/2} Z_2(C_2(\Delta)\mu_D)\big)^2 + (1 - a_1 - a_2) Z_2^2(C_2(\Delta)\mu_D) \geq C_\alpha(a_1, a_2; \rho(\beta_0))\big\}.$$

This motivates the definition

$$\phi_{a_1, a_2, \infty}(\delta) = 1\big\{a_1 Z_1^2(C_1(\delta)\mu_D) + a_2\big(\rho(\beta_0) Z_1(C_1(\delta)\mu_D) + (1 - \rho^2(\beta_0))^{1/2} Z_2(C_2(\delta)\mu_D)\big)^2 + (1 - a_1 - a_2) Z_2^2(C_2(\delta)\mu_D) \geq C_\alpha(a_1, a_2; \rho(\beta_0))\big\}. \tag{3.3}$$
To emphasize the dependence of $\phi_{a_1, a_2, \infty}(\delta)$ on $\mu_D$ and $\gamma(\beta_0)$, we further write $\phi_{a_1, a_2, \infty}(\delta)$ as $\phi_{a_1, a_2, \infty}(\delta, \mu_D, \gamma(\beta_0))$.

The range of values that $\Delta$ can take is defined as $D(\beta_0) = \{\delta : \delta + \beta_0 \in B\}$, where $B$ is the parameter space. For example, in their empirical application of returns to education, Mikusheva and Sun (2022) posit that the value of $\beta$ (i.e., the return to education) is from $-0.5$ to $0.5$ (i.e., $B = [-0.5, 0.5]$). We follow the same practice in the simulation based on calibrated data in Section 5.2 and the empirical application in Section 6.
Following the lead of I. Andrews (2016), we define the highest attainable power for each $\delta \in D(\beta_0)$ as $P_{\delta, \mu_D} = \sup_{(a_1, a_2) \in A(\mu_D, \gamma(\beta_0))} E\phi_{a_1, a_2, \infty}(\delta, \mu_D, \gamma(\beta_0))$, which means that

$$P_{\delta, \mu_D} - E\phi_{a_1, a_2, \infty}(\delta, \mu_D, \gamma(\beta_0))$$

is the power loss when the weights are set as $(a_1, a_2)$. Here we denote the domain of $(a_1, a_2)$ as $A(\mu_D, \gamma(\beta_0))$ and define it as

$$A(\mu_D, \gamma(\beta_0)) = \{(a_1, a_2) \in A_0 : a_1 \in [a(\mu_D, \gamma(\beta_0)), 1]\},$$

where $A_0 = \{(a_1, a_2) \in [0,1] \times [0,1] : a_1 + a_2 \leq \bar{a}\}$ for some $\bar{a} < 1$,

$$a(\mu_D, \gamma(\beta_0)) = \min\left(0.01, \ \frac{1.1 C_{\alpha,\max}(\rho(\beta_0)) \Phi_1(\beta_0) c_B(\beta_0)}{\Delta_*^4(\beta_0) \mu_D^2}\right),$$

$\Delta_*(\beta_0) = \Phi_1^{1/2}(\beta_0) \Psi^{-1/2}(\beta_0) \rho^{-1}(\beta_0)$ as defined after Lemma 2.3, and

$$c_B(\beta_0) = \sup_{\delta \in D(\beta_0)} \left[1 - (\delta^2, \delta) \begin{pmatrix} \Phi_1(\beta_0) & \Phi_{12}(\beta_0) \\ \Phi_{12}(\beta_0) & \Psi(\beta_0) \end{pmatrix}^{-1} \begin{pmatrix} \Phi_{13}(\beta_0) \\ \tau(\beta_0) \end{pmatrix}\right]^2.$$
The maximum power loss over $\delta \in D(\beta_0)$ can be viewed as a maximum regret. Then, we choose $(a_1, a_2)$ that minimizes the maximum regret; that is,

$$(a_1(\mu_D, \gamma(\beta_0)), a_2(\mu_D, \gamma(\beta_0))) \in \arg\min_{(a_1, a_2) \in A(\mu_D, \gamma(\beta_0))} \sup_{\delta \in D(\beta_0)} \left(P_{\delta, \mu_D} - E\phi_{a_1, a_2, \infty}(\delta, \mu_D, \gamma(\beta_0))\right). \tag{3.4}$$

Four remarks on the domain of $(a_1, a_2)$ (i.e., $A(\mu_D, \gamma(\beta_0))$) are in order. First, the lower bound $a(\mu_D, \gamma(\beta_0))$ is motivated by Theorem 2.1(iii). Second, under weak identification, $\mu_D$ is fixed, and $\frac{1.1 C_{\alpha,\max}(\rho(\beta_0)) \Phi_1(\beta_0) c_B(\beta_0)}{\Delta_*^4(\beta_0) \mu_D^2}$ may be larger than 0.01. In this case, we have $A(\mu_D, \gamma(\beta_0)) = \{(a_1, a_2) \in A_0 : a_1 \in [0.01, 1]\}$. In our simulations, the minimax $a_1$ never hits the lower bound, so setting the lower bound to 0.01 or 0 does not make any numerical difference. Third, under strong identification and local alternatives, $\frac{1.1 C_{\alpha,\max}(\rho(\beta_0)) \Phi_1(\beta_0) c_B(\beta_0)}{\Delta_*^4(\beta_0) \mu_D^2}$ will converge to zero so that

$$A(\mu_D, \gamma(\beta_0)) = \left\{(a_1, a_2) \in A_0 : a_1 \in \left[\frac{1.1 C_{\alpha,\max}(\rho(\beta_0)) \Phi_1(\beta_0) c_B(\beta_0)}{\Delta_*^4(\beta_0) \mu_D^2}, 1\right]\right\}.$$

We show in Theorem 4.2 below that in this case, the minimax jackknife CLC test converges to $1\{N_2^{*2} \geq C_\alpha\}$ defined in Lemma 2.2, which is the uniformly most powerful invariant test. Furthermore, the minimax $a_1$ satisfies the requirement in Theorem 2.1(iii) with $\bar{q} = 1.1 C_{\alpha,\max}(\rho(\beta_0))$ so that under strong identification, our CLC test has asymptotic power 1 against fixed alternatives, as shown in Theorem 4.3. Fourth, we require $\bar{a} < 1$ for some technical reason. Again, in our simulations, we never observe the minimax $a_1 + a_2$ hitting the upper bound, so setting the upper bound to $\bar{a}$ or 1 does not make any numerical difference.
In practice, we do not observe $\mu_D$ and $\gamma(\beta_0)$. Therefore, we follow I. Andrews (2016, Section 6) and consider the plug-in method. We can replace $\gamma(\beta_0)$ by its consistent estimator $\hat{\gamma}(\beta_0)$ introduced in Assumption 2. To obtain a proxy of $\mu_D$,⁷ we define

$$\hat{\sigma}_D = \left[\hat{\Upsilon} - (\hat{\Phi}_{13}(\beta_0), \hat{\tau}(\beta_0)) \begin{pmatrix} \hat{\Phi}_1(\beta_0) & \hat{\Phi}_{12}(\beta_0) \\ \hat{\Phi}_{12}(\beta_0) & \hat{\Psi}(\beta_0) \end{pmatrix}^{-1} \begin{pmatrix} \hat{\Phi}_{13}(\beta_0) \\ \hat{\tau}(\beta_0) \end{pmatrix}\right]^{1/2},$$

which is a function of $\hat{\gamma}(\beta_0)$ and a consistent estimator of $\sigma_D$ by Assumption 2. Then, under weak identification, we have $D^2/\hat{\sigma}_D^2 = \tilde{D}^2/\sigma_D^2 + o_p(1) \stackrel{d}{=} \chi_1^2(\mu_D^2/\sigma_D^2) + o_p(1)$, and $\tilde{D}^2/\sigma_D^2$ is a sufficient statistic for $\mu_D^2$. Let $\hat{r} = D^2/\hat{\sigma}_D^2$. We consider two estimators for $\mu_D$ as functions of $D$ and $\hat{\sigma}_D$, namely, $f_{pp}(D, \hat{\gamma}(\beta_0)) = \hat{\sigma}_D \sqrt{\hat{r}_{pp}}$ and $f_{krs}(D, \hat{\gamma}(\beta_0)) = \hat{\sigma}_D \sqrt{\hat{r}_{krs}}$, where $\hat{r}_{pp} = \max(\hat{r} - 1, 0)$ and

$$\hat{r}_{krs} = \hat{r} - 1 + \exp\left(-\frac{\hat{r}}{2}\right) \left[\sum_{j=0}^{\infty} \left(-\frac{\hat{r}}{2}\right)^j \frac{1}{j!(1+2j)}\right]^{-1}.$$

Specifically, Kubokawa, Robert, and Saleh (1993) show that $\hat{r}_{krs}$ is positive as long as $\hat{r} > 0$ and $\hat{r} \geq \hat{r}_{krs} \geq \hat{r} - 1$. It is also possible to consider the MLE based on a single observation $D^2/\hat{\sigma}_D^2$. However, such an estimator is harder to use because it does not have a closed-form expression.

⁷ In fact, as $\phi_{a_1, a_2, \infty}(\delta, \mu_D, \gamma(\beta_0))$ only depends on $\mu_D^2$, we aim to find a good estimator of $\mu_D^2$.
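The two noncentrality estimators have simple numerical implementations; in the sketch below the infinite KRS series is truncated, which should be adequate for moderate $\hat{r}$ (the names and the truncation length are our choices):

```python
import numpy as np
from scipy.special import gammaln

def r_pp(r):
    """Positive-part estimator of mu_D^2 / sigma_D^2 from r = D^2 / sigma_hat^2."""
    return max(r - 1.0, 0.0)

def r_krs(r, n_terms=200):
    """KRS estimator: r - 1 + exp(-r/2) / sum_{j>=0} (-r/2)^j / (j! (1+2j)).
    The infinite series is truncated at n_terms."""
    j = np.arange(n_terms)
    log_j_fact = gammaln(j + 1.0)
    terms = (-r / 2.0) ** j * np.exp(-log_j_fact) / (1.0 + 2.0 * j)
    return r - 1.0 + np.exp(-r / 2.0) / terms.sum()
```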
In practice, we estimate $E\phi_{a_1, a_2, \infty}(\delta, \mu_D, \gamma(\beta_0))$ by $E^* \hat{\phi}_{a_1, a_2, s}(\delta, D, \hat{\gamma}(\beta_0))$ for $s \in \{pp, krs\}$, where

$$\hat{\phi}_{a_1, a_2, s}(\delta, D, \hat{\gamma}(\beta_0)) = 1\big\{a_1 Z_1^2(\hat{C}_1(\delta) f_s(D, \hat{\gamma}(\beta_0))) + a_2\big(\hat{\rho}(\beta_0) Z_1(\hat{C}_1(\delta) f_s(D, \hat{\gamma}(\beta_0))) + (1 - \hat{\rho}^2(\beta_0))^{1/2} Z_2(\hat{C}_2(\delta) f_s(D, \hat{\gamma}(\beta_0)))\big)^2 + (1 - a_1 - a_2) Z_2^2(\hat{C}_2(\delta) f_s(D, \hat{\gamma}(\beta_0))) \geq C_\alpha(a_1, a_2; \hat{\rho}(\beta_0))\big\}, \tag{3.5}$$

and $(\hat{C}_1(\delta), \hat{C}_2(\delta))$ are defined as $(C_1(\delta), C_2(\delta))$ in (3.2) with $\gamma(\beta_0)$ replaced by $\hat{\gamma}(\beta_0)$; that is,

$$\begin{pmatrix} \hat{C}_1(\delta) \\ \hat{C}_2(\delta) \end{pmatrix} \equiv \begin{pmatrix} \hat{\Phi}_1^{-1/2}(\beta_0) \delta^2 \\ (1 - \hat{\rho}^2(\beta_0))^{-1/2}(\hat{\Psi}^{-1/2}(\beta_0) \delta - \hat{\rho}(\beta_0) \hat{\Phi}_1^{-1/2}(\beta_0) \delta^2) \end{pmatrix} \times \left[1 - (\delta^2, \delta) \begin{pmatrix} \hat{\Phi}_1(\beta_0) & \hat{\Phi}_{12}(\beta_0) \\ \hat{\Phi}_{12}(\beta_0) & \hat{\Psi}(\beta_0) \end{pmatrix}^{-1} \begin{pmatrix} \hat{\Phi}_{13}(\beta_0) \\ \hat{\tau}(\beta_0) \end{pmatrix}\right]^{-1}.$$

Let $\hat{P}_{\delta, s}(D, \hat{\gamma}(\beta_0)) = \sup_{(a_1, a_2) \in \hat{A}(f_s(D, \hat{\gamma}(\beta_0)), \hat{\gamma}(\beta_0))} E^* \hat{\phi}_{a_1, a_2, s}(\delta, D, \hat{\gamma}(\beta_0))$. Then, for $s \in \{pp, krs\}$, we can estimate $(a_1(\mu_D, \gamma(\beta_0)), a_2(\mu_D, \gamma(\beta_0)))$ in (3.4) by $\hat{A}_s(D, \hat{\gamma}(\beta_0)) = (\hat{A}_{1,s}(D, \hat{\gamma}(\beta_0)), \hat{A}_{2,s}(D, \hat{\gamma}(\beta_0)))$ defined as

$$\hat{A}_s(D, \hat{\gamma}(\beta_0)) \in \arg\min_{(a_1, a_2) \in \hat{A}(f_s(D, \hat{\gamma}(\beta_0)), \hat{\gamma}(\beta_0))} \sup_{\delta \in D(\beta_0)} \left(\hat{P}_{\delta, s}(D, \hat{\gamma}(\beta_0)) - E^* \hat{\phi}_{a_1, a_2, s}(\delta, D, \hat{\gamma}(\beta_0))\right), \tag{3.6}$$

where $\hat{\phi}_{a_1, a_2, s}(\delta, D, \hat{\gamma}(\beta_0))$ is defined in (3.5),

$$\hat{A}(f_s(D, \hat{\gamma}(\beta_0)), \hat{\gamma}(\beta_0)) = \{(a_1, a_2) \in A_0 : a_1 \in [\hat{a}(f_s(D, \hat{\gamma}(\beta_0)), \hat{\gamma}(\beta_0)), \bar{a}]\},$$

$$\hat{a}(f_s(D, \hat{\gamma}(\beta_0)), \hat{\gamma}(\beta_0)) = \min\left(0.01, \ \frac{1.1 C_{\alpha,\max}(\hat{\rho}(\beta_0)) \hat{\Phi}_1(\beta_0) \hat{c}_B(\beta_0)}{\hat{\Delta}_*^4(\beta_0) f_s^2(D, \hat{\gamma}(\beta_0))}\right),$$

$$\hat{c}_B(\beta_0) = \sup_{\delta \in D(\beta_0)} \left[1 - (\delta^2, \delta) \begin{pmatrix} \hat{\Phi}_1(\beta_0) & \hat{\Phi}_{12}(\beta_0) \\ \hat{\Phi}_{12}(\beta_0) & \hat{\Psi}(\beta_0) \end{pmatrix}^{-1} \begin{pmatrix} \hat{\Phi}_{13}(\beta_0) \\ \hat{\tau}(\beta_0) \end{pmatrix}\right]^2,$$

and $\hat{\Delta}_*(\beta_0) = \hat{\Phi}_1^{1/2}(\beta_0) \hat{\Psi}^{-1/2}(\beta_0) \hat{\rho}^{-1}(\beta_0)$. Then, the feasible jackknife CLC test is, for $s \in \{pp, krs\}$,

$$\hat{\phi}_{\hat{A}_s(D, \hat{\gamma}(\beta_0))} = 1\big\{\hat{A}_{1,s}(D, \hat{\gamma}(\beta_0)) AR^2(\beta_0) + \hat{A}_{2,s}(D, \hat{\gamma}(\beta_0)) LM^2(\beta_0) + (1 - \hat{A}_{1,s}(D, \hat{\gamma}(\beta_0)) - \hat{A}_{2,s}(D, \hat{\gamma}(\beta_0))) LM^{*2}(\beta_0) \geq C_\alpha(\hat{A}_s(D, \hat{\gamma}(\beta_0)); \hat{\rho}(\beta_0))\big\}. \tag{3.7}$$
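A grid-search rendering of the minimax step (3.6) may clarify the mechanics: for each candidate $(a_1, a_2)$, simulate the power curve over $\delta$, compute the shortfall from the best attainable power, and keep the pair with the smallest maximum shortfall. Everything in the sketch (names, grid, Monte Carlo sizes) is our own choice rather than the authors' implementation:

```python
import numpy as np

def minimax_weights(mu_hat, rho_hat, C1, C2, deltas, grid, alpha=0.05,
                    n_draws=100_000, seed=0):
    """Approximate (3.6) on a finite grid of weights.

    mu_hat : f_s(D, gamma_hat), the plugged-in value of mu_D;
    C1, C2 : callables for C1_hat(delta), C2_hat(delta) in (3.2);
    grid   : list of admissible (a1, a2) pairs, e.g. from A_hat."""
    rng = np.random.default_rng(seed)
    z1, z2 = rng.standard_normal((2, n_draws))
    power = np.empty((len(grid), len(deltas)))
    for i, (a1, a2) in enumerate(grid):
        stat0 = a1 * z1**2 \
            + a2 * (rho_hat * z1 + np.sqrt(1 - rho_hat**2) * z2)**2 \
            + (1 - a1 - a2) * z2**2
        cv = np.quantile(stat0, 1 - alpha)  # C_alpha(a1, a2; rho_hat)
        for k, d in enumerate(deltas):
            m1, m2 = C1(d) * mu_hat, C2(d) * mu_hat
            s1, s2 = z1 + m1, z2 + m2  # shift the same draws by the means
            stat = a1 * s1**2 \
                + a2 * (rho_hat * s1 + np.sqrt(1 - rho_hat**2) * s2)**2 \
                + (1 - a1 - a2) * s2**2
            power[i, k] = (stat >= cv).mean()
    regret = (power.max(axis=0) - power).max(axis=1)  # sup over delta
    return grid[int(regret.argmin())]
```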
Asymptotic Properties
We first consider the asymptotic properties of the jackknife CLC test under weak identification and fixed alternatives, in which $C$ and $\Delta$ are treated as fixed so that we have $D \rightsquigarrow \tilde{D} \stackrel{d}{=} N(\mu_D, \sigma_D^2)$. We see from (3.4) and (3.6) that $\hat{A}_s(d, r) = (a_1(f_s(d, r), r), a_2(f_s(d, r), r))$ for $(d, r) \in \mathbb{R} \times \Gamma$, where $\Gamma$ is the parameter space for $\gamma(\beta_0)$ and $s \in \{pp, krs\}$. We make the following assumption on $\hat{A}_s(\cdot)$.
Assumption 3. Suppose we are under weak identification with a fixed $\beta_0$. Let $S_s$ be the set of discontinuities of $\hat{A}_s(\cdot, \gamma(\beta_0)): \mathbb{R} \to [0,1] \times [0,1]$. Then, we assume $\hat{A}_s(d, r)$ is continuous in $r$ at $r = \gamma(\beta_0)$ for any $d \in \mathbb{R} \setminus S_s$, and the Lebesgue measure of $S_s$ is zero for $s \in \{pp, krs\}$.

Assumption 3 is a technical condition that allows us to apply the continuous mapping theorem. It is mild because $\hat{A}_s(\cdot)$ is allowed to be discontinuous in its first argument. In practice, we can approximate $\hat{A}_s(\cdot)$ by a step function defined over a grid of $d$ so that there is a finite number of discontinuities. The continuity of $\hat{A}_s(\cdot)$ in its second argument is due to the smoothness of the bivariate normal PDF with respect to the covariance matrix. Therefore, in this case, Assumption 3 holds automatically.
Theorem 4.1. Suppose we are under weak identification and fixed alternatives and that Assumptions 1, 2, and 3 hold. Then, for $s \in \{pp, krs\}$,

$$\hat{A}_s(D, \hat{\gamma}(\beta_0)) \rightsquigarrow \hat{A}_s(\tilde{D}, \gamma(\beta_0)) = (a_1(f_s(\tilde{D}, \gamma(\beta_0)), \gamma(\beta_0)), a_2(f_s(\tilde{D}, \gamma(\beta_0)), \gamma(\beta_0)))$$

and⁸

$$E\hat{\phi}_{\hat{A}_s(D, \hat{\gamma}(\beta_0))} \to E\phi_{a_1(f_s(\tilde{D}, \gamma(\beta_0)), \gamma(\beta_0)), a_2(f_s(\tilde{D}, \gamma(\beta_0)), \gamma(\beta_0)), \infty}(\Delta, \mu_D, \gamma(\beta_0)),$$

where $\phi_{a_1, a_2, \infty}$ is defined in (3.3) and $a_l(f_s(\tilde{D}, \gamma(\beta_0)), \gamma(\beta_0))$ is interpreted as $a_l(\mu_D, \gamma(\beta_0))$ defined in (3.4) with $\mu_D$ replaced by $f_s(\tilde{D}, \gamma(\beta_0))$ for $l = 1, 2$. In addition, let $BL_1$ be the class of functions $f(\cdot)$ that are bounded and Lipschitz with Lipschitz constant 1. Then, if the null hypothesis holds such that $\Delta = 0$, we have

$$E(\hat{\phi}_{\hat{A}_s(D, \hat{\gamma}(\beta_0))} - \alpha) f(D) \to 0, \qquad \forall f \in BL_1.$$

⁸ We assume that $C/0 = +\infty$ if $C > 0$ and $\min(C, +\infty) = C$.

Several remarks on Theorem 4.1 are in order. First, Theorem 4.1 shows that the asymptotic power of the CLC test with the weights $(a_1, a_2)$ selected by the minimax procedure is the same as that in the limit experiment when the weights equal $\hat{A}_s(\tilde{D}, \gamma(\beta_0))$, which is a function of $\tilde{D}$. Given that $\tilde{D}$ is independent of both normal random variables in $\phi_{a_1, a_2, \infty}(\delta)$ in (3.3), the jackknife CLC test is asymptotically admissible conditional on $\tilde{D}$ among the tests specified in Theorem 2.1(i). Second, we see that the power of our jackknife CLC test is $E\phi_{\hat{A}_s(\tilde{D}, \gamma(\beta_0)), \infty}(\Delta, \mu_D, \gamma(\beta_0))$, which does not exactly match the minimax power $E\phi_{a_1(\mu_D, \gamma(\beta_0)), a_2(\mu_D, \gamma(\beta_0)), \infty}(\Delta, \mu_D, \gamma(\beta_0))$ in the limit problem. This is because under weak identification, it is impossible to consistently estimate $\mu_D$, or equivalently, the concentration parameter. A similar result holds under weak identification with a fixed number of moment conditions in I. Andrews (2016). The best we can do is to approximate $\mu_D$ by reasonable estimators based on $\tilde{D}$ such as $f_{pp}(\tilde{D}, \gamma(\beta_0))$ and $f_{krs}(\tilde{D}, \gamma(\beta_0))$. Last, Theorem 4.1 implies that our jackknife CLC test controls size asymptotically conditionally on $\tilde{D}$, and thus, unconditionally.
Next, we consider the performance of $\hat{\phi}_{\hat{A}_s(D, \hat{\gamma}(\beta_0))}$ defined in (3.7) under strong identification and local alternatives.

Theorem 4.2. Suppose Assumptions 1 and 2 hold and we are under strong identification with local alternatives as defined in Lemma 2.1. Then, for $s \in \{pp, krs\}$,

$$\hat{A}_{1,s}(D, \hat{\gamma}(\beta_0)) \xrightarrow{p} 0, \qquad \hat{A}_{2,s}(D, \hat{\gamma}(\beta_0)) \rho \xrightarrow{p} 0, \qquad \text{and} \qquad \hat{\phi}_{\hat{A}_s(D, \hat{\gamma}(\beta_0))} \rightsquigarrow 1\{N_2^{*2} \geq C_\alpha\},$$

where $N_2^* \stackrel{d}{=} N\left(\frac{\bar{\Delta}\bar{C}}{[(1-\rho^2)\Psi]^{1/2}}, 1\right)$.
Three remarks are in order. First, Theorem 4.2 shows that under strong identification and local alternatives, our jackknife CLC test converges to the uniformly most powerful level-α test characterized in Lemma 2.2. Therefore, it is more powerful than the jackknife AR and LM tests.
Second, under strong identification and local alternatives, the JIVE-based Wald test proposed by Chao et al. (2012) is asymptotically equivalent to the jackknife LM test, which implies that the jackknife AR and JIVE-Wald-based two-step test in Mikusheva and Sun (2022) is also dominated by the jackknife CLC test. Third, Theorem 4.2 shows that our jackknife CLC test is adaptive.
In practice, econometricians do not know whether or not the alternative β 0 is close to the null β. Therefore, our jackknife CLC test calibrates the power over all of the values δ can take (i.e., δ ∈ D(β 0 )), which includes both local and fixed alternatives. Yet, Theorem 4.2 shows that the minimax procedure can produce the most powerful test as if it is known that β 0 is under local alternatives.
Last, we show that, under strong identification, the jackknife CLC test φ As( D, γ(β 0 )) defined in (3.7) has asymptotic power 1 against fixed alternatives.
− ∆ 2 C, Q Xe(β 0 ) − ∆C, Q XX − C) = O p (1)
. Further suppose that we are under strong identification with fixed alternatives so that ∆ = β − β 0 is nonzero and fixed. Then, we have φ As( D, γ(β 0 )) p −→ 1.
Simulation
Power Curve Simulation for the Limit Problem
In this section, we simulate the power behavior of tests under the limit problem described in Section 2. We compare the following tests with a nominal rate of 5%: our jackknife CLC test in which µ D is estimated by the methods pp and krs, respectively, the one-sided jackknife AR test defined in (2.5), the jackknife LM test defined in (2.6), and the test that is based on the orthogonalized jackknife LM statistic LM * 2 (β 0 ) defined in this paper. The results below are based on 5,000 simulation replications.
We set the parameter space for β as B = [−6/C, 6/C], where C = 3 and 6 represent weak and strong identification, respectively. The choice of parameter space follows that in I.Andrews (2016, Section 7.2). We set β 0 = 0, and the values of the covariance matrix in (2.2) are set as follows:
Φ 1 = Ψ = Υ = 1, and Φ 12 = Φ 13 = τ = ρ, where ρ ∈ {0.2, 0.4, 0.7, 0.9}. We then compute γ(β 0 ) based on (2.4) as β ranges over B and generate AR(β 0 ) and LM (β 0 ) based on (2.3). Last, we implement our CLC test purely based on AR(β 0 ), LM (β 0 ), γ(β 0 ), and B without assuming the knowledge of (C, β, Φ 1 , Ψ, Υ, Φ 12 , Φ 13 , τ ). We have tried to simulate under alternative settings of the covariance matrix, and the obtained patterns of the power behavior are very similar.
Figures 1-4 plot the power curves for ρ = 0.2, 0.4, 0.7, and 0.9. In each figure, we report the results under both weak and strong identification (C = 3 and 6, respectively). We observe that overall, the two jackknife CLC tests have the best power properties in terms of maximum regret.
Especially when the identification is strong (C = 6) and/or the degree of endogeneity is not very low (ρ = 0.4, 0.7, or 0.9), the jackknife CLC tests outperform their AR and LM counterparts by a large margin. In addition, we notice that when C = 3, for some parameter values LM * (β 0 ) can suffer from substantial declines in power relative to the other tests, which is in line with our theoretical predictions. By contrast, our jackknife CLC tests are able to guard against such substantial power loss because of the adaptive nature of their minimax procedure.
Simulation Based on Calibrated Data
We follow Angrist and Frandsen (2022) and Mikusheva and Sun (2022) and calibrate a data generating process (DGP) based on the 1980 census dataset from Angrist and Krueger (1991). Let the instruments be Z i = (1{Q i = q, C i = c}) q∈{2,3,4},c∈{31,··· ,39} , (1{Q i = q, P i = p}) q∈{2,3,4},p∈{51 states} ,
W i = 1{C i = c, P i = p} c∈{30,...,39},p∈{51 states} ,
which is a 510 × 1 matrix.
As in Angrist and Frandsen (2022), using the full 1980 sample (consisting of 329,509 individuals),
we first obtain the averageX i for each QOB-YOB-POB cell; we call thiss(q, c, p). Next we use LIML to estimate the structural parameters in the following linear IV regression:
Y i =X i β X +W i β W + e i , X i = Z i Γ Z +W i Γ W + V i ,
whereX is endogenous and is instrumented by Z i andW i is the exogenous control variable. Denote
the LIML estimate for β X,W ≡ (β X , β W ) as β LIM L = ( β LIM L,X , β LIM L,W ). We let y(C i , P i ) = W i β LIM L,W and ω(Q i , C i , P i ) =Ỹ i −X i β LIM L,X −W i β LIM L,W .
Based on the LIML estimate and the calibrated ω(Q i , C i , P i ), we simulate the following two DGPs:
1. DGP 1:
y i =ȳ + β s i + ω(Q i , C i , P i )(ν i + κ 2 ξ i ) (5.1) s i ∼ P oisson(µ i ),
where β is the parameter of interest, ν i and ξ i are independent standard normal,ȳ = 1 n n i=1 y(C i , P i ), µ i ≡ max{1, γ 0 + γ Z Z i + κ 1 ν i }, and γ 0 + γ Z Z i is the projection ofs i (q, c, p) onto a constant and Z i . We set κ 1 = 1.7 and κ 2 = 0.1 as in Mikusheva and Sun (2022).
2. DGP 2: Same as DGP 1 except that κ 1 = 2.7 and s i ∼ P oisson(2µ i )/2 . We consider varying sample size n based on 0.5%, 1%, and 1.5% of the full sample size. Upon obtaining n observations, we exclude instruments with n i=1 Z ij < 5. This results in small, medium, and large samples with 1,648, 3,296, and 4,943 observations and 119, 142, and 150 numbers of IVs, respectively. Our DGP 1 is exactly the same as that in Mikusheva and Sun (2022), which has ρ = 0.41. Our DGP 2 has ρ = 0.7. The concentration parameters (defined as C/Υ 1/2 ) for small, medium, and large samples are 2.15, 3.62, and 4.85, respectively, for DGP 1, and 2.38, 3.97, 5.28, respectively, for DGP 2.
We emphasize that following Angrist and Frandsen (2022) and Mikusheva and Sun (2022), we only useW i to compute the LIML estimator and calibrate ω(Q i , C i , P i ), but do not use it to generate new data. Therefore, for the simulated data, the outcome variable isỹ i , the endogenous variable iss i , the IV Z i is viewed to be fixed, and the exogenous control variable is just an intercept. We then denote the demeaned versions ofỹ i ands i as Y i and X i , respectively, in (2.1) and implement various inference methods described below. Following Mikusheva and Sun (2022), we test the null hypothesis that β = β 0 for β 0 = 0.1 while varying the true value β ∈ B. The parameter space is set as B = [−0.5, 0.5], which is consistent with the choice of parameter space for the empirical application below. The results below are based on 1,000 simulation repetitions. We provide more details about the implementation in Section B in the Online Supplement.
We compare the following tests with a nominal rate of 5%:
1. pp: our jackknife CLC test when µ D is estimated by the method pp.
2. krs: our jackknife CLC test when µ D is estimated by the method krs. First, all methods control size well because they are all weak identification robust. Second, the performance of the jackknife CLC test with krs is slightly better than than that with pp, which is consistent with the power curve simulation in Section 5.1. Third, in DGP 1 with a small sample size, the power of the jackknife AR test is about 9.2% higher than that of the krs test when β is around -0.3. However, for alternatives close to the null (e.g., when β is around 0), the power of the krs test is 24% higher, which implies that the power of the krs test is still better than that for the jackknife AR test in the minimax sense. The power of the jackknife LM tests is similar to that of the krs test in DGP 1 with a small sample size. Fourth, for the rest of the scenarios, the power of the krs test is the highest in most regions of the parameter space. The power of the jackknife AR and LM is at most 0.7% higher than that of the krs test at some point. For DGP 1 with medium and large sample sizes, the maximum power gaps between our krs test and the jackknife LM are about 8.6% and 5.6%, and about 43.2% and 50% compared with the jackknife AR.
Furthermore, they are 23.3%, 19.5%, and 18.5% compared with the jackknife LM for DGP 2 with small, medium, and large sample sizes, respectively, and about 41.5%, 55.3%, and 55.5% compared with the jackknife AR. Fifth, Figures 7 and 8 show the average values of (a 1 , a 2 ), the weights of the jackknife AR and LM for our CLC tests, under DGPs 1 and 2, respectively. We observe that the minimax procedure does not put all the weights on the LM * test. Furthermore, because the jackknife AR is more powerful on the left side of the parameter space relative to the right, the minimax weights for AR 2 (β 0 ) (a 1 ) are higher on the left than on the right. The summation of a 1 and a 2 is the lowest for alternatives that are close to the null, which is consistent with our theory that LM * is most powerful for local alternatives. Compared with those for DGP 1, the weights for DGP 2 are lower in general because the identification is slightly stronger in this case. Last, although the power of LM * 2 (β 0 ) drops at both ends of the parameter space, the power of the jackknife CLC tests remains stable. From Figures 7 and 8, we see that in those regions, more weights are put on AR 2 (β 0 ) and LM 2 (β 0 ). The jackknife AR test is defined in (2.5) with Φ 1 being the cross-fit estimator in Mikusheva and Sun (2022). The jackknife LM test is defined in (2.6) with the cross-fit estimator for Ψ(β 0 ). The pp and krs tests are our jackknife CLC tests. The two-step procedure is given by Mikusheva and Sun (2022, Section 5). Specifically, the researcher accepts the null if F > 9.98 and W ald(β 0 ) < C 0.02 10 or if F ≤ 9.98 and AR(β 0 ) < z 0.02 . In the case of 180 instruments, because F = 13.42 > 9.98, the lower and upper bounds of the 95% confidence interval (CI) for the two-step procedure correspond respectively to the minimum and maximum of the set {β 0 ∈ : W ald(β 0 ) < C 0.02 }; similarly, for the 1,530 instruments, as F = 6.32 ≤ 9.98, the lower and 9 The dataset can be downloaded from MIT Economics, Angrist Data Archive, https://economics.mit.edu/faculty/angrist/data1/data/angkru1991. 10 F = QXX / Υ, where Υ is the cross-fit estimator. W ald(β0) is defined as
β −β 0 V 2
, whereβ is the JIVE estimator andV is a cross-fit estimator of the asymptotic variance ofβ. We refer interested readers to Mikusheva and Sun (2022, Section 5) for more details. upper bounds of the CI for the two-step procedure correspond respectively to the minimum and maximum of the set {β 0 ∈ : AR(β 0 ) < z 0.02 }. We also report the 95% Wald test CI based on the JIVE estimator, denoted as JIVE-t. Notes: The F 's for 180 and 1,530 instruments are 13.42 and 6.32, respectively. The grid-search used for our confidence interval was over 10,000 equidistant grid-points for β 0 ∈ [−0.5, 0.5]. Our jackknife AR confidence interval for 1530 instruments differs from that in Mikusheva and Sun (2022) because they used year-of-birth 1930-1938 dummies for the QOB-YOB-POB interactions, whereas we used 1930-1939 dummies. More details are provided in Section C in the Online Supplement. Table 1 highlights that the CIs generated by our jackknife CLC tests are the shortest among all the weak identification robust CIs (i.e., pp, krs, jackknife AR, jackknife LM, and two-step).
Furthermore, the jackknife CLC CIs are 7.6% and 2.0% shorter than the non-robust JIVE-t CIs with 180 and 1,530 instruments, respectively, which is in line with our theoretical result that the CLC tests are adaptive to the identification strength and efficient under strong identification.
Conclusion
In this paper, we consider a jackknife CLC test that is adaptive to the identification strength in IV regressions with many weak instruments. We show that the proposed test is (i) robust to weak identification, many instruments, and heteroskedasticity, (2)
A Verifying Assumption 2 A.1 Standard Estimators
In this section, we maintain Assumption 4, which is stated below and just Mikusheva and Sun (2022, Assumption 1).
Assumption 4. The observations (Y i , X i , Z i ) i∈[n] are i.i.d. Suppose P is an n×n projection matrix of rank K, K → ∞ as n → ∞ and there exists a constant δ such that P ii ≤ δ < 1.
Following the results in Chao et al. (2012) and Mikusheva and Sun (2022), we can show that under either weak or strong identification, Assumption 1 in the paper holds:
Q ee Q Xe Q XX − C N 0 0 0 , Φ 1 Φ 12 Φ 13 Φ 12 Ψ τ Φ 13 τ Υ , (A.1) where σ 2 i = Ee 2 i , η 2 i = EV 2 i , γ i = Ee i V i , ω i = j =i P ij Π j , Φ 1 = lim n→∞ 2 K i∈[n] j =i P 2 ij σ 2 i σ 2 j , Φ 12 = lim n→∞ 1 K i∈[n] j =i P 2 ij (γ j σ 2 i + γ i σ 2 j ), Φ 13 = lim n→∞ 2 K i∈[n] j =i P 2 ij γ i γ j , Ψ = lim n→∞ 1 K i∈[n] j =i P 2 ij (η 2 i σ 2 j + γ i γ j ) + 1 K i∈[n] ω 2 i σ 2 i , τ = lim n→∞ 2 K i∈[n] j =i P 2 ij η 2 i γ j + 2 K i∈[n] ω 2 i γ i , and Υ = lim n→∞ 2 K i∈[n] j =i P 2 ij η 2 i η 2 j + 4 K i∈[n] ω 2 i η 2 i .
We note that the standard estimators of the above variance components proposed by Crudu Specifically, let
Φ 1 (β 0 ) = 2 K i∈[n] j =i P 2 ij e 2 i (β 0 )e 2 j (β 0 ), Φ 12 (β 0 ) = 1 K i∈[n] j =i P 2 ij (X j e j (β 0 )e 2 i (β 0 ) + X i e i (β 0 )e 2 j (β 0 )), Φ 13 (β 0 ) = 2 K i∈[n] j =i P 2 ij X i e i (β 0 )X j e j (β 0 ), Ψ(β 0 ) = 1 K i∈[n] ( j =i P ij X j ) 2 e 2 i (β 0 ) + 1 K i∈[n] j =i P 2 ij X i e i (β 0 )X j e j (β 0 )), τ (β 0 ) = 1 K i∈[n] ( j =i P ij X j ) 2 X i e i (β 0 ) + 1 K i∈[n] j =i P 2 ij X 2 i X j e j (β 0 ), and Υ = 2 K i∈[n] j =i P 2 ij X 2 i X 2 j . Assumption 5. Suppose max i∈[n] |Π i | ≤ C, p 1/4 n Π Π K = o(1), and E(e 6 i + V 6 i ) < ∞, where p n = max i∈[n] P ii .
Two remarks on Assumption 5 are in order. First, max i∈[n] |Π i | ≤ C is mild because Π i = EX i . Then, Mikusheva and Sun (2022) consider the cross-fit estimators for Φ 1 (β 0 ), Ψ(β 0 ), and Υ defined as
Φ 1 (β 0 ) = 2 K i∈[n] j =i P 2 ij [e i (β 0 )M i e(β 0 )][e j (β 0 )M j e(β 0 )], Ψ(β 0 ) = 1 K i∈[n] ( j =i P ij X j ) 2 e i (β 0 )M i e(β 0 ) M ii + i∈[n] j =i P 2 ij M i Xe i (β 0 )M j Xe j (β 0 ) , and Υ = 2 K i∈[n] j =i P 2 ij [X i (β 0 )M i X][X j (β 0 )M j X],
where X and e(β 0 ) are the column vectors that collect all X i and e i (β 0 ), respectively. Following their lead, we can construct the cross-fit estimators for the rest three elements in γ(β 0 ) as follows:
Φ 12 (β 0 ) = 1 K i∈[n] j =i P 2 ij (M j Xe j (β 0 )e i (β 0 )M i e(β 0 ) + M i Xe i (β 0 )e j (β 0 )M j e(β 0 )), Φ 13 (β 0 ) = 2 K i∈[n] j =i P 2 ij M i Xe i (β 0 )M j Xe j (β 0 ), and τ (β 0 ) = 1 K i∈[n] j =i P 2 ij (X i M i X)(M j Xe j (β 0 )) + 1 K i∈[n] ( j =i P ij X j ) 2 e i (β 0 )M i X 2M ii + X i M i e(β 0 ) 2M ii ,
Assumption 6. Suppose Assumption 5 holds. Further suppose that Π M Π ≤ CΠ Π K for some constant C > 0.
Compared with the assumptions in Mikusheva and Sun (2022), Assumption 6 further requires that max i∈[n] |Π i | ≤ C. However, it allows for the case that Π Π/K → c, where c is a nonzero constant, as long as p n = o(1), which is weaker than those in Mikusheva and Sun (2022) (e.g.,
Theorems 3 and 5 in their paper require Π Π/K → 0 and Π Π/K 2/3 → 0, respectively, for the consistency of the cross-fit variance estimators).
B Details for Simulations Based on Calibrated Data
The DGP contains only the intercept as the control variable. Therefore, we implement our jackknife CLC test on the demeaned version of (ỹ i ,s i , Z i ). The parameter space is B = [−0.5, 0.5]. We test the null hypothesis that β = β 0 for β 0 = 0.1 while varying the true value β over 30 equal-spaced grids over B. The grids for δ is the grid for β minus β 0 . We generate grids of (a 1 , a 2 ) as a 1 = sin 2 (t 1 ) and a 2 = cos 2 (t 1 ) sin 2 (t 2 ) with t 1 taking values over 15 equal-spaced grids over [a 1/2 (f s ( D, γ(β 0 )), π/2] and t 2 taking values over 15 equal-spaced grids over [0, π/2]. We gauge E * φ a 1 ,a 2 ,s (δ, D, γ(β 0 )) via a Monte Carlo integration with N = 2000 draws of independent standard normal random variables. In practice, it is rare but possible that A s ( D, γ(β 0 )) defined in (3.6) is not unique. To increase numerical stability, we follow I. Andrews (2016) and allow for some slackness in the minimization. Let G a be the grid of (a 1 , a 2 ) mentioned above, Q(a 1 , a 2 ) = sup δ∈D(β 0 ) (P δ,s ( D, γ(β 0 ))−E * φ a 1 ,a 2 ,s (δ, D, γ(β 0 ))), Q min = min (a 1 ,a 2 )∈Ga Q(a 1 , a 2 ) + 1/n, where n is the sample size, and Ξ = {(a 1 , a 2 ) ∈ G a : Q(a) ≤ Q min + ( Q min (1 − Q min )) 1/2 (2 log(log(N ))) 1/2 N −1/2 }.
The slackness term in the definition of Ξ is due to the law of the iterated logarithm for sum of Bernoulli random variables and captures the randomness of the Monte Carlo integration. Suppose there are L elements in Ξ, which are denoted as {(a 1,l , a 2,l )} L l=1 . We then define A s ( D, γ(β 0 )) as (a 1, L/2 , a 2, L/2 ). We use the cross-fit estimators defined in Section A.2 throughout the simulation.
C Details for Empirical Application
We consider the 1980s census of 329,509 men born in 1930-1939 based on Angrist and Krueger's (1991) dataset. The model for 180 instruments follows Mikusheva and Sun (2022), which can be written explicitly as solve for the range of β where the null hypothesis cannot be rejected. Specifically, we can write the above model as
ln W i = Constant + H i ζ + 38 c=30 Y OB i,c ξ c + s =56 P OB i,s η s + βE i + γ i E i = Constant + H i λ + 38 c=30 Y OB i,c µ c + s =56 P OB i,s α s + 3 j=1 s =56 QOB i,j P OB i,s δ c,s + 3 j=1 39 c=30 QOB i,j Y OB i,c θ j,c + ε i , where W i is the weekly wage, E i is the education of the i-th individual, H i is a vector of covariates, 11 Y OB i,ln W i = C i Γ + βE i + γ i E i = C i τ + Z i Θ + ε i ,
where C i is a (329,509×71)-matrix of controls containing the first four terms on the right-hand of the first equation, while Z i is the (329,509×180)-matrix of instruments containing the first two terms in the third line. We can then partial out the controls C i by multiplying each equation by 11 The covariates we consider are: RACE, MARRIED, SMSA, NEWENG, MIDATL, ENOCENT, WNOCENT, SOATL, ESOCENT, WSOCENT, and MT.
12 The state numbers are from 1 to 56, excluding (3,7,14,43,52), corresponding to U.S. state codes.
the residual matrix I − C(C C) −1 C to obtain a form analogous to that in the main text:
Y i = X i β + e i , X i = Π i + v i .
Then, at each grid-point we take β 0 = β and compute AR(β 0 ), LM (β 0 ), W ald(β 0 ), φ App( D, γ(β 0 )) and φ A krs ( D, γ(β 0 )) . We reject the chosen value of β 0 for AR(β 0 ) if it exceeds the one-sided 5%quantile of the standard normal (i.e., reject if AR(β 0 ) > z 0.05 ). If LM (β 0 ) 2 > C 0.05 , we reject the chosen β 0 for Jackknife LM. If W ald(β 0 ) > C 0.05 , we reject for JIVE-t. If φ As( D, γ(β 0 )) > C 0.05 (A s ( D, γ(β 0 )); ρ(β 0 )) for s ∈ {pp, krs}, we reject accordingly. The two-step procedure depends on the value of F . If F > 9.98, we reject if W ald(β 0 ) > C 0.02 ; otherwise if F ≤ 9.98, we reject if
AR(β 0 ) > z 0.02 .
The model for 1,530 instruments can be written explicitly as
ln W i = Constant + H i ζ + 38 c=30 Y OB i,c ξ c + s =56 P OB i,s η s + βE i + γ i . E i = Constant + H i λ + 38 c=30 Y OB i,c µ c + s =56 P OB i,s α s + 3 j=1 39 c=30 s∈{51 states} QOB i,j Y OB i,c P OB i,s δ j,c,s .
The main difference between this 1,530-instrument specification and the 180-instrument one is that we now have QOB-YOB-POB interactions as our instruments, compared with QOB-YOB and QOB-POB interactions in the case of 180 instruments. Note that in both cases, only quarter-ofbirth 1-3 are used; quarter 4 is omitted in order to avoid multicollinearity.
D Proof of Lemma 2.1
Under strong identification, by (2.3) and Assumption 2, we have
1 0 0 0 1 0 0 0 d n Q ee Q Xe Q XX N 0 0 C , Φ 1 Φ 12 0 Φ 12 Ψ 0 0 0 0 ,
In addition, we note that e i (β 0 ) = e i + X i ∆ with ∆ = d n ∆ → 0. Therefore, Under strong identification, we have C∆ = C ∆,
Q e(β 0 )e(β 0 ) = Q ee + 2∆Q Xe + ∆ 2 Q XX = Q ee + o p (1), Q Xe(β 0 ) = Q Xe + ∆Q XX = Q Xe + C ∆ + o p (1).
This implies
AR(β 0 ) LM (β 0 ) = Q e(β 0 )e(β 0 ) / Φ 1/2 1 Q Xe(β 0 ) / Ψ 1/2 1 N 0 C ∆ Ψ 1/2 , 1 ρ ρ 1 . E Proof of Lemma 2.2 Recall N * 2 = (1 − ρ 2 ) −1/2 (N 2 − ρN 1 ) and N 1 N * 2 d = N 0 θ (1−ρ 2 ) 1/2 , 1 0 0 1 .
Because ρ is known, it suffices to construct the uniformly most powerful invariant test based on observations (N 1 , N * 2 ). As the null and alternative are invariant to sign changes, the maximum invariant is (N 1 , N * 2 2 ). Then, Lehmann and Romano (2006, Theorem 6.2.1) implies the invariant test should be based on the maximum invariant. Note (N 1 , N * 2 2 ) are independent, N 1 follows a standard normal distribution, and N * 2 follows a noncentral chi-square distribution with one degree of freedom and noncentrality parameter λ = θ 2 1−ρ 2 . Therefore, by the Neyman-Pearson's Lemma (Lehmann and Romano (2006, Theorem 3.2.1)), the most powerful test based on observations (N 1 , N * 2 2 ) is the likelihood ratio test where the likelihood ratio function evaluated at (N 1 = 1 , N * 2 2 = 2 ) depends on 2 only and can be written as
LR ( 2 ; λ) = − λ 2 + log exp( √ λ 2 ) + exp(− √ λ 2 ) 2 .
In addition, we note that LR ( 2 ; λ) is monotone increasing in 2 for any λ ≥ 0 and 2 ≥ 0. Therefore, Lehmann and Romano (2006, Theorem 3.4.1) implies the likelihood ratio test is equivalent to 1{N * 2 2 ≥ C α }, which is uniformly most powerful among tests for λ = 0 v.s. λ > 0 and based on observations (N 1 , N * 2 2 ) only. This means it is also the uniformly most powerful test that is invariant to sign changes.
F Proof of Lemma 2.3
Under strong identification and fixed alternatives, because (Q e(β 0 )e(β 0
) − ∆ 2 C, Q Xe(β 0 ) − ∆C, Q XX − C) = O p (1), we have d n AR(β 0 ) d n LM (β 0 ) p −→ ∆ 2 C Φ 1/2 1 (β 0 ) ∆ C Ψ 1/2 (β 0 ) .
This implies
d n LM * (β 0 ) p −→ 1 (1 − ρ 2 (β 0 )) 1/2 ∆ C Ψ 1/2 (β 0 ) − ρ(β 0 )∆ 2 C Φ 1/2 1 (β 0 )
, which leads to the desired result.
G Proof of Lemma 2.4
Under weak identification, (2.3) implies
Q e(β 0 )e(β 0 ) Q Xe(β 0 ) = Q ee + 2∆Q Xe + ∆ 2 Q XX Q Xe + ∆Q XX N ∆ 2 C ∆C , Φ 1 (β 0 ) Φ 12 (β 0 ) Φ 12 (β 0 ) Ψ(β 0 ) ,
which leads to the first result.
For the second result, it is obvious that m 1 (∆) → CΥ −1/2 . In addition, we have
m 2 (∆) = C ∆Φ 1 (β 0 ) − ∆ 2 Φ 12 (β 0 ) (Φ 1 (β 0 )(Φ 1 (β 0 )Ψ(β 0 ) − Φ 2 12 (β 0 ))) 1/2 → τ C (Υ(ΥΨ − τ 2 )) 1/2 = C Υ 1/2 ρ 23 (1 − ρ 2 23 ) 1/2 ,
where we use the fact that
Φ 1 (β 0 )/∆ 4 → Υ, (Φ 1 (β 0 )Ψ(β 0 ) − Φ 2 12 (β 0 ))/∆ 4 → ΥΨ − τ 2 , Φ 1 (β 0 ) − ∆Φ 12 (β 0 ) ∆ 3 → τ.
H Proof of Theorem 2.1 For Theorem 2.1(ii), we note thatρ = ρ under local alternatives and φ a 1 ,a 2 ,∞ = 1 (a 1 + a 2 ρ 2 )N 2 1 + 2a 2 ρ(1 − ρ 2 ) 1/2 N 1 N * 2 + (1 − a 1 − a 2 ρ 2 )N * 2 2 ≥ C α (a 1 , a 2 ; ρ) .
The "if" part of Theorem 2.1(ii) is a direct consequence of Lemma 2.2. The "only if" part of Theorem 2.1(ii) is a direct consequence of the the necessary part of Lehmann and Romano (2006, Theorem 3.2.1). Specifically, given N 1 and N * 2 are independent, the "only if" part requires a 1 + a 2 ρ 2 = 0, which implies a 1 = 0 and a 2 ρ = 0.
For Theorem 2.1(iii), we consider two cases of fixed alternatives: (1) ∆ = Φ 1/2 (1), by Lemma 2.3, the limits of d 2 n AR 2 (β 0 ), d 2 n LM 2 (β 0 ), d 2 n LM * 2 (β 0 ) are all positive, implies that for all (a 1,n , a 2,n ) ∈ A 0 1{a 1,n AR 2 (β 0 ) + a 2,n LM 2 (β 0 ) + (1 − a 1,n − a 2,n )LM * 2 (β 0 ) ≥ C α (a 1,n , a 2,n ; ρ(β 0 ))} p −→ 1.
1 (β 0 )Ψ −1/2 (β 0 )ρ −1 (β 0 ) and (2) ∆ = Φ 1/2 1 (β 0 )Ψ −1/2 (β 0 )ρ −1 (β 0 ). In Case
In Case (2), we have P a 1,n AR 2 (β 0 ) + a 2,n LM 2 (β 0 ) + (1 − a 1,n − a 2,n )LM * 2 (β 0 ) ≥ C α (a 1,n , a 2,n ; ρ(β 0 ))
≥ P qΨ 2 (β 0 )ρ 4 (β 0 ) C 2 Φ 1 (β 0 ) d 2 n AR 2 (β 0 ) ≥ C α (a 1,n , a 2,n ; ρ(β 0 ) = P (q + o p (1) ≥ C α,max (ρ(β 0 ))) → 1,
where the first inequality follows from the restriction on a 1.n and the facts that LM 2 (β 0 ) ≥ 0 and LM * 2 (β 0 ) ≥ 0, the first equality follows from d 2 n AR 2 (β 0 ) p −→ Φ −1 1 (β 0 )∆ 4 * (β 0 )C 2 (by Lemma 2.3) and ρ(β 0 ) p −→ ρ(β 0 ), and the last convergence follows from the fact thatq > C α,max (ρ(β 0 )). This concludes the proof.
I Proof of Theorem 4.1
We are under weak identification. By Lemma 2.4 and Assumption 2, we have
AR(β 0 ) LM * (β 0 ) D N m 1 (∆) m 2 (∆) µ D , 1 0 0 0 1 0 0 0 σ 2 D .
This implies (AR(β 0 ), LM * (β 0 ), D) are asymptotically independent. In addition, by Assumption 3, we have (AR 2 (β 0 ), LM * 2 (β 0 ), A s ( D, γ(β 0 ))) (Z 2 (m 1 (∆)), Z 2 (m 2 (∆)), A s (D, γ(β 0 ))) where the two normal random variables are independent and independent of D, and by definition, A s (D, γ(β 0 ))) = (a 1 (f s (D, γ(β 0 )), γ(β 0 )), a 2 (f s (D, γ(β 0 )), γ(β 0 ))). In addition, we have ρ(β 0 ) p −→ ρ(β 0 ). By the bounded convergence theorem, this further implies E φ As( D, γ(β 0 )) → Eφ a 1 (fs(D,γ(β 0 )),γ(β 0 )),a 2 (fs(D,γ(β 0 )),γ(β 0 )),∞ (∆, µ D , γ(β 0 )). (I.1)
In addition, suppose the null holds so that ∆ = 0. This implies m 1 (∆) = m 2 (∆) = 0. Then, we have
( φ As( D, γ(β 0 )) − α)f ( D) (φ a 1 (fs(D,γ(β 0 )),γ(β 0 )),a 2 (fs(D,γ(β 0 )),γ(β 0 )),∞ (0, µ D , γ(β 0 )) − α)f (D),
where φ a 1 (fs(D,γ(β 0 )),γ(β 0 )),a 2 (fs(D,γ(β 0 )),γ(β 0 )),∞ (0, µ D , γ(β 0 ))
= 1 a 1 (f s (D, γ(β 0 )), γ(β 0 ))Z 2 1 + a 2 (f s (D, γ(β 0 )), γ(β 0 ))(ρ(β 0 )Z 1 + (1 − ρ 2 (β 0 )) 1/2 Z 2 ) (1 − a 1 (f s (D, γ(β 0 )), γ(β 0 )) − a 2 (f s (D, γ(β 0 )), γ(β 0 )))Z 2 2 ≥ C α (a 1 (f s (D, γ(β 0 )), γ(β 0 )), a 2 (f s (D, γ(β 0 )), γ(β 0 )); ρ(β 0 ))
, Z 1 and Z 2 are independent standard normals, and they are independent of D. Then, by the definition of C α (·), we have E( φ As( D, γ(β 0 )) − α)f ( D) → E E φ a(fs(D,γ(β 0 )),γ(β 0 )),∞ (0, µ D , γ(β 0 )) − α|D f (D) = 0.
J Proof of Theorem 4.2
Denote c B = c B (β) and ∆ * = ∆ * (β). By Assumption 2, Φ 1 > 0, which implies |∆ * | > 0. Under strong identification and local alternatives, we have ∆ → 0, c B (β 0 ) → c B , ∆ * (β 0 ) → ∆ * , C α,max (ρ(β 0 )) → C α,max (ρ), and
AR(β 0 ) LM * (β 0 ) d n D N 0 ∆ C ((1−ρ 2 )Ψ) 1/2 C , 1 0 0 0 1 0 0 0 0 . This implies d n σ D √ r = d n D p −→ C, which further implies d n f pp ( D, γ(β 0 )) p −→ C. For f krs ( D, γ(β 0 )),
we note that max( r − 1, 0) ≤ r krs ≤ r.
Therefore, we also have f krs ( D, γ(β 0 ))d n
p −→ C. Let E n (ε) = {|| γ(β 0 ) − γ(β 0 )|| + |δ n D − C| ≤ ε}.
Then, for an arbitrary ε > 0, we have P(E n (ε)) ≥ 1 − ε when n is sufficiently large.
Denote δ = d n δ. We have A s ( D, γ(β 0 )) ∈ arg min (a 1 ,a 2 )∈A(fs( D, γ(β 0 )), γ(β 0 )) sup δ∈ Dn P dn δ,s ( D, γ(β 0 )) − E * φ a 1 ,a 2 ,s (d n δ, D, γ(β 0 )) ,
where D n = { δ : d n δ ∈ D(β 0 )}. Let Q n (a 1 , a 2 , δ) = P dn δ,s ( D, γ(β 0 )) − E * φ a 1 ,a 2 ,s (d n δ, D, γ(β 0 )) and
Q(a 1 , a 2 , δ) = E1{Z 2 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) ≥ C α } − E1 a 1 Z 2 1 + a 2 ρZ 1 + (1 − ρ 2 ) 1/2 Z 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) 2 +(1 − a 1 − a 2 )Z 2 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) ≥ C α (a 1 , a 2 ; ρ) , where Z 1 is standard normal, Z 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) is normal with mean (1 − ρ 2 ) −1/2 Ψ −1/2 δ C
and unit variance, and Z 1 and Z 2 (·) are independent. Then, we aim to show that sup (a 1 ,a 2 )∈A(fs( D, γ(β 0 )), γ(β 0 )), δ∈ Dn
Q n (a 1 , a 2 , δ) − Q(a 1 , a 2 , δ) p −→ 0. (J.1)
We divide D n into three parts:
D n,1 (ε) = { δ ∈ D n , | δ| ≤ M 1 (ε)}, D n,2 (ε) = δ ∈ D n , d n δ ∆ * (β 0 ) − 1 ≤ ε , and D n,3 (ε) = D n ∩ D c n,1 (ε) ∩ D c n,2 (ε),
where M 1 (ε) is a large constant so that
P (1 − a)Z 2 M 2 1 (ε)ε 2 C 2 2(1 − ρ 2 )Ψc B ≥ C α,max (ρ) + 1 = 1 − ε. (J.2)
When n is sufficiently large and ε is sufficiently small, on E n (ε), there exists a constant c such that
| ∆ * (β 0 ) − ∆ * | ≤ cε, inf δ∈ D n,2 (ε) |d n δ| ≥ (1 − ε)(|∆ * | − cε), | Φ 1 (β 0 ) − Φ 1 | ≤ cε, |d 2 n f 2 s ( D, γ(β 0 )) − C 2 | ≤ cε, sup δ∈ D n,2 (ε) 1 − (d 2 n δ 2 , d n δ) Φ 1 (β 0 ) Φ 12 (β 0 ) Φ 12 (β 0 ) Ψ(β 0 ) −1 Φ 13 (β 0 ) τ (β 0 ) 2 ≤ 1 − (∆ 2 * , ∆ * ) Φ 1 Φ 12 Φ 12 Ψ −1 Φ 13 τ 2 + cε ≤ c B + cε, | c B (β 0 ) − c B | ≤ cε. (J.3)
This further implies D n,1 (ε) ∩ D n,2 (ε) = ∅.
Recall φ a 1 ,a 2 ,s (δ, D, γ(β 0 )) defined in (3.5). With δ replaced by d n δ and when δ ∈ D n,1 (ε), we
have d −1 n C 1 (d n δ) d −1 n C 2 (d n δ) (d n f s ( D, γ(β 0 ))) p −→ 0 (1 − ρ 2 ) −1/2 Ψ −1/2 δ C ,
Therefore, uniformly over (a 1 , a 2 ) ∈ A 0 and δ ∈ D n,1 (ε) and conditional on data, we have
φ a 1 ,a 2 ,s (d n δ, D, γ(β 0 )) 1 a 1 Z 2 1 + a 2 ρZ 1 + (1 − ρ 2 ) 1/2 Z 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) 2 +(1 − a 1 − a 2 )Z 2 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) ≥ C α (a 1 , a 2 ; ρ) .
This implies sup (a 1 ,a 2 )∈A 0 , δ∈ D n,1 (ε)
E * φ a 1 ,a 2 ,s (d n δ, D, γ(β 0 )) − E1 a 1 Z 2 1 + a 2 ρZ 1 + (1 − ρ 2 ) 1/2 Z 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) 2 +(1 − a 1 − a 2 )Z 2 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) ≥ C α (a 1 , a 2 ; ρ) p −→ 0.
In addition, by Lemma 2.2, for any δ, E1
a 1 Z 2 1 + a 2 ρZ 1 + (1 − ρ 2 ) 1/2 Z 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) 2 +(1 − a 1 − a 2 )Z 2 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) ≥ C α (a 1 , a 2 ; ρ)
is maximized at a 1 = 0 and a 2 ρ = 0. This implies sup δ∈ D n,1 (ε)
|P dn δ,s ( D, γ(β 0 )) − E1{Z 2 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) ≥ C α }| = sup δ∈ D n,1 (ε) | sup (a 1 ,a 2 )∈A(fs( D, γ(β 0 )), γ(β 0 )) E * φ a 1 ,a 2 ,s (d n δ, D, γ(β 0 )) − E1{Z 2 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) ≥ C α }| ≤ sup δ∈ D n,1 (ε) sup (a 1 ,a 2 )∈A(fs( D, γ(β 0 )), γ(β 0 )) E1 a 1 Z 2 1 + a 2 ρZ 1 + (1 − ρ 2 ) 1/2 Z 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) 2 +(1 − a 1 − a 2 )Z 2 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) ≥ C α (a 1 , a 2 ; ρ) − E1{Z 2 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) ≥ C α } + o p (1), ≤ sup δ∈ D n,1 (ε) sup (a 1 ,a 2 )∈A 0 E1 a 1 Z 2 1 + a 2 ρZ 1 + (1 − ρ 2 ) 1/2 Z 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) 2 +(1 − a 1 − a 2 )Z 2 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) ≥ C α (a 1 , a 2 ; ρ) − E1{Z 2 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) ≥ C α } + o p (1) = o p (1),
where the second inequality is due to the facts that a(f s ( D, γ(β 0 )), γ(β 0 )) = o p (1) under strong identification and E1
a 1 Z 2 1 + a 2 ρZ 1 + (1 − ρ 2 ) 1/2 Z 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) 2 +(1 − a 1 − a 2 )Z 2 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) ≥ C α (a 1 , a 2 ; ρ) is continuous at a 1 = 0 uniformly over | δ| ≤ M 1 (ε). Therefore, we have sup (a 1 ,a 2 )∈A(fs( D, γ(β 0 )), γ(β 0 )), δ∈ D n,1 (ε) Q n (a 1 , a 2 , δ) − Q(a 1 , a 2 , δ) p −→ 0. (J.4)
Next, we consider the case when δ ∈ D n,2 (ε). We have
φ a 1 ,a 2 ,s (d n δ, D, γ(β 0 )) = 1 a 1 Z 2 1 ( C 2 1 (d n δ)f 2 s ( D, γ(β 0 ))) +a 2 ρ(β 0 )Z 1 ( C 2 1 (d n δ)f 2 s ( D, γ(β 0 ))) + (1 − ρ 2 (β 0 )) 1/2 Z 2 ( C 2 2 (d n δ)f 2 s ( D, γ(β 0 ))) 2 +(1 − a 1 − a 2 )Z 2 2 ( C 2 2 (d n δ)f 2 s ( D, γ(β 0 ))) ≥ C α (a 1 , a 2 ; ρ(β 0 ))
≥ 1 a(f s ( D, γ(β 0 )), γ(β 0 ))Z 2 1 ( C 2 1 (d n δ)f 2 s ( D, γ(β 0 ))) ≥ C α,max ( ρ(β 0 )) .
By (J.3)
, on E n (ε), there exists a constant c > 0 such that
C 2 1 (d n δ)(d n f s ( D, γ(β 0 ))) 2 = Φ −1 1 (β 0 )(d n δ) 4 (d n f s ( D, γ(β 0 ))) 2 1 − (d 2 n δ 2 , d n δ) Φ 1 (β 0 ) Φ 12 (β 0 ) Φ 12 (β 0 ) Ψ(β 0 ) −1 Φ 13 (β 0 ) τ (β 0 ) 2 ≥ (Φ 1 (β 0 ) + cε) −1 (1 − ε) 4 (|∆ * | − cε) 4 ( C 2 − cε) c B + cε ≥ c
and a(f s ( D, γ(β 0 )), γ(β 0 )) C 2
1 (d n δ)f 2 s ( D, γ(β 0 )) ≥ 1.1C α,max ( ρ(β 0 )) Φ 1 (β 0 ) c B (β 0 ) ∆ 4 * (β 0 )d 2 n f 2 s ( D, γ(β 0 )) C 2 1 (d n δ)(d n f s ( D, γ(β 0 ))) 2 ≥ 1.1C α,max ( ρ(β 0 ))(Φ 1 − cε)(c B − cε) (|∆ * | + cε) 4 ( C 2 + cε) (Φ 1 (β 0 ) + cε) −1 (1 − ε) 4 (|∆ * | − cε) 4 ( C 2 − cε) c B + cε ≥ (1.1 − cε)C α,max ( ρ(β 0 )),
where the last inequality holds because ε can be arbitrarily small. This means, on E n (ε) and when δ ∈ D n,2 (ε),
E * φ a 1 ,a 2 ,s (d n δ, D, γ(β 0 )) ≥ P * (o p (1) + (1.1 − cε)C α,max ( ρ(β 0 )) ≥ C α,max ( ρ(β 0 ))) → 1.
As P(E n (ε)) → 1, we have sup (a 1 ,a 2 )∈A(fs( D, γ(β 0 )), γ(β 0 )), δ∈ D n,2 (ε)
1 − E * φ a 1 ,a 2 ,s (d n δ, D, γ(β 0 )) p −→ 0, and thus, sup (a 1 ,a 2 )∈A(fs( D, γ(β 0 )), γ(β 0 )), δ∈ D n,2 (ε)
P dn δ,s ( D, γ(β 0 )) − E * φ a 1 ,a 2 ,s (d n δ, D, γ(β 0 )) ≤ sup (a 1 ,a 2 )∈A(fs( D, γ(β 0 )), γ(β 0 )), δ∈ D n,2 (ε) 1 − E * φ a 1 ,a 2 ,s (d n δ, D, γ(β 0 )) p −→ 0. (J.5)
Furthermore, note that a 1 + a 2 ≤ a < 1 and when δ ∈ D n,2 (ε), on E n (ε), (J.3) implies δ 2 → ∞.
Therefore, we have
a 1 Z 2 1 + a 2 ρZ 1 + (1 − ρ 2 ) 1/2 Z 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) 2 + (1 − a 1 − a 2 )Z 2 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) ≥ (1 − a)Z 2 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) = (1 − a) δ 2 C 2 (1 − ρ 2 )Ψ (1 + o p (1)) → ∞,
which further implies sup (a 1 ,a 2 )∈A(fs( D, γ(β 0 )), γ(β 0 )), δ∈ D n,2 (ε)
1 − E1 a 1 Z 2 1 + a 2 ρZ 1 + (1 − ρ 2 ) 1/2 Z 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) 2 +(1 − a 1 − a 2 )Z 2 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) ≥ C α (a 1 , a 2 ; ρ) p −→ 0
and sup (a 1 ,a 2 )∈A(fs( D, γ(β 0 )), γ(β 0 )), δ∈ D n,2 (ε)
E1{Z 2 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) ≥ C α } − E1 a 1 Z 2 1 + a 2 ρZ 1 + (1 − ρ 2 ) 1/2 Z 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) 2 +(1 − a 1 − a 2 )Z 2 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) ≥ C α (a 1 , a 2 ; ρ) p −→ 0. (J.6)
Combining (J.5) and (J.6), we have sup (a 1 ,a 2 )∈A(fs( D, γ(β 0 )), γ(β 0 )), δ∈ D n,2 (ε) Q n (a 1 , a 2 , δ) − Q(a 1 , a 2 , δ) → 0. (J.7)
Last, we consider the case in which δ ∈ D n,3 (ε). On E n (ε), (J.3) implies
C 2 2 (d n δ)f 2 s ( D, γ(β 0 )) = δ 2 (1 − dn δ ∆ * (β 0 ) ) 2 (1 − ρ 2 (β 0 )) Ψ(β 0 ) d 2 n f 2 s ( D, γ(β 0 )) 1 − (d 2 n δ 2 , d n δ) Φ 1 (β 0 ) Φ 12 (β 0 ) Φ 12 (β 0 ) Ψ(β 0 ) −1 Φ 13 (β 0 ) τ (β 0 ) 2 ≥ (1 − cε)M 2 1 (ε)ε 2 ( C 2 − cε) (1 − ρ 2 )Ψc B ≥ M 2 1 (ε)ε 2 C 2 2(1 − ρ 2 )Ψc B ,
where the second inequality holds when ε is sufficiently small. In this case, E * φ a 1 ,a 2 ,s (d n δ, D, γ(β 0 )) ≥ P * ((1 − a)Z 2 2 ( C 2 2 (d n δ)f 2 s ( D, γ(β 0 ))) ≥ C α,max ( ρ(β 0 )))
≥ P * (1 − a)Z 2 2 M 2 1 (ε)ε 2 C 2 2(1 − ρ 2 )Ψc B ≥ C α,max ( ρ(β 0 )) ≥ P * (1 − a)Z 2 2 M 2 1 (ε)ε 2 C 2 2(1 − ρ 2 )Ψc B ≥ C α,max (ρ) + cε − ε ≥ 1 − 2ε,
where the second inequality is by the fact that the CDF (survival function) of Z 2 (λ) is monotone decreasing (increasing) in |λ| and the last equality is by the definition of M 1 (ε) in (J.2) and the fact that C α,max ( ρ(β 0 )) p −→ C α,max (ρ) . This implies, on E n (ε), sup (a 1 ,a 2 )∈A(fs( D, γ(β 0 )), γ(β 0 )), δ∈ D n,3 (ε)
P dn δ,s ( D, γ(β 0 )) − E * φ a 1 ,a 2 ,s (d n δ, D, γ(β 0 )) ≤ 2ε. (J.8)
In addition, we note that (1 − ρ 2 ) −1 Ψ −1 δ 2 C 2 satisfies
(1 − ρ 2 ) −1 Ψ −1 δ 2 C 2 ≥ M 2 1 (ε)ε 2 C 2 2(1 − ρ 2 )Ψc B ,
where we use the facts that δ 2 ≥ M 2 1 (ε), c B ≥ 1, and ε < 1. Therefore, by the same argument, we have
E1 a 1 Z 2 1 + a 2 ρZ 1 + (1 − ρ 2 ) 1/2 Z 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) 2 +(1 − a 1 − a 2 )Z 2 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) ≥ C α (a 1 , a 2 ; ρ) ≥ 1 − ε and sup (a 1 ,a 2 )∈A(fs( D, γ(β 0 )), γ(β 0 )), δ∈ D n,3 (ε) E1{Z 2 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) ≥ C α } − E1 a 1 Z 2 1 + a 2 ρZ 1 + (1 − ρ 2 ) 1/2 Z 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) 2 +(1 − a 1 − a 2 )Z 2 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) ≥ C α (a 1 , a 2 ; ρ) ≤ ε. (J.9)
Combining (J.8) and (J.9), we have, on E n (ε), sup (a 1 ,a 2 )∈A(fs( D, γ(β 0 )), γ(β 0 )), δ∈ D n,3 (ε) Q n (a 1 , a 2 , δ) − Q(a 1 , a 2 , δ) ≤ 3ε. (J.10)
Combining (J.4), (J.7), and (J.10), we have P sup (a 1 ,a 2 )∈A(fs( D, γ(β 0 )), γ(β 0 )), δ∈ Dn |Q n (a 1 , a 2 , δ) − Q(a 1 , a 2 , δ)| > 5ε ≤ P sup (a 1 ,a 2 )∈A(fs( D, γ(β 0 )), γ(β 0 )), δ∈ D n,1 (ε) |Q n (a 1 , a 2 , δ) − Q(a 1 , a 2 , δ)| > ε, E n (ε) + P sup (a 1 ,a 2 )∈A(fs( D, γ(β 0 )), γ(β 0 )), δ∈ D n,2 (ε) |Q n (a 1 , a 2 , δ) − Q(a 1 , a 2 , δ)| > ε, E n (ε) + P sup (a 1 ,a 2 )∈A(fs( D, γ(β 0 )), γ(β 0 )), δ∈ D n,3 (ε) |Q n (a 1 , a 2 , δ) − Q(a 1 , a 2 , δ)| > 3ε, E n (ε) + P (E c n (ε))
≤ o(1) + ε.
Since ε is arbitrary, we have ω n ≡ sup (a 1 ,a 2 )∈A(fs( D, γ(β 0 )), γ(β 0 )), δ∈ Dn |Q n (a 1 , a 2 , δ) − Q(a 1 , a 2 , δ)| p −→ 0.
Then we have
0 ≤ sup δ∈ Dn Q n (a(f s ( D, γ(β 0 )), γ(β 0 )), 0, δ) − sup δ∈ Dn Q n (A s ( D, γ(β 0 )), δ) ≤ sup δ∈ Dn Q(a(f s ( D, γ(β 0 )), γ(β 0 )), 0, δ) − sup δ∈ Dn Q(A s ( D, γ(β 0 )), δ) + 2ω n = o p (1) − sup δ∈ Dn Q(A s ( D, γ(β 0 )), δ) + 2ω n ,
where the equality holds because (1) sup δ∈ Q(a 1 , 0, δ) is continuous at a 1 = 0 as shown in the proof of I. Andrews (2016, Theorem 5), (2) a(f s ( D, γ(β 0 )), γ(β 0 )) = o p (1) under strong identification, and
(3) sup δ∈ Q(0, 0, δ) = 0 by construction.
On the other hand, we have
Q(a 1 , a 2 , δ) = E1{Z 2 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) ≥ C α } − E1 a 1 Z 2 1 + a 2 ρZ 1 + (1 − ρ 2 ) 1/2 Z 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) 2 +(1 − a 1 − a 2 )Z 2 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) ≥ C α (a 1 , a 2 ; ρ) = E1{Z 2 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) ≥ C α } − E1 (a 1 + a 2 ρ 2 )Z 2 1 + a 2 ρ(1 − ρ 2 ) 1/2 Z 1 Z 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) +(1 − a 1 − a 2 ρ 2 )Z 2 2 ((1 − ρ 2 ) −1/2 Ψ −1/2 δ C) ≥ C α (a 1 , a 2 ; ρ)
Note that a 1 = 0 and a 2 ρ = 0 if and only if a 1 + a 2 ρ 2 = 0, given that a 1 and a 2 are nonnegative.
Therefore, Theorem 2.1(ii) implies, for any constant C > 0, there exists a constant c > 0 such that inf (a 1 ,a 2 )∈A 0 ,a 1 +a 2 ρ 2 ≥C sup δ∈ Dn Q(a 1 , a 2 , δ) ≥ c > 0.
Therefore,
P A 1,s ( D, γ(β 0 )) + A 2,s ( D, γ(β 0 ))ρ 2 ≥ C > 0 ≤ P (c ≤ o p (1) + 2ω n ) → 0.
This implies A 1,s ( D, γ(β 0 )) p −→ 0 and A 2,s ( D, γ(β 0 ))ρ p −→ 0.
K Proof of Theorem 4.3
We consider strong identification with fixed alternatives. By construction, we have A 1,s ( D, γ(β 0 )) ≥
1.1Cα,max( ρ(β 0 )) Φ 1 (β 0 ) c B (β 0 ) ∆ 4 * (β 0 )f 2 s ( D, γ(β 0 ))
. By Theorem 2.1(iii), it suffices to show that, w.p.a.1,
1.1C α,max ( ρ(β 0 )) Φ 1 (β 0 ) c B (β 0 ) ∆ 4 * (β 0 )f 2 s ( D, γ(β 0 )) ≥q Ψ 2 (β 0 )ρ 4 (β 0 ) C 2 Φ 1 (β 0 ) ,
or equivalently,
1.1C α,max ( ρ(β 0 )) Φ 1 (β 0 ) c B (β 0 ) ∆ 4 * (β 0 )d 2 n f 2 s ( D, γ(β 0 )) ≥q Ψ 2 (β 0 )ρ 4 (β 0 ) C 2 Φ 1 (β 0 ) =q Φ 1 (β 0 ) C 2 ∆ 4 * (β 0 ) , (K.1)
for some constantq > C α,max (ρ(β 0 )). Under strong identification and fixed alternatives, we have
d n D = d n Q XX − (Q e(β 0 )e(β 0 ) , Q Xe(β 0 ) ) Φ 1 (β 0 ) Φ 12 (β 0 ) Φ 12 (β 0 ) Ψ(β 0 ) −1 Φ 13 (β 0 ) τ (β 0 ) p −→ 1 − (∆ 2 , ∆) Φ 1 (β 0 ) Φ 12 (β 0 ) Φ 12 (β 0 ) Ψ(β 0 ) −1 Φ 13 (β 0 ) τ (β 0 ) C.
Therefore, we have
d n f s ( D, γ(β 0 )) = d n D + o p (1) p −→ 1 − (∆ 2 , ∆) Φ 1 (β 0 ) Φ 12 (β 0 ) Φ 12 (β 0 ) Ψ(β 0 ) −1 Φ 13 (β 0 ) τ (β 0 ) C
for s ∈ {pp, krs}. This means for any ε > 0, w.p.a.1,
d 2 n f 2 s ( D, γ(β 0 )) ≤ (c B (β 0 ) + ε) C 2 .
In addition, we have c B (β 0 )
p −→ c B (β 0 ) ≥ 1, ∆ * (β 0 ) p −→ ∆ * (β 0 ), C α,max ( ρ(β 0 )) p −→ C α,max (ρ(β 0 )), and Φ 1 (β 0 ) p −→ Φ 1 (β 0 ) > 0, which imply c B (β 0 ) ≥ c B (β 0 )−cε, Φ 1 (β 0 ) ≥ Φ 1 (β 0 )−cε, C α,max ( ρ(β 0 )) ≥
C α,max (ρ(β 0 )) − cε, and ∆ 4 * (β 0 ) ≤ ∆ 4 * (β 0 ) + cε, w.p.a.1. Therefore, we have, w.p.a.1,
1.1C α,max ( ρ(β 0 )) Φ 1 (β 0 ) c B (β 0 ) ∆ 4 * (β 0 )d 2 n f 2 s ( D, γ(β 0 )) ≥ 1.1(C α,max (ρ(β 0 )) − cε)(c B (β 0 ) − cε)(Φ 1 (β 0 ) − cε) (∆ 4 * (β 0 ) + cε)(c B (β 0 ) + ε) C 2 ≥ (1.1 − cε)C α,max (ρ(β 0 ))Φ 1 (β 0 ) ∆ 4 * (β 0 ) C 2 ,
where the second inequality holds because ε can be arbitrarily small. Then, we can letq in (K.1) be (1.1 − cε)C α,max (ρ(β 0 )) which is greater than C α,max (ρ(β 0 )). This concludes the proof.
L Proof of Theorem A.1
We focus on the consistency of Φ 1 (β 0 ) and Ψ(β 0 ). The consistency of the rest four estimators can be established in the same manner. We have e i (β 0 ) = e i + ∆X i = U i (∆) + ∆Π i , where
U i (∆) = e i + ∆V i . Therefore, Φ 1 (β 0 ) = 2 K i∈[n] j =i P 2 ij e 2 i (β 0 )e 2 j (β 0 ) = 2 K i∈[n] j =i P 2 ij (∆ 2 Π 2 i + 2∆Π i U i (∆) + U 2 i (∆))(∆ 2 Π 2 j + 2∆Π j U j (∆) + U 2 j (∆)) = 2 K i∈[n] j =i P 2 ij U 2 i (∆)U 2 j (∆) + ∆ 4 K i∈[n] j =i P 2 ij (Π i U i (∆)U 2 j (∆) + Π j U j (∆)U 2 i (∆)) + ∆ 2 2 K i∈[n] j =i P 2 ij (Π 2 i U 2 j (∆) + Π 2 j U 2 i (∆) + 4Π i Π j U i (∆)U j (∆)) + ∆ 3 4 K i∈[n] j =i P 2 ij (Π 2 i Π j U j (∆) + Π 2 j Π i U i (∆)) + ∆ 4 2 K i∈[n] j =i P 2 ij Π 2 i Π 2 j ≡ 4 l=0 ∆ l T l .
We first note that 1
K i∈[n] ω 2 i σ 2 i = o(1), 1 K i∈[n] ω 2 i γ i = o(1), and 1 K i∈[n] ω 2 i η 2 i = o(1). To see this, note that 1 K i∈[n] ω 2 i σ 2 i ≤ C K i∈[n] ω 2 i ≤ C K (2Π P 2 Π + 2 i∈[n] P 2 ii Π 2 i ) ≤ C K ( i,j∈[n] |Π i ||P ij ||Π j | + Π Πp n ) ≤ Cp 1/2 n Π Π K = o(1),
where the second inequality is shown is the Proof of Mikusheva and Sun (2022, Lemma S1.4) and last inequality is by the fact that P 2 ij ≤ j∈[n] P 2 ij = P ii . The results for 1 K i∈[n] ω 2 i γ i = o(1) and
1 K i∈[n] ω 2 i η 2 i = o(1) can be established in the same manner. We first consider T 0 . Denote ξ ij = U 2 i (∆)U 2 j (∆) − EU 2 i (∆)U 2 j (∆). We want to show that 1 K i∈[n] j =i P 2 ij ξ ij = o p (1).
Note that
E 1 K i∈[n] j =i P 2 ij ξ ij 2 = 1 K 2 i∈[n] j =i P 4 ij Eξ 2 ij + 4 K 2 i∈[n] j =i i =i,j P 2 ij P 2 ii Eξ ij ξ ii .
As both Eξ 2 ij and |Eξ ij ξ ii | are bounded, we have
1 K 2 i∈[n] j =i P 4 ij Eξ 2 ij ≤ C K 2 i∈[n] j =i P 2 ij ≤ C K = o(1) and 1 K 2 i∈[n] j =i i =i,j P 2 ij P 2 ii Eξ ij ξ ii ≤ C K 2 i∈[n] j =i i =i,j P 2 ij P 2 ii ≤ C K 2 i∈[n] j =i P 2 ij P ii = o(1).
Therefore, we have
T 0 = 2 K i∈[n] j =i P 2 ij E(U 2 i (∆)U 2 j (∆)) + o p (1) = ∆ 4 2 K i∈[n] j =i P 2 ij η 2 i η 2 j + ∆ 3 4 K i∈[n] j =i P 2 ij (η 2 i γ j + η 2 j γ i ) + ∆ 2 2 K i∈[n] j =i P 2 ij (η 2 i σ 2 j + η 2 j σ 2 i + 4γ i γ j ) + ∆ 4 K i∈[n] j =i P 2 ij (γ i σ 2 j + γ j σ 2 i ) + 2 K i∈[n] j =i P 2 ij σ 2 i σ 2 j + o p (1) = Φ 1 (β 0 ) + o p (1).
By the same argument above, we have (1) because ET 1 = 0. Similarly, we have ET 3 = 0 and T 3 = o p (1). Next, we have
T 1 = ET 1 + o p (1) = o pp n T 2 = ET 2 + o P (1) ≤ C K i∈[n] j =i P 2 ij Π 2 i + o p (1) ≤ Cp n Π Π K + o p (1) = o p (1).
Last, we have
T 4 ≤ C K i∈[n] j =i P 2 ij Π 2 i = o(1),
where the first inequality is by max i∈[n] |Π i | < C. This implies
Φ 1 (β 0 ) − Φ 1 (β 0 ) = o p (1).
Next, we consider the consistency of Ψ(β 0 ). By the similar argument above, we have
1 K i∈[n] j =i P 2 ij X i e i (β 0 )X j e j (β 0 )) = 1 K i∈[n] j =i P 2 ij Π i e i (β 0 )Π j e j (β 0 )) + 1 K i∈[n] j =i P 2 ij Π i e i (β 0 )V j e j (β 0 )) + 1 K i∈[n] j =i P 2 ij V i e i (β 0 )Π j e j (β 0 )) + 1 K i∈[n] j =i P 2 ij V i e i (β 0 )V j e j (β 0 )) = 1 K i∈[n] j =i P 2 ij (γ i + ∆η 2 i )(γ j + ∆η 2 j ) + o p (1). (L.1)
In addition, we have
1 K i∈[n] ( j =i P ij X j ) 2 e 2 i (β 0 ) = 1 K i∈[n] (ω i + j =i P ij V j ) 2 e 2 i (β 0 ) = 1 K i∈[n] ω 2 i Ee 2 i (β 0 ) + 1 K i∈[n] j =i P 2 ij η j Ee 2 i (β 0 ) + o p (1) = 1 K i∈[n] j =i P 2 ij η 2 j (σ 2 i + 2γ i ∆ + ∆ 2 η 2 i ) + o p (1), (L.2)
where the second equality is due to Mikusheva and Sun (2022, Lemma S3.2). In the next section, we show the same results hold under Assumption 5. Combining (L.1) and (L.2), we have
Ψ(β 0 ) = 1 K i∈[n] j =i P 2 ij (γ i + ∆η 2 i )(γ j + ∆η 2 j ) + 1 K i∈[n] j =i P 2 ij η 2 j (σ 2 i + 2γ i ∆ + ∆ 2 η 2 i ) + o p (1) = 1 K i∈[n] j =i P 2 ij (γ i γ j + η 2 i η 2 j ) + 4∆ K i∈[n] j =i P 2 ij η 2 i γ j + 2∆ 2 K i∈[n] j =i P 2 ij η 2 i η 2 j + o p (1) = Ψ(β 0 ) + o p (1).
M Proof of Theorem A.2
Given Lemma A.1, Lemmas 2 and 3 in Mikusheva and Sun (2022) hold under Assumptions 4 and 6. Therefore, Mikusheva and Sun (2022, Theorem 3) shows that
Φ 1 (β 0 ) − 2 K i∈[n] j =i P 2 ij EU 2 i (∆)EU 2 j (∆) = o p (1).
In addition, the proof of Theorem A.1 shows that
2 K i∈[n] j =i P 2 ij EU 2 i (∆)EU 2 j (∆) = Φ 1 (β 0 ) + o(1),
which implies the consistency of Φ 1 (β 0 ).
Similarly, given Lemma A.1, Lemma S3.1 in Mikusheva and Sun (2022) holds under Assumptions 4 and 6, so that the consistency of Υ to Υ is also shown by using their argument. In addition, we use the same argument in the proof of Mikusheva and Sun (2022, Theorem 5) to show that
Ψ(β 0 ) = 1 K i∈[n] ( j =i P ij X j ) 2 e i M i e M ii + 1 K i∈[n] j =i P 2 ij M i Xe i M j Xe j + ∆ 1 K i∈[n] ( j =i P ij X j ) 2 e i M i X M ii + X i M i e M ii + 2 K i∈[n] j =i P 2 ij M i Xe i M j XX j + ∆ 2 1 K i∈[n] ( j =i P ij X j ) 2 X i M i X M ii + 1 K i∈[n] j =i P 2 ij M i XX i M j XX j = Ψ + 2∆τ + ∆ 2 Υ + o p (1) = Ψ(β 0 ) + o p (1),
where the second equality also follows from Lemma S3.1 in Mikusheva and Sun (2022).
Next for Φ 12 (β 0 ), we have
1 K i∈[n] j =i P 2 ij M j Xe j (β 0 )e i (β 0 )M i e(β 0 ) = 1 K i∈[n] j =i P 2 ij M j Xe j e i M i e + ∆ 1 K i∈[n] j =i P 2 ij (M j XX j e i M i e + M j Xe j X i M i e + M j Xe j e i M i X) + ∆ 2 1 K i∈[n] j =i P 2 ij (M j XX j X i M i e + M j XX j e i M i X + M j Xe j X i M i X) + ∆ 3 1 K i∈[n] j =i P 2 ij M j XX j X i M i X. Note that 1 K i∈[n] j =i P 2 ij M j Xe j e i M i e = 1 K i∈[n] j =i P 2 ij (M j V + λ i )e j e i M i e, where λ i = M i Π.
Then, by Lemma A.1 and Lemma 3 of Mikusheva and Sun (2022),
1 K i∈[n] j =i P 2 ij M j Xe j e i M i e − 1 K i∈[n] j =i P 2 ij M j V e j e i M i e = o p (1).
Furthermore, by Lemma A.1 and Lemma 2 of Mikusheva and Sun (2022),
1 K i∈[n] j =i P 2 ij M j V e j e i M i e − 1 K i∈[n] j =i P 2 ij γ j σ 2 i = o p (1).
By using similar arguments, we find that
1 K i∈[n] j =i P 2 ij M j XX j e i M i e = 1 K i∈[n] j =i P 2 ij η 2 j σ 2 i + o p (1), 1 K i∈[n] j =i P 2 ij M j Xe j X i M i e = 1 K i∈[n] j =i P 2 ij γ j γ i + o p (1), 1 K i∈[n] j =i P 2 ij M j Xe j e i M i X = 1 K i∈[n] j =i P 2 ij γ j γ i + o p (1), 1 K i∈[n] j =i P 2 ij M j XX j X i M i e = 1 K i∈[n] j =i P 2 ij η 2 j γ i + o p (1), 1 K i∈[n] j =i P 2 ij M j XX j e i M i X = 1 K i∈[n] j =i P 2 ij η 2 j γ i + o p (1), 1 K i∈[n] j =i P 2 ij M j Xe j X i M i X = 1 K i∈[n] j =i P 2 ij γ j η 2 i + o p (1), 1 K i∈[n] j =i P 2 ij M j XX j X i M i X = 1 K i∈[n] j =i P 2 ij η 2 j η 2 i + o p (1).
Putting these results together, we obtain
Φ 12 (β 0 ) = Φ 12 + ∆(2Ψ + Φ 13 ) + 3∆ 2 τ + ∆ 3 Υ + o p (1) = Φ 12 (β 0 ) + o p (1).
We use similar arguments to prove the results for Ψ 13 (β 0 ) and τ (β 0 ). For Φ 13 (β 0 ), notice that
1 K i∈[n] j =i P 2 ij M i Xe i (β 0 )M j Xe j (β 0 ) = 1 K i∈[n] j =i P 2 ij M i Xe i M j Xe j + ∆ 1 K i∈[n] j =i P 2 ij (M i Xe i M j XX j + M i XX i M j Xe j ) + ∆ 2 1 K i∈[n] j =i P 2 ij M i XX i M j XX j = 1 K i∈[n] j =i P 2 ij γ i γ j + ∆ 1 K i∈[n] j =i P 2 ij (γ i η 2 j + η 2 i γ j ) + ∆ 2 1 K i∈[n] j =i P 2 ij η 2 i η 2 j + o p (1), which implies that Φ 13 (β 0 ) = Φ 13 + 2∆τ + ∆ 2 Υ + o p (1) = Φ 13 (β 0 ) + o p (1).
Finally, for τ (β 0 ), notice that
1 K i∈[n] j =i P 2 ij X i M i XM j Xe j (β 0 ) = 1 K i∈[n] j =i P 2 ij η 2 i γ j + 1 K i∈[n] j =i P 2 ij η 2 i η 2 j ∆ + o p (1), 1 K i∈[n] ( j =i P ij X j ) 2 e i (β 0 )M i X 2M ii + X i M i e(β 0 ) 2M ii = 1 K i∈[n] j =i P 2 ij η 2 i γ j + 1 K i∈[n] j =i P 2 ij η 2 i η 2 j ∆ + o p (1), which implies that τ (β 0 ) = τ + ∆Υ + o p (1) = τ (β 0 ) + o p (1).
This completes the proof of the theorem.
N Proof of Lemma A.1
Let p n = max i P ii . We first give some useful bounds:
i∈[n] ω 2 i ≤ C max i P 1/2 ii Π Π = Cp 1/2 n Π Π, max i∈[n] ω 2 i = max i∈[n] ( j =i P ij Π j ) 2 ≤ max i∈[n] ( j =i P 2 ij )Π Π ≤ p n Π Π, which imply i∈[n] ω 4 i ≤ max i∈[n] ω 2 i ( i∈[n] ω 2 i ) ≤ Cp n (Π Π) 2 .
First, we show that Mikusheva and Sun (2022, Lemma S2.1) hold under our conditions following the lines of argument in their proof. More specifically, we notice that to show ∆ 2 |EA 2 | = o(1),
where A 2 is defined in the proof of Mikusheva and Sun (2022, Lemme S2.1), it suffices to show the following terms are o(1):
C∆ 2 K i∈[n] j =i P 2 ij |λ i ||Π j | ≤ C∆ 2 K i∈[n] P ii λ 2 i 1/2 j∈[n] P jj Π 2 j 1/2 ≤ C∆ 2 K p n λ λ 1/2 Π Π 1/2 ≤ C∆ 2 K 3/2 p n Π Π = o(1) by λ λ ≤ C Π Π K , C∆ 2 K i∈[n] j =i P 2 ij |Π i ||Π j | ≤ C∆ 2 K i∈[n] P ii Π 2 i 1/2 j∈[n] P jj Π 2 j 1/2 ≤ C∆ 2 K p n Π Π = o(1).
Then, we prove the variance of ∆ 2 A 2 = o(1) by showing that
C∆ 4 K 2 i∈[n] j∈[n] P 4 ij λ 2 i λ 2 j ≤ C∆ 4 K 2 p 2 n λ λ 2 ≤ C∆ 4 K 2 p 2 n Π Π K 2 = o(1) by P 2 ij ≤ P ii , C∆ 4 K 2 i∈[n] λ 2 i j∈[n] P 2 ij Π Π + λ λ j∈[n] P jj |Π j | 2 ≤ C∆ 4 K 2 p n (λ λ)(Π Π) + (λ λ)(p n K)(Π Π)
≤ C∆ 4 K 3 p n (Π Π) 2 + p n K(Π Π) 2 = o(1) by
P 2 ij |Π i λ i Π j |P 2 i j |Π i λ i Π j | k∈[n] |M jk M j k | ≤ C K 2 i∈[n] j∈[n] P 2 ij Π 2 i λ 2 i i∈[n] j∈[n] P 2 ij Π 2 j ≤ C K 2 p 2 n (Π Π)(λ λ) ≤ C K 3 p 2 n Π Π 2 = o(1),
where k∈[n] |M jk M j k | ≤ 1 by Mikusheva and Sun (2022, Lemma S1.1(ii)).
Now we show that Mikusheva and Sun (2022, Lemma S3.2 ) holds under our conditions, i.e.,
1 K n i=1 (ω i + j =i P ij V j ) 2 U i − 1 K n i=1 ω 2 i E[U i ] + 1 K i,j =i P 2 ij E[U i ]η 2 j p −→ 0, (b) 1 K n i=1 (ω i + j =i P ij V j ) 2 ξ 1,i M ii k =j P ik ξ 2,k p −→ 0, (c) 1 K n i=1 (ω i + j =i P ij V j ) 2 a i ξ 1,i p −→ 0, (d) 1 K n i=1 (ω i + j =i P ij V j ) 2 a i M ii k =i P ik ξ 1,k − 2 K n i=1 j =i P 2 ij ω i a i M ii E[V j ξ 1,j ] p −→ 0, (e) 1 K n i=1 (ω i + j =i P ij V j ) 2 Π i λ i M ii p −→ 0,(a)
where ξ 1,i , ξ 2,i stay for either e i or V i , U i stay for e 2 i , e i V i , or V 2 i , and a i stay for either Π i or λ i M ii . To prove statement (a), following the arguments in Mikusheva and Sun (2022), we just need to show the following terms are o(1): where we have used max i∈[n] ω 2 i ≤ p n Π Π, i∈[n] ω 2 i ≤ Cp 1/2 n Π Π, and Mikusheva and Sun (2022, Lemma S1.3(b)).
E 1 K i∈[n] ω 2 i U i 2 ≤ C K 2 i∈
To prove statement (b), we show that C K 2 i∈[n] j =i (P 2 ij ω 4 i + P 2 ij w 2 i w 2 j + P 4 ij w 2 i + P 4 ij |ω i ω j |) |Π i | ≤ C, C K 2 i∈[n] j =i P 4 ij a 2 i + |a i | |a j | ≤ C K 2 p 2 n a a + p 2 n a a = o(1), C K 2 i∈[n] j =i P 2 ij (ω 2 i a 2 i + |ω i a i ||ω j a j |) ≤ C K 2 p 2 n (Π Π)(a a) + p 2 n (Π Π)(a a) = o(1).
To prove statement (d), we first show that
C K 2 i∈[n] ω 2 i |a i | 2 + i∈[n] |ω i a i | 2 = o(1).
In particular, when a i = Π i , we have (1),
C K 2 i∈[n] ω 2 i |Π i | 2 + i∈[n] |ω i Π i | 2 ≤ C K 2 i∈[n] ω 2 i 2 + i∈[n] |ω i Π i | 2 ≤ C K 2 p 1/2 n Π Π 2 + i∈[n] ω 2 i Π Π ≤ C K 2 p n (Π Π) 2 + p 1/2 n (Π Π) 2 = oWhen a i = λ i M ii , we have C K 2 i∈[n] ω 2 i λ i M ii 2 + i∈[n] ω i λ i M ii 2 ≤ C K 2 i∈[n] ω 4 i (λ λ) + i∈[n] ω 2 i (λ λ) ≤ C K 2 p n (Π Π) 2 (λ λ) + p 1/2 n (Π Π)(λ λ) = o(1).
Furthermore, we can show that To prove statement (e), we show that
C K i∈[n] ω 2 i Π i λ i M ii ≤ C K i∈[n] ω 2 i λ i M ii ≤ C K i∈[n]
ω 4 i 1/2 λ λ 1/2 ≤ C K p 1/2 n (Π Π)(λ λ) 1/2 = o(1),
C K 2 j∈[n] i =j P ij ω i Π i λ i M ii 2 ≤ C K 2 j∈[n] i =j |P ij ||ω i ||λ i | 2 ≤ C K 2 j∈[n] i =j ω 2 i i =j P 2 ij λ 2 i ≤ CKp 1/2 n Π Πλ λ K 2 = o(1), C K 2 j∈[n] i =j P 2 ij Π i λ i M ii 2 ≤ C K 2 j∈[n] i =j P 2 ij |λ i | 2 ≤ CKp n λ λ K 2 = o(1), C K j∈[n] i =j P 2 ij Π i λ i M ii ≤ C K i∈n j∈[n]
P 2 ij |Π i λ i | ≤ C K p n (Π Π) 1/2 (λ λ) 1/2 = o(1),
C K 2 j∈[n] k =j i =j,k P 2 ij P 2 ik Π i λ i M ii 2 ≤ C K 2 j∈[n] k =j i =j,k P 2 ij P 2 ik |λ i | 2 ≤ C K 2 j∈[n] k =j i =j,k P 4 ij P 4 ik λ λ ≤ Cp 3 n Kλ λ K 2 = o(1),
where we have used Mikusheva and Sun (2022, Lemma S1.1(ii)).
Finally, we can show that Mikusheva and Sun (2022, Lemma S3.1) also holds under our conditions by using similar arguments. We omit the details for brevity.
has a power advantage under strong identification against local alternatives, it may lack power under weak identification against distant alternatives if the degree of endogeneity is low. Furthermore, LM * (β 0 ) may not have power if ∆ = ∆ * (β 0 ). We notice that such a power issue of LM * (β 0 ) is similar to that of the tests based on the K statistic introduced byKleibergen (2002Kleibergen ( , 2005 under the framework of a fixed number of instruments. Under such a framework, the K statistic is efficient under strong identification against local alternatives but may have a non-monotonic power function under weak identification (e.g., see the discussions in Section 3.1 of I.Andrews(2016)).
Theorem 4. 3 .
3Suppose Assumption 2 holds, and (Q e(β 0 )e(β 0 )
Figure 1 :
1Power Curve for ρ = 0.2
Figure 2 :Figure 3 :
23Power Power Curve for ρ = 0.7 where Q i , C i , P i are individual i's quarter of birth (QOB), year of birth (YOB) and place of birth (POB), respectively, so that there are 180 instruments. Note that the dummy with q = 1 and c = 30 is omitted in Z i . We denoteỸ i as income,X i as the highest grade completed, andW i as the full set of YOB-POB interactions; that is,
Figure 4 :
4Power Curve for ρ = 0.9
3. AR: the one-sided jackknife AR test with the cross-fit variance estimator proposed by Mikusheva and Sun (2022). 4. LM CF:Matsushita and Otsu's (2021) jackknife LM test, but with a cross-fit variance estimator (details are given in Section A.2 in the Online Supplement).5. 2-step: Mikusheva and Sun's (2022) two-step estimator in which the overall size is set at 5%. 6. LM * : LM * test defined in this paper. 7. LM MO:Matsushita and Otsu's (2021) original jackknife LM test.Figures 5 and 6 plot the power curves of the aforementioned tests. We can make six observations.
Figure 5 :
5Power Curve for DGP 1
Figure 6 :
6Power Curve for DGP 26 Empirical ApplicationIn this section, we consider the linear IV regressions with the specification underlyingAngrist and Krueger (1991, Table VII, column (6)), using the full original dataset. 9 The outcome variable Y and endogenous variable X are log weekly wages and schooling, respectively. We follow Angrist and Krueger(1991)and focus on two specifications with 180 and 1,530 instruments. The 180 instruments include 30 quarter and year of birth interactions (QOB-YOB) and 150 quarter and place of birth interactions (QOB-POB). For the second specification with 1530 instruments, we also include full interactions among QOB-YOB-POB. The exogenous control variables have been partialled out from the outcome and endogenous variables. More details of the empirical application are given in Section C in the Online Supplement. The considered tests are similar to those in the previous section.
Figure 7 :
7Average Values of a for DGP 1
Figure 8 :
8Average Values of a for DGP 2
admissible under weak identification among some class of tests, and (3) uniformly most powerful among sign-invariant tests under strong identification against local alternatives. Simulation experiments confirm the good power properties of the jackknife CLC test.
et al. (2021) are equal to Chao et al.'s (2012) estimators with their residualê i replaced by e i (β 0 ).
Second, Assumption 5 allows for weak identification when Π Π/ √ K → c for a constant c. It also allows for strong identification when Π Π/ √ K → ∞. In this case, if K/n → 0 so that p n = o(1),we allow Π Π/K → c for a positive constant c. Otherwise, if K is proportional to n, Assumption 5 requires Π Π/K → 0. Sucha restriction is needed because Assumption 2 includes the case of fixed alternatives (i.e., fixed ∆ = 0), which is not considered in Crudu et al. (2021) and Chao et al. (2012). Theorem A.1. Suppose Assumptions 4 and 5 hold. Then Assumption 2 holds for Crudu et al.'s (2021) estimators defined above. A.2 Cross-Fit Estimators Let M = I − P , M ij be the (i, j) element of M , M i be the ith row of M , and P 2 ij = P 2 ij M ii M jj +M 2 ij .
c is a dummy variable indicating whether the individual was born in year c = {30, 31, ..., 39}, while QOB i,j is a dummy variable indicating whether the individual was born in quarter-of-birth j ∈ {1, 2, 3, 4}. P OB i,s is the dummy variable indicating whether the individual was born in state s ∈ {51 states}. 12 The coefficient β is the return to education. We vary this β across 10,000 equidistant grid-points from -0.5 to 0.5 (i.e., β ∈ {−0.5, −4.9999, −4.9998, ..., 0, ..., 4.9999, 0.5}) and
i) is a direct consequence of Marden (1982, Theorem 2.1) because the acceptance region A = {(A, B) : s 1 A 2 + s 2 B 2 ≤ C α (a 1 , a 2 ; ρ(β 0 ))} is closed, convex, and monotone decreasing in the sense that if (A, B) ∈ A and A ≤ A, B ≤ B, then (A , B ) ∈ A.
Theorem 4.2. Suppose that Assumptions 1 and 2 hold. Further suppose that we are under strong identification and local alternatives as described in Lemma 2.1. Then, for s ∈ {pp, krs}, we have
Table 1reports the 95% CIs by inverting the corresponding 5% tests mentioned above for the parameter space B = [−0.5, 0.5]. Note all CIs except JIVE-t are robust to weak identification. As F 's are higher than 4.14 in both cases, the JIVE-t (5%) has theStock and Yogo (2005b)-type guarantee with at most a 5% size distortion (i.e., the overall size isless than 10%).
jackknife AR jackknife LM
JIVE-t
Two-step
pp
krs
(5%)
(5%)
(5%)
(5%)
(5%)
(5%)
180 IVs
[0.008,0.201] [0.067,0.135] [0.066,0.132] [0.059,0.139] [0.067,0.128] [0.067,0.128]
1530 IVs [-0.035,0.22] [0.036,0.138] [0.035,0.133] [-0.051,0.242] [0.037,0.133] [0.037,0.133]
Table 1: Confidence Intervals
First, note that Σ_{j∈[n]} Σ_{k≠j} M²_jk = Σ_{j∈[n]} Σ_{k≠j} P²_jk ≤ Σ_{j∈[n]} P_jj = K, and P²_ij ≤ P_ii ≤ p_n. Second, we show that Mikusheva and Sun (2022, Lemma S2.2) holds under our conditions. Notice that |∆EA_1| = o(1) because

(C∆⁴/K²) ( Σ_{i∈[n]} Σ_{j∈[n]} P²_ij |Π_i Π_j| )² ≤ (C∆⁴/K²) ( Σ_{i∈[n]} P_ii Π²_i ) ( Σ_{j∈[n]} P_jj Π²_j ) ≤ (C∆⁴/K²) p²_n (Π′Π)² = o(1),

and

(C∆⁴/K²) Σ_{j∈[n]} Σ_{k∈[n]} ( Σ_{i∈[n]} P²_ij M_jk λ_i Π_i )²
  = (C∆⁴/K²) Σ_{j∈[n]} Σ_{k≠j} ( Σ_{i∈[n]} P²_ij M_jk λ_i Π_i )² + (C∆⁴/K²) Σ_{j∈[n]} ( Σ_{i∈[n]} P²_ij M_jj λ_i Π_i )²
  ≤ (C∆⁴/K²) Σ_{j∈[n]} Σ_{k≠j} M²_jk ( Σ_{i∈[n]} P_ii λ²_i ) ( Σ_{i∈[n]} P_ii Π²_i ) + (C∆⁴/K²) Σ_{j∈[n]} ( Σ_{i∈[n]} P²_ij |λ_i| )²
  ≤ (C∆⁴/K²) K p²_n (λ′λ)(Π′Π) + (C∆⁴/K²) Σ_{j∈[n]} ( Σ_{i∈[n]} P⁴_ij ) (λ′λ)
  ≤ C∆⁴ K p²_n (Π′Π)² / K² + C∆⁴ p_n K (Π′Π) / K² = o(1),

by Σ_{j∈[n]} Σ_{k≠j} M²_jk = Σ_{j∈[n]} Σ_{k≠j} P²_jk ≤ K. Then, we show that the variance of ∆A_1 is o(1) by showing the following terms are o(1):

(C|∆|/K) Σ_{i∈[n]} Σ_{j≠i} P²_ij |Π_i| ≤ (C|∆|/K) ( Σ_{i∈[n]} P²_ii )^{1/2} (Π′Π)^{1/2} ≤ (C|∆|/K) (p_n K)^{1/2} (Π′Π)^{1/2} = o(1),

(C∆²/K²) [ Σ_{i,j∈[n]} P²_ij λ²_i + Σ_{i,j∈[n]} P²_ij |λ_i||λ_j| ] ≤ (C∆²/K²) [ p_n (λ′λ) + p_n (λ′λ) ] = o(1),

(C∆²/K²) [ Σ_{i,j∈[n]} P⁴_ij (λ²_i + |λ_i||λ_j|) + Σ_{i,j∈[n]} P²_ij λ²_i ] ≤ (C∆²/K²) [ p²_n (λ′λ) + p²_n (λ′λ) + p_n (λ′λ) ] = o(1),

(C∆²/K²) [ Σ_{i,k∈[n]} P²_ik |λ_i||λ_k| + Σ_{i,j∈[n]} P²_ij |λ_i||λ_j| ] ≤ (C∆²/K²) [ p_n (λ′λ) + p_n (λ′λ) ] = o(1),

(C∆²/K²) Σ_{j∈[n]} ( Σ_{i∈[n]} P²_ij |λ_i| )² ≤ (C∆²/K²) Σ_{j∈[n]} ( Σ_{i∈[n]} P⁴_ij ) (λ′λ) ≤ (C∆²/K²) (p_n K)(λ′λ) = o(1),

(C∆²/K²) Σ_{j∈[n]} ( Σ_{i∈[n]} P²_ij |Π_i| )² ≤ (C∆²/K²) Σ_{j∈[n]} ( Σ_{i∈[n]} P⁴_ij ) (Π′Π) ≤ (C∆²/K²) (p_n K)(Π′Π) = o(1),

(C∆²/K²) Σ_{j∈[n]} Σ_{k∈[n]} ( Σ_{i∈[n]} P²_ij M_jk Π_i )²
  = (C∆²/K²) Σ_{j∈[n]} Σ_{k≠j} ( Σ_{i∈[n]} P²_ij M_jk Π_i )² + (C∆²/K²) Σ_{j∈[n]} ( Σ_{i∈[n]} P²_ij M_jj Π_i )²
  ≤ (C∆²/K²) Σ_{j∈[n]} Σ_{k≠j} M²_jk ( Σ_{i∈[n]} P⁴_ij ) (Π′Π) + (C∆²/K²) Σ_{j∈[n]} Σ_{i∈[n]} P⁴_ij (Π′Π)
  ≤ (C∆²/K²) K p²_n (Π′Π) + (C∆²/K²) K p_n (Π′Π) = o(1),

(C∆²/K²) Σ_{j∈[n]} Σ_{k∈[n]} ( Σ_{i∈[n]} P²_ij ) ( Σ_{i∈[n]} P²_ik ) ≤ (C∆²/K²) K p_n (Π′Π) = o(1),

(C∆²/K²) Σ_{i,j∈[n]} ( P²_ij |Π_i| )² ≤ (C∆²/K²) (p_n K)(Π′Π) = o(1).

Then, to show that Mikusheva and Sun (2022, Lemma 3) holds under our conditions, we show the following terms are o(1):

(C/K) Σ_{i∈[n]} Σ_{j≠i} P²_ij |Π_i λ_i Π_j λ_j| ≤ (C/K) ( Σ_{i,j∈[n]} P²_ij Π²_i Π²_j )^{1/2} ( Σ_{i,j∈[n]} P²_ij λ²_i λ²_j )^{1/2} ≤ (C/K) p_n (Π′Π)(λ′λ) ≤ (C/K²) p_n (Π′Π)² = o(1),

(C/K²) Σ_{j∈[n]} ( Σ_{i∈[n]} P²_ij |Π_i||λ_i| )² λ²_j ≤ (C/K²) Σ_{j∈[n]} p_n ( Σ_{i∈[n]} P²_ij Π²_i ) λ²_j ≤ (C/K²) p²_n (Π′Π) (Π′Π/K)² = o(1),

where we have used Σ_{i∈[n]} ω²_i ≤ C p_n^{1/2} Π′Π and Σ_{i∈[n]} ω⁴_i ≤ C p_n (Π′Π)². To prove statement (c), we show that, for a_i = Π_i or λ_i/M_ii, the following terms are o(1):

(C/K²) [ p_n Σ_{i∈[n]} ω⁴_i + ( Σ_{i∈[n]} P_ii ω⁴_i )^{1/2} ( Σ_{j∈[n]} P_jj ω⁴_j )^{1/2} + Σ_{i∈[n]} P_ii ω²_i p_n + p_n ( Σ_{i∈[n]} P_ii ω²_i )^{1/2} ( Σ_{j∈[n]} P_jj ω²_j )^{1/2} ]
  ≤ (C/K²) [ p²_n (Π′Π)² + p²_n (Π′Π)² + p_n^{5/2} (Π′Π) + p_n^{5/2} (Π′Π) ] = o(1),

(C/K²) [ Σ_{i∈[n]} ω²_i + Σ_{i,j∈[n]} P²_ij |ω_i ω_j| ] ≤ (C/K²) [ p_n^{1/2} Π′Π + p_n^{3/2} Π′Π ] = o(1),

(C/K²) [ Σ_{i∈[n]} P²_ii a²_i + Σ_{i,j∈[n]} P²_ij |a_i a_j| ] ≤ (C/K²) [ p²_n (a′a) + p_n (a′a) ] = o(1),

(C/K²) Σ_{i∈[n]} ω⁴_i λ²_i / M²_ii ≤ (C/K²) ( max_{i∈[n]} ω²_i )² Σ_{i∈[n]} λ²_i ≤ C p²_n (Π′Π/K)³ = o(1),

(C/K²) Σ_{i∈[n]} ω⁴_i Π²_i ≤ (C/K²) Σ_{i∈[n]} ω⁴_i ≤ (C/K²) p_n (Π′Π)² = o(1).
When there are control variables and we partial them out from both Y and X, the residuals for Y and X are not exactly independent. However, all the analyses in this paper are still valid because they only require (2.2), which still holds for the residuals of Y and X.
We suppress the dependence of m_1(∆) and m_2(∆) on γ(β_0) and C for notational simplicity.
Specifically, we say the data are homoskedastic if (σ_i, γ_i, η_i) defined after (A.1) in Section A of the Online Supplement are constant across i.
Under fixed alternatives, ρ̄ = ρ(β_0); under local alternatives, ρ̄ = ρ.
Anatolyev, S. and N. Gospodinov (2011). Specification testing in models with many instruments. Econometric Theory 27 (2), 427-441.
Anatolyev, S. and A. Mikusheva (2022). Factor models with many assets: strong factors, weak factors, and the two-pass procedure. Journal of Econometrics 229 (1), 103-126.
Anderson, T., N. Kunitomo, and Y. Matsushita (2010). On the asymptotic optimality of the liml estimator with possibly many instruments. Journal of Econometrics 157 (2), 191-204.
Andrews, D. W. and X. Cheng (2012). Estimation and inference with weak, semi-strong, and strong identification. Econometrica 80 (5), 2153-2211.
Andrews, D. W. and P. Guggenberger (2019). Identification- and singularity-robust inference for moment condition models. Quantitative Economics 10 (4), 1703-1746.
Andrews, D. W. K. and J. H. Stock (2007). Testing with many weak instruments. Journal of Econometrics 138 (1), 24-46.
Andrews, I. (2016). Conditional linear combination tests for weakly identified models. Econometrica 84 (6), 2155-2182.
Andrews, I. (2018). Valid two-step identification-robust confidence sets for GMM. Review of Economics and Statistics 100 (2), 337-348.
Andrews, I. and A. Mikusheva (2016). Conditional inference with a functional nuisance parameter. Econometrica 84 (4), 1571-1612.
Andrews, I., J. H. Stock, and L. Sun (2019). Weak instruments in instrumental variables regression: Theory and practice. Annual Review of Economics 11 (1), 727-753.
Angrist, J. and B. Frandsen (2022). Machine labor. Journal of Labor Economics 40 (S1), S97-S140.
Angrist, J., G. Imbens, and A. Krueger (1999). Jackknife instrumental variables estimates. Journal of Applied Econometrics 14 (1), 57-67.
Angrist, J. D. and A. B. Krueger (1991). Does compulsory school attendance affect schooling and earning? Quarterly Journal of Economics 106 (4), 979-1014.
Athey, S., J. Tibshirani, and S. Wager (2019). Generalized random forests. The Annals of Statistics 47 (2), 1148-1178.
Bartik, T. J. (1991). Who benefits from state and local economic development policies? Kalamazoo, MI: WE Upjohn Institute for Employment Research.
Bekker, P. (1994). Alternative approximations to the distributions of instrumental variable estimators. Econometrica 62 (3), 657-681.
Belloni, A., D. Chen, V. Chernozhukov, and C. Hansen (2012). Sparse models and methods for optimal instruments with an application to eminent domain. Econometrica 80 (6), 2369-2429.
Belloni, A., V. Chernozhukov, and C. Hansen (2011). Inference for high-dimensional sparse econometric models. arXiv preprint arXiv:1201.0220.
Blanchard, O. J., L. F. Katz, R. E. Hall, and B. Eichengreen (1992). Regional evolutions. Brookings Papers on Economic Activity 1992 (1), 1-75.
Carrasco, M. (2012). A regularization approach to the many instruments problem. Journal of Econometrics 170 (2), 383-398.
Carrasco, M. and G. Tchuente (2015). Regularized liml for many instruments. Journal of Econometrics 186 (2), 427-442.
Chamberlain, G. and G. Imbens (2004). Random effects estimators with many instrumental variables. Econometrica 72 (1), 295-306.
Chao, J. C. and N. R. Swanson (2005). Consistent Estimation with a Large Number of Weak Instruments. Econometrica 73 (5), 1673-1692.
Chao, J. C., N. R. Swanson, J. A. Hausman, W. K. Newey, and T. Woutersen (2012). Asymptotic distribution of jive in a heteroskedastic iv regression with many instruments. Econometric Theory 28 (1), 42-86.
Crudu, F., G. Mellace, and Z. Sándor (2021). Inference in instrumental variable models with heteroskedasticity and many instruments. Econometric Theory 37 (2), 281-310.
Dobbie, W., J. Goldin, and C. S. Yang (2018). The effects of pretrial detention on conviction, future crime, and employment: Evidence from randomly assigned judges. American Economic Review 108 (2), 201-40.
Donald, S. G. and W. K. Newey (2001). Choosing the number of instruments. Econometrica 69 (5), 1161-1191.
Fama, E. F. and J. D. MacBeth (1973). Risk, return, and equilibrium: Empirical tests. Journal of Political Economy 81 (3), 607-636.
Fuller, W. A. (1977). Some properties of a modification of the limited information estimator. Econometrica 45 (4), 939-953.
Goldsmith-Pinkham, P., I. Sorkin, and H. Swift (2020). Bartik instruments: What, when, why, and how. American Economic Review 110 (8), 2586-2624.
Han, C. and P. C. Phillips (2006). Gmm with many moment conditions. Econometrica 74 (1), 147-192.
Hansen, C., J. Hausman, and W. Newey (2008). Estimation with many instrumental variables. Journal of Business and Economic Statistics 26 (4), 398-422.
Hansen, C. and D. Kozbur (2014). Instrumental variables estimation with many weak instruments using regularized jive. Journal of Econometrics 182 (2), 290-308.
Hausman, J. A., W. K. Newey, T. Woutersen, J. C. Chao, and N. R. Swanson (2012). Instrumental variable estimation with heteroskedasticity and many instruments. Quantitative Economics 3 (2), 211-255.
Kleibergen, F. (2002). Pivotal statistics for testing structural parameters in instrumental variables regression. Econometrica 70 (5), 1781-1803.
Kleibergen, F. (2005). Testing parameters in GMM without assuming that they are identified. Econometrica 73 (4), 1103-1124.
Kline, P., R. Saggio, and M. Sølvsten (2020). Leave-out estimation of variance components. Econometrica 88 (5), 1859-1898.
Kolesár, M. (2018). Minimum distance approach to inference with many instruments. Journal of Econometrics 204 (1), 86-100.
Kubokawa, T., C. P. Robert, and A. M. E. Saleh (1993). Estimation of noncentrality parameters. The Canadian Journal of Statistics 21 (1), 45-57.
Kuersteiner, G. and R. Okui (2010). Constructing optimal instruments by first-stage prediction averaging. Econometrica 78 (2), 697-718.
Kunitomo, N. (1980). Asymptotic expansions of the distributions of estimators in a linear functional relationship and simultaneous equations. Journal of the American Statistical Association 75 (371), 693-700.
Lee, D. S., J. McCrary, M. J. Moreira, and J. R. Porter (2022). Valid t-ratio inference for iv. American Economic Review forthcoming.
Lehmann, E. L. and J. P. Romano (2006). Testing statistical hypotheses. Springer Science & Business Media.
Maestas, N., K. J. Mullen, and A. Strand (2013). Does disability insurance receipt discourage work? using examiner assignment to estimate causal effects of ssdi receipt. American Economic Review 103 (5), 1797-1829.
Marden, J. I. (1982). Combining independent noncentral chi squared or f tests. The Annals of Statistics 10 (1), 266-277.
Matsushita, Y. and T. Otsu (2020). Jackknife Lagrange multiplier test with many weak instruments. LSE, STICERD.
Matsushita, Y. and T. Otsu (2021). Jackknife empirical likelihood: small bandwidth, sparse network and high-dimensional asymptotics. Biometrika 108 (3), 661-674.
Mikusheva, A. and L. Sun (2022). Inference with many weak instruments. Review of Economic Studies forthcoming.
Moreira, H. and M. J. Moreira (2019). Optimal two-sided tests for instrumental variables regression with heteroskedastic and autocorrelated errors. Journal of Econometrics 213 (2), 398-433.
Moreira, M. J. (2003). A conditional likelihood ratio test for structural models. Econometrica 71 (4), 1027-1048.
Morimune, K. (1983). Approximate distributions of k-class estimators when the degree of overidentifiability is large compared with the sample size. Econometrica 51 (3), 821-841.
Newey, W. K. and F. Windmeijer (2009). Generalized method of moments with many weak moment conditions. Econometrica 77 (3), 687-719.
Okui, R. (2011). Instrumental variable estimation in the presence of many moment conditions. Journal of Econometrics 165 (1), 70-86.
Sampat, B. and H. L. Williams (2019). How do patents affect follow-on innovation? evidence from the human genome. American Economic Review 109 (1), 203-36.
Shanken, J. (1992). On the estimation of beta-pricing models. The Review of Financial Studies 5 (1), 1-33.
Sølvsten, M. (2020). Robust estimation with many instruments. Journal of Econometrics 214 (2), 495-512.
Stock, J. and M. Yogo (2005a). Asymptotic distributions of instrumental variables statistics with many instruments, Volume 6. Chapter.
Stock, J. H. and J. H. Wright (2000). GMM with weak identification. Econometrica 68 (5), 1055-1096.
Stock, J. H. and M. Yogo (2005b). Testing for weak instruments in linear IV regression. In D. W. Andrews and J. H. Stock (Eds.), Identification and Inference for Econometric Models: Essays in Honor of Thomas Rothenberg, Chapter 6, pp. 80-108. Cambridge, U.K.: Cambridge University Press.
Wang, W. and M. Kaffo (2016). Bootstrap inference for instrumental variable models with many weak instruments. Journal of Econometrics 192 (1), 231-268.
| []
|
[
"Contextual Mixture of Experts: Integrating Knowledge into Predictive Modeling",
"Contextual Mixture of Experts: Integrating Knowledge into Predictive Modeling"
]
| [
"Francisco Souza ",
"Tim Offermans ",
"Ruud Barendse ",
"Geert Postma ",
"Jeroen Jansen "
]
| []
| []
| This work proposes a new data-driven model devised to integrate process knowledge into its structure to increase the human-machine synergy in the process industry. The proposed Contextual Mixture of Experts (cMoE) explicitly uses process knowledge along the model learning stage to mold the historical data to represent operators' context related to the process through possibility distributions. This model was evaluated in two real case studies for quality prediction, including a sulfur recovery unit and a polymerization process. The contextual mixture of experts was employed to represent different contexts in both experiments. The results indicate that integrating process knowledge has increased predictive performance while improving interpretability by providing insights into the variables affecting the process's different regimes. | 10.1109/tii.2022.3224973 | [
"https://export.arxiv.org/pdf/2211.00558v1.pdf"
]
| 253,244,324 | 2211.00558 | 31b8c3be203141c166ea508efc70a755b53a19d2 |
Contextual Mixture of Experts: Integrating Knowledge into Predictive Modeling
Francisco Souza
Tim Offermans
Ruud Barendse
Geert Postma
Jeroen Jansen
Contextual Mixture of Experts: Integrating Knowledge into Predictive Modeling
Index Terms—soft sensors, mixture of experts, multimode processes, process knowledge, possibility distribution
I. INTRODUCTION
There is an increasing demand for industrial digitization toward a more sustainable and greener industrial future. Artificial intelligence (AI) is at the forefront of the 4th industrial revolution, redefining decision-making at the operational and technical levels and allowing faster, data-driven, and automatic decision-making along the value chain. Also, with further growth in industrial data infrastructure, many companies are implementing data-driven predictive models to improve energy efficiency and industrial sustainability. This can reduce production costs and environmental impact while increasing process efficiency. In parallel, the new upcoming industrial revolution (Industry 5.0) aims to leverage human knowledge and decision-making abilities by strengthening the cooperation between humans and machines [1]. This new revolution will require data-driven models to be explainable, providing insights into the process to gain the operator's trust and increase synergy. The human-machine synergy can be further enhanced by incorporating operator domain knowledge and process information into the data-driven models.
Process information can come from various sources, including first-principle equations and process-specific characteristics [2], such as process division structure or multiple operating modes. First-principle models can be combined with data-driven models within the hybrid AI models framework [3] or within the informed machine learning framework [4].
For the process division structure, multiblock modeling is a common approach for retaining the explainability of multi-stage processes within a multiblock representation [2]. Multiple operating modes can be caused by a change in feedstock, operation, or seasonality (a.k.a. multimode processes), or by the sequence of phases comprising a batch cycle production (a.k.a. multiphase processes) [5]. These modes can be represented in a multi-model (ensemble) structure [6]. This work focuses on modeling processes with multiple operating modes, and the proposed method was created with this in mind. However, the proposed method is flexible enough to be applied to other types of processes where process expert knowledge is available.
The modeling of processes with multiple operating modes follows from rule-based expert systems [7]-[10], clustering [11], mixture models (MM) [12]-[14], Gaussian mixture regression (GMR) [15], [16], or mixture of experts (MoE) [6], [17] strategies to identify the groups that represent each operational regime, and then combine them according to the process's regime. Apart from rule-based expert models, none of the above works discuss using domain knowledge from operators. In practice, such methods do not attempt to represent process characteristics; rather, the goal is to minimize prediction error, and in some cases, domain knowledge is used only to initialize the model structure. As a result, despite being accurate, the model becomes unrepresentative of the process, making it impossible to understand the effects of the variables in the various regimes of the process. In fact, the rich process context available from operators can be valuable to the model. If a model can reflect the operator's knowledge, it can play an important role in model acceptance in the industry. This paper proposes the contextual mixture of experts (cMoE), a new data-driven model that connects the process's expert domain knowledge (process context) to the predictive model. The cMoE gives a holistic perspective to the data-driven model while still adhering to process correlation and representing the operators' process context. The cMoE is composed of a set of expert and gate models, where each expert/gate is designed to represent a specific context of the process, employing possibility distributions. The gates represent the operators' process context in the model by defining the boundaries of each contextual region. These three components, experts, gates, and the operator's contextual information, form the basis of cMoE. The training procedure in cMoE uses a learning approach devised to assure that each expert represents the defined context and, at the same time, generalizes well for unseen data. To allow model interpretability, the gate and expert models are linear models. Also, an ℓ1 regularization penalty is applied to the gates and experts learning for a parsimonious representation of each context model.
The application of regularization in MoE with linear base models is not new, and it has been used for dealing with high-dimensional settings and for feature selection. For example, [18] investigates the ℓ1 penalty and smoothly clipped absolute deviation (SCAD) [19] penalties for feature selection in MoE. In addition, [20] proposed using the ℓ1 penalty for MoE in classification applications. The authors in [21], [22] investigate the theoretical aspects of MoE with ℓ1 penalties. In order to avoid instability in the learning of gating coefficients (typically followed by a softmax function), the authors in [23] proposed the use of a proximal-Newton expectation-maximization (EM). The elastic-net penalty was used by the authors of [6], who employed a regularization penalty for the inverse of the Hessian matrix along the gates learning. In the proposed cMoE, the ℓ1 penalty is employed to promote sparsity in the gates and experts. The solution for the gates and experts follows from coordinate gradient descent together with the EM framework, as used in [6]. However, there are two significant differences between the method presented here and the work of [6]. The gate and expert models are chosen based on an estimator of the leave-one-out cross-validation (LOOCV) error. The cMoE's performance is assessed based on an estimated LOOCV, and this is used in the EM algorithm as a stop condition and for model selection. Unlike [6], where the regularized Hessian can lead to unstable results, the learning of the gates in the cMoE is based on the Newton update, with a step-size parameter added to control the learning rate and increase model stability.
The proposed method is evaluated in two experiments. The first one is the sulfur recovery unit (SRU) described in [24], where the goal is to predict the H₂S at the SRU's output stream. The operators in that study are more interested in the H₂S peaks as they are related to the undesirable behavior of the SRU unit. The cMoE is then applied for predicting H₂S with separate representations for the peaks and non-peaks components, allowing the identification of the causes of the peaks beyond the prediction of H₂S. The second case study investigates the application of cMoE to estimate the acid number in a multiphase batch polymerization process. The process knowledge is provided as an annotated data source for the phase transitions. The cMoE is then utilized to provide a contextual model for each phase. In all the experiments, the results indicate that incorporating the operator's context into the cMoE gives interpretability and insight into the process and significantly increases the model performance.
This work's main contributions are as follows: i) the use of possibility distributions to represent the operator's expert knowledge; ii) the development of a new mixture model called contextual mixture of experts to incorporate the operator's expert knowledge from the possibility distributions into the model structure; iii) the application of an ℓ1 penalty to the gate and expert coefficients to promote sparse solutions; iv) the development of a leave-one-out cross-validation (LOOCV) error estimator for the experts, the gates, and the cMoE model.
The paper is organized as follows. Section II gives the background for the paper. Section III presents the proposed contextual mixture of experts model. Section IV presents experimental results. Finally, Section VI gives concluding remarks.
II. PRELIMINARIES
A. Notation
In this paper, finite random variables are represented by capital letters and their values by the corresponding lowercase letters, e.g., random variable A and corresponding value a. Matrices and vectors are represented by boldface capital letters, e.g., A = [a_ij]_{N×d}, and boldface lowercase letters, e.g., a = [a_1, ..., a_d]^T, respectively. The input and output/target variables are defined as X = {X_1, ..., X_d} and Y, respectively. The variables X_1, ..., X_d can take N different values as {x_ij ∈ X_j : j = 1, ..., d and i = 1, ..., N}, and similarly for Y as {y_i ∈ Y : i = 1, ..., N}.
B. Mixture of Experts
The MoE is a modeling framework based on the divide and conquer principle. It consists of a set of experts and gates, with the gates defining the boundaries (soft boundaries) and the experts making predictions within the region assigned by the gates. The prediction output of a MoE with C experts is

ŷ(x_i) = Σ_{c=1}^{C} g_c(x_i) ŷ_c(x_i),   (1)
where ŷ_c(x_i) is the expert's predicted output at region c, and g_c(x_i) is the gate function that represents the expert's boundaries at region c. The probability distribution function (PDF) of the MoE is defined as

p(y_i|x_i; Ω) = Σ_{c=1}^{C} g_c(x_i; V) p(y_i|x_i; Θ_c),   (2)

where p(y_i|x_i; Θ_c) is the conditional distribution of expert c with mean ŷ_ci, and σ²_c is the noise variance. The set of expert parameters is defined as Θ_c = {θ_c, σ²_c}. The gates g_c(x_i; V) assign mixture proportions to the experts, with the constraints Σ_{c=1}^{C} g_c(x_i) = 1 and 0 ≤ g_c(x_i) ≤ 1; for simplicity, g_ci = g_c(x_i). The gates typically follow from the softmax function:

g_ci = exp(x_i^T v_c) / Σ_{k=1}^{C} exp(x_i^T v_k),   (3)

where v_c is the parameter vector that governs gate c, and V = {v_1, ..., v_C} is the set of all gate parameters. The collection of all parameters is defined as Ω = {Θ_1, ..., Θ_C, v_1, ..., v_C}.
From the MoE framework, the parameters in Ω are found through the maximization of the log-likelihood

Ω̂ = argmax_{Ω*} L(Ω*),   (4)

where the log-likelihood for N iid samples is defined as L(Ω) = log p(Y|X; Ω) = Σ_{i=1}^{N} log p(y_i|x_i; Ω). The solution of Eq. (4) follows from the expectation-maximization (EM) algorithm, an iterative procedure that maximizes the log-likelihood through successive steps. In EM, p(Y|X; Ω) is treated as the incomplete data distribution. The missing part, the hidden variables Z, are introduced to indicate which expert c was responsible for generating sample i. The complete distribution is given by
p(y_i, z_i|x_i; Ω) = Π_{c=1}^{C} g_c(x_i; V)^{z_ci} p(y_i|x_i; Θ_c)^{z_ci},   (5)

where z_ci ∈ {0, 1} and, for each sample i, all variables z_ci are zero except for a single one. The hidden variable z_ci indicates which expert c was responsible for generating data point i. Let Ω̂^t denote the estimated parameters at iteration t; the EM algorithm increases the log-likelihood at each iteration so that L_C(Ω̂^{t+1}) > L_C(Ω̂^t). It is composed of two main steps, the expectation step (E-step) and the maximization step (M-step).
a) E-step: From an initial guess Ω̂⁰, the expectation of the complete log-likelihood (the Q-function) is computed with respect to the current estimate Ω̂^t:

Q^t(Ω) = E[ L_C(Ω) | Z, Ω̂^t ] = Σ_{i=1}^{N} Σ_{c=1}^{C} γ^t_ci log( g_c(x_i; V) p(y_i|x_i; Θ_c) ),   (6)

where γ^t_ci ≡ E[ z_ci | X, Y, Ω̂^t ] is the posterior distribution of z_ci after observing the data X, Y (called the responsibilities). The responsibilities account for the probability that expert c generated sample i.
b) M-step: In the M-step, the new parameters are found by maximizing the Q-function:

Ω̂^{t+1} = argmax_Ω Q^t(Ω).   (7)
The EM runs until convergence, typically measured by the Q-function.
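As a rough illustration of Eqs. (1) and (3), the following sketch computes the MoE prediction with linear experts and softmax gates. The matrix shapes and variable names are assumptions made for illustration, not part of the original formulation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the expert axis.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def moe_predict(X, V, Theta):
    """MoE prediction of Eq. (1) with linear experts and softmax gates.

    X:     (N, d) input matrix.
    V:     (d, C) gate coefficients, one column per gate.
    Theta: (d, C) expert coefficients, one column per expert.
    """
    G = softmax(X @ V)        # gate probabilities, Eq. (3)
    Y_experts = X @ Theta     # per-expert predictions y_c(x_i)
    return (G * Y_experts).sum(axis=1)
```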
C. Possibility distribution
Possibility theory is a framework for representing uncertain and ambiguous knowledge through possibility distributions [25], [26]. Let Ψ represent a finite set of mutually exclusive events, where the true alternative is unknown. This lack of information on the true event constitutes the uncertainty in representing the true alternative. A possibility distribution, defined on the set Ψ, maps the set of possible events to the unit interval [0, 1], π : Ψ → [0, 1], with at least π(s) = 1 for some s ∈ Ψ, where the function π(s) represents the expert's state of knowledge about the state of the data, and π(s) stands for the belief that event s is the true alternative. The larger π(s), the more plausible the event s is. It can also be interpreted as a "degree of belief"; for example, the "degree of belief" that event s is true is 0.8. It is assumed to be possibilistic rather than probabilistic, so distinct events may simultaneously have a degree of possibility equal to 1.
Given two possibility distributions π_1(s) and π_2(s), the possibility distribution π_1(s) is said to be more specific than π_2(s) iff π_1(s) < π_2(s) ∀s ∈ Ψ. Then, π_1 is at least as restrictive and informative as π_2. In the possibilistic framework, extreme forms of partial knowledge can be captured, namely:
• Complete knowledge: for some s_0, π(s_0) = 1, and π(s) = 0, ∀s ≠ s_0 (only s_0 is possible);
• Complete ignorance: π(s) = 1, ∀s ∈ Ψ (all events are possible).
The more specific π is, the more informative it is. The minimal specificity principle drives possibility theory [27]. It states that any hypothesis not known to be impossible cannot be ruled out. It is a principle of minimal commitment, caution, and information. Essentially, the aim is to maximize possibility degrees while keeping constraints in mind.
III. CONTEXTUAL MIXTURE OF EXPERTS
In this section, the contextual mixture of experts is presented. The first subsection, Sec. III-A, describes the model structure and its goals. Sec. III-B introduces the possibility distributions used in the expert knowledge representation. The learning of the contextual mixture of experts is given in Sec. III-C.
A. The Model
The structure of the contextual mixture of experts is composed of C expert and gate models; this is represented in Fig. 1. The context here refers to any meaningful process data characteristic defined by the analyst or derived from any process information/knowledge. Each context c is encoded by a possibility distribution π_c, which is used to represent the expert's/analyst's uncertain knowledge about the respective context. The analyst inputs each context into the contextual mixture of experts by incorporating the context into the model structure and defining each expert model's expected operating region. Then, each contextual expert model ŷ_c(x_i) is trained on the region of context c and makes predictions based on its domain representation. This contextual approach enables the prediction to be divided into different components representing meaningful contexts specified by the analyst.
The output prediction of cMoE is given by a weighted sum of the experts' outputs, as in Eq. (1). In cMoE, the gate g_c(x_i) is the probability of sample x_i belonging to the region of context c. The expert model ŷ_c(x_i) is trained on the region defined by context c and gives the prediction according to its domain representation. For simplified notation, the contribution of each expert model is defined as

ŷ_i = Σ_{c=1}^{C} h_ci,

where the input x_i is omitted for clarity, h_ci ≜ g_c(x_i) ŷ_c(x_i), and ≜ stands for "defined to be". For example, in a 3-phase batch process, the cMoE is set to have three contexts, each one representing a phase. In such a case, the simplified representation is given by

ŷ = h_phase1 + h_phase2 + h_phase3.

From that, each contextual model can be interpreted separately, or jointly according to the analyst's needs.
B. Expert Process Knowledge Representation
Here, the expert's knowledge for each context is represented by a possibility distribution. For each context c, there is an associated possibility distribution π_c(x_i) (in short, π_ci), where i ∈ Ψ (Ψ represents the set of all available samples). The value of π_ci indicates the degree of belief that sample i pertains to the region of context c. In the case of π_ci = 1, sample i is considered fully possible; if π_ci = 0, sample i is considered completely impossible to be part of context c; any value between these two extremes, π_ci = p, p ∈ ]0, 1[, can be accredited as a partial possibility, with a degree of certainty p, of sample i belonging to context c. Therefore, if π_ci = 1 for all c = 1, ..., C, sample i is believed to belong to all contextual regions (complete ignorance about sample i), whereas if π_ci = 1 for a single c ∈ {1, ..., C}, sample i is accredited as fully certain to belong to context c (complete knowledge about sample i), while being impossible to belong to the other contexts.
Because the reliability of expert knowledge cannot be fully guaranteed for each context, and because reliability is commonly described with some degree of certainty, for example, 80% sure or 70% certain [28, Chapter 2], the possibility distributions used here are intended to account for this uncertainty in representing the expert knowledge of each context. To do so, two possibility distributions are employed in the experiments, the α-Certain distribution and the β-Trapezoidal (fuzzy) distribution, where α and β are the degrees of certainty. a) α-Certain possibility distribution: an imprecise knowledge distribution with a certainty factor α. The available knowledge about the true alternative is expressed as a subset A ⊆ Ψ associated with a certain level of trust α ∈ [0, 1] concerning the occurrence of A. This can be expressed declaratively as "A is certain to degree α". This type of distribution has been suggested in [29]:
π_ci = 1 if i ∈ A, and π_ci = 1 − α otherwise.   (8)
If α = 1, π_ci represents the characteristic function of A; at the other extreme, if α = 0, π_ci represents total ignorance about A. The α-Certain possibility distribution is shown in Fig. 2a. b) β-Trapezoidal epistemic possibility distribution: In the epistemic (fuzzy-like) possibility distribution, each event has a degree of belief associated with the possibility of the event. In the epistemic distribution, the available knowledge about the true alternative is given as a constraint defined in terms of "a fuzzy concept" defined in Ψ. All standard types of membership functions representing fuzzy constraints (i.e., triangular, trapezoidal, Gaussian, etc.) can be applied to define epistemic-type possibility distributions. Here, the trapezoidal function is defined by a lower limit a, an upper limit d, a lower support limit b, and an upper support limit c, where a < b < c < d:
π_ci = max( min( (i − a)/(b − a), 1, (d − i)/(d − c) ), 1 − β ).   (9)

Fig. 2: a) the α-Certain possibility distribution; b) the β-Trapezoidal possibility distribution.
To account for this unreliability, a certainty factor β ∈ [0, 1] is added, where β = 0 means a fully unreliable source and β = 1 means a fully reliable source. The β-Trapezoidal distribution is shown in Fig. 2b. Other possibility distributions also fit in this framework; for more possibility distributions, see [28, Chapter 2].
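The two distributions above are straightforward to implement. The sketch below is one possible realization of Eqs. (8) and (9) in Python/NumPy; the clipping of the trapezoid to [0, 1] outside [a, d] is an implementation assumption.

```python
import numpy as np

def alpha_certain(idx, A, alpha):
    """alpha-Certain possibility distribution of Eq. (8).

    idx: array of sample indices; A: set of indices believed to belong
    to the context; alpha: certainty factor in [0, 1].
    """
    pi = np.full(len(idx), 1.0 - alpha)
    pi[np.isin(idx, list(A))] = 1.0
    return pi

def beta_trapezoidal(idx, a, b, c, d, beta):
    """beta-Trapezoidal possibility distribution of Eq. (9), floored at 1 - beta."""
    idx = np.asarray(idx, dtype=float)
    trap = np.minimum.reduce([(idx - a) / (b - a),
                              np.ones_like(idx),
                              (d - idx) / (d - c)])
    # Clip to [0, 1] outside the support, then apply the certainty floor.
    return np.maximum(np.clip(trap, 0.0, 1.0), 1.0 - beta)
```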
C. The Learning
The goal of cMoE is to integrate the expert knowledge representation (via the possibility distribution) into the model structure. To constrain the contextual information, the parameters in Ω are found through the maximization of the weighted log-likelihood (WLL) of Ω, defined as

L_C(Ω) = Σ_{i=1}^{N} log p(y_i|x_i; Ω)^{π_i},   (10)

where π_i = [π_1i, ..., π_Ci]^T is the contextual weight vector from the expert knowledge and must be specified a priori. The r.h.s. power is defined as p(y_i|x_i; Ω)^{π_i} ≜ Σ_{c=1}^{C} g_c(x_i; V) p(y_i|x_i; Θ_c)^{π_ci}.
The weighted ML estimation of Ω constrains the contextual information into the model structure, laying down the basis of the cMoE framework. The idea is to down-weight samples that have a low degree of belief of belonging to the region of expert c.
The maximization of the WLL L_C(Ω) also follows from the EM algorithm. By inserting the contextual weights, the WLL of the complete data distribution becomes

L_C(Ω) = Σ_{i=1}^{N} log p(y_i, z_i|x_i; Ω)^{π_i},   (11)

where the power on the r.h.s. is defined as p(y_i, z_i|x_i; Ω)^{π_i} ≜ Π_{c=1}^{C} [ g_c(x_i; V)^{z_ci} p(y_i|x_i; Θ_c)^{z_ci} ]^{π_ci}.
Let Ω̂^t denote the estimated parameters at iteration t of the EM algorithm. The expectation (E-step) and maximization (M-step) steps for the WLL are: a) E-step: From an initial guess Ω̂⁰, the expectation of the complete WLL (the Q-function) is computed with respect to the current estimate Ω̂^t:

Q^t(Ω) = Σ_{i=1}^{N} Σ_{c=1}^{C} π_ci γ^t_ci log( g_c(x_i; V) p(y_i|x_i; Θ_c) ),   (12)

where

γ^t_ci = π_ci g_c(x_i; V̂^t) p(y_i|x_i; Θ̂^t_c) / Σ_{k=1}^{C} π_ki g_k(x_i; V̂^t) p(y_i|x_i; Θ̂^t_k),

and γ^t_ci are the responsibilities of expert c for having generated sample i. It should be noted that the responsibilities are also a result of the contextual weights; in the case of π_ci = 0, the responsibility of expert c is γ^t_ci = 0, indicating that context c has no role in generating sample i.
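A minimal sketch of this weighted E-step, assuming the expert likelihoods and gate outputs have already been evaluated as (N, C) arrays:

```python
import numpy as np

def responsibilities(lik, G, Pi):
    """Contextually weighted E-step responsibilities.

    lik: (N, C) expert likelihoods p(y_i | x_i; Theta_c)
    G:   (N, C) gate probabilities g_c(x_i)
    Pi:  (N, C) possibility weights pi_ci supplied by the analyst
    """
    num = Pi * G * lik
    gamma = num / num.sum(axis=1, keepdims=True)
    return gamma  # gamma_ci = 0 wherever pi_ci = 0
```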
b) M-step: In the M-step, the new parameters are found by maximizing the Q-function:

Ω̂^{t+1} = argmax_Ω Q^t(Ω).   (13)

The Q-function is further decomposed to account for the gates' and experts' contributions separately, as Q^t(Ω) = Q^t_g(V) + Q^t_e(Θ), where

Q^t_g(V) = Σ_{i=1}^{N} Σ_{c=1}^{C} π_ci γ^t_ci log g_c(x_i; V),
Q^t_e(Θ) = Σ_{i=1}^{N} Σ_{c=1}^{C} π_ci γ^t_ci log p(y_i|x_i; Θ_c).
The maximization is performed separately for the experts and gates in the updating phase. Here, the expert and gate models are linear, although more complex models are allowed and easily integrated into this framework.
D. Experts Learning
In the maximization step, the updated expert parameters are found from the maximization of Q^t_e(Θ). The contribution of the individual experts can be accounted for separately:

Q^t_e(Θ) = Σ_{c=1}^{C} Q^t_ec(θ_c) = Σ_{c=1}^{C} Σ_{i=1}^{N} π_ci γ^t_ci log N( y_i | x_i^T θ_c, σ²_c ),   (14)

where Q^t_ec(·) accounts for the contribution of expert c. Hence, the parameters of expert c can be updated apart from the other experts. Equivalently, instead of maximizing Q^t_ec(θ_c), the updated coefficient is found by minimizing the negative of Q^t_ec(θ_c), as

θ̂^{t+1}_c = argmin_{θ_c} (1/2) Σ_{i=1}^{N} π_ci γ^t_ci ( y_i − x_i^T θ_c )².   (15)
To promote the sparsity of expert c's coefficients, an ℓ1 penalty is added to Eq. (15), leading to

θ̂^{t+1}_c = argmin_{θ_c} { (1/2) Σ_{i=1}^{N} π_ci γ^t_ci ( y_i − x_i^T θ_c )² + λ^e_c Σ_{j=1}^{d} |θ_cj| },   (16)

where λ^e_c controls the importance of the regularization penalty. This penalty, also known as the least absolute shrinkage and selection operator (LASSO), drives the coefficients of irrelevant features towards zero. This characteristic is suitable for industrial applications where not all variables are relevant to the prediction, providing compact models. Under the cMoE, this penalty allows the selection of parsimonious models for each expert, reducing the complexity of the overall model structure. Together with the contextual information, the LASSO penalty provides a relevant set of features for each expert domain, thus allowing a better interpretation of the model representation, as well as allowing learning in scenarios with a small number of samples and many features.
The minimization of Eq. (16) follows from coordinate gradient descent (CGD). In CGD, each coefficient is minimized individually, one at a time. The updated regression coefficient of variable j and expert c is given as

θ̂^{t+1}_cj = S( Σ_{i=1}^{N} π_ci γ^t_ci x_ij ( y_i − ỹ^j_ci ), λ^e_c ) / Σ_{i=1}^{N} π_ci γ^t_ci x²_ij,   (17)

where ỹ^j_ci = Σ_{l≠j} x_il θ^t_cl is the fitted value of local expert c without the contribution of variable j, and S(z, η) is the soft-threshold operator, given by S(z, η) = sign(z)(|z| − η)₊. From Eq. (17), the contextual weight adds a weighting factor over the responsibilities. In the case where π_ci = 1 for all experts, the responsibility is the primary driving force in determining the contribution of that specific sample.
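A sketch of one full CGD cycle over the coefficients of a single expert, following Eq. (17); the weight vector w = π_c ⊙ γ^t_c is assumed to be precomputed:

```python
import numpy as np

def soft_threshold(z, eta):
    return np.sign(z) * np.maximum(np.abs(z) - eta, 0.0)

def expert_cgd_step(X, y, theta, w, lam):
    """One cycle of the coordinate descent update of Eq. (17).

    w: per-sample weights pi_ci * gamma_ci for this expert.
    """
    for j in range(X.shape[1]):
        # Residual without the contribution of feature j.
        r_j = y - X @ theta + X[:, j] * theta[j]
        num = soft_threshold(np.sum(w * X[:, j] * r_j), lam)
        den = np.sum(w * X[:, j] ** 2)
        theta[j] = num / den if den > 0 else 0.0
    return theta
```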
1) Experts Model Selection:
The selection of the LASSO regularization can follow from the k-fold cross-validation (k-CV) error. However, the k-fold cross-validation for the LASSO may present potential bias issues in high-dimensional settings. The reason for the bias is that the data change caused by the splitting into folds affects the results. Allowing k to be large enough to reduce the bias is one possible solution. Choosing k = N, for example, results in the leave-one-out cross-validation (LOOCV) error, an unbiased estimator for the LASSO error. However, the computation of the LOOCV is heavy as it requires training the model N times, and one solution is to approximate the LOOCV from the data. Under mild assumptions [30], the prediction LOOCV of linear models with the LASSO penalty can be approximated from its active set; the active set is the set of indices of the variables with nonzero coefficients.
Let the active set of expert c be E_c = {j ∈ {1, ..., d} | θ̂_cj ≠ 0}. Also, let X_{E_c} denote the columns of matrix X in the set E_c.
The LOOCV of expert c can be approximated by

CV^e_c(λ^e_c) = Σ_{i=1}^{N} π_ci γ^t_ci ( y_i − ŷ^{(−i)}_ci )²,   (18)

where ŷ^{(−i)}_ci is the estimated output of expert c without sample i. The estimation of ŷ^{(−i)}_ci from the active set is given by

ŷ^{(−i)}_ci = ( ŷ_ci − [H_c]_ii y_i ) / ( 1 − [H_c]_ii ),   (19)

where [H_c]_ii is the ith diagonal element of the hat matrix of expert c, which is defined as H_c = X_{E_c}( X_{E_c}^T Γ_c X_{E_c} )^{−1} X_{E_c}^T Γ_c, where Γ_c = diag(π_c1 γ^t_c1, ..., π_cN γ^t_cN) is the diagonal matrix of contextual weights and responsibilities, and X_{E_c} is the active-set subset of matrix X. Here, the inverse ( X_{E_c}^T Γ_c X_{E_c} )^{−1} is computed via the LU decomposition. If this is not possible, e.g., due to the matrix becoming too large, one could estimate the validation error from an independent validation set. The value of λ^e_c selected is the one that minimizes the value of CV^e_c. Note that this estimator resembles the predicted residual sum of squares (PRESS), except that only the active set is used along with the LOOCV estimation for the LASSO.
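A possible NumPy sketch of Eqs. (18)-(19); the small ridge jitter added before the solve is an implementation assumption to keep the sketch numerically safe (the paper computes the inverse via an LU decomposition):

```python
import numpy as np

def expert_loocv(X, y, theta, w, ridge=1e-10):
    """Approximate weighted LOOCV of Eqs. (18)-(19) from the active set.

    w: per-sample weights pi_ci * gamma_ci for this expert.
    """
    E = np.flatnonzero(theta)                  # active set of the expert
    XE, Gam = X[:, E], np.diag(w)
    H = XE @ np.linalg.solve(XE.T @ Gam @ XE + ridge * np.eye(len(E)),
                             XE.T @ Gam)       # hat matrix H_c
    h = np.diag(H)
    y_hat = X @ theta
    y_loo = (y_hat - h * y) / (1.0 - h)        # Eq. (19)
    return np.sum(w * (y - y_loo) ** 2)        # Eq. (18)
```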
E. Gates Learning
For the gates, the new updated parameters result from the maximization of Q^t_g(V), or equivalently from minimizing −Q^t_g(V). By expanding the gates' contribution, it becomes

Q^t_g(V) = Σ_{i=1}^{N} [ Σ_{c=1}^{C} π_ci γ^t_ci x_i^T v_c − φ_i log Σ_{k=1}^{C} exp(x_i^T v_k) ],   (20)

where φ_i = Σ_{c=1}^{C} π_ci γ^t_ci. The function Q^t_g(V) is concave in the parameters, and its maximization follows Newton's method. Let {v̂^t_c}_{c=1}^{C} be the current estimates of the gate coefficients; the second-order (quadratic) Taylor approximation of Eq. (20) around {v̂^t_c}_{c=1}^{C} is:

Q̃^t_g(V) = Σ_{c=1}^{C} Q̃^t_gc(v_c) + C({v̂^t_c}_{c=1}^{C}),   (21)
Q̃^t_gc(v_c) = −(1/2) Σ_{i=1}^{N} r_ci ( z_ci − x_i^T v_c )²,   (22)

where Q̃^t_g(V) is the second-order Taylor approximation of Q^t_g(V), Q̃^t_gc(v_c) accounts for the individual contribution of gate c, and C({v̂^t_c}_{c=1}^{C}) is a constant term, while r_ci and z_ci are given by

r_ci = φ_i g_ci (1 − g_ci),   (23)
z_ci = x_i^T v̂^t_c + η ( π_ci γ^t_ci − φ_i g_ci ) / ( φ_i g_ci (1 − g_ci) ),   (24)

with the gates g_ci computed from Eq. (3), and the parameter η a step-size parameter added to control the Newton update in the optimization phase. Adding the LASSO penalty to the gate contribution of Eq. (22) gives the penalized objective

v̂^{t+1}_c = argmin_{v_c} { (1/2) Σ_{i=1}^{N} r_ci ( z_ci − x_i^T v_c )² + λ^g_c Σ_{j=1}^{d} |v_cj| }.
The gate coefficients are updated from successive local Newton steps. The algorithm cycles through all C gates sequentially, where the values of g_ci are calculated from {v̂^t_c}_{c=1}^{C}, and they must be updated as soon as a new v̂^t_c is computed. The computation of Eq. (25) must be repeated until the coefficients converge; usually, few iterations (fewer than 10) are needed to reach convergence.
The solution of the penalized problem is achieved from the CGD, in which

v̂^{t+1}_cj = S( Σ_{i=1}^{N} r_ci x_ij ( z_ci − z̃^j_ci ), λ^g_c ) / Σ_{i=1}^{N} r_ci x²_ij,   (25)

where z̃^j_ci = Σ_{l≠j} x_il v̂^t_cl is the fitted value of gate c without considering variable j. a) Practicalities in the gates update: Along the gate coefficient updates, some practical issues must be taken into account:
• Care should be taken in the update of Eq. (24) to avoid coefficients diverging in order to achieve fitted g_ci of 0 or 1. When g_ci is within ξ = 10⁻³ of 1, Eq. (24) is set to z_ci = x_i^T v̂^t_c, and the weights r_ci in Eq. (23) are set to ξ; the case of g_ci close to 0 is treated similarly.
• The use of the full Newton step η = 1 in the optimization of Eq. (25) does not guarantee convergence of the coefficients [31]. To avoid this issue, η was fixed to η = 0.1 in the experiments.
1) Gates Model Selection: Similar to the experts' procedure, the gates' model selection follows from the estimated LOOCV error. The predicted gate output without sample i is given by
ẑ^{(−i)}_ci = ( ẑ_ci − [M_c]_ii z_ci ) / ( 1 − [M_c]_ii ),   (26)

where M_c is the gate hat matrix at each step of the Newton update, computed as

M_c = X_{G_c}( X_{G_c}^T R_c X_{G_c} )^{−1} X_{G_c}^T R_c,   (27)

where G_c = {j ∈ {1, ..., d} | v̂_cj ≠ 0} is the active set of gate c, and R_c = diag(r_c1, ..., r_cN). The LOOCV is then estimated as

CV^g_c(λ^g_c) = Σ_{i=1}^{N} r_ci ( z_ci − ẑ^{(−i)}_ci )².   (28)

The gate regularization parameter is selected to minimize the estimated LOOCV error.
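A sketch of how the working weights and responses of Eqs. (23)-(24) can be formed for one gate, with the clipping safeguard described above; the resulting (r, z) pair can then be passed to the same weighted-LASSO CGD update used for the experts:

```python
import numpy as np

def gate_working_response(Xv, g, phi, wpi, eta=0.1, xi=1e-3):
    """Weights r_ci and working responses z_ci of Eqs. (23)-(24) for one gate.

    Xv:  (N,) current linear predictor x_i^T v_c of the gate.
    g:   (N,) current softmax output of the gate, Eq. (3).
    phi: (N,) phi_i = sum_c pi_ci * gamma_ci.
    wpi: (N,) pi_ci * gamma_ci for this gate.
    """
    g = np.clip(g, xi, 1.0 - xi)        # guard against fitted 0/1, as in the text
    r = phi * g * (1.0 - g)             # Eq. (23)
    z = Xv + eta * (wpi - phi * g) / r  # Eq. (24), damped with step size eta
    return r, z
```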
F. EM Stop Condition
In cMoE, the information on the number of experts must be known a priori or can be defined by the analyst, so this is not an issue for the design. The EM algorithm's stop condition must be properly defined to avoid overfitting or a poorly chosen model. Because the implementation is based on a set of linear models, an approximation of the LOOCV error is used to assess the model's quality and set the EM algorithm's stop criterion. The estimated LOOCV for cMoE is given by

CV(Ω) = (1/N) Σ_{i=1}^{N} ( y_i − Σ_{c=1}^{C} g^{(−i)}_ci ŷ^{(−i)}_ci )²,   (29)

where ŷ^{(−i)}_ci is given by Eq. (19) and g^{(−i)}_ci is given by

g^{(−i)}_ci = exp( ẑ^{(−i)}_ci ) / Σ_{k=1}^{C} exp( ẑ^{(−i)}_ki ).   (30)

The performance of cMoE is checked at each iteration by computing CV(Ω); it is expected that the estimated LOOCV CV(Ω) decreases to a minimum before beginning to increase continuously. This minimum is found by checking whether CV(Ω) kept increasing for n_it consecutive iterations, counting from the iteration where CV(Ω) reached its minimum. If so, the cMoE from n_it iterations back is considered to have the global minimum error, it is selected as the optimized model, and the algorithm is terminated. In the experiments, a value of n_it = 6 was considered.
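A schematic of this early-stopping loop, where `fit_step` and `loocv` are placeholders for one EM iteration and the CV(Ω) estimator of Eq. (29):

```python
def em_with_early_stop(fit_step, loocv, n_it=6, max_iter=200):
    """Run EM steps while tracking the estimated LOOCV of Eq. (29); stop
    after the error has increased for n_it consecutive iterations and
    return the model snapshot that attained the minimum."""
    best_cv, best_model, bad = float("inf"), None, 0
    for _ in range(max_iter):
        model = fit_step()   # one E-step + M-step, returning a model snapshot
        cv = loocv(model)    # CV(Omega) of Eq. (29)
        if cv < best_cv:
            best_cv, best_model, bad = cv, model, 0
        else:
            bad += 1
            if bad >= n_it:
                break
    return best_model
```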
G. Evaluation of Expert Process Knowledge Integration
In the cMoE model, each gate g_c should reflect the knowledge of each context represented by π_c, and in the prediction stage, the gates are responsible for automatically determining which context the process is running in and switching to the appropriate expert model. In fact, g_c is a probability counterpart of the possibility distribution π_c. The weakest consistency principle [25] leads to a sufficient condition for checking the consistency between the gate's probability function g_c and the possibility distribution π_c. It states that a probable occurrence must also be possible to some extent, at least to the same degree. The following inequality can formally express this:

g_ci ≤ π_ci, ∀ i ∈ Ψ.   (31)

The possibility distribution is an upper bound for the probability distribution [29]. Each context possibility distribution π_c reflects the expert's knowledge uncertainty in a quasi-qualitative manner that is less restrictive than the probability g_c. This is visually represented in Fig. 3 for a hypothetical three-phase process. The left picture shows the contextual information provided by the analyst, with a complete ignorance region (common contextual knowledge). The right picture shows the fitted contextual information with well-defined boundaries, represented as the gates' output. In the specific case where the context information follows the complete ignorance possibility distributions (π_ci = 1, for c = 1, ..., C, and i = 1, ..., N), the cMoE reduces to MoE. To assess if the assignment of context c is correct, the following consistency index of context c (C_{I,c}) is defined:
C_{I,c} = (1/N) Σ_{i=1}^{N} I( g_ci, π_ci ),   (32)

where I(a, b) = 1 iff a ≤ b, and 0 otherwise. This consistency index measures the accuracy of the gates' agreement with context c with respect to the consistency principle. To measure the consistency of all contexts in representing the expert knowledge, the following geometric mean is employed:

C_I = ( Π_{c=1}^{C} C_{I,c} )^{1/C}.   (33)
C_I = 1 indicates complete agreement, while C_I = 0 indicates complete disagreement, i.e., an inability of cMoE to incorporate the expert knowledge into its structure. In such cases, the uncertainty in the expert knowledge from the possibility distributions can also be re-tuned; for example, in the α-Certain distribution, the certainty parameter α can be tuned by automatic methods. In this case, α should be chosen using the minimal specificity principle, i.e., search for the most informative distribution (lowest α) while keeping a desired consistency index. This can be stated as

α = inf { α* ∈ [0, 1] : C_I(α*) < ε },   (34)

where 0 ≤ ε ≤ 1 is the minimum desired consistency index. The same reasoning can be applied to the β-Trapezoidal possibility distribution.
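A sketch of Eqs. (32)-(34); `fit_cmoe` is a hypothetical routine that trains a cMoE for a given certainty level and returns the fitted gate outputs G and the possibility matrix Pi as (N, C) arrays:

```python
import numpy as np

def consistency_index(G, Pi):
    """Geometric-mean consistency index of Eqs. (32)-(33)."""
    per_context = (G <= Pi).mean(axis=0)          # C_{I,c}, Eq. (32)
    return np.prod(per_context) ** (1.0 / G.shape[1])

def select_alpha(fit_cmoe, alphas, eps):
    """Smallest alpha at which consistency drops below eps, Eq. (34)."""
    for a in np.sort(np.asarray(alphas)):
        if consistency_index(*fit_cmoe(a)) < eps:
            return a
    return None
```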
IV. EXPERIMENTAL RESULTS
This section presents the experimental results in two industrial case studies. The first case study deals with the estimation of H₂S at the output stream of a sulfur recovery unit described in [24]. The second case study predicts the acidity number in a multiphase polymerization reactor. a) State-of-the-art models: The following models were also implemented along with the experiments for performance comparison purposes: the MoLE with LASSO penalty [6], the LASSO regression model, the PLS regression model, the Gaussian mixture regression model (GMR), the decision tree (TREE), and the optimally pruned extreme learning machine model (ELM) [32]. The MoLE and LASSO source codes are based on the MoLE Toolbox available at [6]. The PLS is based on the author's own implementation. The GMR implementation is based on the Netlab Toolbox available at [33]. The TREE is based on the Matlab implementation of the Statistics and Machine Learning Toolbox. The ELM was implemented from the author's source code, available at [34].
b) Hyper-parameters Selection: The selection of the MoLE and LASSO regularization parameters follows the predicted LOOCV error described in Sec. III-D1; the LASSO parameter is denoted as λ, while the regularization parameters of MoLE are defined as λ^e_p and λ^g_p, where the e, g superscripts denote the expert and gate parameters, respectively, and p refers to the expert/gate number. The selection of the PLS latent variables N_lat follows from a 10-fold CV error on the training data. The selection of the hidden neurons N_neu in the ELM model follows from the optimization procedure described in [32]. The number of components N_c in the GMR is set equal to the number of contexts. For the TREE, the minimum number of leaf node observations N_leaf was set to the same value for both experiments.
c) Experimental settings: The predictive performance is assessed by the root mean square error (RMSE), the coefficient of determination (R²), and the maximum absolute error (MAE). The results of the second case study have been assessed by following a leave-one-batch-out procedure. The models were trained from all batches except one (to be used as a test). This procedure was repeated so that all the batches were used in the testing phase. The performance metrics were averaged and then reported as the final values. Also, a randomization t-test (from [35]) was used to compare the cMoE performance (the null hypothesis assumes that the RMSE of cMoE and the method to be compared are equal, i.e., equal means); the p-value is then reported, and if p-value < 0.05, the null hypothesis is rejected. Along the training procedure, the training data were auto-scaled, and the testing data were scaled according to the training parameters (mean and variance).
A. SRU Unit
The sulfur recovery unit (SRU) aims to remove pollutants and recover sulfur from acid gas streams. The SRU plant takes two acid gas streams as input: the first (MEA gas), from gas washing plants, rich in H2S, and the gas from sour water stripping plants (SWS gas), rich in H2S and NH3 (ammonia). The acid gases are burned in reactors, where H2S is transformed into sulfur after an oxidation reaction with air. Gaseous products from the reaction are cooled down, collected, and further processed, leading to the formation of water and sulfur. The remaining gas not converted to sulfur (less than 5%) is further processed in a final conversion phase. The final gas stream contains H2S and SO2, and online analyzers measure these quantities. The goal is to use the process data to build a soft sensor to replace the online analyzers when they are under maintenance.
Five main variables are collected: X1 is the gas flow in the MEA zone; X2 is the airflow in the MEA zone for the combustion of MEA gas (set manually by the operators); X3 is the airflow in the MEA zone regulated by an automatic control loop according to the output stream gas composition; X4 is the airflow in the SWS zone (set manually by the operators); and X5 is the gas flow in the SWS zone. The target/output is set here to be the H2S at the end tail; the SO2 can also be predicted using the same principle presented here. Also, according to [24], the operators are most interested in models that can predict the H2S peaks.
There are a total of 10000 samples available. The first 5000 were used for training and the remaining 5000 for testing; Fig. 4(a) shows the training data for the SRU dataset, where the peaks are clearly visible. Time-lagged features were designed to account for the process dynamics: for the five variables, the time lags X_{i,t-d}, for i = 1, ..., 5 and delays d ∈ {0, 5, 7, 9}, were considered, resulting in a total of 20 features.
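A sketch of the time-lagged feature construction just described: for each of the five inputs, the delayed copies X_{i,t-d} with d ∈ {0, 5, 7, 9} are stacked, giving 5 × 4 = 20 features. X is assumed to be an array of shape (T, 5); rows whose lags would fall before the start of the record are dropped.

```python
# Build the lagged design matrix used for the SRU soft sensor.
import numpy as np

def lagged_features(X, delays=(0, 5, 7, 9)):
    T, n_vars = X.shape
    d_max = max(delays)
    # column X[t - d] for every variable and every delay, aligned on t >= d_max
    cols = [X[d_max - d:T - d, j] for j in range(n_vars) for d in delays]
    return np.column_stack(cols)        # shape (T - d_max, n_vars * len(delays))
```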
To predict the H2S, and to allow a better understanding of the causes of the peaks, a cMoE model with three contexts was designed. The first, representing the operator context, accounts for the peaks. The second context is designed to represent the non-peaks. The third context represents the remaining process states that are accounted for by neither the peaks nor the non-peaks. The cMoE model is represented by

ŷ = h_peak + h_npeak + h_rem,
where "npeak" is shorthand for non-peaks. Two β-Trapezoidal possibility distributions were designed for the peaks context π_peak and the non-peaks context π_npeak, while for the remaining context π_rem a complete-ignorance distribution is assumed, i.e. π_rem,i = 1, ∀i ∈ {1, ..., 5000}, as no prior information is available about this context. The peaks were selected manually in the training set, and the peak distribution was designed so that the limits of the trapezoidal function guarantee that the highest values have π_peak = 1. The possibility distribution for the non-peaks was designed to be complementary to the peaks context, with a lower bound defined by the certainty β, and the remaining context was set to π_rem = 1 for all samples. A portion of the expert knowledge fed to cMoE is depicted in Fig. 4(c): from samples 370-400 a peak in H2S is present (marked as Ref.), and the peaks context π_peak covers it (see Fig. 4(c), samples 370-400 and 460-490). The uncertainty β was chosen using the consistency principle described in Eq. (34). The consistency index and the LOOCV, for different values of the uncertainty parameter β, are shown in Fig. 4(b). The results show that β = 0.3, the selected uncertainty factor, has the lowest LOOCV, with a consistency index of C_I = 0.79. Figure 4(d) shows the gate output predictions for the peaks (G_peak), non-peaks (G_npeak), and remaining (G_rem) contexts in the test set, for samples 4600-4700. There are two peaks in this portion of the test data, between samples 4620-4640 and 4640-4660. The gate of the peaks expert, G_peak, follows the peak pattern by assigning higher contributions to the peak expert model when the peaks are present. The same behavior is perceived in the non-peaks gate G_npeak, which works complementarily to the peaks component. The remaining context gate, G_rem, oscillates somewhat between the patterns; this seems to be related to a constant operation regime of the system. The gate coefficients make it possible to identify the root causes of the change between peaks and non-peaks. The gate coefficients for the peaks and non-peaks are shown in Fig. 4(e). The variable X3 (marked with a dashed rectangle) has the largest difference between the two gates, indicating that this is the main variable acting on the switching between the peaks and non-peaks models. Figure 4(f) shows the variable X3 together with the H2S peaks. X3 represents the input airflow used to control the end-tail H2S, so H2S is a consequence of X3. It seems that the control system is unstable, and any oscillation in X3 causes a large oscillation in the H2S. One possible solution to improve the stability of H2S and reduce the peaks is to improve the control system. Improvements to the control system can have a positive environmental impact by lowering H2S emissions and/or reducing costs associated with H2S post-treatment. The accuracy of cMoE was compared with the other state-of-the-art models, and the results are shown in Table I. The results show that cMoE outperforms all the other models with statistical significance, confirming that constraining the model to represent the system's expected behavior has a positive impact on the prediction performance. Table II shows the parameters obtained for each model in the H2S experiment.
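An illustrative construction (our sketch, not the authors' code) of the β-Trapezoidal possibility distribution for the peaks context and its complementary non-peaks context, following the design described above: π_peak = 1 on the manually selected peak samples, linear ramps on the shoulders, and a lower bound given by the certainty β elsewhere. The shoulder width and sample indices are assumptions for the example.

```python
# Trapezoidal possibility distribution with certainty lower bound beta.
import numpy as np

def trapezoid(n_samples, core, shoulder, beta):
    """core = (a, b): samples with pi = 1; shoulder: width of the linear ramps."""
    a, b = core
    t = np.arange(n_samples)
    pi = np.full(n_samples, beta, dtype=float)     # lower bound beta everywhere
    ramp_up = np.clip((t - (a - shoulder)) / shoulder, 0.0, 1.0)
    ramp_down = np.clip(((b + shoulder) - t) / shoulder, 0.0, 1.0)
    return np.maximum(pi, np.minimum(ramp_up, ramp_down))

pi_peak = trapezoid(5000, core=(370, 400), shoulder=10, beta=0.3)
pi_npeak = np.maximum(1.0 - pi_peak, 0.3)   # complementary context, bounded by beta
pi_rem = np.ones(5000)                      # complete ignorance for remaining context
```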
B. Polymerization
This case study refers to a polymerization batch process for resin production. The material is loaded into the reactor, which then undergoes five process phases: heating, pressure, reaction, vacuum, and cooling; most of the phase changes are triggered manually by the operators. The phase changes are determined by the quality values of the resin, namely the resin acidity number and the resin viscosity. While a physical sensor measures the viscosity, the acidity number is measured three times, once at the vacuum phase and twice at the reaction phase. The objective here is to build a soft sensor to measure the acidity number online and to better understand the variables that affect the acidity in these two phases.
For this process, there are data for 33 batches within specification, with a total of 17 variables measured along the process; they are described in Table III. As there are three acidity measurements for each batch, a total of 99 samples is available. The process variables are synchronized with the acidity number by removing the samples that do not have corresponding acidity number values. A cMoE model with two contexts was then designed to predict the acidity and to understand the variables that most affect the acidity number. The first context represents the reaction phase, and the second the vacuum phase. The cMoE model for acidity prediction is represented by

ŷ = h_reaction + h_vacuum.
The process knowledge is available as the phase changes indicated by the operators between the reaction and vacuum phases. In this case, the α-Certain distribution was designed to represent the operators' context of the phases. This is depicted in Fig. 5(a) for a single batch: the two contexts (π_reaction and π_vacuum) provided by the operators indicate the region of samples belonging to each phase, with the change between phases occurring at sample 110. The acidity number is also indicated as 'Ref.', measured at samples 65, 320, and 495. The uncertainty α was chosen using the consistency principle described in Eq. (34). The consistency index and the LOOCV, for different values of the uncertainty parameter α, are shown in Fig. 5(c). The results show that α = 0.4 has the lowest LOOCV, with a consistency index of C_I = 0.99. It is worth noting that the LOOCV error is significantly higher for α = 0 (no uncertainty) than for higher uncertainty α > 0, indicating that uncertainty plays a significant role in representing process expert knowledge. The predictive performance from the leave-one-batch-out procedure, for all the models compared, is shown in Table IV in terms of R², RMSE, and MAE. Table V shows the parameters obtained for the fitted models in the first fold of the leave-one-batch-out procedure. The cMoE is statistically different at the 0.05 significance level from PLS, TREE, and ELM and has superior performance to all the other models. Also, when inspecting the gate outputs provided by cMoE, the model significantly retained the representation of the initial contextual information and detected the change between phases, as shown by the gate outputs in Fig. 5(b).
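A sketch (under our own naming) of the α-Certain possibility distributions for the two phases: π = 1 on the samples the operators label as belonging to the phase and π = α elsewhere, so α encodes how much the operator labelling may be wrong. The phase-label construction below is an assumption for the example, matching the phase change at sample 110 in Fig. 5(a).

```python
# alpha-Certain possibility distribution from operator phase labels.
import numpy as np

def alpha_certain(phase_labels, phase, alpha):
    labels = np.asarray(phase_labels)
    return np.where(labels == phase, 1.0, alpha)

phase_labels = ["reaction"] * 110 + ["vacuum"] * 400   # change at sample 110
pi_reaction = alpha_certain(phase_labels, "reaction", alpha=0.4)
pi_vacuum = alpha_certain(phase_labels, "vacuum", alpha=0.4)
```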
The cMoE coefficients for the reaction and vacuum experts are shown in Fig. 5(d). The reaction expert mostly represents the reaction phase of the polymerization unit. The main variables important for the reaction expert are the return temperatures of oil and gas (X2, X3), the reactor mixture temperature (X7), the condenser temperature (X5), and the liquid viscosity (X17). The vacuum expert represents a more significant portion of the vacuum phase of the polymerization unit. Its most significant variable is the viscosity (X17), as viscosity is physically a function of pressure. Also important are the gas return temperature (X3) and the reactor temperature and pressure (X7, X9). These variables are physically related to the equation of state of the gaseous product within the reactor.
To better understand the phase transitions, Fig. 5(e) shows the gate coefficients for the reaction and vacuum contexts. From there, variable X2, the temperature of the oil return, is the variable that most affects the transition between phases. The variable X2 is plotted in Fig. 5(f): there is an intermediate step in which the oil temperature drops to a minimum before the operator starts the vacuum phase, and the process then returns to its normal status around sample 200. Also, the oil temperature shows two different regimes in the reaction and vacuum phases.
V. DISCUSSION
The proposed cMoE model uses possibility distributions to represent contextual information provided by process operators and integrates this expert knowledge into the model through the learning procedure. In addition, to assess how well the expert knowledge was integrated into the cMoE model, a consistency index was defined in Eq. (33). The first case study, a continuous process, was broken down into three contexts, with the peaks and non-peaks contexts being the most relevant to operators. As a result, important information and insights on the status of the control system were obtained by inspecting the main variables causing the transition between the peaks and non-peaks contexts. In the second case study, a multiphase batch process, the information on phase transitions was the knowledge to be integrated into the model. As a result, more insights into the phase transitions and the impact of each variable on each phase were obtained.
It is worth noting that the linear models in cMoE are sufficient to integrate the expert knowledge in both case studies; it is also worth mentioning that the uncertainty parameters in both case studies were refined from the data, together with an analysis of the consistency index. In cases where the consistency index performs poorly, or one wants to employ a more informative distribution (i.e., lower values of uncertainty), nonlinear models for gates and experts may be employed as an alternative to linear models, so that the consistency index improves. It would be expected that nonlinear modeling captures nonlinear behavior of the data that may be relevant in integrating expert knowledge. Furthermore, the ℓ1 penalty is not required for the linear solution of cMoE; other penalties, such as ℓ2 or ℓ1,2, can also be employed. In the case of data collinearity, the solution can be obtained by applying the PLS model to experts and gates, as demonstrated in [17].
Compared with cMoE, other models that are interpretable by nature, such as LASSO, PLS, TREE, and GMR, lack mechanisms to integrate expert knowledge. Of course, meaningful relations and rules can be extracted from these models, but this is still driven purely by the data, and if no context is provided, the extraction of relevant information for more complex relationships, as in the two case studies presented here, is not possible. The contextual framework presented here is flexible enough to allow its implementation in many models, including the GMR model.
VI. CONCLUSIONS
In conclusion, this paper proposes the contextual mixture of experts model, a data-driven model devised to incorporate operator domain knowledge into its structure. The proposed approach has been shown to increase predictive performance while achieving a direct interpretation of the process variable contributions in each regime of the process. The approach was evaluated on two different problems, demonstrating better statistical performance than conventional machine learning models that do not rely on contextual information. The proposed method has strong potential as a stable and explainable framework to include contextual information in data-driven modeling. This is important to help the transition to Industry 5.0 by increasing the human-machine synergy in the process industry. Future research could concentrate on nonlinear functions for expert and gate learning to improve predictive performance, as well as on explainable methods for interpretability.
Fig. 1. MoE representation with C experts. Solid lines indicate direct data flow, while dashed lines indicate the flow of expert knowledge. The process knowledge is encoded via the possibility distributions {π_context-1, ..., π_context-C}.
Fig. 2. Possibilistic distributions: a) α-Certain possibility distribution, b) β-Trapezoidal epistemic possibility distribution.
Fig. 3. Representation of contextual information with uncertainty (left), and fitted contextual information with cMoE (right).
Fig. 4. SRU dataset: (a) training data, (b) β vs. consistency index and LOOCV, (c) contextual information set in the training data, (d) gate output predictions on the test data, (e) gate coefficients for the peaks and non-peaks contexts, (f) effect of variable X3,t on the H2S output.
Fig. 5. Polymerization dataset: a) contexts for the reaction and vacuum phases, together with the acidity and the cMoE prediction, b) cMoE gate outputs, c) α vs. consistency index and LOOCV, d) reaction and vacuum expert coefficients, e) reaction and vacuum gate coefficients, f) variable X2 (temperature oil return).
TABLE I
H2S PREDICTION ACCURACY FOR ALL COMPARED MODELS

         cMoE    MoLE    LASSO   PLS     GMR     TREE    ELM
R²       0.732   0.583   0.085   0.541   0.519   0.001   0.002
RMSE     0.026   0.035   0.053   0.035   0.032   0.087   0.060
MAE      0.340   0.484   0.630   0.519   0.401   0.603   0.662
p-value  1.000   0.001   0.001   0.001   0.001   0.001   0.001
TABLE II
HYPER-PARAMETERS OF THE FITTED H2S MODELS

cMoE:  λ^e_peak = 0.00, λ^g_peak = 0.00; λ^e_npeak = 1.00, λ^g_npeak = 3.34; λ^e_rem = 1.00, λ^g_rem = 1.00
MoLE:  λ^e_peak = 1.00, λ^g_peak = 1.049; λ^e_npeak = 12.61, λ^g_npeak = 1.31; λ^e_rem = 12.83, λ^g_rem = 0.00
LASSO: λ = 1.00    PLS: N_lat = 16    GMR: N_c = 3    TREE: N_leaf = 1    ELM: N_neu = 160
TABLE III
VARIABLES OF THE POLYMERIZATION UNIT

X1  Reactor temperature             X10 Reactor pressure 2
X2  Temperature oil return          X11 Pressure reflux
X3  Temperature gas return          X12 Vacuum pressure
X4  Temperature reflux pump         X13 Flow reflux
X5  Condenser cooling temperature   X14 Flow oil
X6  Column temperature              X15 Level water
X7  Reactor temperature mixture     X16 Level solvent
X8  Temperature thermal oil         X17 Viscosity
X9  Reactor pressure 1              Y   Acidity number
TABLE IV
ACIDITY PREDICTION ACCURACY FOR ALL COMPARED MODELS

         cMoE    MoLE    LASSO   PLS     GMR     TREE    ELM
R²       0.996   0.995   0.995   0.994   0.996   0.994   0.812
RMSE     0.092   0.101   0.101   0.109   0.096   0.120   0.500
MAE      0.134   0.155   0.155   0.166   0.145   0.174   0.768
p-value  1.000   0.342   0.350   0.010   0.669   0.028   0.001
TABLE V
HYPER-PARAMETERS OF THE FITTED ACIDITY MODELS

cMoE:  λ^e_reaction = 1.00, λ^g_reaction = 1.15; λ^e_vacuum = 21.92, λ^g_vacuum = 2.36
MoLE:  λ^e_reaction = 16.90, λ^g_reaction = 0; λ^e_vacuum = 16.90, λ^g_vacuum = 0
LASSO: λ = 31.90    PLS: N_lat = 5    GMR: N_c = 2    TREE: N_leaf = 1    ELM: N_neu = 106
[1] S. Nahavandi, "Industry 5.0 - a human-centric solution," Sustainability, vol. 11, no. 16, 2019.
[2] M. S. Reis, G. Gins, and T. J. Rato, "Incorporation of process-specific structure in statistical process monitoring: A review," Journal of Quality Technology, vol. 51, no. 4, pp. 407-421, 2019.
[3] J. Sansana, M. N. Joswiak, I. Castillo, Z. Wang, R. Rendall, L. H. Chiang, and M. S. Reis, "Recent trends on hybrid modeling for industry 4.0," Computers & Chemical Engineering, vol. 151, p. 107365, 2021.
[4] L. von Rueden, S. Mayer, K. Beckh, B. Georgiev, S. Giesselbach, R. Heese, B. Kirsch, M. Walczak, J. Pfrommer, A. Pick, R. Ramamurthy, J. Garcke, C. Bauckhage, and J. Schuecker, "Informed machine learning - a taxonomy and survey of integrating prior knowledge into learning systems," IEEE Transactions on Knowledge and Data Engineering, pp. 1-1, 2021.
[5] K. Wang, R. B. Gopaluni, J. Chen, and Z. Song, "Deep learning of complex batch process data and its application on quality prediction," IEEE Transactions on Industrial Informatics, vol. 16, no. 12, pp. 7233-7242, 2020.
[6] F. Souza, J. Mendes, and R. Araújo, "A regularized mixture of linear experts for quality prediction in multimode and multiphase industrial processes," Applied Sciences, vol. 11, no. 5, 2021.
[7] P. Facco, F. Bezzo, and M. Barolo, "Nearest-neighbor method for the automatic maintenance of multivariate statistical soft sensors in batch processing," Industrial & Engineering Chemistry Research, vol. 49, no. 5, pp. 2336-2347, 2010.
[8] L. Zhao, C. Zhao, and F. Gao, "Between-mode quality analysis based multimode batch process quality prediction," Industrial & Engineering Chemistry Research, vol. 53, no. 40, pp. 15629-15638, 2014.
[9] Y. He, B. Zhu, C. Liu, and J. Zeng, "Quality-related locally weighted non-gaussian regression based soft sensing for multimode processes," Industrial & Engineering Chemistry Research, vol. 57, no. 51, pp. 17452-17461, 2018.
[10] X. Shi, Q. Kang, M. Zhou, A. Abusorrah, and J. An, "Soft sensing of nonlinear and multimode processes based on semi-supervised weighted gaussian regression," IEEE Sensors Journal, vol. 20, no. 21, pp. 12950-12960, 2020.
[11] L. Luo, S. Bao, J. Mao, D. Tang, and Z. Gao, "Fuzzy phase partition and hybrid modeling based quality prediction and process monitoring methods for multiphase batch processes," Industrial & Engineering Chemistry Research, vol. 55, no. 14, pp. 4045-4058, 2016.
[12] J. Yu and S. J. Qin, "Multiway gaussian mixture model based multiphase batch process monitoring," Industrial & Engineering Chemistry Research, vol. 48, no. 18, pp. 8585-8594, 2009.
[13] W. Shao, Z. Ge, Z. Song, and J. Wang, "Semisupervised robust modeling of multimode industrial processes for quality variable prediction based on student's t mixture model," IEEE Transactions on Industrial Informatics, vol. 16, no. 5, pp. 2965-2976, 2020.
[14] B. Wang, Z. Li, Z. Dai, N. Lawrence, and X. Yan, "Data-driven mode identification and unsupervised fault detection for nonlinear multimode processes," IEEE Transactions on Industrial Informatics, vol. 16, no. 6, pp. 3651-3661, 2020.
[15] W. Shao, Z. Ge, and Z. Song, "Soft-sensor development for processes with multiple operating modes based on semi-supervised gaussian mixture regression," IEEE Transactions on Control Systems Technology, pp. 1-13, 2018.
[16] J. Wang, W. Shao, and Z. Song, "Student's-t mixture regression-based robust soft sensor development for multimode industrial processes," Sensors, vol. 18, no. 11, 2018.
[17] F. A. A. Souza and R. Araújo, "Mixture of partial least squares experts and application in prediction settings with multiple operating modes," Chemometrics and Intelligent Laboratory Systems, vol. 130, pp. 192-202, January 2014.
[18] A. Khalili, "New estimation and feature selection methods in mixture-of-experts models," Canadian Journal of Statistics, vol. 38, no. 4, pp. 519-539, 2010.
[19] J. Fan and R. Li, "Variable selection via nonconcave penalized likelihood and its oracle properties," Journal of the American Statistical Association, vol. 96, no. 456, pp. 1348-1360, 2001.
[20] B. Peralta and A. Soto, "Embedded local feature selection within mixture of experts," Information Sciences, vol. 269, pp. 176-187, 2014.
[21] F. Chamroukhi and B. T. Huynh, "Regularized maximum-likelihood estimation of mixture-of-experts for regression and clustering," in The International Joint Conference on Neural Networks (IJCNN), Rio, Brazil, July 2018.
[22] T. Nguyen, H. D. Nguyen, F. Chamroukhi, and G. J. McLachlan, "An l1-oracle inequality for the lasso in mixture-of-experts regression models," 2020.
[23] B. T. Huynh and F. Chamroukhi, "Estimation and feature selection in mixtures of generalized linear experts models," 2019.
[24] L. Fortuna, S. Graziani, and M. G. Xibilia, Soft Sensors for Monitoring and Control of Industrial Processes. Springer, 2007.
[25] L. A. Zadeh, "Fuzzy sets as a basis for a theory of possibility," Fuzzy Sets and Systems, vol. 1, no. 1, pp. 3-28, 1978.
[26] D. Dubois and H. Prade, Possibility Theory: Qualitative and Quantitative Aspects. Dordrecht: Springer Netherlands, 1998, pp. 169-226.
[27] R. R. Yager, "Measuring tranquility and anxiety in decision making: An application of fuzzy sets," International Journal of General Systems, vol. 8, no. 3, pp. 139-146, 1982.
[28] B. Solaiman and É. Bossé, Fundamental Possibilistic Concepts. Cham: Springer International Publishing, 2019, pp. 13-46.
[29] D. Dubois and H. Prade, "Possibilistic logic - an overview," in Computational Logic, ser. Handbook of the History of Logic, J. H. Siekmann, Ed. North-Holland, 2014, vol. 9, pp. 283-342.
[30] W. Stephenson and T. Broderick, "Approximate cross-validation in high dimensions with guarantees," in Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, ser. Proceedings of Machine Learning Research, S. Chiappa and R. Calandra, Eds., vol. 108. PMLR, 2020, pp. 2424-2434.
[31] S. Lee, H. Lee, P. Abbeel, and A. Y. Ng, "Efficient L1 regularized logistic regression," in Proceedings, The Twenty-First National Conference on Artificial Intelligence (AAAI-06), Boston, Massachusetts, USA. AAAI Press, 2006, pp. 401-408.
[32] Y. Miche, A. Sorjamaa, P. Bas, O. Simula, C. Jutten, and A. Lendasse, "Op-elm: Optimally pruned extreme learning machine," IEEE Transactions on Neural Networks, vol. 21, no. 1, pp. 158-162, 2010.
[33] I. Nabney, "Netlab toolbox," https://www.mathworks.com/matlabcentral/fileexchange/2654-netlab, 2022, accessed: 2022-10-13.
[34] A. Lendasse, S. A., and Y. Miche, "Op-elm toolbox," https://research.cs.aalto.fi//aml/software.shtml, 2022, accessed: 2022-10-13.
[35] H. van der Voet, "Comparing the predictive accuracy of models using a simple randomization test," Chemometrics and Intelligent Laboratory Systems, vol. 25, no. 2, pp. 313-323, 1994.
| []
|
[
"An Integer Linear Programming Model for Tilings",
"An Integer Linear Programming Model for Tilings"
]
| [
"Gennaro Auricchio \nDepartment of Mathematics\nUniversity of Pavia\n\n",
"Luca Ferrarini \nDepartment of Mathematics\nUniversity of Pavia\n\n",
"Greta Lanzarotto \nDepartment of Mathematics\nUniversity of Pavia\n\n\nIRMA\nUniversity of Strasbourg\n\n"
]
| [
"Department of Mathematics\nUniversity of Pavia\n",
"Department of Mathematics\nUniversity of Pavia\n",
"Department of Mathematics\nUniversity of Pavia\n",
"IRMA\nUniversity of Strasbourg\n"
]
| []
| In this paper, we propose an Integer Linear Model whose solutions are the aperiodic rhythms tiling with a given rhythm A. We show how this model can be used to efficiently check the necessity of the Coven-Meyerowitz (T2) condition and also to define an iterative algorithm that finds all the possible tilings of the rhythm A. To conclude, we run several experiments to validate the time efficiency of this model.
"https://arxiv.org/pdf/2107.04108v2.pdf"
]
| 235,790,688 | 2107.04108 | 8ade4418a5ea318c82a4a2da8c5fcefb92e84472 |
An Integer Linear Programming Model for Tilings
July 14, 2021
Gennaro Auricchio
Department of Mathematics
University of Pavia
Luca Ferrarini
Department of Mathematics
University of Pavia
Greta Lanzarotto
Department of Mathematics
University of Pavia
IRMA
University of Strasbourg
An Integer Linear Programming Model for Tilings
July 14, 2021. Keywords: Integer Programming, Mathematics and Music, Tiling Problems, Vuza Canons, (T2) Conjecture. AMS: 90C10, 05B45.
In this paper, we propose an Integer Linear Model whose solutions are the aperiodic rhythms tiling with a given rhythm A. We show how this model can be used to efficiently check the necessity of the Coven-Meyerowitz (T2) condition and also to define an iterative algorithm that finds all the possible tilings of the rhythm A. To conclude, we run several experiments to validate the time efficiency of this model.
Introduction
In this paper, we deal with the mathematical and computational aspect of a musical problem that arouses the interest of mathematicians, computer scientists, music theorists, and composers (see [1] and [4]). It is about the construction of Vuza canons. A Vuza canon is a musical rhythmic canon without internal repetitions, regardless of the pitch, through which the composer tries to completely fill the rhythmic space, with no superimposition between the different voices [17].
The construction of musical canons has always intrigued musicians: think of the complex Flemish polyphonies of composers such as Josquin Desprez or the counterpoint techniques that Johann Sebastian Bach shows in the Goldberg Variations. The formal properties of the latter have been translated into algebraic terms in the work of Scimemi [16]. Olivier Messiaen is perhaps the first theorist and composer to have introduced and studied the concept of rhythmic canon regardless of pitch [13].
From a mathematical point of view, the construction of tiling rhythmic canons can be formalized in terms of factoring abelian groups as the sum of subsets. Another representation makes use of polynomials with coefficients 0 and 1. It is in terms of these polynomials that the Coven-Meyerowitz conditions (T1) and (T2) are expressed, which are sufficient for the existence of rhythmic canons, and of which (T1) is necessary (see [6]). The necessity of (T2) remains an open problem and it is in this context that we find the central role of Vuza canons:
Theorem 1 (Amiot, [2]). If a rhythmic canon does not satisfy the (T2) condition, it is possible to collapse it to a Vuza canon that does not satisfy the (T2) condition.
Therefore, computing Vuza canons and checking whether a given rhythm tiles has become a problem of major interest in the mathematical music field.
In this paper, we introduce a linear problem whose binary solutions are all the aperiodic tiling complements of a given rhythm. In particular, we impose the aperiodicity of the solution through linear constraints; to the best of our knowledge, this is the first time this has been done.
The purpose of our model is twofold. First, we want to determine, for a given motif A, all the tiling motifs B in Z_n. In this case, we are interested not only in testing the tiling property but also in finding all the complements of A. Given a motif A and a period n, Matolcsi and Kolountzakis' Fill-Out Procedure provides a complete classification of the complements of A in Z_n [11]. The main idea behind this algorithm is to use packing complements and to add, one by one, the new elements discovered by an iterative search. To the best of our knowledge, this is the only algorithm able to provide the complete list of complements of a given motif, for n ≤ 200. For larger n the problem has been considered in [9], but the author was able to give only a lower bound on the number of tiling complements. Therefore, we choose to compare our performance with that of the Fill-Out Procedure. Secondly, we aim to determine if a given aperiodic motif A that does not satisfy the (T2) property tiles with an aperiodic motif B. This could be used to efficiently test possible counterexamples to the necessity of the (T2) condition [3].
The tiling problem is very similar to the decision problem DIFF studied in [10], which is shown to be NP-complete. This suggests a lower bound on the computational complexity of the tiling decision problem. Since our problem consists in solving a linear system with 3n−1 unknowns and 3n + 3(M_n(p) − 1) constraints, the complexity of finding a single aperiodic solution is O(n^{c+3M_n(p)}), where M_n(p) denotes the number of distinct primes in the factorization of n.
As we will see, solving this linear problem yields only one of the possible solutions. However, we can update the problem by removing the found solution from the feasible set. If we solve the updated problem, we are then able to find a new solution. By iterating this process until the problem becomes unsolvable, we find all the tiling complements of the given rhythm A.
Since we are not interested in looking for all the possible solutions but rather for all the classes of equivalent rhythms modulo translations or affine transformations, we can customize the constraints added at each step. In particular, if we are interested in finding all the solutions modulo affine transformations, the number of constraints to add at each iteration is equal to the cardinality of P = {a ∈ N | (a, n) = 1} times the cardinality of the set of all translations fixing the first entry of the solution equal to 1. Therefore, we add O(|P| · n/n_A) new constraints at every iteration, where n_A is the cardinality of the rhythm A. As a result, finding new tiling rhythms gets harder at each iteration.
The outline of the paper is the following. In Section 2, we recall the main notions and results about tiling rhythmic canons and formulate the tiling problem. In Section 3, we reformulate the tiling problem as an Integer Linear Problem, endow the obtained system with additional constraints to impose the aperiodicity of the solution, and then define an iterative algorithm able to compute the complete tiling of a given rhythm. In Section 4, we report the results of our tests and compare the time required by our method with that required by the Fill-Out Procedure. To conclude, in Section 5, we outline future work and possible research directions.
Tiling in music
In this section, we fix our notation and recall the main notions about rhythm in mathematics. We refer to [3] for a complete and exhaustive treatment of this topic.

Definition 1. A tiling rhythmic canon (TRC) (A, B) with period n is a factorization of the cyclic group Z_n given by subsets A, the inner rhythm, and B, the outer rhythm:

A ⊕ B = Z_n.

Fixed n ∈ N, a classical problem is to determine if, given an inner rhythm A, there exists an outer rhythm B. It is possible to characterize TRCs through characteristic polynomials.

Definition 2. Let A ⊂ N be finite. The characteristic polynomial of A is defined as

p_A(x) = ∑_{k∈A} x^k.

Lemma 1. Let p_A(x), p_B(x) ∈ N[x] and n a positive integer. Then

p_A(x) · p_B(x) = Δ_n(x)   mod (x^n − 1)

if and only if:

1. p_A(x), p_B(x) ∈ {0, 1}[x], and
2. A ⊕ B = {r_1, ..., r_n} ⊂ Z with r_i ≠ r_j mod (n) for each i, j ∈ {1, ..., n}, i ≠ j.
Remark 1. Note that

Δ_n(x) = (x^n − 1)/(x − 1) = ∏_{d | n, d ≠ 1} Φ_d(x),

where Φ_d(x) is the d-th cyclotomic polynomial, that is, the minimal polynomial of any primitive d-th root of unity over the field of the rational numbers.
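As a small sanity check of this factorization, the following sketch verifies Remark 1 with SymPy for a concrete period (n = 12 is our arbitrary choice).

```python
# Verify Delta_n(x) = prod of cyclotomic polynomials Phi_d(x) over d | n, d != 1.
from sympy import symbols, cyclotomic_poly, divisors, expand, simplify

x = symbols('x')
n = 12
delta = expand(sum(x**k for k in range(n)))   # Delta_n(x) = 1 + x + ... + x^(n-1)
prod = 1
for d in divisors(n):
    if d != 1:
        prod = expand(prod * cyclotomic_poly(d, x))
assert simplify(delta - prod) == 0            # the two sides agree
```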
An important property exploited in our algorithm is the invariance of solutions under affine transformations, that is, any affine transformation sends tiling solutions into tiling solutions.
Theorem 2 (Vuza, [17]). Let A ⊕ B = Z_n be a TRC and f : Z_n → Z_n be an affine transformation of Z_n, that is,

f : x ↦ ax + b   mod n,

where a is coprime with n and b ∈ Z_n. The affine transform of A by f still tiles with B; i.e. (aA + b) ⊕ B = Z_n.
Definition 3. Let k be a non-null element of Z_n. A rhythm A ⊂ Z_n is periodic modulo k if and only if k + A = A. A rhythm A ⊂ Z_n is aperiodic if and only if it is not periodic for any k ∈ Z_n.

Remark 2. Note that a set A is periodic modulo k | n if and only if

(x^n − 1)/(x^k − 1) | p_A(x).

Whenever a rhythm A is periodic modulo k | n, with k ≠ n, it is periodic modulo all multiples of k dividing n. For this reason, when it comes to checking whether A is periodic or not, it suffices to check if it is periodic modulo m_1 = p_1^{α_1−1} p_2^{α_2} ··· p_N^{α_N}, m_2 = p_1^{α_1} p_2^{α_2−1} ··· p_N^{α_N}, ..., m_N = p_1^{α_1} p_2^{α_2} ··· p_N^{α_N−1}, where n = p_1^{α_1} p_2^{α_2} ··· p_N^{α_N} is the prime powers factorization of n.
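A direct sketch of this test: by Definition 3 and Remark 2, aperiodicity only needs to be checked against the maximal divisors n/p over the distinct prime factors p of n.

```python
# Aperiodicity test for a rhythm A (a set of onsets) in Z_n.
from sympy import primefactors

def is_periodic_mod(A, n, k):
    A = {a % n for a in A}
    return {(a + k) % n for a in A} == A      # k + A = A (Definition 3)

def is_aperiodic(A, n):
    # it suffices to test the maximal divisors n/p (Remark 2)
    return not any(is_periodic_mod(A, n, n // p) for p in primefactors(n))

assert is_periodic_mod({0, 3}, 6, 3)          # {0, 3} is 3-periodic in Z_6
assert is_aperiodic({0, 1, 3}, 6)             # {0, 1, 3} has no period in Z_6
```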
Definition 4. A TRC (A, B) in Z_n = A ⊕ B is a Vuza canon if both A and B are aperiodic.
The existence of Vuza canons depends on the order of the cyclic group Z_n being factorized. In [8], Hajós proposed the following definition.
Definition 5. A finite abelian group G is a good group if in any tiling G = A ⊕ B one of the two subsets A and B has to be periodic. G is a bad group if there exists a tiling G = A ⊕ B where A and B are aperiodic.
In [5], [8], [14], and [15] the good groups and the bad groups have been completely characterized. Moreover, they partition the set of finite cyclic groups into two disjoint classes. In particular:

• the good groups, for which there are no Vuza canons, have orders in {p^α, p^α q, p² q², pqr, p² qr, pqrs : α ∈ N}, where p, q, r, s are distinct primes, and
• the bad groups, whose orders are of the type N = nmk with
  – (n, m) = 1,
  – n = n_1 n_2, m = m_1 m_2,
  – n_1, n_2, m_1, m_2, k ≥ 2.
Therefore, the analysis of Vuza canons exclusively concerns these last cyclic groups, whose orders are explicitly identified. Although the groups admitting such direct sum decompositions have been identified exactly, this does not mean that every rhythm in Z_n tiles. Ethan Coven and Aaron Meyerowitz found two sufficient conditions for a rhythmic pattern to tile [6]. Those conditions have been proved to be also necessary under certain hypotheses, but a proof of their necessity in the general case is still lacking. The polynomial representation of TRCs turns out to be the most suitable for presenting these results.
To state the condition introduced by Coven and Meyerowitz, we need to define two sets on the basis of the cyclotomic polynomials which divide the characteristic polynomial of the rhythm under consideration.

Definition 6. Let A ⊂ N be finite. We define:

• R_A := {d ∈ N* : Φ_d(x) | p_A(x)},
• S_A := {d ∈ R_A : d = p^α, p prime, α ∈ N*},

where N* := N \ {0}.
We can now state the following:

Theorem 3 (Coven and Meyerowitz, [6]). Let us consider the conditions:

(T1) A(1) = ∏_{p^α ∈ S_A} p;
(T2) if p_1^{α_1}, ..., p_N^{α_N} ∈ S_A, then p_1^{α_1} ··· p_N^{α_N} ∈ R_A,

where p_1^{α_1}, ..., p_N^{α_N} are powers of distinct primes. Then:

1. if A satisfies (T1) and (T2), then it tiles;
2. if A tiles, then it satisfies (T1);
3. if A tiles and |A| has at most two prime factors, then A satisfies (T2).

Determining whether the condition (T2) is necessary for a rhythm A to tile is still an open question. Izabela Laba and Itay Londner were able to prove that the condition (T2) holds for all integer tilings of period M = (p_i p_j p_k)², where p_i, p_j, p_k are distinct odd primes:

Theorem 4 (Laba and Londner, [12]). Let M = p_i² p_j² p_k², where p_i, p_j, p_k are distinct odd primes. Assume that A ⊕ B = Z_M, with |A| = |B| = p_i p_j p_k. Then both A and B satisfy (T2).
A Linear Model for tiling in Z_n
In this section, we introduce our Integer Linear model. First of all, we define the linear equations that describe the tiling property. Afterwards, we impose the aperiodicity constraints. Our main result is Theorem 5, where we state that imposing the aperiodicity of the solution can be done through linear constraints. Finally, we show how solving a sequence of increasingly harder linear problems leads to a complete tiling of a given rhythm A.
Feasibility Condition
Let us take an inner rhythm A and a possible outer rhythm B. Since the degrees of their characteristic polynomials p_A(x) and p_B(x) are both less than or equal to n − 1, the degree of the product p_R(x) is less than or equal to 2n − 2. We denote by r the vector with 2n − 1 entries containing the coefficients of the polynomial p_R(x) := p_A(x) p_B(x). From Lemma 1, we know that B tiles with A if and only if

p_R(x) = 1 + x + x² + ··· + x^{n−1}   mod (x^n − 1).   (1)

We can express condition (1) through the n linear equations r_i + r_{i+n} = 1, ∀i = 0, ..., n − 1.
Therefore, we can express the constraint

p_R(x) = p_A(x) p_B(x) = ∑_{i=0}^{n−1} x^i   mod (x^n − 1)

through the linear system

F_i(B) − r_i = 0   ∀i ∈ {0, ..., 2n−2},
r_j + r_{j+n} = 1   ∀j ∈ {0, ..., n−1},

where F_i(B) is the function that associates to a motif B the i-th coefficient of p_A(x) p_B(x), that is, F_i(B) := ∑_{j+k=i} a_k b_j (so that, for instance, F_{2n−2}(B) = a_{n−1} b_{n−1}), where b_0, b_1, ..., b_{n−1} are the coefficients of p_B. Notice that, since A is given, all the equations presented above are linear with respect to the variables b_i and r_i. We can then express them through a linear system
A · X = Y,   (2)

where

• A is a (3n−1) × (3n−1) matrix which depends only on the given rhythm A;
• X = (b, r) is the vector composed of the coefficients of p_B (namely b) and the coefficients of p_R (namely r), respectively;
• Y is the (3n−1)-dimensional vector defined as

Y_i = 0 if i ∈ {0, ..., 2n−2}, and Y_i = 1 otherwise.

Finally, in order to ensure that p_B and p_R are 0-1 polynomials, we require b_i and r_i to be binary variables, i.e. they can only assume the values 0 or 1.
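The feasibility conditions can be checked directly for a candidate pair: the coefficients F_i(B) are the linear convolution of the 0-1 coefficient vectors, and B tiles with A iff r_i + r_{i+n} = 1 for every i. A minimal sketch:

```python
# Verify the tiling condition (1) for a candidate pair (A, B) in Z_n.
import numpy as np

def tiles(A, B, n):
    a = np.zeros(n, dtype=int); a[list(A)] = 1     # coefficients of p_A
    b = np.zeros(n, dtype=int); b[list(B)] = 1     # coefficients of p_B
    r = np.convolve(a, b)                          # F_0(B), ..., F_{2n-2}(B)
    r = np.append(r, 0)                            # pad so r has length 2n
    return all(r[i] + r[i + n] == 1 for i in range(n))

assert tiles({0, 1, 4, 5}, {0, 2}, 8)              # {0,1,4,5} ⊕ {0,2} = Z_8
```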
Aperiodicity Constraints
Let us assume n = p_1^{α_1} p_2^{α_2} ··· p_N^{α_N}. Without loss of generality, we can suppose p_1 < p_2 < ··· < p_N and, therefore, if we define the set of the maximal divisors of n as M_n := {m_k = n/p_k}_{k=1,...,N}, we have m_N < m_{N−1} < ··· < m_1.
According to Remark 2, to verify if the rhythm B is periodic or not, it is sufficient to check its periodicity only for the elements of M_n. Let us take m_j ∈ M_n. To impose that the rhythm B is not m_j-periodic, we introduce the family of auxiliary variables U^{(j)} := {U_i^{(j)}}_{i=0,...,m_j−1}.
Each family U^{(j)} is composed of binary variables subjected to the following constraints:

∑_{k=0}^{p_j−1} b_{i+k m_j} − p_j U_i^{(j)} ≤ p_j − 1,   (3)
∑_{k=0}^{p_j−1} b_{i+k m_j} − p_j U_i^{(j)} ≥ 0,   (4)
∑_{i=0}^{m_j−1} U_i^{(j)} ≤ n_B/p_j − 1,   (5)

for each j such that p_j | n_B, where n_B is the cardinality of B.
Since ∑_{k=0}^{p_j−1} b_{i+k m_j} ≤ p_j, condition (3) assures us that U_i^{(j)} = 1 if ∑_{k=0}^{p_j−1} b_{i+k m_j} = p_j. Condition (4) assures us that U_i^{(j)} = 1 only if ∑_{k=0}^{p_j−1} b_{i+k m_j} = p_j. Therefore, conditions (3) and (4) combined assure us that

U_i^{(j)} = 1  ⟺  ∑_{k=0}^{p_j−1} b_{i+k m_j} = p_j.

Since ∑_{i=0}^{n−1} b_i = n_B, if ∑_{i=0}^{m_j−1} U_i^{(j)} = n_B/p_j, it follows that

∑_{k=0}^{p_j−1} b_{i+k m_j} = p_j if U_i^{(j)} = 1, and 0 otherwise,

and hence B is periodic of period m_j. By adding constraints (3), (4), and (5) to the linear system, we therefore remove all the periodic solutions from the feasible set.
Remark 3. To improve the efficiency, we remove one family of auxiliary variables: we drop U^{(0)}, since it contains the highest number of variables, and replace it with the single constraint (11) below. Indeed, if B = (b_0, b_1, ..., b_{n−1}) is not m_0-periodic, there exists a translation of B such that (6) holds.
Since conditions (3)-(5) and (6) are linear for any j, we can add them to the system described in (2) and obtain the following Integer Linear Programming (ILP) problem:

min O({b_i}, {r_i}, U)   (7)

s.t.  ∑_{j=0}^{i} a_{i−j} b_j − r_i = 0,   ∀i ∈ {0, ..., n−1},   (8)
      ∑_{j=i+1}^{n−1} a_{n+i−j} b_j − r_{i+n} = 0,   ∀i ∈ {0, ..., n−2},   (9)
      r_j + r_{j+n} = 1,   ∀j ∈ {0, ..., n−1},   (10)
      ∑_{j=0}^{m_0−1} b_j ≤ n_B m_0/n − 1,   (11)
      ∑_{k=0}^{p_j−1} b_{i+k m_j} − p_j U_i^{(j)} ≤ p_j − 1,   ∀j ∈ {1, ..., N}, ∀i ∈ {0, ..., m_j−1},   (12)
      ∑_{k=0}^{p_j−1} b_{i+k m_j} − p_j U_i^{(j)} ≥ 0,   ∀j ∈ {1, ..., N}, ∀i ∈ {0, ..., m_j−1},   (13)
      ∑_{i=0}^{m_j−1} U_i^{(j)} ≤ n_B/p_j − 1,   ∀j ∈ {1, ..., N},   (14)
      b_0 = 1,   (15)
      b_k ∈ {0, 1} ∀k ∈ {1, ..., n−1},   r_k ∈ {0, 1} ∀k ∈ {0, ..., 2n−2},   U_i^{(j)} ∈ {0, 1} ∀j ∈ {1, ..., N}, ∀i ∈ {0, ..., m_j−1},

where O is a suitable linear function to minimize. The constraint (15) allows us to reduce the size of the feasible set by removing a degree of freedom from the possible solution. We denote the model just introduced as the Master Problem (MP).

Remark 4. The set of constraints of the MP fully characterizes the possible aperiodic rhythms tiling with a given rhythm A. The functional O does not play any role; however, it can be used to induce an order or a selection criterion on the space of solutions, for example a functional preferring the tiling complements whose first components are as full of ones as possible. Choosing the right functional O can help in discerning, among all the possible solutions, the ones we want to find. However, since the aim of our tests is to find all the possible tilings, we do not need to impose any selection criterion and, therefore, we set O(b, r) := 0 for all the experiments.

Theorem 5. Given an inner rhythm A in Z_n, let Ŷ = (b, r) be a solution of MP. Then the rhythm B whose characteristic polynomial has coefficients b is aperiodic and tiles with A.

To find all the aperiodic complements in Z_n of a given rhythm A is therefore equivalent to finding all the solutions of the MP such that

∑_{i=0}^{n−1} b_i = n_B := n/n_A.

We denote with D_A the set containing all these solutions.
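To make the construction concrete, the following sketch builds the MP with gurobipy, the solver used in Section 4. It is our illustrative reconstruction, not the authors' code: it keeps every auxiliary family U^{(j)} instead of replacing U^{(0)} with constraint (11), and it adds the cardinality condition ∑ b_i = n_B directly as a constraint.

```python
# Sketch of the Master Problem (MP) for a rhythm A in Z_n.
import gurobipy as gp
from gurobipy import GRB
from sympy import primefactors

def build_mp(A, n):
    a = [1 if i in A else 0 for i in range(n)]
    nB = n // len(A)                               # |B| = n / |A|
    m = gp.Model("MP")
    b = m.addVars(n, vtype=GRB.BINARY, name="b")
    r = m.addVars(2 * n - 1, vtype=GRB.BINARY, name="r")
    # r = linear convolution of a and b (constraints (8)-(9))
    for i in range(2 * n - 1):
        m.addConstr(gp.quicksum(a[i - j] * b[j]
                                for j in range(n) if 0 <= i - j < n) == r[i])
    # tiling (10): r_j + r_{j+n} = 1; the missing r_{2n-1} is implicitly 0
    for j in range(n - 1):
        m.addConstr(r[j] + r[j + n] == 1)
    m.addConstr(r[n - 1] == 1)
    m.addConstr(b.sum() == nB)                     # cardinality condition
    m.addConstr(b[0] == 1)                         # constraint (15)
    # aperiodicity (12)-(14): forbid period m_j = n/p_j for each prime p_j | n
    for p in primefactors(n):
        mj = n // p
        U = m.addVars(mj, vtype=GRB.BINARY)
        for i in range(mj):
            s = gp.quicksum(b[i + k * mj] for k in range(p))
            m.addConstr(s - p * U[i] <= p - 1)     # (12)
            m.addConstr(s - p * U[i] >= 0)         # (13)
        if nB % p == 0:                            # else m_j-periodicity is impossible
            m.addConstr(U.sum() <= nB // p - 1)    # (14)
    m.setObjective(0, GRB.MINIMIZE)                # O(b, r) := 0, as in Section 4
    return m, b
```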
Cutting Sequential Algorithm
Once we find an aperiodic rhythm B^{(1)} tiling with a given rhythm A, we can remove B^{(1)} from the set of all possible solutions D_A and obtain a new set of feasible solutions D_A^{(1)}. Let us denote with MP^{(1)} the restriction of MP to D_A^{(1)} and call B^{(2)} the solution of MP^{(1)}; we can then remove this solution from D_A^{(1)}, define the set D_A^{(2)}, and define MP^{(2)}, starting the whole process again. By repeating this process until we find an unsolvable problem, we retrieve all the possible solutions of the original Master Problem and, therefore, we generate all the aperiodic rhythms tiling with the rhythm A.
In this paragraph, we detail how to cut out from the feasible set the solution found at each iteration.
Let B^{(1)} be a rhythm tiling with A and let b^{(1)} = (b_0, ..., b_{n−1}) be the coefficients of its characteristic polynomial. We denote with I^{(1)} the set of non-zero coordinate indexes of the vector b^{(1)}, that is,

I^{(1)} := {i ∈ {0, ..., n−1} | b_i = 1}.

If we add the constraint

∑_{i∈I^{(1)}} b_i ≠ n/n_A,   (16)

or, equivalently,

∑_{i∈I^{(1)}} b_i ≤ n/n_A − 1,   (17)

to the MP and solve it, we find a new solution b^{(2)} ≠ b^{(1)} of the tiling problem. We iterate this procedure until we find an unsolvable problem. All the solutions found during this process are stored in memory and given as the final output of the algorithm. In Algorithm 1, we sketch the pseudocode of this algorithm.
Remark 5. Adding the constraints one by one is highly inefficient. Therefore, once we find a solution, we compute all its affine transformations, which, according to Theorem 2, are possible solutions, and remove them as well. Since we impose b_0 = 1, we consider only the affine transformations that preserve this identity. This procedure, however, is customizable: if we remove only the translations of the found solution, the algorithm returns all the solutions modulo translations.
Given a solution b^{(1)}, we can remove the affine transformations of a given solution through a linear constraint. According to (17), we impose

∑_{i∈I^{(1)}} b_{a(i+k)} ≤ n_B − 1,   (18)

where k runs over all the translations which fix the first position and a runs over the set of numbers coprime with n.
Algorithm 1: The Cutting Sequential Algorithm.
Input: rhythm A
Output: S, list of aperiodic rhythms B such that A ⊕ B = Z_n
1: z* ← OPT(MP); add z* to S
2: while MP is solvable do
3:     add the cuts (18) removing z* and its affine transformations
4:     z* ← OPT(MP); add z* to S
5: return S
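A minimal sketch of this loop on top of build_mp above, cutting each solution together with its translates that keep b_0 = 1 (Eq. (18) with a = 1, i.e. solutions modulo translations); cutting the full affine orbit only changes the index sets of the cuts.

```python
# Cutting Sequential Algorithm: enumerate tiling complements modulo translation.
import gurobipy as gp
from gurobipy import GRB

def cutting_sequential(A, n):
    model, b = build_mp(A, n)          # build_mp from the sketch above
    nB = n // len(A)
    solutions = []
    model.optimize()
    while model.status == GRB.OPTIMAL:
        B = [i for i in range(n) if b[i].X > 0.5]
        solutions.append(B)
        for k in range(n):             # translations of B ...
            shifted = [(i + k) % n for i in B]
            if 0 in shifted:           # ... that keep the first entry at 1
                model.addConstr(gp.quicksum(b[i] for i in shifted) <= nB - 1)
        model.optimize()               # re-solve the cut problem
    return solutions
```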
Complexity of the Method
To conclude, we analyze the complexity of the system (2). The unknowns to determine are the 3n−1 binary coordinates of the vector (b, r) plus the variables needed to impose the aperiodicity constraints, U_i^{(j)}, which are

σ_n := ∑_{p ∈ P_n \ {p_0}} n/p,

where P_n is the set of primes that divide n. Therefore, we have 3n−1 constraints for the feasibility, the 3σ_n given by conditions (12), (13), and (14), plus the one given by condition (11). Since it is well-known that

lim_{n→∞} ∑_{p ∈ P, p ≤ n} 1/p ≈ lim_{n→∞} log(log(n)) = +∞,

it is impossible to give a bound on the number of aperiodicity constraints that does not depend on n.
Numerical Results
In this section, we report the results of our tests. Our experiments aim at showcasing the efficiency and speed of our model. We set our tests in two frameworks.
In the first one, we aim to find all the complements of a given rhythm. We compare the CSA with the Fill-Out Procedure on rhythms in Z_n, for n = 72, 108, 120, 144, 168, 180. In the second one, we want to determine if a given rhythm tiles with an aperiodic rhythm, i.e. we want to find just one of the possible complements of a given rhythm. This simplification allows us to test our method on larger values of n.
We ran all our experiments on an ASUS VivoBook 15 with an Intel Core i7. The algorithm is implemented in Python using Gurobi v9.1.1 [7].
Runtimes for Complete Tilings
We tested our method and the Fill-Out Procedure on several rhythms in various Z_n, for n = 72, 108, 120, 144, 168, 180. The experiment we ran is the following: given a rhythm A, we list every complement. Afterwards, we reverse the problem: we fix one of the found complements, namely B, and search for all the complements of B.
In Table 2, we compare the runtimes of CSA with the runtimes of the Fill-Out Procedure. The CSA is customized in order to find all the classes modulo affine transformations.
The Tail Effect
Every time we find a solution, we have to add new constraints to the Master Problem and solve it once again. As a result, the problem we solve gets computationally harder at each iteration. In particular, computing the last complements of a given rhythm requires far more time than computing the first half.
In Figure 1, we report the time required to find the next tiling solution for two rhythms in Z_180. As expected, the time required at each iteration grows exponentially.
Verifying the Tiling Property
We are now interested in determining if a given rhythm A admits an aperiodic tiling complement B. We believe that, by pairing our model with a function that builds non-(T2) rhythms A, we could search for a counterexample to the necessity of this condition. For this reason, being able to verify the tiling property of a rhythm A in a reasonable amount of time is important.
In Table 1, we report the rhythms tested with our method. The runtime required to determine the non-existence of an aperiodic complement varies in a range from 1 minute (for the rhythms in Z_1050, Z_2310, and Z_6300) up to 10 minutes (for the rhythm in Z_27225).
Conclusions and Future Works
We introduced a new Integer Linear Model able to find the aperiodic complements of a given rhythm. We ran several tests to prove the time efficiency of our method, especially when it comes to determining whether there exists an aperiodic complement of a given rhythm.
Our future aim is to characterize, through a Linear Programming model, the polynomial induced by a rhythm that does not satisfy the (T2) condition. This could lead to discovering insightful information on the structure of those canons. Moreover, by pairing an algorithm that quickly searches for non-(T2) motifs with the algorithm introduced in this paper, we hope to find a counterexample to the necessity of (T2).
We also want to improve our algorithm further by dividing the set of solutions into smaller disjoint sets. Hopefully, this division will mitigate the "tail effect" showcased in Subsection 4.1, further increasing the speed of our model.
Table 1: Non-(T2) candidate rhythms checked.

n      Rhythm tested
1050   {0, 15, 30, 35, 45, 60, 70, 75, 90, 105}
2310   {0, 5, 6, 10, 12, 18, 24, 26, 30, 31, 36}
6300   {0, 2, 4, 5, 6, 7, 8, 10, 12, 350, 352, 354, 355, 356, 357, 358, 360, 362}
27225  {0, 9, 15, 18, 24, 27, 30, 36, 39, 45, 54, 3025, 3034, 3040, 3043, 3049, 3052, 3055, 3061, 3064, 3070, 3079, 6050, 6059, 6065, 6068, 6074, 6077, 6080, 6086, 6089, 6095, 6104}
Figure 1: Time (in seconds) to find the next solution with CSA for two rhythms in Z_180. On the top A = {0, 12, 24, 45, 57, 69}, on the bottom A = {0, 12, 24, 36, 45, 48, 57, 69, 81, 93}.

Table 2: Comparison of runtimes (in seconds) of the Cutting Sequential Algorithm (CSA) and the Fill-Out Procedure (FP).

n    R_A                            R_B                              n° of A's   n° of B's  CSA_A    FP_A    CSA_B   FP_B
72   {2,8,9,18,72}                  {3,4,6,12,24,36}                 6 (2)       3 (1)      0.10     1.59    0.02    0.33
108  {3,4,12,27,108}                {2,6,9,18,36,54}                 252 (30)    3 (1)      7.84     896.06  0.03    0.72
120  {2,5,8,10,15,30,40,120}        {3,4,6,12,20,24,60}              18 (4)      8 (2)      0.27     24.16   0.07    2.13
120  {2,3,6,8,15,24,30,120}         {4,5,10,12,20,40,60}             20 (3)      16 (5)     0.14     10.92   0.15    3.30
144  {2,8,9,16,18,72,144}           {3,4,6,12,24,36,48}              36 (10)     6 (1)      2.93     82.53   0.06    3.77
144  {4,9,16,18,36,144}             {2,3,6,8,12,18,24,48,72}         6 (2)       12 (9)     0.10     7.13    1.71    66.27
144  {4,9,16,18,36,144}             {2,3,6,8,12,24,48,72}            6 (2)       312 (1)    —        —       —       —
144  {2,9,16,18,36,144}             {3,4,6,8,12,24,36,48,72}         12 (2)      6 (1)      0.11     12.13   1.08    33.39
144  {2,9,16,18,144}                {3,4,6,8,12,24,36,48,72}         48 (7)      6 (1)      0.83     67.91   —       —
144  {2,9,16,18,36,144}             {3,4,6,8,12,24,48,72}            12 (2)      156 (9)    1.71     74.78   —       —
168  {2,7,8,14,21,42,56,168}        {3,4,6,12,24,28,84}              54 (8)      16 (3)     17.61    461.53  0.13    7.91
168  {2,3,6,8,21,24,42,168}         {4,7,12,14,28,56,84}             42 (4)      104 (15)   0.91     46.11   1.94    35.36
180  {3,4,5,12,15,20,45,60,180}     {2,6,9,10,18,30,36,90}           2052 (136)  8 (2)      1422.09  >3600   0.25    1243.06
180  {2,5,9,10,18,20,45,90,180}     {3,4,6,12,15,30,36,60}           96 (12)     6 (1)      48.04    900.75  0.11    8.22
180  {3,4,9,12,36,45,180}           {2,5,6,10,15,18,20,30,60,90}     1800 (171)  16 (5)     492.18   >3600   0.18    7.51
180  {2,4,9,18,20,36,180}           {3,5,6,10,12,15,30,45,60,90}     120 (18)    9 (2)      8.82     280.72  0.29    14.34
| []
|
[
"An Extra Dimensional Approach of Entanglement",
"An Extra Dimensional Approach of Entanglement"
]
| [
"Axel Dietrich [email protected] \nInstitute of Human Genetics\n\n",
"Willem Been \nDepartment of Anatomy and Embryology\nUniversity of Amsterdam\nAMC M-1, Meibergdreef 151105 AZAmsterdamNLThe Netherlands\n"
]
| [
"Institute of Human Genetics\n",
"Department of Anatomy and Embryology\nUniversity of Amsterdam\nAMC M-1, Meibergdreef 151105 AZAmsterdamNLThe Netherlands"
]
| []
| Motivated by the apparent lack of a workable hypothesis, we developed a model to describe phenomena such as entanglement and the EPR paradox. In the model we propose the existence of extra hidden dimensions. Through these dimensions it will be possible for particles that originate from one source to remain connected. This connection results in an instantaneous reaction of one particle when the other particle is manipulated. We imagine entanglement in such a model. The results of the experiments that have been performed on this subject do not contradict the existence of the extra dimension(s). In addition, the model opens the possibility of unifying the theory of quantum mechanics, gravitation and the general theory of relativity. | null | [
"https://arxiv.org/pdf/quant-ph/0307117v1.pdf"
]
| 118,178,731 | quant-ph/0307117 | 4ae48ffa80c269dc9c1070ca038a6eb65102f893 |
An Extra Dimensional Approach of Entanglement
Axel Dietrich [email protected]
Institute of Human Genetics
Willem Been
Department of Anatomy and Embryology
University of Amsterdam
AMC M-1, Meibergdreef 151105 AZAmsterdamNLThe Netherlands
An Extra Dimensional Approach of Entanglement
03.65.Ud, 04.50.+h, 11.25.-w, 11.25.Mj Keywords: Entanglement, EPR paradox, Extra dimensions, Superstrings
Motivated by the apparent lack of a workable hypothesis, we developed a model to describe phenomena such as entanglement and the EPR paradox. In the model we propose the existence of extra hidden dimensions. Through these dimensions it will be possible for particles that originate from one source to remain connected. This connection results in an instantaneous reaction of one particle when the other particle is manipulated. We imagine entanglement in such a model. The results of the experiments that have been performed on this subject do not contradict the existence of the extra dimension(s). In addition, the model opens the possibility of unifying the theory of quantum mechanics, gravitation and the general theory of relativity.
Introduction
In 1935, Einstein, Podolsky and Rosen initiated a discussion that continues up until now and which forms the basis for many experiments [1]. They put forward serious criticism of the validity of quantum theory. They stated in their paper that the quantum mechanical description of reality is not complete; or, when operators corresponding to two physical quantities do not commute, the two quantities cannot have simultaneous reality. They developed a thought experiment in which two systems interact for a short period of time, after which there is no longer any interaction. It would be impossible to determine the exact properties of simultaneously appearing realities. Bohm [2] expanded the formulation of the thought experiment using a particular example of two particles that originated from one source with opposite spin angular momentum. In that case he was unable to prove that the world is actually made up of separately existing and precisely defined elements of reality. To solve this problem Bohm introduced hidden variables. However, quantum theory is inconsistent with the assumption of hidden causal variables. As a consequence, this was in agreement with Einstein, Podolsky and Rosen's claim that the theory of quantum mechanics is not complete enough to describe and predict these phenomena [1]. Bell [3] formulated an inequality principle to test for the existence of hidden variables. He showed that there could be no local hidden variables if the inequality was violated.
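Bell's inequality can be illustrated with a short numerical example (ours, not part of the original paper): for the spin-singlet state the quantum correlation at analyzer angles a and b is E(a, b) = −cos(a − b), and with the standard CHSH angle choices (assumed here) the combination S reaches 2√2 ≈ 2.83, violating the local-hidden-variable bound |S| ≤ 2.

```python
# CHSH combination for singlet-state correlations.
import numpy as np

def E(a, b):
    """Singlet-state correlation for analyzer angles a, b (radians)."""
    return -np.cos(a - b)

# Standard CHSH angle choices: a = 0, a' = pi/2, b = pi/4, b' = 3*pi/4.
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S), 2 * np.sqrt(2))  # ~2.828 in both cases: Tsirelson's bound
```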
Experiments
Alain Aspect and co-workers [4,5] performed excellent experiments to determine the mutual influence of a pair of photons originating from one single source. The correlation of the linear polarizations of pairs of photons was measured. It was demonstrated that the obtained results were in agreement with the quantum mechanical predictions. This was a strong violation of the generalised Bell inequalities. The measured effect was carried out over 13 m within a time of 10 nanoseconds [6]. Bridging a distance of that size at the speed of light would require about four times the observed time. This excludes an interaction in which elements are exchanged at the speed of light. The possibility of selection cannot be excluded in the experiments of Aspect and his group. Because they use polarimeters there is a selection on a certain type of photons, and the polarization states are determined. In addition, Aspect et al. [5] used periodic sinusoidal switching, which is predictable into the future. This does not exclude an explanation by communication slower than the speed of light [7]. Aspect and his group did not actually manipulate one photon and then determine the state of its twin photon originating from the same source. The polarization states could actually be the initial states that have existed from the beginning. The group of Anton Zeilinger [7] claims to be the first to actually change the polarization state of a photon and determine this state for the corresponding photon over a distance of 400 m. During this period the quantum mechanical correlation is conserved. This is the correlation we know as entanglement.
Description
The description of the correlation of two particles, which in most cases originate from one source, is called entanglement. All experiments suggest that there is more than just well-preserved initial properties of the two particles. The explanation nowadays is described by the term entanglement. Schrödinger [8] was the first to describe the phenomenon of a more-than-classical correlation of two particles originating from one source. He introduced the term "Verschränkung", which is entanglement as we call it nowadays. Shimony [6] describes an additional correlation in the route the entangled particles follow: they represent a mirror image of the pathway that is followed. There must be a certain type of connection, as stated by Shimony: "striking correlation in their behaviour, so that a measurement done on one of the entities seems instantaneously to affect the result of the measurement on the other" [6], or as put by Einstein: "spooky actions at distance" [9]. They all agree on the fact that there has to be some kind of correlation. Why do we want to produce a more or less classical-like explanation of a phenomenon such as entanglement? Because entanglement, in our vision, is a description; it does not give an explanation in the classical way, probably because such an explanation is very difficult to give. We think that a model is necessary to link different mechanisms and make a comparison possible. So we will try to present an image that might give a clue towards an explanation. In addition, we would like to produce a model in which quantum mechanical phenomena can be linked to the other fields of physics, because there are particles that obey the laws of both quantum mechanics and relativity. For example, photons behave according to the quantum description in all experiments on entanglement; however, they also follow the laws of general relativity and are influenced by gravity. We developed a way of viewing things that matches both phenomena.
Possible explanations for the phenomenon

What are the possible explanations for entanglement? We do not expect that entanglement is just a perfect conservation of the initial qualities. This could hardly explain the choice of pathway both particles take, as described by Abner Shimony [6]. There are no workable models in the classical sense which give an explanation for phenomena such as entanglement. We can think of a number of possible explanations:

1) Over the given distance, the results of the experiments performed up until now might give the impression of the existence of a messenger which moves faster than the speed of light. However, messengers with a speed exceeding the speed of light are forbidden in the theory of special relativity [10]. As a consequence, we will not take this option into consideration.
2) The inexplicable correlation of the particles could be the result of a field comparable with the electromagnetic field. We think this is unlikely, because the observed results cannot be realised by phenomena such as fields: such a field would influence all particles with equal qualities, and it would seem that all those particles are entangled.
3) As a last option we could think of a connection between the particles, or even that the particles remain a unity. In that case, because we cannot see the connection, it probably has to lie in an extra hidden dimension. We think that this is the only possible explanation.
The extra dimension
Since Kaluza and Klein suggested a hidden fifth dimension in the early 1920s, there has been an ongoing search for extra dimensions, which recently gave rise to a discussion [11,12,13]. The dimensions described up until now are very small (10^-35 m), also in the Calabi-Yau setting [12]. There is a need for bigger extra dimensions, and there is an ongoing search for them. It would be a great satisfaction if an extra dimension of considerable size could be found. A start on the experimental search for large extra dimensions has been made [14]. Perhaps we do not have to continue this search, because the large dimensions have already been found, as we try to demonstrate in this paper. We try to solve the EPR/Bohm/Bell/Aspect contradiction using extra, invisible dimensions, as looked for by Arkani-Hamed et al. [11], Antoniadis et al. [13] and Greene [12], of a size considerably larger than 1 mm. In the case of the experiments of Aspect and co-workers [4,5] more than 10 m was observed, and up to 400 m was observed by Weihs et al. [7].
We think it is plausible that we have found strong indications for the existence of these dimensions indirectly, without realising that we did so. Because the main goal of this paper is to provide visual insight into a mechanism, we will not use mathematics to describe the observed or expected phenomena. We just try to find a model that explains several phenomena such as entanglement and the EPR paradox. For this reason we refer to Ludwig Wittgenstein [15], who pointed out that the use of mathematics is a method to describe certain phenomena, but that it is not necessary for the explanation. We will try to approach the explanation of entanglement in a descriptive manner.
How should we imagine this?
The best way is to imagine the situation in 2-D, where the 2-D position corresponds with the actual 3-D situation, whilst the 3-D figure actually corresponds with the extra-dimensional position (figure 1). Imagine a single event, for example the decay of an atom or molecule producing two particles with different spin angular momentum, as Bohm suggested [2]. It is our suggestion that the two particles will remain connected through an extra invisible dimension. At the beginning there is one particle, and it will remain a unity because there is a connection by means of a certain type of superstring, or even the real particle structure, running through one of the extra invisible dimensions (figure 1). This causes the instantaneous, so-called mutual influence. The dimension can be considered as described in figure 1a. However, as an alternative, one could consider the three dimensions in which we live to be folded, as suggested by Arkani-Hamed et al. [11] (see figure 1b). In essence there is no difference between figure 1a and figure 1b. In the case of figure 1b, the distance of e.g. 400 m in the 3-D world could be less in the extra dimension. In both cases the circular endings are the only parts that can be observed in our limited, familiar three-dimensional world. When we manipulate one end, the connected corresponding other end will react instantaneously. This can also explain the change of route described by Shimony [6]. In this way it can be explained why it seems that there is a messenger system which exceeds the speed of light. It certainly can explain phenomena that correspond with the description of entanglement. The connection through the extra dimension need not necessarily consist of one string, or the particle unity; it could be considered as composed of more components, such as a chain of closed superstrings. In conclusion, there is no need for a messenger between the particles with a speed exceeding the speed of light. It is a matter of unity: Bell's inequality does not apply, because the local qualities of the two particles are in essence a unity going through an extra, timeless dimension.

Legend to figure 1: p and q = the two particles of one pair, originating from one initial event; s = source of the particle pair; → and ← = a quality, such as the spin angular momentum, of the different particles; r = extra dimension.
Experimental indications
The results of the experiments of Alain Aspect et al. [4,5] and Weihs et al. [7] can actually be considered as a strong indication for the existence of the large, extra, invisible dimensions searched for by Arkani-Hamed et al. [11]: in this case not dimensions of 1 mm but even of more than 10 m. The experiments of Weihs et al. [7] demonstrate entanglement even over a distance of 400 m after an actual manipulation of the quantum qualities of the particles. Nevertheless, the size could be considerably smaller when the model presented in figure 1b is used. We could even consider the possibility that the distance approaches zero, in which case we observe one entity from two "sides". Arkani-Hamed and his colleagues wonder why no larger extra dimensions are observed [11]. As a matter of fact, considerably larger extra dimensions can be seen in the results of the experiments of Alain Aspect's and Anton Zeilinger's groups. In our vision these results actually demonstrate the extra dimensions. Our proposed model in particular opens the opportunity to unify the theories of quantum mechanics, gravitation and general relativity [16]. This aspect of the model can be considered as the introduction of a broader unifying theory. We will try to present a more general theory in a separate paper.
Furthermore, the observed dimensions are rather large, up to more than 400 m. It is not clear over what distances the elasticity of the dimensions or strings will remain intact, or at what distance the elasticity will snap. In the case of the version of figure 1b this is not necessary. It is unclear whether, if the distance gets sufficiently large, the strings will disintegrate when they reach the end of their elasticity, as proposed by Wheeler [17]. Aharonov [18] speaks of entanglement length and distinguishes finite and infinite entanglement length, which depends on the background noise. Below the critical background noise the entanglement length is infinite [18]. When we consider the Big Bang as a starting point, or even before the Big Bang as suggested by Tryon [19], the initiation of gravity might be found there. Tryon considered the gravitational energy as the opposite component of mass, giving a net energy of our universe of zero. Probably he imagined that mass particles would remain connected by gravity. He did not mention extra dimensions, but these could explain the mechanism. Particles could remain connected by gravitation through extra dimensions as imagined in figure 1a. In that case particles could remain connected, but perhaps the connections are disrupted at larger distances when the end of the elasticity is reached [17] or when the background noise becomes too strong [18].
Perspectives
It will be very useful to repeat the entanglement experiments, not by determination of polarization but by studying, and altering, the actual spin angular momentum of the particles, as proposed by Bohm [2]. This is within the range of possibility because there is a system in which spin angular momentum can be used on 199Hg, as pointed out by Fry and Walther [20]. In this type of experiment undesired selection, like polarization, can be avoided. In addition, it will probably be useful to develop experiments in which qualities other than spin angular momentum are used. This could even go as far as the annihilation of one of the two particles. Further experiments will be necessary to determine whether there is "snapping" of strings at greater distances. Experiments such as those performed by the group of Anton Zeilinger [7] have to be carried out on a larger scale, for example in space, to determine the maximum size of the extra dimension(s). In addition, we think that our model will open the gate towards the possibility of unifying quantum mechanics and the theory of relativity.
FIG. 1. Schematic representation of the connection through an extra dimension; a and b are alternative representations.
Acknowledgements
We thank Ruud van den Bogaard for useful suggestions and discussion, and Eelco Roos and Rob Lutgerhorst for producing the figures.
[1] Einstein, A., Podolsky, B. & Rosen, N. Can Quantum-Mechanical Description of Physical Reality Be Considered Complete? Physical Review 47, 777-780 (1935).
[2] Bohm, D. Quantum Theory of the Measurement Process. In: Quantum Theory. Prentice-Hall, Englewood Cliffs, N.J., pp. 583-623 (1951).
[3] Bell, J.S. On the Einstein Podolsky Rosen Paradox. Physics 1(3), 195-202 (1964).
[4] Aspect, A., Grangier, P. & Roger, G. Experimental Realization of Einstein-Podolsky-Rosen-Bohm Gedankenexperiment: A New Violation of Bell's Inequalities. Physical Review Letters 49, 91-94 (1982a).
[5] Aspect, A., Dalibard, J. & Roger, G. Experimental Test of Bell's Inequalities Using Time-Varying Analyzers. Physical Review Letters 49, 1804-1807 (1982b).
[6] Shimony, A. The Reality of the Quantum World. Scientific American 258(1), 36-43 (1988).
[7] Weihs, G., Jennewein, T., Simon, C., Weinfurter, H. & Zeilinger, A. Violation of Bell's Inequality under Strict Einstein Locality Conditions. Physical Review Letters 81, 5039-5043 (1998).
[8] Schrödinger, E. Die gegenwärtige Situation in der Quantenmechanik. Die Naturwissenschaften 23, 807-812, 823-828, 844-849 (1935).
[9] Mermin, N.D. Is the Moon There When Nobody Looks? Reality and the Quantum Theory. Physics Today 38, 38-47 (1985).
[10] Einstein, A. Zur Elektrodynamik bewegter Körper. Annalen der Physik 17, 891-921 (1905).
[11] Arkani-Hamed, N., Dimopoulos, S. & Dvali, G. The Universe's Unseen Dimensions. Scientific American 283(2), 62-69 (2000).
[12] Greene, B. The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory. Vintage, London (2000).
[13] Antoniadis, I., Arkani-Hamed, N., Dimopoulos, S. & Dvali, G. New Dimensions at a Millimeter to a Fermi and Superstrings at a TeV. Physics Letters B 436, 257-263 (1998).
[14] Hoyle, C.D., Schmidt, U., Heckel, B.R., Adelberger, E.G., Gundlach, J.H., Kapner, D.J. & Swanson, H.E. Submillimeter Test of the Gravitational Inverse-Square Law: A Search for "Large" Extra Dimensions. Physical Review Letters 86, 1418-1421 (2001).
[15] Wittgenstein, L. Tractatus Logico-Philosophicus. Logisch-philosophische Abhandlung. Annalen der Natur- und Kulturphilosophie XIV, 185-262 (1921).
[16] Einstein, A. Die Grundlage der allgemeinen Relativitätstheorie. Annalen der Physik 49, 769-822 (1916).
[17] Misner, C.W., Thorne, K.S. & Wheeler, J.A. Gravitation. Freeman & Co., San Francisco (1973).
[18] Aharonov, D. Quantum to Classical Phase Transition in Noisy Quantum Computers. Physical Review A 62, 062311 (2000).
[19] Tryon, E. Is the Universe a Vacuum Fluctuation? Nature 246, 396-397 (1973).
[20] Fry, E.S. & Walther, T. A Bell Inequality Experiment Based on Molecular Dissociation-Extension of the LO-Shimony Proposal to 199Hg (Nuclear Spin 1/2) Dimers. In: Experimental Metaphysics. Boston Studies in the Philosophy of Science, vol. 193, eds. R.S. Cohen et al. Kluwer Academic Publishers, Dordrecht (1997).
| []
|
[
"Eleven-year, 22-year and ∼90-year solar cycles discovered in nitrate concentrations in a Dome Fuji (Antarctica) ice core",
"Eleven-year, 22-year and ∼90-year solar cycles discovered in nitrate concentrations in a Dome Fuji (Antarctica) ice core"
]
| [
"Yuko Motizuki ",
"Yoichi Nakai ",
"Kazuya Takahashi ",
"Takashi Imamura ",
"Hideaki Motoyama "
]
| []
| []
| Ice cores are known to yield information about astronomical phenomena as well as information about past climate. We report time series analyses of annually resolved nitrate variations in an ice core, drilled at the Dome Fuji station in East Antarctica, corresponding to the period from CE 1610 to 1904. Our analyses revealed clear evidence of ∼11, ∼22, and ∼90 year periodicities, comparable to the respective periodicities of the well-known Schwabe, Hale, and Gleissberg solar cycles. Our results show for the first time that nitrate concentrations in an ice core can be used as a proxy for past solar activity on decadal to multidecadal time scales. Furthermore, 11-year and 22-year periodicities were detected in nitrate variations even during the Maunder Minimum (1645-1715), when sunspots were almost absent. This discovery may support cyclic behavior of the solar dynamo during the grand solar minimum. | null | [
"https://export.arxiv.org/pdf/2209.11330v1.pdf"
]
| 252,519,582 | 2209.11330 | 5fb8a28d62c11dff336c1798a91a9fa533c93c68 |
Eleven-year, 22-year and ∼90-year solar cycles discovered in nitrate concentrations in a Dome Fuji (Antarctica) ice core
Yuko Motizuki
Yoichi Nakai
Kazuya Takahashi
Takashi Imamura
Hideaki Motoyama
Eleven-year, 22-year and ∼90-year solar cycles discovered in nitrate concentrations in a Dome Fuji (Antarctica) ice core
solar cycle, ice core, nitrate, Dome Fuji, Maunder Minimum
Ice cores are known to yield information about astronomical phenomena as well as information about past climate. We report time series analyses of annually resolved nitrate variations in an ice core, drilled at the Dome Fuji station in East Antarctica, corresponding to the period from CE 1610 to 1904. Our analyses revealed clear evidence of ∼11, ∼22, and ∼90 year periodicities, comparable to the respective periodicities of the well-known Schwabe, Hale, and Gleissberg solar cycles. Our results show for the first time that nitrate concentrations in an ice core can be used as a proxy for past solar activity on decadal to multidecadal time scales. Furthermore, 11-year and 22-year periodicities were detected in nitrate variations even during the Maunder Minimum (1645-1715), when sunspots were almost absent. This discovery may support cyclic behavior of the solar dynamo during the grand solar minimum.
Introduction
Traditionally, the cosmogenic nuclides 14C in tree rings 1)−3) and 10Be in ice cores 4)−7) have been used to investigate past solar activity cycles. Brehm et al. 2) found an 11-year solar cycle in annually resolved 14C tree sample measurements covering the last millennium, but they could not identify an 11-year cycle during the Maunder Minimum (1645-1715), 8) when sunspots were almost absent, because of weather-induced noise. Annually resolved 10Be data from a Greenland ice core record an 11-year solar cycle during the last 600 years (1400-2000), 5) even during the Maunder Minimum, despite the presence of weather-induced noise. These tree-ring 14C records and ice-core 10Be records do not agree well, either with respect to observations of 11-year oscillations in the grand solar minimum or with respect to oscillations on time scales of 1,000 years and longer. 3) It is thus important to seek another potential proxy for solar activity. In the present study, we investigated the nitrate ion (NO3−) concentration in an Antarctic Dome Fuji ice core as a potential new proxy.
It has been known that continental and anthropogenic effects on NO3− concentrations in Antarctic snow and ice are not significant. 9) NO3− data obtained from Antarctic ice cores have therefore suggested that there might be a relationship between NO3− and solar activity. 10)−12) For example, Watanabe et al. 11) demonstrated the existence of an 11-year cycle in NO3− concentrations in a firn core covering a 60-year period collected from the S25 site near Showa station in coastal East Antarctica (Fig. 1). Traversi et al. (2012) 12) studied NO3− concentrations in an ice core recovered from Talos Dome (Fig. 1). They noted that meteorological noise on an interannual scale made it impossible to resolve individual 11-year solar cycles, and reported a weak statistical correlation (r = 0.31) between variations in solar activity-related cosmic-ray intensities and the NO3− concentrations in a segment corresponding to the "pre-industrial" period. With deeper ice core analyses, they concluded that NO3− concentrations in the Talos Dome ice core were a potential new proxy for solar activity on centennial to millennial time scales.
Extraction of the 11-year solar cycle from NO3− concentration variations in ice cores is generally considered to be difficult, mainly because the process of NO3− subsidence to the ground and subsequent preservation in ice cores is affected by atmospheric dynamics in the troposphere and by recycling from central Antarctica to the coast, even if the NO3− budget in the stratosphere follows the solar modulations. 13) Furthermore, NO3− may have several terrestrial origins: for example, NO3− is generated by lightning and biomass burning. 9)14)15) As well, sporadically enhanced deposition of NO3− in sea salt and terrestrial aerosols 13) can affect NO3− concentrations in ice cores and, owing to its chemical properties, NO3− evaporates easily and can be displaced by other anions, such as sulfate (SO42−), that form strong acids (see Sect. 3). Nevertheless, despite these difficulties, NO3− constitutes one of the main anionic species in ice cores, so much effort has been put into understanding its origins and concentration profiles.
Solar UV radiation (wavelengths of 200-315 nm) is absorbed in the stratosphere and induces the production of reactive nitrogen, designated NOy (represented simply by NO, NO2, and HNO3 in this work), from atmospheric N2O (Fig. 2). 16)17) As discussed by Vitt and Jackman, 17) the oxidation of N2O by solar UV dominates the global NOy source, but galactic cosmic rays (GCRs), which descend along geomagnetic field lines to the polar regions, are also significant at polar latitudes. The effects of in situ production of NOy by solar UV radiation, and of its transport from lower (<50°) latitudes, on the amount of NOy in the polar stratosphere are likely to be larger than the effect of NOy production by GCRs 17) (see Sect. 3).
The NOy in the polar stratosphere is then deposited in precipitation down to the troposphere (a process called denitrification) within the polar vortex that develops in Antarctic winters (Fig. 2), which is where polar stratospheric clouds (PSCs) form. The NOy species, which constitute the aerosols in PSCs, are transported downwards into the troposphere by gravitational sedimentation and are then precipitated onto the surface snow. 18)19) NOy thus accumulates in ice cores and, when ice core samples are melted, occurs as aqueous NO3− ions. Since no solar UV radiation reaches Antarctica in wintertime, no back reactions (Fig. 2) operate, so the stratospheric NOy in the Antarctic winter is composed mainly of HNO3.
Inclusion of the NOy budget of stratospheric origin in ice cores is site-specific and depends on the location of the drilling site: NO3− in Antarctic ice cores drilled at coastal sites is dominated by components from the troposphere, 9) whereas cores obtained from inland sites, in particular sites effectively affected by the denitrification process, are likely to contain some stratospheric NO3− components, as in the case of the Dome Fuji cores mentioned below. As a result, some ice cores from inland Antarctica are likely to contain a greater proportion of stratospheric NOy than some cores from coastal Antarctica. Solar UV radiation affects both the NOy production in the stratosphere (200-315 nm) and the loss of NO3− at the surface (300-340 nm). In particular, there are post-depositional processes 20) that also affect specific sites and greatly reduce the NO3− concentrations in ice cores collected from low-accumulation sites: the most relevant post-depositional process is photolysis in surface snow, caused by solar UV (300-340 nm, mainly UV-A) reaching the ground. 21)−26) Dome Fuji station (77°19′01″ S, 39°42′12″ E) is located at an inland Antarctic site (Fig. 1), on the summit of a mountain in Dronning Maud Land (elevation 3,810 m a.s.l.). The mean annual temperature of snow at Dome Fuji from 1995 to 2006, measured at 10 m depth, was −57.3 °C, and the mean rate of snow accumulation was 27.3 mm water-equivalent yr−1. 27) Snow and ice at Dome Fuji station may contain a relatively large fraction of stratospheric chemical components, relative to tropospheric components, compared with some other sites in Antarctica, based on the following experimental evidence:
(1) Radioactive tritium (T) from fallout from the nuclear bomb tests conducted in the 1960s, which is a key tracer of stratospheric subsidence, is found in ice cores. In fact, T concentrations in snow samples collected from around the Dome Fuji site were the highest measured in any of the samples collected from snow pits at 16 Antarctic sites (Fourré et al., 2006 28)), including Dome C, the Halley Research Station, the South Pole, Talos Dome, and Vostok (Fig. 1). Note, however, that Fourré et al. (their Table 1) incorrectly reported the highest T value (4200 Tritium Units, where 1 TU indicates a T/1H ratio of 10−18) in a sample obtained around Dome Fuji station as being from Dome C. (At the time the paper, Kamiyama et al. (1989), 29) cited by Fourré et al. as the reference for the highest 4200 TU value, was published, Dome Fuji station had yet to be constructed; but a site neighbouring the present Dome Fuji station was referred to as "Dome Camp", with the abbreviation "DC", which Fourré et al. misinterpreted as "Dome C".)
(2) Both wet and dry deposition containing tropospheric components also contain sea salt. However, the ratios of the averaged ionic concentrations in snow and ice samples from Dome Fuji are inconsistent with those that would be expected if most of the ions were of tropospheric origin. This result also suggests that the ionic components of samples from Dome Fuji may contain a higher proportion of components of stratospheric origin. 29)−31)
The denitrification process affecting Dome Fuji, as portrayed above, is supported by a year-round observation conducted in 1997-1998 by the 38th Japanese Antarctic Research Expedition. 32) The NO3− concentration in fresh snowfall observed at Dome Fuji started to increase drastically from late winter (July), giving a prominent peak in early spring (August), and the NO3− enhancement continued until the end of spring (October). The NO3− concentrations observed in the spring (Aug-Oct 1997) were five times larger than those observed in the fall (Feb-Apr 1997), and no summer peak was observed; these features are distinctly different from those observed at other Antarctic sites. 13)23)33) Also, the increased proportions of NO3− ions in equivalent concentrations from July to October 1997 were accompanied by exactly the same increased amount of H+ as counter cations. This indicates that the molecules in the depositional process were in the form of gaseous HNO3. As mentioned above, these observations are consistent with a picture in which the NO3− concentrations in Dome Fuji ice cores are affected by a denitrification process associated with the formation of the polar vortex and the PSCs, with a precipitation lag of about 1 to 2 months (see Fig. 2). A more detailed study of the seasonal variations in ionic compositions observed at Dome Fuji will be presented elsewhere.
The explanation for the relatively high apparent stratospheric contribution to the Dome Fuji ice cores could be that the elevation of the site is high (3,810 m a.s.l.) and that the specific location may be in an area that tends to be affected by the denitrification process. Although the detailed mechanism explaining the relatively higher stratospheric contribution around the Dome Fuji area compared with other areas of Antarctica remains to be investigated, a Dome Fuji core appears to have reasonable potential for studying stratospheric NOy.
Because the snow accumulation rate at Dome Fuji is low, HNO3 (including NO3−; see Fig. 2) precipitated in snow may undergo photolysis by solar UV radiation, as mentioned above, leading to emission from or diffusion within the surface snow (Fig. 2). In fact, NO3− concentrations in the top ∼40 cm of snow at the Dome Fuji site were observed to decrease very rapidly with depth; the concentration at 1 m snow depth can be as low as about one-tenth of the concentration at the snow surface (see fig. 6 of Watanabe et al. 34)). This rapid decrease with depth implies that the post-depositional loss of NO3− at Dome Fuji may be controlled predominantly by NO3− photolysis, as reported for Kohnen station, located in the same Dronning Maud Land area (Fig. 1). 25) The photolysis effect on the surface snow may greatly affect the NO3− concentration profile recorded in Dome Fuji ice cores. This topic is also examined in Sect. 3.
The DF01 ice core: Drilling, analyses, and dating
We investigated the uppermost part of a 122-m-long firn core (DF01) drilled in 2001 at Dome Fuji station. Core DF01 was obtained from the same hole as a deep ice core (DF2; 3,035.22 m) drilled over the course of the 7 years from 2001 to 2007 35)36) and extending back to ∼720,000 years ago. 37) The upper, younger part of the DF01 firn core was very fragile, and some portion of the top 7.7 m of the core was lost during drilling. We performed continuous, annually resolved measurements of ions in the DF01 ice core segment from 7.7 to 85.5 m depth at RIKEN Nishina Center.
The concentrations of anions (SO42−, Cl−, NO3−, F−, CH3COO−, HCOO−, NO2−, C2O42−, PO43−, and CH3SO3−) and cations (Na+, K+, Mg2+, Ca2+, and NH4+) in the DF01 ice core were analyzed by a highly sensitive ion chromatography technique (using an ICS-2000 system for anions and a Dionex 500 system for cations); see Motizuki et al. (2017) 31) for details of the analysis procedures. Here, we report NO3− concentrations and periodicities in the DF01 ice core for a ∼300-year period (1610-1904; corresponding core depths 23.0-7.7 m). In this core segment, the precision of the NO3− concentration measurements was reanalyzed in detail and found to be within 0.14-0.48 µg L−1 at 10 µg L−1, quoted as a maximum, assuming that the errors deduced from each chromatogram are independent and adopting the law of error propagation.
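The quoted law of error propagation reduces to a quadrature sum. Below is a minimal Python sketch (ours, with hypothetical per-chromatogram uncertainties) of that calculation.

```python
# If each chromatogram contributes an independent uncertainty sigma_i,
# the combined uncertainty is sqrt(sum of sigma_i**2).
import numpy as np

def propagated_error(sigmas):
    return float(np.sqrt(np.sum(np.square(sigmas))))

# Hypothetical per-chromatogram uncertainties (ug/L) at ~10 ug/L:
print(propagated_error([0.10, 0.08, 0.05]))  # ~0.137
```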
The firn top at 7.7 m depth in the DF01 core corresponds to the year 1904, whereas the ice at 85.5 m depth dates back more than 2,000 years. 38) The DF01 ice core was cut into lengths of 50 cm at Dome Fuji and then transported to the National Institute of Polar Research (NIPR). At NIPR, the 50-cm segments were further subdivided as follows: those from depths shallower than 20 m were cut into 5-cm pieces, those from 20-50 m into 4-cm pieces, those from 50-75 m into 3-cm pieces, and those from deeper than 75 m into 2.5-cm pieces. Depending on the depth, the temporal resolution of the samples ranged from 0.7 to 1.0 year and was approximately 0.9 year on average. 38) The chronology of the studied section of the DF01 ice core was established by synchronizing volcanic eruption signals in the DF01 ice core with corresponding signals in a reference core (the 100-m-deep B32 ice core collected from a site close to Kohnen station; Fig. 1), which was dated by counting annual layers. 38) Two time scales were established for DF01, DFS1 and DFS2, where DFS stands for "Dome Fuji Shallow". The DFS1 time scale, covering the period from CE 187 to 1904, was synchronized with the B32 ice core time scale by matching 31 volcanic eruption peaks in the non-sea-salt sulfate ion concentrations in the depth profiles of the two ice cores. The DFS2 time scale, covering the period from CE 1 to 1904, was based in part on published data 39) and also on four volcanic dates obtained from samples from the upper part of the 1,000-m-deep EPICA DML ice core drilled at Kohnen station. The accumulation rates between neighboring volcanic eruptions, or time markers, were then assumed to be constant. The dating error of the DF01 ice core is thus made up of the absolute error of the referenced time marker plus an interpolation error that depends on the temporal distance from the nearest time marker. 38) In the present study, we confined ourselves to the ∼300-year period from 1610 to 1904. The dating of this period is both robust and rather precise because it is based on many well-established volcanic time markers with absolute errors of 1-3 years. 38) For this paper we used the DFS2 time scale, but it should be noted that the DFS1 and DFS2 time scales are identical for this period.
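The constant-accumulation assumption between time markers amounts to linear interpolation of a depth-age relation. The sketch below is ours: the end points (7.7 m = 1904, 23.0 m = 1610) come from the text, while the intermediate marker depths are hypothetical placeholders.

```python
# Volcanic time markers pin (depth, year) pairs; ages between neighboring
# markers are linearly interpolated (constant accumulation assumed).
import numpy as np

# (depth in m, calendar year) tie points; intermediate depths are hypothetical.
marker_depth = np.array([7.7, 12.0, 16.5, 23.0])
marker_year = np.array([1904.0, 1816.0, 1696.0, 1610.0])

def depth_to_year(depth):
    # np.interp needs increasing x; depth increases while year decreases.
    return np.interp(depth, marker_depth, marker_year)

print(depth_to_year(14.0))  # a year between 1816 and 1696
```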
Solar signatures in the DF01 core
In this section, our results of time series analyses of NO3− variations are presented and discussed.
Time series of nitrate ion variations.
The time series of NO3− variations (raw data) in the DF01 ice core from 1610 to 1904, based on the DFS2 chronology, 38) is depicted in Fig. 3. The raw data show meteorological noise (seen as positive spikes) on an interannual scale, as Traversi et al. (2012) 12) mentioned with regard to the Talos Dome core. The raw data may contain some meaningful spike structures; those analytic results will be reported separately. "Negative" spikes, also seen in our raw data, coincide with the positions of time markers (vertical gray dashed lines in Fig. 3). These time markers represent the signals of the volcanic eruptions used to determine the DFS2 time scale. Negative spikes occur where nitrate was displaced by sulfate originating from volcanic eruptions, as mentioned in Sect. 1.
To determine the baseline variation in the raw NO3− concentrations and to investigate NO3− concentration modulations possibly embedded in our time-series data, we applied running median filters. Using the median instead of the mean minimized the risk that outliers stemming from other physical sources (Sect. 1; mostly event-type) might skew the result. We found that the time series of NO3− variations obtained after applying a running 7-point (corresponding to 6-year) median filter to the raw NO3− concentration time series (Fig. 3) was appropriate, since it was not affected by positive or negative spikes during the period studied. Note that the measurement imprecision (within 0.14-0.48 µg L−1 at concentrations of 10 µg L−1; see Sect. 2) was very small compared with the magnitude of the variations in the median-filtered NO3− concentration time series (Fig. 3), which we therefore regard as baseline variations. Direct visual inspection of the baseline variations in Fig. 3 already reveals oscillations of ∼20 years.
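As an illustration of the baseline-extraction step, here is a short Python sketch (ours, on a synthetic series; the actual filter details and boundary handling used by the authors may differ).

```python
# A running 7-point median suppresses positive (meteorological) and
# negative (volcanic-displacement) spikes without letting outliers skew
# the baseline, unlike a running mean.
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)
years = np.arange(1610, 1905)
series = 10 + np.sin(2 * np.pi * years / 22) + rng.normal(0, 0.3, years.size)
series[::37] += 5.0  # synthetic positive spikes

baseline = median_filter(series, size=7, mode='nearest')
```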
In Fig. 3, the annual group sunspot numbers (GSN) proposed by Hoyt and Schatten (1998) 40) and Chatzistergos et al. (2017) 41) (HS98 and C17, respectively) are also shown. These GSN profiles together cover the period from 1610 to 2010. The C17 GSN series calibrates the results from 314 observers since 1739 with a non-linear, non-parametric method and may be one of the best current estimates. 42)
Periodicity in nitrate ion variations.
The Maximum Entropy Method (MEM) 43)44) was used to detect periodicities in the median-filtered, baseline NO3− concentration profile (Fig. 4a). This method can generate high-resolution power spectra for a short, evenly spaced time series. We also applied the Lomb-Scargle (LS) method 45)−47) to our raw and median-filtered data (Fig. 4b). The LS method is able to generate power spectra of adequate resolution even when applied to unevenly spaced time-series data, or even if a portion of the data series is masked. Applying these two distinct statistical methods to our baseline data generated well-matched peaks at 11.6, 21, and around 90 years (Fig. 4). We confirmed that periodicities with these peaks also exist in the raw data (Fig. 4b). The power spectra of periodicities shorter than 10 years in the raw data are pronounced, as expected, because of the meteorological noise (Fig. 3).
The 99% and 95% confidence levels (Fig. 4b) were calculated as the probability that the height of an LS power at a given periodicity exceeds, by 1% and 5% respectively, chance variations due to random noise, assuming the random noise follows a normal distribution. (Note that equivalent confidence levels cannot be derived for MEM.) The peak values of the detected periodicities of around 11, 22, and 90 years are clearly higher than the 99% confidence level (Fig. 4b). These statistically significant NO3− concentration periodicities of ∼11, ∼22, and ∼90 years are almost certainly related to the well-known 11-year Schwabe, 22-year Hale, and ∼80-90-year Gleissberg solar cycles. 48)−50) The simultaneous detection of these three shortest known solar periodicities in the NO3− concentrations of an ice core has not been reported previously.
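A minimal Lomb-Scargle sketch in Python is given below (ours, on a synthetic series; the authors' exact normalization and noise model may differ, and the MEM step is omitted). Astropy's white-noise false-alarm levels play the role of the confidence lines described above.

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(1)
t = np.arange(1610, 1905, 0.9)                       # ~annual sampling
y = (np.sin(2 * np.pi * t / 11.6) + np.sin(2 * np.pi * t / 21.0)
     + rng.normal(0, 0.5, t.size))                   # synthetic "nitrate" series

ls = LombScargle(t, y)
freq, power = ls.autopower(maximum_frequency=0.5)    # cycles per year
level_95, level_99 = ls.false_alarm_level([0.05, 0.01])
peaks = 1.0 / freq[power > level_99]                 # periods above the 99% level
print(np.round(peaks, 1))                            # clusters near 11.6 and 21
```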
Prominent 22-year periodicity and its band-pass filtering.

We first consider the prominent emergence of the 22-year signal in Fig. 4, because this differs somewhat from the usual understanding of solar activity on the 11-year and 22-year cycles. The 22-year solar periodicity is associated with the reversals of the magnetic dipole field polarity of the sun every 11 years. The 22-year periodicity affects the GCRs penetrating to the Earth: when solar activity is high, the GCR intensity reaching Earth weakens, because the intense solar magnetic field in space prevents the GCRs from entering the internal space surrounding the Earth. It is well known that the power of the 22-year periodicity of intrinsic solar activity is considerably smaller than that of the 11-year periodicity.
As mentioned in Sect. 1, Watanabe et al. (1999), 11) using the MEM method, found an 11-year periodicity in their NO3− concentration time series, but not a 22-year periodicity (their fig. 4), in a short firn core covering the years 1920-1980 (a period overlapping the modern grand maximum; Fig. 3) obtained from the S25 site (Fig. 1). They confirmed that the S25 firn core was not affected by photolysis in the surface snow because of the high snow precipitation rate. In addition, by applying a 9-13-year bandpass filter to their NO3− concentration data, they found that the filtered 11-year NO3− modulation was in phase with the filtered 11-year cycle in sunspot numbers (their fig. 5). Taking into account the photochemical reactions occurring in the stratosphere, we can reasonably expect that when solar activity is high, the rate of production of NOy should also be high, and vice versa (Fig. 2). This hints that the dominant NOy production mechanism for the S25 core site is N2O oxidation in the stratosphere (Fig. 2), not GCRs, for which the bandpass result should have shown inverse phases.
We hypothesized that the intense ∼22-year signature in our DF01 core could be attributed to the photolysis in the surface snow that occurs at Dome Fuji but not at S25, and applied an 18-30 year bandpass filter to both the baseline NO3− concentration profile and the HS98 and C17 sunspot number profiles (see Fig. 3) in order to investigate their "22-year" oscillations. The bandwidth was determined from an inspection of the 22-year peak in our NO3− time series (see the inset in Fig. 4a) as well as from preceding work. The result is shown in Fig. 5a. We also confirmed that applying a narrower bandpass range did not change the essence of the result. Because we assumed that the "22-year" periodicity was imprinted by the photolysis in the surface snow at Dome Fuji, the NO3− axis in Fig. 5a is reversed, to make the inverse relationship between the filtered NO3− and GSN time series easier to see.
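The band-pass step can be sketched as follows (our own Python code; the Fig. 5 caption names a Butterworth filter, while the filter order and the zero-phase filtfilt application are our assumptions). The 8-16 year band of Fig. 5b only changes the band edges.

```python
# Zero-phase Butterworth band-pass between two periods (in years).
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(series, low_period, high_period, dt=1.0, order=4):
    nyquist = 0.5 / dt
    low_f, high_f = 1.0 / high_period, 1.0 / low_period  # periods -> frequencies
    b, a = butter(order, [low_f / nyquist, high_f / nyquist], btype='band')
    return filtfilt(b, a, series)

# e.g. filtered_22 = bandpass(baseline, 18, 30)   # "22-year" component
#      filtered_11 = bandpass(baseline, 8, 16)    # "11-year" component
```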
First, we point out that in Fig. 5a the amplitude of each "22-year" NO3− oscillation is distinctly larger than the referenced maximum measurement error mentioned in Sect. 2: throughout the studied period, the "22-year" oscillations were stably identified. We observe that the "22-year" oscillations were prominent during both the Maunder Minimum and the Dalton Minimum. This means that these 22-year NO3− modulations could possibly be used as a relatively reliable time measure for dating the deep Dome Fuji cores in the future.
Second, we see in Fig. 5a that the "22-year" filtered NO3− oscillations are mostly in inverse phase with the "22-year" filtered GSN oscillations throughout the whole period, except around ∼1720-1780 and ∼1875, just after the Maunder Minimum and the Dalton Minimum. The photolysis in the surface snow is influenced by local variations in the UV intensity reaching the ground, air temperatures, and so forth. 23) The anticorrelation found in Fig. 5a might then indicate that the ∼22-year solar periodicity was superposed on the NO3− concentrations in the DF01 core by the local loss of NO3− through solar UV photolysis. As mentioned in Sect. 1, the photolysis process has been intensively studied, 21)−26) but much remains to be understood; in addition, Fig. 5a requires further new data analyses to be comprehensively explained.
"11-year" band-pass filtering and cyclic-
ity of solar activity in Maunder Minimum. Although the average duration of the 11-year solar cycle is 11 years, the duration of individual cycles varies between 9 years (e.g., solar cycles 2, 3, 8, and 22) and 14 years (13.6 years, solar cycle 4), according to the smoothed monthly sunspot number time series from the Solar Influences Data analysis Center (SIDC) (https://www.sidc.be/silso/cyclesminmax).
Here we applied an 8-16 year bandpass filter to the moving-median-filtered NO3− concentration profile and to the HS98 and C17 sunspot number profiles (see Fig. 3), as in the preceding work. 5) The result is depicted in Fig. 5b.
We see in Fig. 5b that the amplitude of each "11-year" NO3− oscillation is again predominantly, or at least marginally, larger than the referenced maximum measurement error. We also recognize that the "11-year" oscillations during both the Maunder Minimum and the Dalton Minimum exceed the maximum measurement errors, while during the Maunder Minimum, with almost no sunspots (see Fig. 3), no 11-year oscillations are discernible in our bandpass-filtered result for the HS98 time series of group sunspot numbers. 40) This may indicate the existence of cyclic behavior of the solar dynamo even during the grand Maunder Minimum, as suggested by preceding 10Be and 14C studies. 5)51)52)
Next, our "11-year" bandpass-filtered NO − 3 concentration profile and the sunspot number profiles exhibit both inphase and reverse phase correlations. They are: 1) in phase around the years 1620 and 1800-1820; the time markers for 1619 (unknown eruption) and 1816 and 1809 (Tambora and pre-Tambora eruptions) make this view relatively certain; 2) in phase during the years around 1720-1760, but we need to be cautious because the dating of DF01 samples covering these years is less precise as the two closest time marker positions are distant, at 1696 and 1810; thus, the absolute error plus the interpolation error of the DF01 sample dates for around 1750 can be ∼5 years; 38) 3) in reverse phase during the years 1860-1910, the time marker for 1883 (Kratakau, Indonesia) making this observation reliable.
Regarding the present result of Fig. 5, the stratospheric NOy production by N2O oxidation and the loss of NO3− by photolysis in the surface snow might together affect the NO3− concentration profile at the Dome Fuji site, but drawing conclusions will require further careful investigation.
Concluding remarks
The DF01 ice core drilled at Dome Fuji station in East Antarctica has been shown to be substantially sensitive to variations in atmospheric NOy in the Antarctic polar stratosphere. This was possible mainly because of: 1) a local feature of the Dome Fuji ice core, which appears to retain more of the stratospheric NOy budget than some ice cores obtained elsewhere; 2) continuous, annually resolved precision NO3− measurements and error analyses; and 3) reliable dating of the ice core.
Our time series analysis results for NO3− concentrations in the DF01 ice core segment covering the period from 1610 to 1904 revealed ∼11-year, ∼22-year, and ∼90-year periodicities, most probably corresponding to the three well-known decadal to multi-decadal solar cycles. These results represent the first simultaneous detection of these three shorter cycles in the NO3− concentrations of an ice core. We propose that the NO3− concentration profile in the Dome Fuji ice core be used as a new proxy for these shorter solar cycles. The 22-year and 11-year modulations were seen both in inverse phase and in phase with respect to the sunspot number modulations. Among the modulations, it was suggested that the 22-year periodicity might have been superposed by the photolysis that occurs in the surface snow; we hypothesized this because the 22-year modulations were intense and mostly in inverse phase with respect to the sunspot number modulations, except just after the Maunder Minimum and the Dalton Minimum. Regarding the result in Fig. 5, both the stratospheric NOy production by N2O oxidation and the local loss by photolysis in the surface snow could affect the NO3− concentration profile at the Dome Fuji site, but drawing conclusions will require further careful investigation.
Finally, we have pointed out the possibility of using the relatively stable "22-year" NO3− modulations as a time measure to date deep ice cores, at least within the core from the same drilling hole (DF2, whose bottom reaches back ∼720,000 years; Sect. 2). This could be possible even where the annual layer thickness becomes sub-centimeter at greater depths; such high-resolution sampling has already been targeted using our new laser-melting method for ice cores (Motizuki et al., submitted for publication). Finding these decadal to multi-decadal cycles in other Dome Fuji shallow and deep ice cores would be an interesting challenge, and disentangling the mechanism underlying the results reported here is very important for understanding the profiles of NO3−, one of the major chemical components observed in ice cores. The use of annually resolved NO3− concentrations in Dome Fuji ice cores as a new potential proxy for solar activity should increase the value of such measurements.
31) See Motizuki et al. (2017).
Fig. 1. Map of Antarctica showing the locations of Dome Fuji station (red star) and other research stations and ice-core drilling sites, including the S25 core site mentioned in this study.

Fig. 2. Schematic diagram of the stratospheric production of nitrogen oxides induced by solar UV radiation (wavelength mainly 200-315 nm) and post-depositional processes that likely occur around Dome Fuji station, where the snow accumulation rate is low. Only the principal chemical reaction chains are depicted; in particular, reaction channels to N2O5 are omitted. Denitrification occurs within the polar vortex, which develops in winter. Note that processes occurring at both microscopic and macroscopic scales are shown on this diagram.

Fig. 3. (Top) Time series of annually resolved NO3− concentrations during 1610-1904 in the DF01 ice core: raw data (dashed blue line) and the same data after application of 7-point moving median smoothing (solid blue line). (Bottom) Annual group sunspot numbers (GSN) proposed by Hoyt and Schatten (1998) (HS98) and Chatzistergos et al. (2017) (C17), shown by a solid orange and a dashed red line, respectively. The vertical dashed gray lines indicate the positions of the time markers used to date the DF01 ice core; these markers represent the signals of the volcanic eruptions used to determine the DFS2 time scale. The NO3− concentrations (raw data) show negative spikes at the vertical gray lines because nitrate, which is weakly acidic, was displaced by coexisting sulfate, which is strongly acidic and originated from the volcanic eruptions. Durations of the grand solar minima, the Maunder Minimum and the Dalton Minimum, are indicated by bars.

Fig. 4. Power spectra for the baseline NO3− concentration time series for 1610-1904 obtained by (a) the Maximum Entropy Method (MEM) for the 7-point median series, with the inset showing the 21.5-year peak signal, and (b) the Lomb-Scargle (LS) method (shown with confidence levels; C.L.). In (b), the red line shows the result obtained for the raw data, and the blue line shows that obtained for the 7-point median series.

Fig. 5. Bandpass-filtered variations in the baseline NO3− concentration (blue) and group sunspot numbers (solid orange and dashed red lines) from 1610 to 2010. Vertical dashed gray lines indicate the positions of the time markers in the DF01 ice core. a) Bandpass-filtered results with the band from 18 to 30 years show "22-year" modulations. b) Those with the band from 8 to 16 years show "11-year" modulations. Both results, a) and b), were obtained using a Butterworth filter. The light blue highlighted region indicates the estimated maximum error in our NO3− measurements. Note that the ranges of the left axes of a) and b) are the same but reversed in a). Durations of the grand solar minima and the modern solar maximum are indicated by bars in the center (see also Fig. 3).
Acknowledgments

We are deeply grateful to K. Makishima for enlightening discussions as well as for his support for our research team. We are grateful to M. Igarashi for his early analyses for this project. We thank Y. Fujii for his crucial role in initiating this astronomical collaboration as well as for his continuous encouragement, and we acknowledge the help of K. Kamiyama and T. Ohata in starting this collaboration. We also thank K. Suzuki, Y. Iizuka, H. Akiyoshi, T. Yokoyama, and A. Asai for valuable comments. We acknowledge H. Sakurai and Y. Yano for discussions on the measurement precision and Y. V. Sahoo for repeating the error check. We are indebted to M. Kitagawa for help in measuring the ion concentrations in this work. Finally, we are grateful to the members of the 42nd Japanese Antarctic Research Expedition and the Dome Fuji drilling team for supplying the DF01 ice core for this work and to all of the members who participated in sampling the DF01 core at NIPR. This research was supported in part by a JSPS Grant-in-Aid for Scientific Research (A) (Grant Number 22244015), the NEXT Program of CSTI (Grant Number GR098), and research funds provided by RIKEN Nishina Center.
References

Bard, E., Raisbeck, G. M., Yiou, F. and Jouzel, J. (1997) Solar modulation of cosmogenic nuclide production over the last millennium: comparison between 14C and 10Be records. Earth and Planetary Science Letters 150, 453-462.

Brehm, N., Bayliss, A., Christl, M., Synal, H. A., Adolphi, F., Beer, J., Kromer, B., Muscheler, R., Solanki, S. K., Usoskin, I., Bleicher, N., Bollhalder, S., Tyers, C. and Wacker, L. (2021) Eleven-year solar cycles over the last millennium revealed by radiocarbon in tree rings. Nat. Geosci. 14, 10-15.

Heaton, T. J., Bard, E., Ramsey, C. B., Butzin, M., Köhler, P., Muscheler, R., Reimer, P. J. and Wacker, L. (2021) Radiocarbon: A key tracer for studying Earth's dynamo, climate system, carbon cycle, and Sun. Science 374, eabd7096.

Beer, J., Blinov, A., Bonani, G., Finkel, R. C., Hofmann, H. J., Lehmann, B., Oeschger, H., Sigg, A., Schwander, J., Staffelbach, T., Stauffer, B., Suter, M. and Wötfli, W. (1990) Use of 10Be in polar ice to trace the 11-year cycle of solar activity. Nature 347, 164-166.

Berggren, A.-M., Beer, J., Possnert, G., Aldahan, A., Kubik, P., Christl, M., Johnsen, S. J., Abreu, J. and Vinther, B. M. (2009) A 600-year annual 10Be record from the NGRIP ice core, Greenland. Geophys. Res. Lett. 36, L11801.

Baroni, M., Bard, E., Petit, J. R., Magand, O. and Bourles, D. (2011) Volcanic and solar activity, and atmospheric circulation influences on cosmogenic 10Be fallout at Vostok and Concordia (Antarctica) over the last 60 years. Geochimica et Cosmochimica Acta 75, 7132-7145.

Horiuchi, K., Uchida, T., Sakamoto, Y., Ohta, A., Matsuzaki, H., Shibata, Y. and Motoyama, H. (2008) Ice core record of 10Be over the past millennium from Dome Fuji, Antarctica: A new proxy record of past solar activity and a powerful tool for stratigraphic dating. Quaternary Geochronology 3, 253-261.

Eddy, J. A. (1976) The Maunder Minimum: The reign of Louis XIV appears to have been a time of real anomaly in the behavior of the sun. Science 192, 1189-1202.

Legrand, M. R. and Delmas, S. (1986) Relative contributions of tropospheric and stratospheric sources to nitrate in Antarctic snow. Tellus B: Chemical and Physical Meteorology 38, 236-249.

Zeller, E. J. and Parker, B. C. (1981) Nitrate ion in Antarctic firn as a marker for solar activity. Geophys. Res. Lett. 8, 895-898.

Watanabe, K., Satow, K., Kamiyama, K., Motoyama, H. and Watanabe, O. (1999) Non-sea-salt sulfate and nitrate variations in the S25 core, near the coastal region, East Antarctica. Polar Meteorol. Glaciol. 13, 64-74.

Traversi, R., Usoskin, I. G., Solanki, S. K., Becagli, S., Frezzotti, M., Severi, M., Stenni, B. and Udisti, R. (2012) Nitrate in Polar Ice: A new tracer of solar variability. Solar Phys. 280, 237-254.

Wolff, E. W., Jones, A. E., Bauguitte, S. B. and Salmon, R. A. (2008) The interpretation of spikes and trends in concentration of nitrate in polar ice cores, based on evidence from snow and atmospheric measurements. Atm. Chem. Phys. 8, 5627-5634.

Legrand, M. R. and Kirchner, S. (1990) Origins and variations of nitrate in South polar precipitation. J. Geophys. Res. 95, 3493-3507.

Wolff, E. W. (1995) Nitrate in polar ice. In Ice core studies of global biogeochemical cycles, pp. 195-224, Springer, Berlin, Heidelberg.

Brasseur, G. and Solomon, S. (2005) Aeronomy of the Middle Atmosphere: Chemistry and Physics of the Stratosphere and Mesosphere (3rd ed.), Springer, Dordrecht.

Vitt, F. M. and Jackman, C. H. (1996) A comparison of sources of odd nitrogen production from 1974 through 1993 in the Earth's middle atmosphere as calculated using a two-dimensional model. JGR: Atmospheres 101, 6729-6739.

Lambert, A., Santee, M. L. and Livesey, N. J. (2016) Interannual variations of early winter Antarctic polar stratospheric cloud formation and nitric acid observed by CALIOP and MLS. Atmos. Chem. Phys. 16, 15219-15246.

Grooss, J. U., Engel, I., Borrmann, S., Frey, W., Günther, G., Hoyle, C. R., Kivi, R., Luo, B. P., Molleker, S., Peter, T., Pitts, M. C., Schlager, H., Stiller, G., Vömel, H., Walker, K. A. and Müller, R. (2014) Nitric acid trihydrate nucleation and denitrification in the Arctic stratosphere. Atmos. Chem. Phys. 14, 1055-1073.

Röethlisberger, R., Hutterli, M. A., Wolff, E. W., Mulvaney, R., Fischer, H., Bigler, M., Goto-Azuma, K., Hansson, M. E., Ruth, U., Siggaard-Andersen, M.-L. and Steffensen, J. P. (2002) Nitrate in Greenland and Antarctic ice cores: a detailed description of post-depositional processes. Annals of Glaciology 35, 209-216.

Frey, M. M., Savarino, J., Morin, S., Erbland, J. and Martins, J. M. F. (2009) Photolysis imprint in the nitrate stable isotope signal in snow and atmosphere of East Antarctica and implications for reactive nitrogen cycling. Atmos. Chem. Phys. 9, 8681-8696.

Erbland, J., Savarino, J., Morin, S., France, J. L., Frey, M. M. and King, M. D. (2015) Air-snow transfer of nitrate on the East Antarctic Plateau - Part 2: An isotopic model for the interpretation of deep ice-core records. Atmos. Chem. Phys. 15, 12079-12113.

Traversi, R., Becagli, S., Brogioni, M., Caiazzo, L., Ciardini, V., Giardi, F., Legrand, M., Macelloni, G., Petkov, B., Preunkert, S., Scarchilli, C., Severi, M., Vitale, V. and Udisti, R. (2017) Multi-year record of atmospheric and snow surface nitrate in the central Antarctic plateau. Chemosphere 172, 341-354.

Noro, K., Hattori, S., Uemura, R., Fukui, K., Hirabayashi, M., Kawamura, K., Motoyama, H., Takenaka, N. and Yoshida, N. (2018) Spatial variation of isotopic compositions of snowpack nitrate related to post-depositional processes in eastern Dronning Maud Land, East Antarctica. Geochemical Journal 52, e7-e14.

Winton, V. H. L., Ming, A., Caillon, N., Hauge, L., Jones, A. E., Savarino, J., Yang, X. and Frey, M. M. (2020) Deposition, recycling, and archival of nitrate stable isotopes between the air-snow interface: comparison between Dronning Maud Land and Dome C, Antarctica. Atmos. Chem. Phys. 20, 5861-5885.

Akers, P. D., Savarino, J., Caillon, N., Servettaz, A. P., Le Meur, E., Magand, O., Martins, J., Agosta, C., Crockford, P., Kobayashi, K., Hattori, S., Curran, M., van Ommen, T., Jong, L. and Roberts, J. L. (2022) Sunlight-driven nitrate loss records Antarctic surface mass balance. Nat. Commun. 13, 4274 (10 pages).

Kameda, T., Motoyama, H., Fujita, S. and Takahashi, S. (2008) Temporal and spatial variability of surface mass balance at Dome Fuji, East Antarctica, by the stake method from 1995 to 2006. J. Glaciol. 54, 107-116.

Fourré, E., Jean-Baptiste, P., Dapoigny, A., Baumier, D., Petit, J. R. and Jouzel, J. (2006) Past and recent tritium levels in Arctic and Antarctic polar caps. Earth and Planetary Science Lett. 245, 56-64.

Kamiyama, K., Ageta, Y. and Fujii, Y. (1989) Atmospheric and depositional environments traced from unique chemical compositions of the snow over an inland high plateau, Antarctica. Journal of Geophysical Research: Atmospheres 94 (D15), 18515-18519.

Iizuka, Y., Hondoh, T. and Fujii, Y. (2006) Na2SO4 and MgSO4 salts during the Holocene period derived by high-resolution depth analysis of a Dome Fuji ice core. J. Glaciol. 52, 58-64.

Motizuki, Y., Motoyama, H., Nakai, Y., Suzuki, K., Iizuka, Y. and Takahashi, K. (2017) Overview of the chemical composition and characteristics of Na+ and Cl− distributions in shallow samples from Antarctic ice core DF01 (Dome Fuji) drilled in 2001. Geochem. J. 51, 293-298.

Motoyama, H., Hirasawa, N., Satow, K. and Watanabe, O. (2005) Seasonal variations in oxygen isotope ratios of daily collected precipitation and wind drift samples and in the final snow cover at Dome Fuji Station, Antarctica. J. Geophys. Res.: Atmospheres 110, D11106.

Legrand, M., Preunkert, S., Wolff, E., Weller, R., Jourdain, B. and Wagenbach, D. (2017) Year-round records of bulk and size-segregated aerosol composition in central Antarctica (Concordia site) - Part 1: Fractionation of sea-salt particles. Atmos. Chem. Phys. 17, 14039-14054.

Watanabe, O., Kamiyama, K., Motoyama, H., Fujii, Y., Igarashi, M., Furukawa, T., Goto-Azuma, K., Saito, T., Kanamori, S., Kanamori, N., Yoshida, N. and Uemura, R. (2003) General tendencies of stable isotopes and major chemical constituents of the Dome Fuji deep ice core. Memoirs of National Institute of Polar Research Special Issue 57, 1-24.

Motoyama, H. (2007) The second deep ice coring project at Dome Fuji, Antarctica. Scientific Drilling 5, 41-43.

Motoyama, H., Takahashi, A., Tanaka, Y., Shinbori, K., Miyahara, M., Yoshimoto, T., Fujii, Y., Furusaki, A., Azuma, N., Ozawa, Y., Kobayashi, A. and Yoshise, Y. (2021) Deep ice core drilling to a depth of 3035.22 m at Dome Fuji, Antarctica in 2001-07. Annals of Glaciology 62, 212-222.

Kawamura, K., Abe-Ouchi, A., Motoyama, H., Ageta, Y., Aoki, S., Azuma, N., Fujii, Y., Fujita, K., Fujita, S., Fukui, K., Furukawa, T., Furusaki, A., Goto-Azuma, K., Greve, R., Hirabayashi, M., Hondoh, T., Hori, A., Horikawa, S., Horiuchi, K., Igarashi, M., Iizuka, Y., Kameda, T., Kanda, H., Kohno, M., Kuramoto, T., Matsushi, Y., Miyahara, M., Miyake, T., Miyamoto, A., Nagashima, Y., Nakayama, Y., Nakazawa, T., Nakazawa, F., Nishio, F., Obinata, I., Ohgaito, R., Oka, A., Okuno, J., Okuyama, J., Oyabu, I., Parrenin, F., Pattyn, F., Saito, F., Saito, T., Saito, T., Sakurai, T., Sasa, K., Seddik, H., Shibata, Y., Shinbori, K., Suzuki, K., Suzuki, T., Takahashi, A., Takahashi, K., Takahashi, S., Takata, M., Tanaka, Y., Uemura, R., Watanabe, G., Watanabe, O., Yamasaki, T., Yokoyama, K., Yoshimori, M. and Yoshimoto, T. (2017) State dependence of climatic instability over the past 720,000 years from Antarctic ice cores and climate modeling. Sci. Adv. 3, e1600446.

Motizuki, Y., Nakai, Y., Takahashi, K., Igarashi, M., Motoyama, H. and Suzuki, K. (2014) Dating of a Dome Fuji (Antarctica) shallow ice core by volcanic signal synchronization with B32 and EDML1 chronologies. The Cryosphere Discussion 8, 769-804.

Ruth, U., Barnola, J.-M., Beer, J., Bigler, M., Blunier, T., Castellano, E., Fischer, H., Fundel, F., Huybrechts, P., Kaufmann, P., Kipfstuhl, S., Lambrecht, A., Morganti, A., Oerter, H., Parrenin, F., Rybak, O., Severi, M., Udisti, R., Wilhelms, F. and Wolff, E. (2007) "EDML1": a chronology for the EPICA deep ice core from Dronning Maud Land, Antarctica, over the last 150000 years. Clim. Past 3, 475-484.

Hoyt, D. V. and Schatten, K. H. (1998) Group sunspot numbers: A new solar activity reconstruction. Solar Physics 179, 189-219.

Chatzistergos, T., Usoskin, I. G., Kovaltsov, G. A., Krivova, N. A. and Solanki, S. K. (2017) New reconstruction of the sunspot group numbers since 1739 using direct calibration and "backbone" methods. Astron. and Astrophys. 602, A69.

Munoz-Jaramillo, A. and Vaquero, J. M. (2019) Visualization of the challenges and limitations of the long-term sunspot number record. Nature Astronomy 3, 205-211.

Wu, N. (1997) The Maximum Entropy Method. Springer, Berlin.

Hino, M. (New Edition, 2010) Spectral Analysis. (in Japanese) Asakura, Tokyo, pp. 210-225.

Lomb, N. R. (1976) Least-squares frequency analysis of unequally spaced data. Astrophysics and Space Science 39, 447-462.

Scargle, J. D. (1982) Studies in astronomical time series analysis. II - Statistical aspects of spectral analysis of unevenly spaced data. The Astrophys. J. 263, 835-853.

Horne, J. H. and Baliunas, S. L. (1986) A prescription for period analysis of unevenly sampled time series. The Astrophys. J. 302, 757-763.

Usoskin, I. (2017) A history of solar activity over millennia. Living Rev. Sol. Phys. 14, 3, pp. 18-19.

Hathaway, D. H. (2015) The Solar Cycle. Living Rev. Sol. Phys. 12, 4, p. 56.

Peristykh, A. N. and Damon, P. E. (2003) Persistence of the Gleissberg 88-year solar cycle over the last ∼12,000 years: Evidence from cosmogenic isotopes. Journal of Geophysical Research: Space Physics 108, SSH-1-SSH-15.

Beer, J., Tobias, S. and Weiss, N. (1998) An active sun throughout the Maunder Minimum. Solar Physics 181, 237-249.

Miyahara, H., Masuda, K., Muraki, Y., Furuzawa, H., Menjo, H. and Nakamura, T. (2004) Cyclicity of solar activity during the Maunder Minimum deduced from radiocarbon content. Solar Physics 224, 317-322.
venue: []
title: Hadron Production via e+e− Collisions with Initial State Radiation
author: V. P. Druzhinin; S. I. Eidelman; S. I. Serednyakov; E. P. Solodov
authoraffiliation: Budker Institute of Nuclear Physics SB RAS, 630090 Novosibirsk, Russia; Novosibirsk State University, 630090 Novosibirsk, Russia
abstract: A novel method of studying e+e− annihilation into hadrons using initial state radiation at e+e− colliders is described. After a brief history of the method, its theoretical foundations are considered. Numerous experiments in which exclusive cross sections of e+e− annihilation into hadrons below the center-of-mass energy of 5 GeV have been measured are presented. Some applications of the results obtained to fundamental tests of the Standard Model are listed.
doi: 10.1103/revmodphys.83.1545
pdfurls: https://arxiv.org/pdf/1105.4975v2.pdf
corpusid: 118435614
arxivid: 1105.4975
pdfsha: 48e82a7019b3a86090ac30732f872f6ba6bf90fc
Hadron Production via e+e− Collisions with Initial State Radiation

V. P. Druzhinin, S. I. Eidelman, S. I. Serednyakov, E. P. Solodov

Budker Institute of Nuclear Physics SB RAS, 630090 Novosibirsk, Russia
Novosibirsk State University, 630090 Novosibirsk, Russia

(12 Aug 2011)

A novel method of studying e+e− annihilation into hadrons using initial state radiation at e+e− colliders is described. After a brief history of the method, its theoretical foundations are considered. Numerous experiments in which exclusive cross sections of e+e− annihilation into hadrons below the center-of-mass energy of 5 GeV have been measured are presented. Some applications of the results obtained to fundamental tests of the Standard Model are listed.
Contents (fragment):
III. Production of light quark mesons
  A. Overview
  B. e+e− → π+π−
  C. e+e− → π+π−π0
  D. e+e− → K+K−π0, K0S K±π∓, K+K−η
  E. e+e− → π+π−π+π−, π+π−2π0
  F. e+e− → K+K−π+π−, K+K−π0π0
  G. e+e− → 2(K+K−)
  H. e+e− → 5 mesons
  I. e+e− → 3(π+π−), 2(π+π−π0)
  J. Summary
I. INTRODUCTION
A. Why is low energy e + e − annihilation interesting?
Studies of low energy e+e− annihilation into hadrons are of great interest for theory and have numerous applications. According to current concepts, e+e− annihilation into hadrons proceeds via an intermediate virtual photon, which produces a quark-antiquark pair, qq̄, followed by the hadronization of the quarks into the observed hadrons. This process is described by the lowest-order Feynman diagram shown in Fig. 1. When the initial energy of the e+e− pair, or equivalently of the intermediate virtual photon, is large enough, the process of hadronization is well described by Quantum Chromodynamics (QCD). At small energies, lower than 2-3 GeV, the produced hadrons are relatively soft and interact intensively with each other, forming hadronic resonances. At the moment QCD fails to describe this energy region. Because of that, it is vitally important to gain sufficient information from experiment to be used as an input to various QCD-based theoretical models. QCD sum rules are an example of how measurements of total and exclusive cross sections can be used to extract such fundamental parameters of the theory as the strong coupling constant αs and the quark and gluon condensates (Shifman et al., 2000).
Precise knowledge of vacuum polarization effects based on the total cross section of e+e− annihilation into hadrons is necessary to estimate the hadronic contributions to the running fine-structure constant and thus determine its value at the Z boson mass, α(M²_Z), a key component of the high-precision tests of the Standard Model (Actis et al., 2010; Burkhardt et al., 1989; Burkhardt and Pietrzyk, 2005; Eidelman and Jegerlehner, 1995; Hagiwara et al., 2003). Improvement of the precision with which the total cross section of e+e− annihilation into hadrons is known is also needed for a more accurate estimation of the hadronic contribution to the muon anomalous magnetic moment, since it is one of the crucial limiting factors in a search for New Physics (Bennett et al., 2006; Bouchiat and Michel, 1961; Gourdin and de Rafael, 1969). There is an important relation between spectral functions in e+e− annihilation into hadrons with isospin I = 1 and corresponding τ lepton decays, based on conservation of the vector current (CVC) and isospin symmetry (Thacker and Sakurai, 1971; Tsai, 1971).
While the first detailed tests of such relations showed satisfactory agreement between these spectral functions (Eidelman and Ivanchenko, 1991; Kawamoto and Sanda, 1978), the higher accuracy reached in both the e+e− and τ lepton sectors revealed possible systematic effects not accounted for in the e+e− and/or τ experiments (Davier et al., 2003a,b). Understanding of these effects is crucial for improving the accuracy with which the hadronic contributions to the muon anomalous magnetic moment can be estimated from τ decays to two and four pions, as was first suggested by (Alemany et al., 1998).
Detailed measurements of the energy dependence of various exclusive cross sections make it possible to improve our knowledge of vector mesons and to look for new states, both of light (Druzhinin, 2007) and heavy quarks (Eichten et al., 2008).
B. Idea of ISR
In e + e − collider experiments exclusive and total hadronic cross sections are usually measured by scanning the accessible energy range. The process of e + e − annihilation is accompanied by emission of one or several photons from the initial state. The lowest-order Feynman diagram describing initial-state radiation (ISR) is shown in Fig. 2. The quantity measured directly in the experiment is the visible cross section
\sigma_{vis} = \frac{N}{L},  (1)
where N is the number of selected events of the process e+e− → hadrons + nγ, n = 0, 1, 2, . . ., and L is the integrated luminosity of the collider collected at the center-of-mass (c.m.) e+e− energy 2E0. The visible cross section can be related to the Born cross section σ0 corresponding to the lowest-order diagram of Fig. 1 via the integral (Kuraev and Fadin, 1985), which provides an accuracy of about 10−3:

\sigma_{vis} = \int_0^{1 - m_{min}^2/s} \varepsilon(s, x)\, W(s, x)\, \sigma_0(s(1 - x))\, dx,  (2)

where s = 4E_0^2, x is an effective fraction of the beam energy E0 carried by photons emitted from the initial state, m_min is the minimal possible invariant mass of the final hadrons, and ε(s, x) is the detection efficiency for the process e+e− → hadrons + nγ as a function of x and s. The so-called radiator function W(s, x), which takes into account higher-order QED contributions, in particular from the diagram in Fig. 2, is fully calculable in QED (Actis et al., 2010). Due to photon emission from the initial state, the visible cross section depends on the Born cross section at all energies below the nominal e+e− c.m. energy 2E0.
In conventional scanning experiments the influence of ISR is suppressed by the requirements of energy and momentum balance between the final hadrons and the initial e+e− state. In this case the detection efficiency has an x dependence close to a step function: ε(s, x) = ε0(s) for x < x0, and zero for x > x0. At small x0, Eq. (2) can be rewritten as:
\sigma_{vis} = \varepsilon_0(s)\,\sigma_0(s)\,(1 + \delta(s)),  (3)
where 1 + δ(s) is the radiative correction factor, which takes into account higher-order QED corrections. To calculate this factor it is necessary to know the s dependence of σ0 in the range from s(1 − x0) to s. For slowly varying cross sections, δ is about 10% and can be determined with an accuracy better than 1% using existing data on the energy dependence of the cross section. Thus, in scanning experiments, the cross section σ0(s) is determined directly from the data collected at the c.m. energy √s.
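For illustration, Eqs. (2) and (3) can be evaluated numerically. The sketch below assumes the lowest-order, angle-integrated radiator function of Eq. (8) and an explicit soft-photon cutoff x_min (a complete treatment instead resums soft emission, as in Kuraev and Fadin, 1985); the flat toy cross section and step-function efficiency are hypothetical inputs, not values from any experiment.

```python
import numpy as np

ALPHA = 1 / 137.035999  # fine-structure constant
ME = 0.000511           # electron mass, GeV

def w0(s, x):
    """Lowest-order radiator function integrated over all angles, Eq. (8)."""
    L = np.log(s / ME**2)
    return ALPHA / (np.pi * x) * (L - 1.0) * (2.0 - 2.0 * x + x**2)

def sigma_vis(sigma0, s, eps, m_min, x_min=1e-4):
    """Sketch of Eq. (2): fold sigma0 with the radiator function.
    x_min is a crude soft-photon cutoff used only for this illustration."""
    x = np.linspace(x_min, 1.0 - m_min**2 / s, 200001)
    y = eps(s, x) * w0(s, x) * sigma0(s * (1.0 - x))
    return np.trapz(y, x)

# Toy example: flat Born cross section, step-function efficiency
# (the scan-experiment situation that leads to Eq. (3)).
s = 10.58**2                                   # GeV^2
eps = lambda s, x: np.where(x < 0.1, 1.0, 0.0)
sigma0 = lambda sp: np.ones_like(sp)
print("sigma_vis =", sigma_vis(sigma0, s, eps, m_min=0.3))
```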
Another approach is also possible. Equation (2) can be rewritten in the differential form:
\frac{d\sigma_{vis}(s, m)}{dm} = \frac{2m}{s}\,\varepsilon(s, m)\, W(s, x)\,\sigma_0(m),  (4)
where we have made a transformation to the variable m = \sqrt{s(1 − x)}, the invariant mass of the hadronic system. At non-zero x the dominant contribution to the visible cross section comes from one-photon ISR (Fig. 2). With the inclusion of the ISR photon momentum into the selection conditions on the energy and momentum balance, a non-zero detection efficiency for ISR events can be obtained in a wide range of the hadronic invariant mass. Thus, from the measurement of the mass spectrum for the process e+e− → hadrons + γ at fixed c.m. energy √s, the cross section σ0(m) can be extracted in the invariant mass range from threshold to a mass close to √s. The idea of utilizing initial-state radiation from a high-mass state to explore electron-positron processes at all energies below that state was outlined long ago in Refs. (Baier and Fadin, 1968; Baier and Khoze, 1965). A possibility of exploiting such processes at high-luminosity φ- and B-factories was discussed in Refs. (Arbuzov et al., 1998; Benayoun et al., 1999; Binner et al., 1999; Konchatnij and Merenkov, 1999) and motivated the studies described in this paper.
Analysis of ISR events at e + e − -factories provides independent and contiguous measurements of hadronic cross sections in the low-energy region and also contributes to the spectroscopy of low-mass resonances.
C. Calculation of ISR and accuracy
In the lowest order (Fig. 2) the probability of initial-state radiation of a photon with energy xE0 and polar angle θ is as follows (Baier and Khoze, 1965; Bonneau and Martin, 1971):

w_0(\theta, x) = \frac{2\alpha}{\pi x}\,\frac{\left(1 - x + \frac{x^2}{2}\right)\sin^2\theta - \frac{x^2}{2}\sin^4\theta}{\left(\sin^2\theta + \frac{4m_e^2}{s}\cos^2\theta\right)^2},  (5)

where α is the fine-structure constant and m_e is the electron mass. The ISR photon is predominantly emitted at small angles with respect to the beam axis. In Fig. 3 we present the dependence of the function W_0(θ_0, x)/W_0(0, x) on the polar angle limit θ_0, where

W_0(\theta_0, x) = \int_{\theta_0}^{\pi - \theta_0} w_0(\theta, x)\,\sin\theta\, d\theta.  (6)
The integration is performed for three values of x at 2E0 = 10.58 GeV, the c.m. energy of B-factories. It can be seen that the angular distribution of the ISR photon depends weakly on x and that a considerable fraction of the photons is emitted at large angles. In the next section we will discuss two approaches to studying ISR events, a tagged and an untagged one. In the tagged approach the ISR photon should be detected, i.e., emitted at a large angle, into the fiducial volume of the detector. At B-factories (2E0 = 10.58 GeV) about 10% of high-energy ISR photons have 30° < θ < 150°. This angular range approximately corresponds to the fiducial volume of the electromagnetic calorimeter of the BABAR detector. The fraction of large-angle ISR increases as the energy decreases, as shown in Fig. 4. Compact expressions for W_0 can be written for two practically applicable cases. For the range of integration θ_0 < θ < π − θ_0, with θ_0 ≫ m_e/√s,

W_0(\theta_0, x) = \frac{\alpha}{\pi x}\left[(2 - 2x + x^2)\,\ln\frac{1 + \cos\theta_0}{1 - \cos\theta_0} - 2x^2\cos\theta_0\right].  (7)

For the full range of polar angles 0 < θ < π,

W_0(0, x) = \frac{\alpha}{\pi x}\left(\ln\frac{s}{m_e^2} - 1\right)(2 - 2x + x^2).  (8)

The formulae given above describe ISR processes in the lowest QED order. To estimate the contribution of higher-order diagrams (loops and extra photon emission), the function W(x) from Ref. (Kuraev and Fadin, 1985) can be used, which takes into account soft multiphoton emission and α² terms in the leading logarithmic approximation. In this approximation the accuracy ∆W/W is expected to be better than 1%. The relative difference between W(x) and W_0(0, x) as a function of the invariant mass of the final hadronic system is shown in Fig. 5 for 2E0 = 1.02 GeV, the c.m. energy of the φ-factory in Frascati. It is seen that the radiative correction to the lowest-order radiator function reaches 15%. It should be noted that the size of the radiative correction depends on experimental conditions. For example, in Ref. (Aubert et al., 2006a) the function W(x) is calculated at 2E0 = 10.58 GeV under the conditions that the highest-energy ISR photon has a polar angle in the range 20° < θ < 160° and that the invariant mass of the hadronic system combined with the ISR photon is greater than 8 GeV/c². The latter condition restricts the maximum energy of extra photons emitted from the initial state. With these conditions the radiative correction factor 1 + δ = W(20°, x)/W_0(20°, x) is close to unity, with a maximum deviation δ of about 2%.

Fig. 5. The mass (m = 2E_0\sqrt{1 − x}) dependence of the relative difference between the radiator function W(x) from Ref. (Kuraev and Fadin, 1985) and the lowest-order function W_0(0, x) for 2E_0 = 1.02 GeV.
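A minimal numerical check of Eqs. (7) and (8) is sketched below; it estimates the fraction of ISR photons emitted into a given polar angle range and reproduces the large-angle fraction of order 10% quoted above for B-factory energies. The constants and chosen angular range are the only inputs; this is an illustration, not analysis code from any of the experiments.

```python
import numpy as np

ALPHA = 1 / 137.035999
ME = 0.000511  # electron mass, GeV

def W0_range(theta0, x):
    """Eq. (7): lowest-order radiator for theta0 < theta < pi - theta0."""
    c = np.cos(theta0)
    return ALPHA / (np.pi * x) * (
        (2 - 2 * x + x**2) * np.log((1 + c) / (1 - c)) - 2 * x**2 * c)

def W0_full(x, s):
    """Eq. (8): lowest-order radiator integrated over all polar angles."""
    return ALPHA / (np.pi * x) * (np.log(s / ME**2) - 1) * (2 - 2 * x + x**2)

# Fraction of ISR photons emitted into 30-150 degrees at 2E0 = 10.58 GeV:
s = 10.58**2
for x in (0.1, 0.5, 0.9):
    frac = W0_range(np.radians(30.0), x) / W0_full(x, s)
    print(f"x = {x:.1f}: large-angle fraction = {frac:.2f}")
```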
To provide the accuracy better than 1% required for the measurement of the exclusive hadronic cross sections at low energies, the calculation of the radiator function should include the higher-order radiative corrections, in particular those due to emission of extra photons. Several theoretical papers are devoted to the study of radiative corrections to ISR processes, for example, (Arbuzov et al., 1998; Binner et al., 1999; Czyż et al., 2003; Khoze et al., 2001, 2002; Rodrigo et al., 2002). The approaches of Refs. (Binner et al., 1999; Czyż et al., 2003; Rodrigo et al., 2002) allow one to develop generators of Monte Carlo (MC) events and are used in analyses of experimental data. In Ref. (Binner et al., 1999) only photon emission at large angles is considered; radiative corrections are calculated in the leading logarithmic approximation with the structure function technique (Caffo et al., 1994, 1997). The accuracy of the method is determined by the neglected sub-leading α² contributions and is estimated in Ref. (Rodrigo et al., 2001) to be about 1%. In Refs. (Czyż et al., 2003; Rodrigo et al., 2002) the one-loop corrections and the exact matrix element for emission of two hard photons are calculated. The accuracy of this next-to-leading order (NLO) calculation is estimated to be about 0.5% (Rodrigo et al., 2002) due to the higher-order effects.
D. Monte Carlo generators
The calculation of the radiator function is usually performed by the Monte Carlo method. A special computer code referred to as an "event generator" provides events (sets of the four-momenta of the final particles) distributed over the phase space according to the matrix element squared of the process under study. The phase space can be restricted by some conditions on the angles and energies of the generated ISR photons. These conditions should be looser than the actual experimental conditions used for event selection.
The interaction of the generated particles with the detector and the detector response are then simulated. In modern experiments the detector simulation is based on the GEANT4 (Agostinelli et al., 2003) package. The simulated events are reconstructed with the program chain used for experimental data. The detection efficiency is determined as the ratio of the mass spectrum of simulated events that passed selection criteria to the spectrum of generated events.
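Concretely, this efficiency determination amounts to a ratio of histograms. The helper below is a hypothetical minimal sketch (the function name and error treatment are invented for illustration), with a simple binomial uncertainty per bin.

```python
import numpy as np

def detection_efficiency(m_generated, m_selected, edges):
    """Efficiency per mass bin: the spectrum of simulated events passing
    all selection criteria divided by the generated spectrum."""
    n_gen, _ = np.histogram(m_generated, bins=edges)
    n_sel, _ = np.histogram(m_selected, bins=edges)
    eff = np.divide(n_sel, n_gen, out=np.zeros(len(n_gen)),
                    where=n_gen > 0)
    # naive binomial uncertainty; empty bins are left at zero
    err = np.sqrt(np.clip(eff * (1.0 - eff), 0.0, None) /
                  np.clip(n_gen, 1, None))
    return eff, err
```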
Most of the ISR analyses discussed in this paper are based on two event generators. Historically, EVA was the first ISR Monte Carlo generator. The AfkQed package used in the BABAR experiment at the SLAC B-factory is a development of the EVA generator (Binner et al., 1999; Czyż and Kühn, 2001), initially designed to simulate ISR production of the 2π and 4π final states with an ISR photon emitted at large angles. The soft-photon radiation from the initial state is generated with the structure function method (Caffo et al., 1994, 1997). Two extra photons are emitted in the directions of the initial electron and positron. The program has a modular structure allowing new hadronic modes to be implemented easily. The AfkQed package includes generation of the 2π, 3π, 4π, 5π, 6π, and ηπ+π− states, modes with kaons KK̄ + nπ (n = 0, 1, 2, 3, 4), and protons: pp̄ and pp̄2π. The generation of the process e+e− → µ+µ−γ is also included in the AfkQed package. For this process both initial- and final-state radiation (FSR) diagrams and their interference are taken into account. For the charged particles the final-state radiation is generated using the PHOTOS package (Barberio et al., 1991).
The Phokhara event generator is used in the BABAR and Belle experiments at the B-factories, and in the KLOE experiment at the φ-factory. Its latest version, 6.1 (PHOKHARA web site, 2009), includes generation of the 2π, 3π, 4π, KK̄, pp̄, and ΛΛ̄ hadronic states, and of the process e+e− → µ+µ−γ. The initial-state radiation is generated in NLO (Czyż et al., 2003; Rodrigo et al., 2002), i.e., one or two photons can be emitted by the initial electron and positron. The generator can be used for simulation of both tagged and untagged ISR measurements. For the processes e+e− → µ+µ−γ, e+e− → π+π−γ, and e+e− → K+K−γ, NLO FSR radiative corrections are implemented. In particular, a hard ISR photon can be accompanied by emission of a soft photon from the final state.

For all the hadronic states except the two-body 2π and KK̄, as well as π+π−π0, the structure of the electromagnetic hadronic current entering the matrix element of the process e+e− → hadrons is model dependent and an object of study by itself. This model dependence is the second source of theoretical uncertainty. For most of the measurements of multihadron cross sections its contribution significantly exceeds the 0.5-1.0% uncertainty of the radiator function. To estimate the model uncertainty, the distributions of hadrons in data are compared to the corresponding simulated distributions. Usually, the difference between the detection efficiencies obtained with different models of the hadronic currents is taken as an estimate of the model uncertainty.
II. EXPERIMENTAL TECHNIQUES
A. Tagged and untagged ISR

There are two approaches for studying ISR events. In the first approach, the untagged one, detection of the ISR photon is not required, but all the final hadrons must be detected and fully reconstructed. The ISR events are selected by the requirement that the recoil mass against the hadronic system be close to zero. The mass dependence of the detection efficiency for the process e+e− → π+π−γ at 2E0 = 10.58 GeV is shown in Fig. 6. The efficiency is calculated with the Phokhara event generator in leading-order mode. The detector acceptance for charged pions is assumed to be limited by the condition 30° < θ < 150°, which corresponds to the polar angle coverage of the BABAR detector. The solid curve in Fig. 6 represents the efficiency for the case of untagged ISR photons. For two-pion masses below 3 GeV/c² the detection efficiency is about 10% and changes very slowly with mass. At these relatively low invariant masses, the pions are produced in a narrow cone around the vector opposite to the ISR photon momentum and therefore can be detected only if the ISR photon is emitted at a large angle. The dotted curve in Fig. 6 represents the detection efficiency for the case of a tagged ISR photon. The photon polar angle is required to be in the range from 30° to 150°. It is seen that the tagged and untagged efficiencies are very close in the mass range below 3 GeV/c². For higher masses the small-angle ISR begins to contribute to the untagged efficiency, leading to its rapid increase, whereas the efficiency for the case of a tagged ISR photon varies insignificantly. At B-factories the untagged approach is used for measurements of exclusive cross sections for masses of the produced hadronic systems above 3.5 GeV/c². The untagged detection efficiency is very sensitive to the angular distributions of the final hadrons. Therefore this approach is suitable for the measurement of hadronic processes with well-defined dynamics, for example, e+e− → DD̄ or e+e− → D*D̄. For multihadron final states this strong sensitivity to hadron angular distributions can lead to a sizeable systematic uncertainty of the measurement.

Fig. 6. The detection efficiency for the process e+e− → π+π−γ at 2E0 = 10.58 GeV as a function of the 2π invariant mass for untagged (solid curve) and tagged (dotted curve) ISR photons.
All measurements of exclusive cross sections of e+e− annihilation into light hadrons at B-factories were performed using the tagged approach. In contrast to the case of untagged ISR, the efficiency for events with a detected photon depends weakly on the angular distributions of the final hadrons. As an example, the angular dependence of the detection efficiency for the process e+e− → pp̄γ (Aubert et al., 2006a) is shown in Fig. 7, where θp is the proton angle in the pp̄ rest frame measured with respect to the ISR photon direction. This advantage of the tagged ISR approach allows one to measure the cross section for multihadron final states with a relatively small model uncertainty.
Since ISR photons are emitted predominantly along the beam axis, in untagged ISR measurements the additional condition that cos θγ is close to ±1 can be used, where θγ is the polar angle of the momentum recoiling against the hadronic system in the e+e− c.m. frame. In particular, in Refs. (Aloisio et al., 2005; Ambrosino et al., 2009) the condition θγ < 15° or θγ > 165° is used to select e+e− → π+π−γ events at the φ-factory. This condition allows one to significantly reduce the background from the decay φ → 3π and to almost completely remove the FSR background, i.e., e+e− → π+π−γ events with the photon emitted from the final state. It should be noted that the FSR contribution related to radiation by pions is negligible in B-factory experiments due to the smallness of the pion electromagnetic form factor at s = 112 GeV². At this energy, the structure-dependent contribution, for example, of the processes e+e− → f0γ and e+e− → f2γ, is also expected to be small. Theoretical estimates for the cross sections of these processes at large s are absent in the literature. An estimate was made for the process e+e− → pp̄ in Ref. (Aubert et al., 2006a). The FSR contribution (including a structure-dependent part) was found to be less than 10−3 for the pp̄ mass below 4.5 GeV. The detection efficiency for the process e+e− → π+π−γ at 2E0 = 1.02 GeV with the condition on θγ described above is shown in Fig. 8. The pion polar angles are required to be in the range 50°-130°. Due to this restriction the detection efficiency falls rapidly with decreasing 2π mass. The untagged approach was used in Refs. (Aloisio et al., 2005; Ambrosino et al., 2009) to measure the e+e− → π+π− cross section in the mass range from 0.592 to 0.975 GeV. The tagged approach allows one to access the near-threshold mass region. The detection efficiency for π+π−γ events with a detected photon (50° < θγ < 130°) is shown in Fig. 8 by the dashed curve. This selection was also used in the KLOE experiment (Ambrosino et al., 2011; Muller, 2009) and allowed the lower mass boundary for the cross section measurement to be reduced from 0.592 to 0.316 GeV.

Fig. 7. The cos θp dependence of the detection efficiency for the process e+e− → pp̄γ (Aubert et al., 2006a), where θp is the proton angle measured in the pp̄ rest frame with respect to the ISR photon direction. The horizontal line indicates the detection efficiency averaged over cos θp.
B. Hadronic mass resolution and mass scale calibration
The detector resolution on the hadronic invariant mass and the accuracy of the mass scale calibration are important experimental parameters for the ISR cross section measurements.
The mass resolution σm is usually determined using MC simulation as the RMS of the (m_meas − m_true) distribution, where m_meas and m_true are the measured and generated invariant masses, respectively. The experimental value of the mass resolution can be extracted from a fit to the measured line shape of a narrow resonance, for example, the J/ψ.
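In code, this per-bin RMS determination can be sketched as follows; the function below is a hypothetical illustration (the name and binning are invented), assuming arrays of generated and reconstructed masses from MC simulation.

```python
import numpy as np

def mass_resolution(m_true, m_meas, edges):
    """Per-bin mass resolution: the RMS of (m_meas - m_true) computed
    in bins of the true invariant mass, as done with MC simulation."""
    res = np.full(len(edges) - 1, np.nan)
    idx = np.digitize(m_true, edges)   # bin index of each event
    for i in range(1, len(edges)):
        d = m_meas[idx == i] - m_true[idx == i]
        if d.size > 1:
            res[i - 1] = d.std()
    return res

# Usage with hypothetical MC arrays:
# sigma_m = mass_resolution(mc_true, mc_meas, np.arange(1.0, 3.0, 0.1))
```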
In general, the invariant mass can be represented as a sum of two terms: m_meas = Σᵢ mᵢ + ∆m(p⃗₁, p⃗₂, . . .), where mᵢ are the masses of the stable hadrons produced in the process under study, and ∆m is a term depending on the final particle momenta p⃗ᵢ. The mass resolution σm is determined by the precision of the measurement of the momenta of the charged hadron tracks and of the photons from π0 decays. Since Σᵢ mᵢ has no sizeable spread, and the ∆m term and its uncertainty are minimal near threshold and grow with increasing mass, σm is expected to also increase with mass. As an example, the mass resolution versus the proton-antiproton mass for the ISR process e+e− → pp̄γ (Aubert et al., 2006a) is shown in Fig. 9.
At B-factories the mass resolution for multihadron systems consisting of light quarks varies from 4-7 MeV/c 2 at the mass of 1.5 GeV/c 2 to 6-11 MeV/c 2 at 3 GeV/c 2 ; The mass dependence of the pp mass resolution obtained from MC simulation for the process e + e − → ppγ in Ref. (Aubert et al., 2006a). The curve represents the result of a polynomial fit.
At B-factories the mass resolution for multihadron systems consisting of light quarks varies from 4-7 MeV/c² at a mass of 1.5 GeV/c² to 6-11 MeV/c² at 3 GeV/c²; the worse values correspond to hadronic states with neutral pions. The hadronic cross sections in the mass region between the φ- and J/ψ-meson resonances do not contain structures with a width comparable to the detector resolution. A 25-MeV/c² mass bin was therefore chosen for the study of most of the processes with light hadrons. With such a bin size the distortion of the mass spectrum shape due to resolution effects is small. A smaller bin size was used for the analyses of the processes e+e− → pp̄γ and e+e− → π+π−γ. For the former, it is important to study a near-threshold enhancement in the mass dependence of the proton electromagnetic form factor. The good pp̄ mass resolution for masses below 2 GeV/c² (see Fig. 9) allows the cross section in this region to be measured with a 5 MeV/c² mass bin (Aubert et al., 2006a). The e+e− → π+π− cross section near the ρ-meson peak was measured in the BABAR experiment with a mass interval of 2 MeV/c² (Aubert et al., 2009a), which is significantly smaller than the π+π− mass resolution (about 6 MeV/c² at the ρ peak). The unfolding of resolution effects from the high-statistics (about one-half million events) mass spectrum was performed with the procedure described in Ref. (Malaescu, 2009). The procedure uses a mass-transfer matrix that gives the probability that an event with true mass in an interval i is reconstructed with m_meas in interval j. The transfer matrix is usually obtained using MC simulation and corrected to take into account the difference in resolution between data and simulation. The measurement of the e+e− → π+π− cross section at the φ-factory (Ambrosino et al., 2009) with the KLOE detector was performed with a 0.01 GeV² step in the squared mass s′ = m²₂π, corresponding to a mass bin width of 6.5 MeV/c² near the ρ peak. The mass resolution of the KLOE detector is about 1.3 MeV/c² at the ρ mass. The resolution effects are substantial only in the mass region of the ω-ρ interference. For comparison with theory, these effects were removed by unfolding the mass spectrum using the Bayesian method (D'Agostini, 1995).
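The transfer-matrix unfolding described here can be illustrated with a short iterative Bayesian (D'Agostini-style) sketch. The response matrix convention and iteration count below are assumptions made for illustration; the sketch does not reproduce the actual procedures of Refs. (Malaescu, 2009) or (D'Agostini, 1995).

```python
import numpy as np

def bayes_unfold(response, data, n_iter=4):
    """Iterative Bayesian unfolding sketch.
    response[j, i] = P(measured bin j | true bin i); its column sums
    are the per-bin efficiencies. data[j] is the measured spectrum."""
    n_true = response.shape[1]
    eff = response.sum(axis=0)               # efficiency per true bin
    prior = np.full(n_true, 1.0 / n_true)    # flat starting prior
    for _ in range(n_iter):
        # P(true bin i | measured bin j) from Bayes' theorem
        joint = response * prior             # shape (n_meas, n_true)
        norm = joint.sum(axis=1, keepdims=True)
        post = np.divide(joint, norm, out=np.zeros_like(joint),
                         where=norm > 0)
        unfolded = (post * data[:, None]).sum(axis=0) / np.maximum(eff, 1e-12)
        prior = unfolded / unfolded.sum()    # update prior and iterate
    return unfolded
```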
For the J/ψ and ψ(2S) produced in ISR processes the observed line shapes are fully determined by the detector resolution. In this case better mass resolution leads to the larger signal-to-background ratio. For the process e + e − → 2(π + π − )π 0 γ (Aubert et al., 2007c) in the mass region of the J/ψ and ψ(2S) mesons discussed in Section V, the value of the mass resolution obtained from the fit to the J/ψ spectrum is about 9 MeV/c 2 , in good agreement with MC simulation.
For the final states containing charmed and charmonium mesons (J/ψ π+π−, DD̄, . . .), the typical resolution in the 4-5 GeV/c² mass range is about 5 MeV/c². The corresponding cross sections were measured with a 20-25 MeV/c² mass bin. For these final states the influence of the limited mass resolution on the cross section measurement is negligible.
The precision of the absolute mass scale calibration can be tested by comparing the measured mass values of known resonances with their nominal values. For many multihadron states (see Sec. V) the mass calibration is performed at the J/ψ mass. The difference between the measured and nominal (Eidelman et al., 2004) J/ψ masses is found to be less than 1 MeV/c² (see, for example, Refs. (Aubert et al., 2007c, 2008b)). For the 3π final state the mass scale shift was determined at the ω- and φ-meson masses (Aubert et al., 2004b): m_ω − m_ω^nominal = −(0.2 ± 0.1) MeV/c² and m_φ − m_φ^nominal = −(0.6 ± 0.2) MeV/c². We conclude that for the measurements of hadronic cross sections at B-factories the mass scale is defined with a relative accuracy better than or about 5 × 10−4.
C. ISR luminosity
It is clear that a radiation of a hard photon significantly decreases the cross section, so the ISR technique can be efficient at high-luminosity colliders only. To compare the effectiveness of the ISR method for the measurement of hadronic cross sections with direct e + e − experiments, it is useful to introduce the concept of ISR luminosity. The mass spectrum for the ISR process e + e − → Xγ is expressed in terms of the ISR differential luminosity dL/dm and the Born cross section for the process e + e − → X as
dN dm = ε(m)(1 + δ(m))σ 0 (m) dL dm ,(9)
where 1 + δ(m) = W (m)/W 0 (m) is the radiative correction factor discussed in Sec. I.C. The ISR luminosity is proportional to the total integrated luminosity L collected in an experiment and the lowest-order radiator function given by Eq. (7) The mass dependence of the ISR differential luminosity multiplied by the detection efficiency. The solid curve shows the εdL/dm for the B-factory (2E0 = 10.58 GeV, L = 500 fb −1 , tagged ISR photon), while the dashed curve shows the same function for the φ-factory (2E0 = 1.02 GeV, L = 240 pb −1 , untagged ISR photon). The histogram represents integrated luminosities collected in direct e + e − experiments with the SND detector (Achasov et al., 2002) at the Novosibirsk VEPP-2M collider (below 1.4 GeV/c 2 ), and with the DM1 (Bisello et al., 1981) and DM2 (Antonelli et al., 1992) detectors at the Orsay DCI collider (above 1.4 GeV/c 2 ). angular range used for determination of the detection efficiency ε(m):
$$\frac{dL}{dm} = W_0(m)\,\frac{2m}{s}\,L. \qquad (10)$$
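Equation (10) is easy to evaluate numerically. The sketch below assumes the standard lowest-order radiator function integrated over all photon angles; the detection efficiency ε(m) and the restriction to a finite photon angular range, which enter Eq. (7) and Fig. 10, are not included here.

```python
import numpy as np

ALPHA = 1.0 / 137.035999
ME = 0.000510999  # electron mass, GeV

def w0(s, x):
    """Lowest-order ISR radiator function integrated over photon angles:
    W0 = alpha/(pi*x) * (2 - 2x + x^2) * (L - 1), with L = ln(s/me^2).
    This is the standard textbook form; the document's Eq. (7) may instead
    be restricted to the photon polar-angle range used in the analysis.
    """
    L = np.log(s / ME ** 2)
    return ALPHA / (np.pi * x) * (2.0 - 2.0 * x + x ** 2) * (L - 1.0)

def dl_dm(m, sqrt_s, lumi):
    """ISR differential luminosity, Eq. (10): dL/dm = W0(s, x) * (2m/s) * L."""
    s = sqrt_s ** 2
    x = 1.0 - m ** 2 / s          # photon energy fraction, x = 2E_gamma/sqrt(s)
    return w0(s, x) * (2.0 * m / s) * lumi

# B-factory example: 500 fb^-1 = 5e5 pb^-1 at sqrt(s) = 10.58 GeV, near the rho
print(dl_dm(0.775, 10.58, 5e5), "pb^-1 per GeV/c^2")
```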
The mass dependence of the ISR differential luminosity multiplied by the detection efficiency for the BABAR experiment is shown in Fig. 10 for masses below 2.2 GeV/c2. The detection efficiency used was calculated in Sec. II.A for the process e+e− → π+π−γ with a tagged ISR photon. The integrated luminosity is taken to be 500 fb−1. The dashed curve in Fig. 10 shows the same quantity calculated for the KLOE experiment with an integrated luminosity of 240 pb−1 and the detection efficiency taken for the case of an untagged ISR photon (Fig. 8). The luminosity of 240 pb−1 was used in the recent measurement (Ambrosino et al., 2009) of the e+e− → π+π− cross section in the 0.592-0.975 GeV/c2 mass range. The total integrated luminosity collected by KLOE is about an order of magnitude larger, 2.5 fb−1. The KLOE ISR luminosity is shown only up to 0.92 GeV/c2. It increases sharply and reaches 21 pb−1 at 0.975 GeV/c2. It should be noted that the BABAR measurement of the e+e− → π+π− cross section (Aubert et al., 2009a) is also based on a part of the recorded data, corresponding to 232 fb−1. The histogram in Fig. 10 shows the distribution of the integrated luminosities collected in some direct e+e− experiments. At masses below 1.4 GeV/c2 the statistics of the SND experiment (Achasov et al., 2002) recorded at the VEPP-2M collider are presented. This is the largest integrated luminosity collected in this mass region in a single experiment. The mass bin 1.0-1.1 GeV/c2 does not include about 13 pb−1 taken by SND in the vicinity of the φ-meson resonance. A significant part of the statistics from the 0.7-0.8 GeV/c2 mass interval is collected in the ω-meson mass window 0.76-0.80 GeV/c2. In the c.m. energy range 1.4-2.2 GeV the experiments with the largest statistics are DM1 and DM2 at the Orsay DCI e+e− collider. The histogram at m > 1.4 GeV/c2 shows the sum of the integrated luminosities collected with these detectors. At low masses of the hadronic system the data samples of ISR events currently available at B-factories exceed the statistics collected in conventional e+e− experiments, especially at masses below 0.7 GeV/c2 and above 1.4 GeV/c2. The ISR luminosity of the φ-factory increases very rapidly with mass. For masses below 0.8 GeV/c2 the luminosity currently used for ISR analysis (Ambrosino et al., 2009) is comparable with that collected in direct e+e− experiments. For higher masses it exceeds both the BABAR and direct e+e− luminosities.
The ISR luminosity for the mass region of charm production is presented in Fig. 11. It corresponds to the 500 fb−1 integrated luminosity collected at 2E0 = 10.58 GeV and is multiplied by the detection efficiency calculated for the case of an untagged ISR photon (Fig. 6). The ISR luminosity in this mass region significantly exceeds the luminosities collected in direct e+e− experiments. Thus, the current data samples of ISR events produced at the B- and φ-factories are larger than those produced directly in e+e− collisions for all masses of interest excluding the regions near the narrow resonances (ω, φ, J/ψ, ψ(2S)). For masses above 1.4 GeV/c2 this allows a significant improvement in the accuracy of the measurements of exclusive hadronic cross sections. In the mass region below 1.4 GeV/c2 the results obtained with the ISR method are comparable to rather precise direct e+e− measurements.
D. Comparison with e + e − scan
The ISR technique offers some advantages over conventional e + e − measurements. One of them is that the entire hadronic mass range is accessible in one experiment. This allows one to avoid relative normalization uncertainties which inevitably arise when data from different experiments, or from different machine settings in one experiment, are combined.
The ISR measurements with a tagged photon have additional advantages. In many cases, particularly for final states with low invariant mass of the produced particles, the hadronic system is collimated along the direction opposite to the ISR photon. Therefore, the detection efficiency has low sensitivity to the hadron angular distributions in the hadronic-system rest frame. In Fig. 7 the angular dependence of the detection efficiency is shown for the process e+e− → pp̄γ (Aubert et al., 2006a). The angular dependence is close to uniform. This reduces the model dependence of the cross section measurement due to the unknown relation between the values of the proton electric and magnetic form factors, and significantly facilitates data analysis. Note that in conventional experiments at e+e− or pp̄ colliders the detector acceptance for the final pp̄ or e+e− systems falls to zero when cos θp approaches ±1.
For ISR events the final hadrons have non-zero momenta at the production threshold and are therefore detected with full efficiency. In Fig. 12 the detection efficiency for the process e+e− → pp̄γ (Aubert et al., 2006a) is shown as a function of the pp̄ invariant mass. No strong variation of the efficiency with mass is observed, while in direct e+e− measurements the detection efficiency vanishes at the threshold because of the low momenta of the produced particles. This feature of ISR hadron production was successfully used at BABAR for the measurements of the e+e− → pp̄ (Aubert et al., 2006a) and e+e− → π+π− (Aubert et al., 2009a) cross sections in near-threshold mass regions.
For the measurement of the e+e− → π+π− cross section, particle identification plays a crucial role. In the ISR process e+e− → π+π−γ at B-factories most of the final pions have momenta larger than 1 GeV/c. For such pion momenta a good π/µ/e separation is provided, which allows one to almost completely remove the e+e− → e+e−γ and e+e− → µ+µ−γ backgrounds (Aubert et al., 2009a). This is in contrast with direct e+e− measurements (Achasov et al., 2006; Akhmetshin et al., 2004a, 2007) in which it is difficult to separate e+e− → π+π− and e+e− → µ+µ− events in the most interesting ρ-meson mass region (0.60-0.95 GeV). As a result, a sum of the cross sections is measured. The contribution of the process e+e− → µ+µ− is then subtracted using its theoretically calculated cross section. This leads to an increase of the statistical and systematic errors of the measurement.
It should be noted that the advantages of tagged ISR discussed above (weak mass and angular dependences of the detection efficiency) are completely absent for untagged ISR. In this case the mass and angular dependences are even stronger than those for events of direct e + e − annihilation.
A disadvantage of ISR is that the mass resolution and absolute mass scale calibration are much poorer than the beam energy spread and the accuracy of the beam energy setting in direct e + e − annihilation experiments. The influence of the resolution effects on the ISR measurement is discussed in Sec. II.B.
The main disadvantage of ISR measurements is the presence of a wide spectrum of background processes different from those in direct e+e− experiments. For example, in e+e− annihilation the main background process for e+e− → π+π−π0 is e+e− → π+π−π0π0 with a lost π0. For the ISR process e+e− → π+π−π0γ with the 3π mass in the range m3π ± ∆m/2, this background corresponds to the contribution of the process e+e− → π+π−π0π0γ with the 4π mass in the same range m3π ± ∆m/2. The presence in ISR of 4πγ events with arbitrary masses, which may, in particular, be outside the m3π ± ∆m/2 range, greatly increases the background.
At the φ-factory and in future ISR measurements at the tau-charm factory in Beijing the background from FSR processes should be taken into account when the ISR photon is detected. The FSR contribution for the e+e− → π+π− measurements at KLOE is calculated with the PHOKHARA generator, which models FSR for pions using scalar QED, and also takes into account the radiative φ decays to π+π−γ via the f0(980)γ and ρπ intermediate states. The pion electromagnetic form factor used in the generator is obtained from a fit to the e+e− → π+π− experimental data. In the case of the tau-charm factory, experimental information on exclusive hadronic cross sections in the energy range from 3.0 to 4.5 GeV obtained at B-factories can be used to estimate the FSR contribution. Additional theoretical input is required to estimate structure-dependent FSR.
Another background source is the non-ISR process of e + e − annihilation into hadrons containing a high-energy π 0 . In particular, the events of the process e + e − → Xπ 0 with an undetected soft photon or merged photons from the π 0 decay may almost completely imitate the e + e − → Xγ events. This background is usually subtracted statistically using for normalization selected e + e − → Xπ 0 events with a reconstructed π 0 . In tagged ISR measurements at B-factories the process e + e − → Xπ 0 becomes the dominant background source at relatively high masses, about 2 GeV/c 2 . It limits the mass region for ISR studies of light hadrons to masses below 4.0-4.5 GeV/c 2 .
In ISR measurements with an untagged ISR photon, the background from e + e − → Xπ 0 can be significantly suppressed by requiring that the missing momentum in an event be directed along the beam axis. For untagged ISR, the main sources of background are ISR processes and two-photon processes e + e − → e + e − γ * γ * → e + e − X in which initial electron and positron are scattered predominantly at small angles. The latter background can be suppressed by a condition on the missing mass, which should be close to zero for ISR events and has a wide distribution for two-photon events.
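Both selections described here, a missing momentum directed along the beam axis and a missing mass consistent with zero, amount to simple four-vector arithmetic. A toy sketch follows; the cut values are illustrative, patterned on the untagged photon angular ranges quoted for KLOE below, and are not taken from any specific analysis.

```python
import numpy as np

def missing_kinematics(p_initial, p_hadrons):
    """Missing mass squared and polar angle of the missing momentum.

    p_initial, p_hadrons: 4-vectors (E, px, py, pz). For an untagged-ISR
    candidate the missing 4-momentum should be photon-like: m_miss^2 near
    zero, with the missing momentum pointing along the beam axis.
    """
    E, px, py, pz = np.asarray(p_initial) - np.asarray(p_hadrons)
    m2_miss = E ** 2 - (px ** 2 + py ** 2 + pz ** 2)
    theta_miss = np.degrees(np.arctan2(np.hypot(px, py), pz))
    return m2_miss, theta_miss

# Toy candidate in the c.m. frame at sqrt(s) = 10.58 GeV (illustrative cuts)
m2, theta = missing_kinematics((10.58, 0.0, 0.0, 0.0), (6.0, 0.1, -0.2, -4.55))
is_untagged_isr = abs(m2) < 1.0 and (theta < 15.0 or theta > 165.0)
print(m2, theta, is_untagged_isr)
```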
Background suppression and subtraction are the main sources of the systematic uncertainty on ISR measurements.
E. Colliders and detectors using ISR
ISR processes were studied in many e+e− experiments either as a source of useful physical information or as a source of background. For example, possibly the first study of the process e+e− → π+π−γ was performed more than 20 years ago with the ND detector at the VEPP-2M collider (Dolinsky et al., 1991; Vasserman et al., 1988). In this work, the FSR process e+e− → ρ → π+π−γ was measured, with the ISR process e+e− → ργ → π+π−γ studied as a main source of background. Many interesting ISR studies have been performed with the CLEO detector, see, e.g., Ref. (Adams et al., 2006). Below we give a brief description of only three detectors: BABAR, Belle, and KLOE, which made a great contribution both to the development of the ISR technique and to ISR measurements of hadronic cross sections.
F. PEP-II and BABAR
The PEP-II B-factory at SLAC is a two-ring asymmetric-energy e+e− collider with energies of 9 GeV for the electron and 3.1 GeV for the positron beam, operating at the c.m. energy of 10.58 GeV, at the maximum of the Υ(4S) resonance (Seeman et al., 2001). The maximum luminosity achieved at PEP-II was slightly over 10^34 cm−2s−1. The principal goal of the PEP-II B-factory and the BABAR detector is the study of CP violation in the B-meson system.
The BABAR detector (Fig. 13) is described in detail elsewhere (Aubert et al., 2002). Final states with charged particles are reconstructed in the BABAR tracking system, which comprises a five-layer silicon vertex tracker (SVT) and a 40-layer drift chamber (DCH) operating in a 1.5-T axial magnetic field. The vertex position is measured by the SVT with an accuracy of 50 µm. The momentum resolution for 1 GeV/c charged tracks is σpt/pt = 0.5%. Charged-particle identification is provided by an internally reflecting ring-imaging Cherenkov detector (DIRC), and by energy loss measurements in the SVT and DCH. The hard ISR photon and photons from π0 decays are detected in a CsI(Tl) electromagnetic calorimeter (EMC). The energy resolution for 1 GeV photons is about 3%; the angles of photons are measured with 4 mrad accuracy. Muons are identified in the instrumented flux return (IFR) of the solenoid, which consists of iron plates interleaved with resistive plate chambers.
Experiments at the PEP-II collider with the BABAR detector were carried out from 1999 to 2008. The total integrated luminosity is close to 530 fb −1 . The ISR studies at BABAR started in 2001. The ISR research program includes a study of the light hadron production with a tagged ISR photon and charm and charmonium studies with an untagged photon.
G. KEKB and BELLE
The KEK B-factory, KEKB, is an asymmetric-energy (similar to PEP-II) e+e− collider with 8-GeV electron and 3.5-GeV positron beams and a maximum luminosity of 2.1 · 10^34 cm−2s−1 (Kurokawa et al., 2003). The main physics goal of this project is to perform a detailed study of B-meson properties, in particular CP violation.
The Belle detector (Abashian et al., 2002) (Fig. 14) is configured inside a 1.5 T superconducting solenoid. The B-meson vertices are measured in a three-layer double-sided silicon vertex detector with about 50 µm impact-parameter resolution for a 1 GeV/c momentum track at θ ≃ π/2. Track momenta are measured in a 50-layer wire drift chamber with a 0.4% momentum resolution at 1 GeV/c. Particle identification is provided by dE/dx measurements in the drift chamber, aerogel Cherenkov counters, and time-of-flight counters placed outside the drift chamber. Electromagnetic showers are detected in a CsI(Tl) calorimeter located inside the solenoid coil. The energy resolution is 2% for 1-GeV photons. An iron flux-return located outside the coil is instrumented to detect K_L mesons and identify muons.
Experiments with Belle started in 2000 and stopped in 2010. The Belle integrated luminosity reaches 1000 fb−1. The ISR experiments are mainly devoted to the production of charm and charmonium hadronic states with mass above 4 GeV/c2. ISR analysis of light mesons is in progress.

FIG. 15 The KLOE detector (Muller, 2009). The polar-angle regions used to select tagged (50° < θγ < 130°) and untagged (θγ < 15° or θγ > 165°) ISR events are shown.
H. DAΦNE and KLOE
DAΦNE, the Frascati φ-factory (Franzini and Moulson, 2006), has been in operation since 1999. The main goal of the DAΦNE project is a study of neutral and charged kaons, intensively produced at the energy corresponding to the maximum of the φ(1020) resonance. Similar to PEP-II and KEKB, DAΦNE uses two separate rings for storing electrons and positrons, but the beams have equal energies. The DAΦNE design luminosity is 5 · 10^32 cm−2s−1.
KLOE (Franzini and Moulson, 2006) (Fig. 15) is the main DAΦNE detector. The detector consists of a large-volume drift chamber (DC) surrounded by a hermetic electromagnetic calorimeter (EMC). A superconducting coil provides an axial magnetic field of 0.52 T. In order to reduce neutral-kaon regeneration and charged-particle multiple scattering, a gas mixture of 90% helium and 10% isobutane is used in the DC. Charged-track momenta are measured with σp/p = 0.4% accuracy. The lead-scintillating-fiber calorimeter provides an energy resolution for electromagnetic showers of σE/E = 5.7%/√E(GeV) and a time resolution of σt = 54 ps/√E(GeV) ⊕ 140 ps.
The total integrated luminosity accumulated with KLOE is about 3 fb −1 . The only, but very important, ISR process studied at KLOE is e + e − → π + π − γ.
III. PRODUCTION OF LIGHT QUARK MESONS
A. Overview
As already mentioned in the Introduction, e+e− annihilation into hadrons at c.m. energies below 2 GeV plays a very important role in many fundamental problems of particle physics. In particular, knowledge of its total cross section is mandatory for the calculation of the muon anomalous magnetic moment in the Standard Model. For many years only e+e− scan experiments provided information on this reaction and determined the uncertainty of the SM prediction of the muon anomaly (Davier et al., 2003a,b). Main information on light vector mesons has also been obtained in such measurements. Unfortunately, the collected data samples were not sufficient for a precise determination of the parameters of excited vector mesons.
Recently, due to the very high luminosity of the e+e− factories DAΦNE, KEKB, and PEP-II, the ISR technique became a powerful tool for an independent study of e+e− annihilation at low energies.
The KLOE collaboration used the ISR method at the φ-meson energy to study the reaction e+e− → π+π− and measure the pion electromagnetic form factor (Aloisio et al., 2005; Ambrosino et al., 2009, 2011; Muller, 2009). Recently, results on this process were also reported by the BABAR collaboration (Aubert et al., 2009a).
A variety of high-multiplicity final states were studied at BABAR: π+π−π0 (Aubert et al., 2004b); 2(π+π−), π+π−K+K− and 2(K+K−) (Aubert et al., 2005b); 3(π+π−), 2(π+π−π0) and K+K−2(π+π−) (Aubert et al., 2006b); 2(π+π−)π0, 2(π+π−)η, K+K−π+π−π0 and K+K−π+π−η (Aubert et al., 2007c); K+K−π+π−, K+K−π0π0 and K+K−K+K− (Aubert et al., 2007b); K±K0Sπ∓, K+K−π0 and K+K−η (Aubert et al., 2008b). The final K+K−π+π− state was also investigated by Belle (Shen et al., 2009).
Studies of the exclusive channels of e+e− annihilation listed above allow one to determine such fundamental parameters as the mass, width and leptonic width of various vector mesons. In addition to the low-lying resonances, such as the ρ, ω and φ, for which ISR studies can independently provide meaningful and competitive information, they are indispensable for a much more precise investigation of the excited vector states than was possible before.
Moreover, detailed analysis of the dynamics shows that in many cases a multiparticle final state can be reached via different intermediate mechanisms. For example, four pions can be produced via ωπ0, a1(1260)π, ρ0f0, etc. In the following sections we show the complexity of the internal substructures observed in some channels, which are often used to extract the parameters of the resonances involved in the substructures.
In general, amplitudes corresponding to different intermediate mechanisms interfere, affecting the energy and angular distributions of the final particles. This interference should be taken into account to avoid additional systematic errors. Unless otherwise stated, all cross sections in the following sections are corrected for effects of initial-state radiation only. Neither final-state radiation nor vacuum polarization corrections have been applied.

FIG. 16 Top: The pion form factor obtained by KLOE in the reaction e+e− → π+π−γ with a tagged ISR photon (Muller, 2009). Bottom: Relative difference between the KLOE result with an untagged ISR photon (Ambrosino et al., 2009) and the direct e+e− measurements by SND (Achasov et al., 2006) and CMD-2 (Akhmetshin et al., 2007). The dark (light) band indicates the KLOE uncertainty (statistical and systematic errors combined in quadrature). For the SND and CMD-2 data, the combined statistical and systematic errors are shown.
B. e + e − → π + π −
The reaction e+e− → π+π− was relatively well studied for c.m. energies up to 1.4 GeV in direct e+e− experiments. The most precise measurements were performed with the CMD-2 (Akhmetshin et al., 2004a, 2007) and SND (Achasov et al., 2006) detectors at the VEPP-2M collider. The CMD-2 measurements have a systematic uncertainty in the 1% range.
The dominant contribution to this process comes from production of the ρ(770) meson.
A measurement of the e+e− → π+π− cross section in the ρ(770) mass region was performed by KLOE using the ISR method (Aloisio et al., 2005; Ambrosino et al., 2009, 2011; Muller, 2009).
For the first time it was demonstrated that the cross section determined by this method could have smaller statistical errors than direct e+e− measurements and could be competitive with them in systematic uncertainty. Both untagged (Aloisio et al., 2005; Ambrosino et al., 2009) and tagged (Ambrosino et al., 2011; Muller, 2009) ISR π+π−γ events were studied, with consistent results. While the tagged measurement has worse statistical errors and an additional source of systematic uncertainty due to the FSR contribution, it covers the region of small invariant masses inaccessible for the untagged measurement. The result of the tagged measurement (Ambrosino et al., 2011; Muller, 2009), represented as the pion electromagnetic form factor squared, is shown in Fig. 16 (top) together with the results of the direct e+e− measurements with the CMD-2 (Akhmetshin et al., 2007) and SND (Achasov et al., 2006) detectors. Comparison of the more precise untagged KLOE measurement (Ambrosino et al., 2009) with the CMD-2 and SND data is given in Fig. 16 (bottom). At invariant masses corresponding to the maximum of the ρ resonance and its high-mass tail the points from direct e+e− measurements lie systematically higher than those from KLOE. In this mass region the difference between the CMD-2 and KLOE measurements is definitely larger than their combined systematic uncertainty. The KLOE systematic error includes the experimental (0.6%) and theoretical (0.6%) uncertainties. The two main sources of the former are tracking and the luminosity measurement. The latter is determined mostly by the accuracy of the radiator function calculated with the PHOKHARA event generator. Note that the KLOE Collaboration performed a dedicated study to validate the calculation of FSR effects using the forward-backward asymmetry arising from the interference between the ISR and FSR amplitudes (Muller, 2009). The study showed that the assumption of pointlike pions works reasonably well and can be used for the FSR calculation, see Fig. 17.
A structure seen at the top of the ρ-meson resonance is due to its interference with the much narrower ω(782) resonance, which also decays to π+π−. Because the ω(782) mass is known precisely, the position of this structure can be used to test the accuracy of the mass scale calibration. Unfortunately, neither KLOE nor BABAR (see below) report the result of such a test.
The PEP-II B-factory also provided a large sample of e+e− → π+π−γ events (about 530 thousand), and the e+e− → π+π− cross section (Aubert et al., 2009a) was measured for e+e− c.m. energies up to 3.0 GeV. In this experiment the e+e− → π+π− cross section is obtained from the ratio of the π+π− and µ+µ− mass spectra. Due to the normalization to the cross section of the theoretically well known process e+e− → µ+µ−γ, the measurement becomes much less sensitive to the experimental uncertainties and to the theoretical uncertainty of the radiator function. A comparison of the measured µ+µ− mass spectrum for the reaction e+e− → µ+µ−γ with the QED prediction is shown in Fig. 18 (top). The data and the prediction are consistent within the estimated systematic uncertainty of 1.1%, dominated by the accuracy of the integrated luminosity measurement.

FIG. 19 The e+e− → π+π− cross section above 1 GeV measured with the BABAR detector (Wang, 2009). Comparison with the CMD-2 (Aulchenko et al., 2005) and DM2 (Bisello et al., 1989) measurements is shown.

Using the bin-by-bin ratio to the cross section of the process e+e− → µ+µ−γ one minimizes theoretical uncertainties and reduces the systematic error at the ρ peak to 0.5%, dominated by pion identification and the ISR luminosity. Previously such a test was performed in e+e− scan experiments at the OLYA detector in the c.m. energy range from 640 MeV to 1400 MeV (Kurdadze et al., 1984) and at the CMD-2 detector from 370 MeV to 520 MeV (Aulchenko et al., 2006), with the achieved precision of comparison of 3% and 1%, respectively.
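The normalization method can be summarized in a few lines: in each mass bin, the ratio of π+π−γ to µ+µ−γ yields, multiplied by the ratio of the lowest-order two-body cross sections, gives |Fπ|². The sketch below is schematic; the hypothetical eff_ratio factor stands in for the full set of efficiency, FSR and vacuum-polarization corrections applied in the real analysis.

```python
import numpy as np

M_PI, M_MU = 0.13957, 0.10566  # GeV

def beta(mass, m2body):
    return np.sqrt(1.0 - 4.0 * mass ** 2 / m2body ** 2)

def fpi2_from_ratio(m, n_pipi, n_mumu, eff_ratio=1.0):
    """|F_pi(m)|^2 from the bin-by-bin pipi/mumu event ratio.

    The ISR luminosity and the radiator function cancel in the ratio, and
    only the lowest-order two-body cross sections enter:
      sigma_mumu = (4 pi alpha^2 / 3s) * beta_mu * (3 - beta_mu^2) / 2
      sigma_pipi = (pi alpha^2 / 3s) * beta_pi^3 * |F_pi|^2
    Efficiency, FSR and vacuum-polarization corrections of the real
    analysis are collapsed into the single eff_ratio factor here.
    """
    b_pi, b_mu = beta(M_PI, m), beta(M_MU, m)
    r = (n_pipi / n_mumu) * eff_ratio
    return r * 2.0 * b_mu * (3.0 - b_mu ** 2) / b_pi ** 3

# Example near the rho peak with made-up event counts
print(fpi2_from_ratio(m=0.775, n_pipi=1.2e4, n_mumu=1.3e3))  # ~ 45
```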
The measured cross section is shown in Fig. 18(bottom).
For the first time a relatively high-statistics measurement is performed for c.m. energies above 1 GeV. The cross section in this energy range, shown in Fig. 19, demonstrates some statistically significant structures which can possibly be explained by the interference between the wide ρ-like excited states. Note that the cross section shown in Fig. 18 is bare and includes FSR effects.

FIG. 21 The relative difference between the CMD-2 (Akhmetshin et al., 2004a, 2007) and BABAR (Wang, 2009) measurements. The band corresponds to the BABAR statistical and systematic uncertainties combined in quadrature.
The claimed sub-percent level of systematic uncertainties on the e+e− → π+π− measurements can be verified by comparison of the results from these very different experiments. Above we found that the difference between the KLOE and CMD-2 measurements is larger than their combined systematic uncertainty. Figure 20 shows the relative difference between the KLOE and BABAR measurements. Again, deviations larger than the declared systematic errors are seen, indicating the presence of unaccounted-for systematic uncertainties in one or both experiments. Comparison between the CMD-2 and BABAR data, shown in Fig. 21, also reveals some non-statistical deviations of up to 5% both below and above the ρ-resonance maximum. In the whole energy range the BABAR data are in fair agreement with the SND (Achasov et al., 2006) results within experimental uncertainties.
In Section VII we discuss the impact of these measurements on the problem of the muon anomaly.
C. e+e− → π+π−π0

A study of three-pion production in the ISR process was reported by BABAR in Ref. (Aubert et al., 2004b). The three-pion mass distribution for the e+e− → π+π−π0γ reaction shown in Fig. 22 is dominated by the well known ω(782), φ(1020), and J/ψ resonances. For the ω(782) and φ(1020) resonances they determine the product of the leptonic width and the branching fraction to three pions, consistent with other measurements and having comparable accuracy. Large data samples make possible the observation of two structures in the 1-2 GeV/c2 mass region (see Fig. 23). The cross section below 1.4 GeV is in agreement with the SND measurement (Achasov et al., 2002), but at higher energies a large deviation from the DM2 results (Antonelli et al., 1992) is observed. The cross section in this region is fitted (see inset in Fig. 23) with a sum of the ω(1420) and ω(1650) resonance contributions (Amsler et al., 2008). The parameters of these states are still not well determined. In this case they strongly depend on the relative phases between the corresponding amplitudes and their phase differences with the ω(782) and φ(1020) amplitudes. The latter resonances have a much larger decay rate to the 3π mode. The obtained parameters of the ω(1420) and ω(1650) are summarized in Table II. The three-body final state is a relatively simple process for a study of hadron dynamics. Its Dalitz plot analysis shows that the ρ(770)±π∓ and ρ(770)0π0 intermediate states dominate at all energies. There is also a small contribution of the ωπ intermediate state with ω decay to π+π−.

FIG. 22 The m(π+π−π0) distribution for the e+e− → π+π−π0γ reaction measured with the BABAR detector (Aubert et al., 2004b).

D. e+e− → K+K−π0, K0SK±π∓, K+K−η

Figures 24 and 25 show the e+e− → K+K−π0 and e+e− → K0SK±π∓ cross sections measured in the BABAR experiment (Aubert et al., 2008b) (top) and a comparison of the BABAR results with the DM1 (Buon et al., 1982) and DM2 (Bisello et al., 1991b) measurements below 2.4 GeV, where the previous data are available (bottom). The BABAR data are about 10 times more precise. The "spike" at 3.1 GeV is due to J/ψ decays to these final states and will be discussed later.

FIG. 25 The e+e− → K0SK±π∓ cross section measured by BABAR (Aubert et al., 2008b) (top). Comparison of the BABAR measurement with the results of the previous DM1 (Buon et al., 1982) and DM2 (Bisello et al., 1991b) experiments (bottom).
In the K+K−2γ final state the φ(1020)η and φ(1020)π0 intermediate states were also observed. The measured cross sections for these states, which were not previously studied, are shown in Figs. 26 and 27. The e+e− → φ(1020)η reaction is the best channel for a study of excited φ states. The contributions of the ω-like states to this channel should be suppressed by the OZI rule.
The reaction e+e− → φ(1020)π0 is promising for a search for exotic isovector resonances. For ordinary isovector states, the φπ0 decay should be suppressed by the OZI rule. The authors perform two fits of the cross section. In the first one they assume a single resonance and obtain for it a mass and width of 1593 ± 32 MeV/c2 and 203 ± 97 MeV, respectively. These parameters are compatible with those of the ρ(1700) (Amsler et al., 2008). A somewhat better quality of the fit is achieved if two resonances are assumed. The obtained parameters of the first resonance are 1570 ± 36 ± 62 MeV/c2 for the mass and 144 ± 75 ± 43 MeV for the width, i.e., consistent with those of the C(1480) state observed in Ref. (Bityukov et al., 1987). The mass and width of the second resonance are 1909 ± 17 ± 25 MeV/c2 and 48 ± 17 ± 2 MeV, respectively, compatible with the dip already observed in other experiments, predominantly in multipion final states (Antonelli et al., 1996; Aubert et al., 2006b; Baldini et al., 1988; Frabetti et al., 2000). With the limited statistics available at the moment they cannot draw a firm conclusion: an OZI-violating decay of the ρ(1700) cannot be excluded.

Figure 28 shows the Dalitz plots for the K+K−π0 and K0SK±π∓ final states. It is seen that the KK*(892) and KK*2(1430) intermediate states give the main contribution to the KKπ production. For the K0SK±π∓ final state both the neutral K0K̄*0 and charged K±K*∓ combinations are involved. Since the K0K̄*0 and K±K*∓ amplitudes are the sum and the difference of the isovector and isoscalar amplitudes, respectively, the Dalitz plot population for the K0SK±π∓ mode is asymmetric and strongly depends on the isospin composition. From the Dalitz plot analysis the moduli and relative phase of the isoscalar and isovector amplitudes both for the KK*(892) and KK*2(1430) intermediate states were determined. The obtained isoscalar and isovector e+e− → KK*(892) cross sections are shown in Fig. 29 (a,b).

FIG. 28 The Dalitz plot distributions for the K+K−π0 (a) and K0SK±π∓ (b) final states from Ref. (Aubert et al., 2008b). A sum over all accessible c.m. energies of the hadronic final states is given.

FIG. 29 Isoscalar (a) and isovector (b) components of the e+e− → KKπ cross section; the e+e− → K±K*(892)∓ cross section obtained using e+e− → K+K−π0 events (c), and the e+e− → φη cross section (d) (Aubert et al., 2008b). The points with error bars are data and the gray band represents the fit and its uncertainty.

TABLE I Parameters of the resonances obtained (Aubert et al., 2008b) from the global fit to the isoscalar and isovector amplitudes using the e+e− → K±K*(892)∓, K0SK±π∓ and e+e− → φη cross sections.

R with I = 0                          φ′            φ′′
Γ^R_ee B^R(KK*(892)) (eV)             408 ± 49      -
Γ^R_ee B^R(φη) (eV)                   172 ± 31      1.9 ± 1.0
m_R (MeV)                             1723 ± 20     2139 ± 35
Γ_R (MeV)                             371 ± 75      76 ± 62

R with I = 1                          ρ′
Γ^R_ee B^R(KK*(892)) (eV)             135 ± 12
m_R (MeV)                             1506 ± 16
Γ_R (MeV)                             437 ± 24
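The isospin decomposition underlying this analysis can be written explicitly. With A0 and A1 denoting the isoscalar and isovector e+e− → KK̄*(892) amplitudes (sign and normalization conventions vary; this follows the wording above),

$$A\bigl(K^0\bar K^{*0}\bigr) \propto A_1 + A_0,\qquad A\bigl(K^\pm K^{*\mp}\bigr) \propto A_1 - A_0,$$

so the interference term changes sign between the two charge modes, which is what makes the K0SK±π∓ Dalitz-plot population sensitive to the isospin composition.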
The global fit to the e+e− → φ(1020)η and e+e− → K+K−π0 cross sections, the isovector and isoscalar KK*(892) amplitudes, and their relative phase was performed to determine the parameters of the φ and ρ excitations decaying into these final states. The fit results are shown in Fig. 29 and summarized in Table I. The obtained masses and widths of the φ′ and ρ′ are in reasonable agreement with the parameters of the φ(1680) and ρ(1450) resonances measured in other experiments (see Ref. (Amsler et al., 2008) for references). The parameters of the φ′′, which is seen in the φη final state, are close to those of the Y(2175) state observed in the φf0(980) final state. This state will be discussed in Sec. III.F.
E. e + e − → π + π − π + π − , π + π − 2π 0
The reactions e+e− → π+π−π+π− and π+π−2π0 have the largest cross sections in the energy region above the φ-meson resonance. They were studied with the BABAR detector (Aubert et al., 2005b; Druzhinin, 2007) in the energy region below 4.5 GeV. Figures 30 and 31 show a comparison of the BABAR results with the previous direct e+e− measurements, see Refs. (Achasov et al., 2003; Akhmetshin et al., 2004b; Bacci et al., 1980; Bisello et al., 1991a; Cordier et al., 1982a; Cosme et al., 1979; Dolinsky et al., 1991; Kurdadze et al., 1988) for π+π−π+π− and Refs. (Achasov et al., 2003; Akhmetshin et al., 1999; Bacci et al., 1981; Bisello et al., 1991a; Cosme et al., 1979; Dolinsky et al., 1991; Kurdadze et al., 1986) for π+π−2π0, in the energy range covered by these measurements. The large difference between the data sets from different experiments indicates that some previous measurements had large, up to 50%, unaccounted systematic errors. The BABAR systematic uncertainty on the e+e− → π+π−π+π− cross section is estimated to be about 5% in the 1-3 GeV energy range. For this channel the BABAR data are in good agreement with the recent SND (Achasov et al., 2003) and CMD-2 (Akhmetshin et al., 2004b) measurements at energies below 1.4 GeV. The DM2 and BABAR data are also in reasonable agreement.

FIG. 30 Comparison of the BABAR results on the e+e− → π+π−π+π− cross section (Aubert et al., 2005b) with the previous direct e+e− measurements (Achasov et al., 2003; Akhmetshin et al., 2004b; Bacci et al., 1980; Bisello et al., 1991a; Cordier et al., 1982a; Cosme et al., 1979; Dolinsky et al., 1991; Kurdadze et al., 1988).
For the π+π−2π0 channel the BABAR results are still preliminary. The estimated systematic uncertainty changes from 8% at the maximum of the cross section to 10% at 1 and 3 GeV.
At energies below 1.4 GeV the BABAR cross sections agree well with the results of the recent SND (Achasov et al., 2003) and older OLYA (Kurdadze et al., 1986) measurements, but not with the ND (Dolinsky et al., 1991) and CMD-2 (Akhmetshin et al., 1999) cross sections that may be affected by large unaccounted systematic errors as mentioned above.
The shape of the cross sections for both reactions shows wide structures peaked at about 1.5 GeV. Different intermediate states contribute to the e+e− → 4π cross sections. The observed bumps are sums of the contributions from the ρ(770), ρ(1450), and ρ(1700) decays into these intermediate states, which should be separated for a study of the excited-ρ properties. Unfortunately, such an analysis was performed at BABAR only at a qualitative level. The two- and three-pion mass distributions for the π+π−π+π− final state are relatively well described by the model of the a1(1260)π intermediate state with a small contribution of the f0(1300)ρ state. This agrees with the a1π dominance hypothesis suggested in Ref. (Akhmetshin et al., 1999) to describe the 4π dynamics at energies below 1.4 GeV. A strong deviation from this hypothesis is observed in the π+π−2π0 channel. In addition to the expected ωπ0 and a1π contributions, a surprisingly large contribution of the ρ+ρ− intermediate state was observed. This is demonstrated in Fig. 32, where the 4π mass spectra for the ωπ, non-ωπ, and ρ+ρ− intermediate states are shown together with the total mass spectrum for the e+e− → π+π−2π0 reaction. The contributions of the different intermediate states were separated using simple conditions on the 3π and 2π invariant masses. It is seen that the ρ+ρ− cross section is more than a half of the non-ωπ cross section at an energy of about 1.7 GeV. For π+π−2π0 masses higher than 2.5 GeV/c2 a clear signal of the f0(980) meson and a peak at a mass of about 1.25 GeV/c2 (probably from the f2(1270) meson) are seen in the π0π0 mass spectrum, corresponding to the contributions of the f0(980)ρ and f2(1270)ρ intermediate states.

FIG. 31 Comparison of the BABAR results on the e+e− → π+π−2π0 cross section (Druzhinin, 2007) with the previous direct e+e− measurements (Achasov et al., 2003; Akhmetshin et al., 1999; Bacci et al., 1981; Bisello et al., 1991a; Cosme et al., 1979; Dolinsky et al., 1991; Kurdadze et al., 1986).
F. e+e− → K+K−π+π−, K+K−π0π0

The e+e− → K+K−π0π0 reaction had never been studied before the BABAR experiment (Aubert et al., 2007b; Lees et al., 2011), while the fully charged mode was previously measured with the DM1 detector (Cordier et al., 1982b), but with an about 100 times smaller data set. The measured cross sections are shown in Figs. 33 and 34. The systematic uncertainties for these measurements are estimated to be at the (5-9)% level. The structures seen in the cross section energy dependence cannot be understood without an analysis of the intermediate states involved.
The distributions of the Kπ invariant masses shown in Fig. 35 indicate that the K*(892)0K±π∓ and K*(892)∓K±π0 (similar plots are not shown) intermediate states dominate in these reactions. A small contribution of the K*2(1430)Kπ state is also seen (Fig. 35(b)). A special correlation study (Lees et al., 2011) showed that the intermediate states with two K*, K*(892)K̄*(892), K*(892)K̄*2(1430), and K*2(1430)K̄*2(1430), contribute less than 1% to the total reaction yield (the associated K*(892)K̄*2(1430) production is observed in J/ψ decays). Taking the numbers of events in the K* peaks for each c.m. energy interval, the "inclusive" e+e− → K*(892)0Kπ and e+e− → K*2(1430)0Kπ cross sections shown in Fig. 36 were extracted. Figures 37(a) and 37(b) show scatter plots of the reconstructed (a) m(π+π−) and (b) m(π0π0) versus m(K+K−) for selected events of the reactions e+e− → K+K−π+π− and e+e− → K+K−π0π0, respectively. A clear φ(1020) signal is seen in the K+K− invariant mass in both figures and is discussed in more detail below. A signal of the ρ(770) is observed in the π+π− invariant mass distribution in Fig. 37(a). The π+π− invariant mass distribution for K+K−π+π− events not containing the K*(892) meson is shown in Fig. 38(a). The ρ(770) peak, probably corresponding to the intermediate K1(1270)±K∓ and K1(1400)±K∓ states, is clearly seen in the π+π− mass spectrum. In Fig. 38(b) the "inclusive" cross section for the K+K−ρ(770) reaction is presented. It is obtained by fitting the ρ(770) signal in the π+π− invariant mass distributions for each c.m. energy interval.
One of the interesting ISR studies performed by BABAR (Aubert et al., 2007b), and later reproduced by Belle (Shen et al., 2009), is the extraction of the relatively small contributions of the φ(1020)π+π− and φ(1020)π0π0 (φ → K+K−) intermediate states. Since the φ(1020) resonance is relatively narrow, a clean sample of φππ events can be easily separated. Figure 39 shows the m(π+π−) distribution for these events, demonstrating a clear signal from the f0(980) resonance and a bump at lower masses which can be interpreted as the f0(600) state. A similar plot is obtained for the π0π0 invariant mass. These invariant mass distributions can be fitted with a superposition of two Breit-Wigner functions for the scalar f0(980) and f0(600) resonances, as shown in Fig. 39.

FIG. 33 The e+e− → K+K−π+π− cross section measured with the BABAR detector (Lees et al., 2011) in comparison with the only previous measurement by DM1 (Cordier et al., 1982b).

The e+e− → φ(1020)π+π− cross section measured by BABAR and Belle is shown in Fig. 40. Two resonance structures are seen at 1.7 GeV and at 2.1 GeV. The BABAR Collaboration investigated decay mechanisms for these structures and concluded that the second structure decays only to the φ(1020)f0(980) final state. The structure completely disappears if events associated with the f0(980) peak in the m(π+π−) distribution are removed. The first structure is associated with the φ(1680), a radial excitation of the vector ss̄ state. Its decays to φ(1020)f0(600) and φ(1020)f0(980) are not forbidden.
A simple VDM-based model was suggested to describe the observed e+e− → φ(1020)π+π− cross section. The model assumes that two vector mesons contribute to the cross section; one resonance, associated with the φ(1680), decays both to φf0(600) and to φf0(980), while another, referred to as the Y(2175), decays to φf0(980) only. Since the nominal φ(1680) mass lies below the φf0(980) threshold, the φ(1680) → φf0(980) decay reveals itself as a smooth bump in the energy dependence of the e+e− → φf0(980) cross section above 2 GeV. The result of the fit to the e+e− → φ(1020)π+π− cross section with this model is shown in Fig. 41. It is clearly seen that the data above 2 GeV cannot be described with a contribution of the φ(1680) resonance only. An additional, relatively narrow resonance Y(2175) is needed to do this. The tongue below 2 GeV in the cross section for the reaction e+e− → φ(1680) → φf0(980) in Fig. 41 is due to the finite width of the f0(980) state.

FIG. 38 (a) The π+π− mass distribution for K+K−π+π− events (K*(892)Kπ events are excluded); the solid curve represents a fit using a signal Breit-Wigner function with ρ(770) parameters and a polynomial background (hatched area). (b) The e+e− → K+K−ρ(770) cross section obtained using the fitted numbers of ρ-meson events in each 25 MeV c.m. energy interval (Lees et al., 2011).

FIG. 43 The e+e− → φ(1020)f0(980) cross section measured in the K+K−π+π− (circles) and K+K−π0π0 (squares) final states by BABAR (Lees et al., 2011). The solid and dashed curves represent the results of the two-resonance fit described in the text.

A relatively clean sample of φf0(980) events is selected using the requirement 0.85 GeV/c2 < m(ππ) < 1.1 GeV/c2. The cross section for events of the K+K−π+π− mode, fitted with the model described above, is shown in Fig. 42. The contribution of the Y(2175) is seen much better with this selection. The comparison of the e+e− → φf0(980) cross sections measured by BABAR in the two f0(980) decay modes, π+π− and π0π0, is shown in Fig. 43. It is seen that the two measurements agree. The fit of the two modes gives the peak cross section, mass and width of the resonance:
σ Y = 0.104 ± 0.025 nb, m Y = 2.179 ± 0.009 GeV/c 2 , Γ Y = 0.079 ± 0.017 GeV.
The e+e− → φf0(980) cross section measured in the Belle experiment (Shen et al., 2009) is shown in Fig. 44, and also requires a resonance structure with similar parameters. Some properties of this resonance, a relatively small width and the absence of the φf0(600) decay, are unusual. The nature of this state is not clear (Gomez-Avila et al., 2009; Napsuciale et al., 2007). One of the possible interpretations is that the Y(2175) is an ss̄ss̄ four-quark state. Indeed, the f0(600) does not contain strange quarks, while the f0(980), strongly coupled to KK̄, definitely contains them. For the ss̄ss̄ state, the observed Y(2175) → φf0(980) is a natural decay, while the unseen Y(2175) → φf0(600) transition is suppressed by the OZI rule. The observation of the Y(2175) decay to the φη final state, containing four s quarks, also supports this hypothesis.
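The two-resonance model described above can be sketched as the squared modulus of a coherent sum of two Breit-Wigner amplitudes. This is only a schematic of the published fits: the resonance masses and widths below are the values quoted in the text, while the amplitude normalizations and relative phase are illustrative assumptions, and the real analyses also fold in the φf0(980) phase space and the f0(980) line shape.

```python
import numpy as np

def bw(m, m0, g0):
    """Constant-width Breit-Wigner amplitude (simplified)."""
    return m0 * g0 / (m0 ** 2 - m ** 2 - 1j * m0 * g0)

def sigma_phi_f0(m, a1, m1, g1, a2, m2, g2, phi):
    """|phi(1680) + Y(2175) coherent sum|^2 for e+e- -> phi f0(980).

    a1, a2 set the magnitudes and phi the relative phase. A realistic
    model also includes the phi f0(980) phase space and the f0(980)
    line shape, which is what produces the 'tongue' below 2 GeV.
    """
    amp = a1 * bw(m, m1, g1) + a2 * np.exp(1j * phi) * bw(m, m2, g2)
    return np.abs(amp) ** 2

m = np.linspace(1.8, 3.0, 240)  # GeV/c^2
model = sigma_phi_f0(m, a1=1.0, m1=1.68, g1=0.21, a2=0.6,
                     m2=2.179, g2=0.079, phi=np.pi / 2)
# 'model' can now be compared or fitted to the measured cross section.
```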
G. e + e − → 2(K + K − )
The reaction e+e− → 2(K+K−) was studied for the first time by BABAR (Aubert et al., 2007b). The measured cross section is shown in Fig. 45. The most significant structure in the cross section is due to the J/ψ decay. It is natural to expect that intermediate states for this reaction contain the φ(1020) meson, which has a large decay rate to K+K−. Indeed, a strong φ-meson peak is seen in the K+K− invariant mass distribution shown in Fig. 46. Since the φ meson is present in almost every four-kaon event, it is concluded that the reaction e+e− → 2(K+K−) is strongly dominated by φK+K− production.
A study of events containing the φ meson was performed by BABAR. The K+K− pair forming the φ meson is selected by the requirement that its invariant mass be within ±10 MeV of the φ nominal mass. The invariant mass distribution for the second K+K− pair is shown in Fig. 47(a). Figures 47(b,c,d) show the cross sections for events with the K+K− invariant mass in the regions 1, 2, and 3 indicated in Fig. 47(a). An enhancement in the K+K− invariant mass spectrum near the K+K− threshold can be interpreted as being due to the decay f0(980) → K+K−. Therefore, the cross section for region 1 is expected to have a structure similar to that observed in e+e− → φf0 → K+K−ππ (see Sec. III.F). The bump at 2.175 GeV is indeed seen in the cross section shown in Fig. 47(b); however, the data sample is too small to perform a quantitative analysis. The relatively narrow region 2 with 1.06 GeV/c2 < m(K+K−) < 1.2 GeV/c2 is responsible for the spike seen at 2.25 GeV in Fig. 45. The spike is much more significant in Fig. 47(c), showing the cross section for events from this mass region. There is no explanation of this structure.
The peak in the K + K − mass spectrum near 1.5 GeV/c 2 is associated with the f ′ 2 (1525). The region 3 (1.45 GeV/c 2 < m(K + K − ) < 1.6 GeV/c 2 ) is chosen to select φf ′ 2 (1525) events. The cross section for this mass region is shown in Fig. 47(d) and exhibits a broad structure at 2.7 GeV and a strong J/ψ signal.
H. e + e − → 5 mesons
The BABAR detector studied a number of ISR reactions with five hadrons in the final state: 2(π + π − )π 0 , 2(π + π − )η, K + K − π + π − π 0 , and K + K − π + π − η (Aubert et al., 2007c).
The e+e− → 2(π+π−)π0 reaction has the largest cross section among the processes mentioned above. In the π+π−π0 invariant mass spectrum for this reaction (Fig. 48) clear signals of the η and ω mesons are seen, corresponding to the ωπ+π− and ηπ+π− intermediate states. The cross sections for these reactions were measured in direct e+e− experiments (Akhmetshin et al., 2000; Antonelli et al., 1988, 1992; Cordier et al., 1981; Druzhinin et al., 1986), but the BABAR data are significantly more accurate. The e+e− → ηπ+π− and e+e− → ωπ+π− cross sections measured by BABAR and in direct e+e− experiments are shown in Fig. 49 and Fig. 50, respectively. The two pions from the reaction e+e− → ηπ+π− predominantly form the ρ(770). In the two-pion invariant mass spectrum for the e+e− → ωπ+π− reaction shown in Fig. 51 a clear f0(980) signal is observed. The contribution of the ωf0(980) intermediate state was extracted, and the e+e− → ωf0(980) cross section was measured for the first time. It is shown in Fig. 52. The e+e− → ωπ+π− cross section after subtraction of the ωf0(980) contribution is shown in Fig. 53. The cross section is fitted with a sum of two resonances. The fit result is shown in Fig. 53 and listed in Table II. The obtained parameters are in good agreement with those obtained for the π+π−π0 channel (see discussion in Sec. III.J).

FIG. 48 The m(π+π−π0) distribution for 2(π+π−)π0 events (Aubert et al., 2007c).

FIG. 51 The m(π+π−) distribution for selected ωπ+π− events in data (points with error bars) and in simulation (histogram) (Aubert et al., 2007c).
The ωπ+π− and ηπ+π− intermediate states do not saturate the total e+e− → 2(π+π−)π0 cross section, as shown in Fig. 54. The m(π+π−) and m(π±π0) distributions for e+e− → 2(π+π−)π0 events with the ωπ+π− and ηπ+π− contributions excluded are shown in Fig. 55. From the analysis of these two-pion mass distributions it was concluded that the dominant intermediate state for these events is ρ0ρ±π∓. The ρπ mass spectrum also exhibits a resonance structure with the parameters mX = 1.243 ± 0.012 ± 0.020 GeV/c2 and ΓX = 0.410 ± 0.031 ± 0.030 GeV.

FIG. 54 The e+e− → 2(π+π−)π0 cross section (Aubert et al., 2007c) and the contributions from ωπ+π− (squares) and ηπ+π− (triangles).

FIG. 55 The m(π+π−) (points) and m(π±π0) (histogram) distributions for 2(π+π−)π0 events with the ωπ+π− and ηπ+π− contributions excluded (Aubert et al., 2007c).
The yield of the X(1240) state is consistent with the complete dominance of the quasi-two-body reaction e+e− → ρ(770)X(1240) → ρ0ρ±π∓. The best candidates for the X(1240) are the π(1300) or a1(1260) resonances (Amsler et al., 2008). The e+e− → 2(π+π−)η reaction was studied for the first time by BABAR. The measured cross section is shown in Fig. 56. A rich internal structure is expected for the 4πη final state. The four-pion mass distribution exhibits a wide resonance structure which can be a mixture of the known ρ(1450) and ρ(1700) resonances. Figure 57(a) shows the ηπ+π− mass distribution with two narrow peaks. The lowest-mass peak corresponds to the η′(958). The measured e+e− → η′(958)π+π− cross section is shown in Fig. 57(b). The resonance-like structure observed in the cross section energy dependence is fitted with a single Breit-Wigner function. The fitted resonance parameters are σ0 = 0.18 ± 0.07 nb, mx = 1.99 ± 0.08 GeV/c2, Γx = 0.31 ± 0.14 GeV.

FIG. 56 The e+e− → 2(π+π−)η cross section measured by BABAR (Aubert et al., 2007c).

FIG. 57 (a) The m(ηπ+π−) distribution for 2(π+π−)η events; (b) the e+e− → η′(958)π+π− cross section and the result of the Breit-Wigner fit (Aubert et al., 2007c).
There is no entry for these parameters in the current PDG tables (Amsler et al., 2008). Taking into account possible large systematic uncertainties on mass and width, the observed resonance can be interpreted as the ρ(2150), extensively discussed in the past (Amsler et al., 2008).
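The single-resonance fits quoted here and below report a peak cross section σ0, mass mx, and width Γx; a common parametrization consistent with these three parameters (though the exact form used by BABAR, e.g. any energy-dependent width or phase-space factors, is not specified in the text) is

$$\sigma(m) = \sigma_0\,\frac{m_x^2\,\Gamma_x^2}{\left(m^2-m_x^2\right)^2 + m_x^2\,\Gamma_x^2},$$

which reaches σ0 at m = mx.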
Another clear structure, seen in the ηπ+π− mass distribution (Fig. 57(a)) and shown in detail in Fig. 58(a), was interpreted as the f1(1285) meson. The measured e+e− → f1(1285)π+π− cross section is shown in Fig. 58(b). The observed resonance structure has the parameters σ0 = 1.00 ± 0.18 ± 0.15 nb, mx = 2.15 ± 0.04 ± 0.05 GeV/c2, Γx = 0.35 ± 0.04 ± 0.05 GeV.

FIG. 58 (a) The m(ηπ+π−) distribution for 2(π+π−)η events; (b) the e+e− → f1(1285)π+π− cross section and the result of the Breit-Wigner fit (Aubert et al., 2007c).
The mass and width are close to those measured in the reaction e+e− → η′(958)π+π−, but the cross section is significantly larger. The observed structure can also be assigned to the ρ(2150) resonance.

The cross section for the reaction e+e− → K+K−π+π−π0, shown in Fig. 59, was also measured by BABAR for the first time. The three-pion and two-kaon invariant mass spectra for this reaction are shown in Fig. 60. Clear η and ω signals are seen in the three-pion mass distribution and a strong φ signal in the K+K− mass distribution. Figure 61 shows the calculated cross sections for the e+e− → φη (a) and e+e− → ωK+K− (b) subprocesses. The former is in good agreement with that obtained in the η → γγ mode (Aubert et al., 2008b). It is the first observation of the process e+e− → ωK+K−. The reaction e+e− → K+K−π+π−η was also studied by BABAR in Ref. (Aubert et al., 2007c). The measured cross section is small and rises from threshold to a maximum value of about 0.2 nb at 2.8 GeV, followed by a monotonic decrease with increasing energy. A clear signal of the φη′(958) intermediate state is observed in the K+K− and π+π−η mass distributions. Unfortunately, the φη′(958) invariant mass spectrum is not shown in Ref. (Aubert et al., 2007c). This spectrum is interesting since for the four-quark Y(2175) resonance (Sec. III.F) the decay to φη′(958) is expected.

FIG. 59 The e+e− → K+K−π+π−π0 cross section measured by BABAR (Aubert et al., 2007c).

FIG. 60 The m(π+π−π0) (a) and m(K+K−) (b) distributions for K+K−π+π−π0 events (Aubert et al., 2007c). The hatched histogram represents the estimated non-ISR background.
I. e + e − → 3(π + π − ), 2(π + π − π 0 )
The reactions e + e − → 3(π + π − ) and e + e − → 2(π + π − π 0 ) were studied before in a number of direct e + e − experiments, but with limited data samples (Bacci et al., 1981;Baldini et al., 1988;Bisello et al., 1981;Cosme et al., 1979;Esposito et al., 1981).
The BABAR detector studied the six-pion production using the ISR method from the threshold to 4.5 GeV (Aubert et al., 2006b). As a result, the statistical and systematic uncertainties on the cross sections were dramatically reduced.
An interesting feature of the 3(π+π−) final state is the presence, among the many π+π− combinations, of only one ρ(770)0 per event. No other intermediate resonance signals were observed. For the 2(π+π−π0) final state, likewise, only one ρ(770) per event, neutral or charged, is observed, in the expected 1:2 proportion.
In the 2(π + π − π 0 ) final state, η and ω signals are seen in the π + π − π 0 invariant mass distribution. A small fraction of events corresponds to the associated production of the η and ω. Selecting these ηω events the e + e − → ωη cross section shown in Fig. 62 (left) was measured for the first time. The observed resonance structure which is expected to be the ω(1650) is fitted with a Breit-Wigner function. The fitted curve is shown in Fig. 62 (left). The obtained resonance parameters are listed in Table II together with the resonance parameters obtained from the fits to e + e − → 3π and e + e − → ωππ cross sections (see discussion in Sec. III.J). Comparison of the ω3π and ω(782)η contributions with the total e + e − → 2(π + π − π 0 ) cross section is shown in Fig. 62(right).
The total e+e− → 2(π+π−π0) and e+e− → 3(π+π−) cross sections shown in Fig. 63 have very similar energy dependences. The ratio of the cross sections is almost constant over the energy range under study. Its average value is equal to 3.98 ± 0.06 ± 0.41. A dip structure just below 2 GeV in the six-pion cross section was observed in the DM2 experiment (Baldini et al., 1988) and then confirmed in the diffractive photoproduction of six pions in the FOCUS experiment (Frabetti et al., 2000). Such a dip at 1.9 GeV was also observed in the total cross section of e+e− annihilation into hadrons by the FENICE detector (Antonelli et al., 1996). This structure in the BABAR data is fitted using a Breit-Wigner function coherent with a smooth non-resonant background. The fitted curves for both cross sections are shown in Fig. 63. The following "resonance" parameters are obtained: m6π = 1.88 ± 0.03 GeV/c2, m4π2π0 = 1.86 ± 0.02 GeV/c2, Γ6π = 0.13 ± 0.03 GeV, Γ4π2π0 = 0.16 ± 0.02 GeV.
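The "coherent with the background" fit mentioned here presumably has the generic form (the exact BABAR parametrization of the non-resonant amplitude is not given in the text)

$$\sigma(m) = \Bigl|\,A_{\mathrm{nr}}(m) + e^{i\psi}\,A_{\mathrm{BW}}(m;\,m_0,\Gamma_0)\,\Bigr|^2,$$

where A_nr is a smooth non-resonant amplitude and ψ is a relative phase; for ψ near π the interference is destructive and the resonance appears as a dip rather than a peak.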
The parameter values seem to be essentially independent of the final-state charge combination. These values may also be compared with those obtained in the FOCUS experiment (Frabetti et al., 2000): m = 1.91 ± 0.01 GeV/c2, Γ = 0.037 ± 0.013 GeV. The mass values are consistent, but the widths obtained by BABAR are substantially larger. Note that typical widths of known isovector resonances with mass near 2 GeV/c2 are 200-300 MeV. Since the obtained mass of the resonance structure is close to twice the proton mass, it may be interpreted as a proton-antiproton subthreshold bound state (Datta and O'Donnell, 2003).

FIG. 62 The e+e− → ωη cross section (left) and the contributions of the e+e− → ωπ+π−π0 (squares) and e+e− → ωη (triangles) cross sections to all e+e− → 2(π+π−π0) events (right) (Aubert et al., 2006b).

FIG. 63 The e+e− → 2(π+π−π0) cross section (left) and the e+e− → 3(π+π−) cross section (right) with a fit to the (anti-)resonance function (Aubert et al., 2006b). A resonance responsible for the "dip" is shown shaded.
J. Summary
The BABAR ISR study covers the low energy range of e + e − interactions from the 2π threshold to 4.0-4.5 GeV with exclusively measured cross sections for many processes. Figure 64 shows all exclusive cross sections measured by BABAR in a single plot. One can see that in most of the cases cross sections strongly depend on energy and their central values vary by five orders of magnitude.
One of the purposes of the BABAR ISR program was to measure the total hadronic cross section in the energy range below 2 GeV with improved accuracy (Druzhinin, 2007). To finalize this program, the cross sections at least for the π + π − 3π 0 , π + π − 4π 0 , K + K − , K S K L , K S K L ππ, K S K ± π ∓ π 0 final states should be additionally measured.
Note that the total cross section value is not the direct sum of the cross sections shown in Fig. 64. Each channel has internal subprocesses which include different resonances with different branching fractions to the observed final states. To perform a correct summation, each subchannel should be extracted separately and corrected for the decay rate of the internal resonance. The exclusive ISR study of hadron production allows one to investigate and improve our knowledge of the excited states of light vector mesons. For most of them the parameters are still rather imprecise and new investigations are needed.

TABLE II Summary of the ω(1420) (ω′) and ω(1650) (ω′′) resonance parameters obtained from the fits described in the text. The values without errors were fixed in the fits.
For multihadron final states it may be difficult to isolate the contributions of particular vector resonances due to the presence of many interfering intermediate states. An example is the reaction e+e− → π+π−π0π0, to which the ωπ0, a1π, and ρ+ρ− intermediate states give dominant contributions. The latter two states contain wide resonances and strongly interfere. A partial-wave analysis is required to separate the sub-processes of the e+e− → π+π−π0π0 and e+e− → π+π−π+π− reactions. We hope that BABAR has enough data to perform such an analysis. This is necessary to separate the contributions of the two excited ρ states, ρ(1450) and ρ(1700), and determine their parameters. A detailed study of intermediate mechanisms strongly benefits from the quasi-two-body character of some final states, but is much more difficult for multibody final states.
The BABAR ISR data on isoscalar channels already allow one to improve the parameters of the excited φ and ω states. A global fit to the isovector and isoscalar components of the process e+e− → K*(892)K and to the e+e− → φη cross section (Buon et al., 1982) was used to determine the parameters of the φ(1680) resonance (see Table I).
In Table II we summarize the results of the fits to the e+e− → 3π, e+e− → ωππ, and e+e− → ωη cross sections performed by BABAR in Refs. (Aubert et al., 2004b, 2006b, 2007c), and compare them with the corresponding PDG (Amsler et al., 2008) parameters. A simultaneous fit to all three channels could significantly improve the results and give additional information on relative decay rates.
Due to numerous extensive studies of various exclusive cross sections, we have learned a lot about the total cross section of e+e− annihilation into hadrons and its components, allowing a more precise estimation of hadronic vacuum polarization effects to be performed (see also Section VII).
IV. BARYON FORM FACTORS
A. General formulae
The cross section of the process e+e− → BB̄, where B is a spin-1/2 baryon, is given by (Renard, 1981)

$$\frac{d\sigma}{d\Omega} = \frac{\alpha^2\beta C}{4s}\left[|G_M(s)|^2(1+\cos^2\theta) + \frac{1}{\tau}|G_E(s)|^2\sin^2\theta\right], \tag{11}$$

while with transversely polarized beams an additional term proportional to

$$\left(|G_M(s)|^2 - \frac{1}{\tau}|G_E(s)|^2\right)\sin^2\theta\cos 2\varphi \tag{12}$$

appears, where $\beta = \sqrt{1-4m_B^2/s}$ and $m_B$ are the baryon velocity (v/c) and mass, $C = y/(1-e^{-y})$ with $y = \pi\alpha m_B/(\beta\sqrt{s})$ is the Coulomb correction factor (Tzara, 1970) for charged baryons (C = 1 for neutral baryons), $\tau = s/4m_B^2$ is the inverse helicity suppression factor, and $G_M$ and $G_E$ are the baryon magnetic and electric form factors. The number of form factors (two) corresponds to the two BB̄ states with different angular momenta: ³S₁ and ³D₁. At the BB̄ threshold the D-wave state vanishes, and |G_E| = |G_M|. At high √s the terms containing G_E are suppressed by the helicity factor 1/τ. With unpolarized beams the total cross section is

$$\sigma(s) = \frac{4\pi\alpha^2\beta C}{3s}\left[|G_M(s)|^2 + \frac{1}{2\tau}|G_E(s)|^2\right]. \tag{13}$$
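As a concrete illustration of Eq. (13), the following Python sketch evaluates the total BB̄ cross section for given form-factor moduli, using the definitions above; the form-factor values in the example call are invented for illustration:

```python
import numpy as np

ALPHA = 1 / 137.035999
HBARC2 = 0.3894e9   # GeV^2 * pb, converts natural units to picobarns

def sigma_bbbar(sqrt_s, m_b, g_m, g_e, charged=True):
    """Total e+e- -> B Bbar cross section of Eq. (13), in pb."""
    s = sqrt_s**2
    beta = np.sqrt(1.0 - 4.0 * m_b**2 / s)          # baryon velocity
    tau = s / (4.0 * m_b**2)                        # inverse helicity suppression factor
    if charged:
        y = np.pi * ALPHA * m_b / (beta * sqrt_s)   # argument of the Coulomb factor
        c = y / (1.0 - np.exp(-y))
    else:
        c = 1.0
    return (4 * np.pi * ALPHA**2 * beta * c / (3 * s)
            * (g_m**2 + g_e**2 / (2 * tau)) * HBARC2)

# e.g. a proton pair near threshold with |G_E| = |G_M| = 0.5 (illustrative)
print(sigma_bbbar(2.0, 0.938272, 0.5, 0.5))   # a few nb, the right scale
```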
As discussed above (see Sections II.A and II.D, and Fig. 7), the detection efficiency in the ISR measurement with a tagged photon has weak dependence on the angular distributions of final hadrons. In the case of
baryon-antibaryon production this allows one to measure the total cross section [Eq. (13)] independently of the relation between the electric and magnetic form factors. The ratio of the form factors can then be determined from an analysis of the baryon angular distribution. In direct e+e− or pp̄ experiments the range of the accessible polar angles is limited by the detector acceptance. In this case the cross section cannot be measured in a model-independent way. The detection efficiency is determined, and the proton magnetic form factor |G_M| is extracted, usually under the assumption that |G_M| = |G_E|. In the BABAR paper (Aubert et al., 2006a) on the ISR study of the reaction e+e− → pp̄ the effective form factor is introduced as a linear combination of |G_M|² and |G_E|²:
$$|F(m)|^2 = \frac{2\tau|G_M(s)|^2 + |G_E(s)|^2}{2\tau + 1}. \tag{14}$$

With the effective form factor the total cross section takes the form

$$\sigma_0(m) = \frac{4\pi\alpha^2\beta C}{3m^2}\left(1 + \frac{1}{2\tau}\right)|F(m)|^2. \tag{15}$$
The effective form factor defined in such a way allows an easy comparison of the results of the model-independent ISR measurement with |G M | obtained in direct e + e − and pp experiments under the assumption that |G M | = |G E |.
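A minimal sketch of that comparison in practice: inverting Eq. (15) to convert a measured cross section into the effective form factor (the input numbers below are illustrative, not BABAR data points):

```python
import numpy as np

ALPHA = 1 / 137.035999
HBARC2 = 0.3894e9  # GeV^2 * pb

def effective_ff(m, sigma0_pb, m_b=0.938272, charged=True):
    """Invert Eq. (15): effective form factor |F(m)| from a measured
    e+e- -> B Bbar cross section sigma0 (in pb) at mass m (GeV)."""
    beta = np.sqrt(1.0 - 4.0 * m_b**2 / m**2)
    tau = m**2 / (4.0 * m_b**2)
    if charged:
        y = np.pi * ALPHA * m_b / (beta * m)
        c = y / (1.0 - np.exp(-y))
    else:
        c = 1.0
    prefactor = (4 * np.pi * ALPHA**2 * beta * c / (3 * m**2)
                 * (1 + 1 / (2 * tau)) * HBARC2)
    return np.sqrt(sigma0_pb / prefactor)

# e.g. a ~1 nb pp-bar cross section just above threshold (illustrative input)
print(effective_ff(1.9, 1000.0))   # ~0.4, the right scale for |F| near threshold
```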
The modulus of the ratio of the electric and magnetic form factors can be determined from the analysis of the baryon polar angle distribution. This distribution can be presented as a sum of the terms proportional to |G M | 2 and |G E | 2 . For the e + e − → ppγ cross section the fully differential formula can be found in (Czyż et al., 2004). In this process the θ p dependences of the G E and G M terms are not strongly different from sin 2 θ p and 1 + cos 2 θ p , describing the angular distributions for the electric and magnetic form factors in Eq. (12). Note that in direct e + e − experiments with transversely polarized beams a study of the proton azimuthal angle distribution can improve G E /G M separation (see Eq. (12)).
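The following toy illustrates this kind of angular analysis: pseudo-events are generated from the two-term cosθ density of Eq. (11) and |G_E/G_M| is recovered by a maximum-likelihood fit; all numbers are invented:

```python
import numpy as np
from scipy.optimize import minimize_scalar

TAU = 1.3  # illustrative value of s / (4 m_B^2)

def pdf(cos_t, r2):
    """Normalized cos(theta) density from Eq. (11): (1 + cos^2) for |G_M|^2
    plus (r^2/tau) sin^2 for |G_E|^2, with r^2 = |G_E/G_M|^2."""
    shape = (1 + cos_t**2) + (r2 / TAU) * (1 - cos_t**2)
    norm = 8.0 / 3.0 + (r2 / TAU) * 4.0 / 3.0   # integral over cos in [-1, 1]
    return shape / norm

rng = np.random.default_rng(1)
true_r2 = 1.4**2                                # toy truth: |G_E/G_M| = 1.4
sample = []
while len(sample) < 5000:                       # accept-reject generation
    c = rng.uniform(-1, 1)
    if rng.uniform(0, 2 + true_r2 / TAU) < (1 + c**2) + (true_r2 / TAU) * (1 - c**2):
        sample.append(c)
sample = np.asarray(sample)

nll = lambda r2: -np.sum(np.log(pdf(sample, r2)))
fit = minimize_scalar(nll, bounds=(0.0, 10.0), method="bounded")
print("fitted |G_E/G_M| =", np.sqrt(fit.x))     # ~1.4
```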
A nonzero relative phase between the electric and magnetic form factors manifests itself in a polarization of the outgoing baryons. In the reaction e + e − → BB this polarization is perpendicular to the production plane (Dubnickova et al., 1996). For the ISR process e + e − → BBγ the polarization observables are analyzed in Refs. (Czyż et al., 2007;Kardapoltzev, 2007). In the case of the ΛΛ final state the Λ → pπ − decay can be used to measure the Λ polarization and hence the phase between the form factors.
B. Measurement of time-like baryon form factors
Measurements of the e+e− → pp̄ cross section have been performed in e+e− experiments (Ablikim et al., 2005; Antonelli et al., 1998; Bisello et al., 1983, 1990; Castellano et al., 1973; Delcourt et al., 1979; Pedlar et al., 2005) with a (20-30)% precision. The cross section and the proton form factor were deduced assuming |G_E| = |G_M|. More precise measurements of the proton form factor have been performed in pp̄ → e+e− experiments (Ambrogiani et al., 1999; Armstrong et al., 1993; Bardin et al., 1994). In the PS170 experiment (Bardin et al., 1994) at LEAR, the proton form factor was measured from threshold (pp̄ annihilation at rest) up to a mass of 2.05 GeV/c². The ratio |G_E/G_M| was measured using the angular dependence of the cross section and was found to be compatible with unity. The LEAR data show a strong dependence of the form factor on the pp̄ mass near threshold, and a weak dependence in the range 1.95-2.05 GeV/c². The Fermilab experiments, E760 (Armstrong et al., 1993) and E835 (Ambrogiani et al., 1999), show that the form factor decreases rapidly at higher masses, in agreement with the perturbative QCD prediction G_M ∝ α²_s(m²)/m⁴. Experimental information on the reactions e+e− → ΛΛ̄, Σ0Σ̄0, ΛΣ̄0 is very scarce. The e+e− → ΛΛ̄ cross section is measured to be $100^{+65}_{-35}$ pb at 2.386 GeV, and at the same energy the upper limits for the e+e− → Σ0Σ̄0 (< 120 pb) and e+e− → ΛΣ̄0 (< 75 pb) cross sections have been obtained (Bisello et al., 1990).
C. e + e − → ppγ
The first ISR baryon experiment was the measurement of the proton-antiproton production cross section (Aubert et al., 2006a) by BABAR. The measured e + e − → pp cross section shown in Figs. 65 and 66 is almost flat near the threshold, and then decreases from 1 nb to about 1 pb at 4.5 GeV. There are two rapid drops of the cross section near 2.15 and 2.9 GeV. The BABAR proton form factor data presented in Fig. 67, in general, agree with the previous measurements. Figure 68 shows an expanded view of the near-threshold region. The BABAR measurement confirms the PS170 observation (Bardin et al., 1994) of the significant increase in the form factor for energies approaching the pp threshold. The proton form factor reaches about 0.6 at the threshold.
A study of the proton angular distribution allows one to extract the value of the ratio of the electric and magnetic form factors |G_E/G_M|. The results of the BABAR |G_E/G_M| measurement are shown in Fig. 69 in comparison with the data obtained at LEAR (Bardin et al., 1994). In disagreement with the LEAR result, the BABAR data indicate that |G_E/G_M| significantly exceeds unity in the energy range just above threshold.

FIG. 65, 66 The e+e− → pp̄ cross section measured by BABAR (Aubert et al., 2006a) in comparison with the BES data (Ablikim et al., 2005).

D. e+e− → ΛΛ̄γ
The e+e− → ΛΛ̄ cross section measured by the BABAR detector (Aubert et al., 2007d) is shown in Fig. 70 in comparison with the only previous measurement (Bisello et al., 1990). The BABAR measurement is based on about 200 ΛΛ̄ events selected in the decay mode Λ → pπ.
The measured Λ effective form factor is shown in Fig. 71. The ratio |G_E/G_M| is found to be consistent with unity. The use of the Λ → pπ decay allows a measurement of the relative phase φ_Λ between the complex G_E and G_M form factors. A non-zero φ_Λ leads to a polarization ζ of the outgoing baryons. The value of ζ is extracted from the analysis of the proton angular distribution in the Λ → pπ decay. The measured cos θ_pζ distribution, where θ_pζ is the angle between the Λ polarization vector and the proton momentum in the Λ rest frame, is shown in Fig. 72. No asymmetry of the cos θ_pζ distribution corresponding to a non-zero polarization is seen. Because of the limited data sample only a very weak limit on the phase between G_E and G_M has been set for the Λ hyperon: −0.76 < sin φ_Λ < 0.98.

FIG. 67 The proton form factor measured in different experiments (Ablikim et al., 2005; Ambrogiani et al., 1999; Antonelli et al., 1998; Armstrong et al., 1993; Aubert et al., 2006a; Bardin et al., 1994; Bisello et al., 1983; Delcourt et al., 1979; Pedlar et al., 2005). The solid line represents the QCD fit described in the text.

FIG. 68 The proton form factor in the near-threshold region (Antonelli et al., 1998; Aubert et al., 2006a; Bardin et al., 1994; Delcourt et al., 1979).

FIG. 70 The e+e− → ΛΛ̄ cross section measured by BABAR (Aubert et al., 2007d) in comparison with the DM2 measurement (Bisello et al., 1990).
E. e+e− → Σ0Σ̄0, ΛΣ̄0 (Σ0Λ̄)
The BABAR measurement of the Σ0 and Σ0Λ form factors is described in Ref. (Aubert et al., 2007d). The decay chain Σ0 → Λγ → pπγ is used to reconstruct the Σ0. About 20 candidate events were selected for each of the ISR reactions, e+e− → Σ0Σ̄0γ and e+e− → Σ0Λ̄γ. The effective Σ0 and Σ0Λ form factors are shown in Fig. 71. The corresponding values of the e+e− → Σ0Σ̄0 and e+e− → Σ0Λ̄ cross sections are about 40 pb near the reaction thresholds. It is seen that the Λ, Σ0 and Σ0Λ form factors are of the same order.
F. Summary
The baryon form factors are a subject of various phenomenological models (see Ref. (Baldini et al., 2009) and references therein). QCD predicts for the baryon form factor the asymptotic behavior F (q 2 ) ∼ α 2 s (q 2 )/q 4 (Chernyak and Zhitnitsky, 1977). Comparison of this prediction with the data on the proton form factor is shown in Fig. 67. It is seen that the asymptotic regime is reached at energies higher than 3 GeV.
The remarkable feature of the process e + e − → pp is a nearly flat cross section in the 200-MeV region above the pp threshold. This feature is explained in Ref. (Baldini et al., 2009) by the opposite trends in the energy dependence of the S-wave and D-wave contributions.
A natural explanation for the sharp increase of the proton form factor in the vicinity of the pp threshold is the final state interaction of the proton and antiproton (see, for example, Ref. (Dmitriev and Milstein, 2007) and references therein). Another possibility is a contribution of the vector-meson state located just below the pp threshold. This state is observed in the reaction e + e − → 6π (Aubert et al., 2006b;Baldini et al., 1988;Frabetti et al., 2000).
The rapid drop of the cross section at 2.15 GeV may be a manifestation of the isovector state ρ(2150), which is seen in the reactions e + e − → η ′ π + π − and e + e − → f 1 (1285)π + π − (Aubert et al., 2007c). The drop in the cross section near 2.9 GeV is still not understood.
The e+e− → ΛΛ̄ cross section has some features similar to those for the process e+e− → pp̄. In the energy region of about 200 MeV above threshold the ΛΛ̄ cross section is flat; the G_E/G_M ratio is consistent with that measured by BABAR for e+e− → pp̄. An attempt to explain the unusual energy dependence of the e+e− → ΛΛ̄ cross section was made in Ref. (Baldini et al., 2009). A fit to the Λ form factor with the power-law function const/qⁿ (Fig. 73) gives n = 9.2 ± 0.3. This n value strongly differs from the QCD asymptotic prediction n = 4. Similarly to the pp̄ case, the asymptotic regime is not reached for e+e− → ΛΛ̄ at energies below 3 GeV.
The cross sections of the processes e+e− → Σ0Σ̄0 and e+e− → ΛΣ̄0 (Σ0Λ̄) have been measured with large errors. The corresponding Σ0 and Σ0Λ form factors (Fig. 71) show a monotonic decrease starting just from the threshold. A fit to the form factor data with the power-law function gives n > 4 but with large errors.
It is interesting to compare the measured form factors with each other and with the QCD prediction for the asymptotic form factor ratios (Chernyak et al., 1989):
$F_p = 4.1\,F_\Lambda$, $F_{\Sigma^0} = -1.18\,F_\Lambda$, $F_{\Sigma^0\Lambda} = -2.34\,F_\Lambda$.
From a comparison of the form factors in Fig. 71, it is seen that the prediction works (possibly accidentally) only for the ratio of the Λ and Σ0 form factors. The ratio F_Λ/F_p falls with energy. In the highest energy interval, 2.8-3.0 GeV, the ratio is equal to $0.3^{+0.2}_{-0.3}$ and agrees with the asymptotic value 0.24. This is an indication that the asymptotic regime is reached just above 3 GeV.
The BABAR experiment shows that the ISR method is well suited for the measurement of baryon form factors. Future Super B-factories as well as the already running BEPC e+e− collider will make possible measurements of the form factors, especially for the proton, with unprecedented accuracy. High-precision measurements of the proton form factor are also planned in the PANDA experiment.
V. DECAYS OF THE J/ψ AND ψ(2S)
For all the processes described in the previous sections, clear narrow peaks are seen in the energy dependence of the cross sections corresponding to the J/ψ and ψ(2S) decays. The Born cross section for the ISR production of a narrow resonance, for example, the J/ψ, decaying to the final state h can be calculated using (Aubert et al., 2004a)
$$\sigma_{J/\psi} = \frac{12\pi^2\,\Gamma(J/\psi\to e^+e^-)\,B(J/\psi\to h)}{s\,m_{J/\psi}}\,W_0(\theta_0, x_{J/\psi}), \tag{16}$$

where $m_{J/\psi}$ and $\Gamma(J/\psi\to e^+e^-)$ are the mass and electronic width of the J/ψ meson, $x_{J/\psi} = 1 - m^2_{J/\psi}/s$, and B(J/ψ → h) is the branching fraction for the J/ψ decay to the final state h. The function W₀ is described in Sec. I.C by Eq. (7). Therefore, a measurement of the number of J/ψ → h decays in the ISR process e+e− → hγ determines the product of the electronic width and the branching fraction: Γ(J/ψ → e+e−)B(J/ψ → h).
The total cross section for the process e + e − → γJ/ψ with a tagged ISR photon (θ 0 = 30 • ) is about 3.4 pb. With the integrated luminosity of ∼ 500 fb −1 collected by the BABAR detector it corresponds to about 1.7 million produced J/ψ's. This number is significantly smaller than, for example, about 60 million J/ψ's produced in the BESII experiment at the BEPC e + e − collider. However, the general quality of the BABAR detector and its particle identification in particular, are much better compared to the BESII detector. As a result, the detector efficiency and the integrated luminosity are determined with lower systematic errors. A typical systematic uncertainty of the BABAR measurement is 3-5%, while BESII usually quotes 10-15%. The lower systematic error makes ISR results on many J/ψ decays competitive with BESII and other previous measurements. Practically all decays with the rates about 10 −3 and higher can be measured via ISR with better overall accuracy. Moreover, because of excellent particle identification, many J/ψ and ψ(2S) decays with kaons in the final state have been studied using ISR for the first time.
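These numbers can be checked roughly with Eq. (16). The sketch below assumes for W₀ the standard lowest-order radiator function restricted to θ₀ < θ_γ < 180° − θ₀ (the explicit form of Eq. (7) is assumed here, not quoted from the text) and the PDG J/ψ electronic width:

```python
import numpy as np

ALPHA = 1 / 137.035999
HBARC2 = 0.3894e9            # GeV^2 * pb

def w0(theta0, x):
    """Assumed lowest-order radiator for theta0 < theta_gamma < pi - theta0."""
    c = np.cos(theta0)
    return ALPHA / (np.pi * x) * ((2 - 2 * x + x**2) * np.log((1 + c) / (1 - c))
                                  - x**2 * c)

E_CM, M_JPSI = 10.58, 3.0969     # GeV
GAMMA_EE = 5.55e-6               # GeV, PDG J/psi electronic width
s = E_CM**2
x = 1 - M_JPSI**2 / s

sigma = 12 * np.pi**2 * GAMMA_EE / (s * M_JPSI) * w0(np.radians(30), x) * HBARC2
print(f"sigma(e+e- -> gamma J/psi, tagged) ~ {sigma:.2f} pb")  # ~3.6 pb at lowest order
print(f"J/psi produced in 500/fb: ~ {sigma * 500e3:.2e}")      # ~1.8e6, cf. 1.7 million
```

The lowest-order estimate lands within a few percent of the quoted ~3.4 pb; the residual difference is at the level of the higher-order radiative corrections ignored here.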
In the BABAR experiment the ISR method enabled to measure a few tens of J/ψ and ψ(2S) decays with the best-to-date accuracy and discover about 20 new decays of these resonances.
A. Leptonic decays
The BABAR (Aubert et al., 2004a) and CLEO (Adams et al., 2006) collaborations performed a study of J/ψ production in the reaction e+e− → µ+µ−γ. The dimuon mass spectrum for this reaction obtained by BABAR is shown in Fig. 74. The signal of ISR J/ψ production is clearly seen in the mass spectrum. The nonresonant spectrum is due to muon pair production in the process e+e− → µ+µ−γ, where the photon can be emitted by both the initial electrons and the final muons. Since the dimuon decay of the J/ψ meson proceeds through a single-photon transition, J/ψ → γ* → µ+µ−, the angular and momentum distributions for events from the J/ψ peak are completely identical to those for the ISR part of the nonresonant events. The idea of the BABAR measurement is to determine the ratio of the number of J/ψ events to the level of the nonresonant spectrum, which is well known theoretically. The dimuon spectrum of Fig. 74 has been fitted with a function taking into account the energy dependence of the nonresonant cross section and the experimental J/ψ line shape. The ratio
$$r = \frac{N_{J/\psi}}{(dN/dm)\cdot\Delta m} \tag{17}$$

was the main fit parameter. After substituting cross sections for the numbers of events, this ratio can be rewritten as

$$r = \frac{\sigma^{\mathrm{Born}}_{J/\psi}}{(d\sigma^{\mathrm{Born}}_{\mathrm{ISR}}/dm)\cdot\Delta m}\cdot\frac{1}{K};\qquad K = \frac{d\sigma^{\mathrm{vis}}_{\mathrm{Total}}/dm}{d\sigma^{\mathrm{vis}}_{\mathrm{ISR}}/dm}. \tag{18}$$
Detector acceptances and ISR radiative corrections, which are the same for the nonresonant ISR and J/ψ contributions to the reaction e+e− → µ+µ−γ, cancel in the ratio. The total nonresonant cross section includes the FSR contribution, which is parameterized in terms of K, the ratio of the visible nonresonant total and ISR-only (FSR switched off) cross sections. Since BABAR selects events with the photon emitted at a large angle, the FSR contribution is relatively large. Using simulated events, the coefficient K = 1.11 ± 0.01 (statistical error only) is determined for the selection criteria used. The result of the fit is shown in Fig. 74. The value r = 18.94 ± 0.44 is found with χ²/ndf = 122/144. From the product r·K = 21.03 ± 0.49 ± 0.47 the cross section σ_{J/ψ} = 2124 ± 49 ± 47 fb and the product of the J/ψ parameters Γ_ee·B_µµ = 0.3301 ± 0.0077 ± 0.0073 keV are determined. The main sources of the quoted systematic error are uncertainties in the J/ψ line shape and the coefficient K, both due to imperfect simulation of the detector response.
Using the values for B_ee and B_µµ (Eidelman et al., 2004), which are well measured in the cascade ψ(2S) → J/ψπ+π− decays (Bai et al., 1998), the electronic and total widths of the J/ψ meson were derived: Γ_ee = 5.61 ± 0.20 keV, Γ = 94.7 ± 4.4 keV.
These were the best-to-date measurements of the J/ψ parameters.
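These results can be cross-checked against Eq. (16). The sketch below assumes the standard leading-log radiator function for the full photon angular range (an assumption, since Eq. (7) is not reproduced in this section) and approximate contemporaneous leptonic branching fractions:

```python
import numpy as np

ALPHA = 1 / 137.035999
HBARC2 = 0.3894e12     # GeV^2 * fb
M_E, M_JPSI, E_CM = 0.511e-3, 3.0969, 10.58
s = E_CM**2
x = 1 - M_JPSI**2 / s

# Untagged (full angular range) lowest-order radiator, assumed here:
# W(x) ~ (alpha / pi x)(2 - 2x + x^2)(L - 1), L = ln(s / m_e^2)
L = np.log(s / M_E**2)
W = ALPHA / (np.pi * x) * (2 - 2 * x + x**2) * (L - 1)

gee_bmm = 0.3301e-6    # Gamma_ee * B_mumu from the fit above, in GeV
sigma = 12 * np.pi**2 * gee_bmm / (s * M_JPSI) * W * HBARC2
print(f"sigma_J/psi ~ {sigma:.0f} fb")   # ~2.1e3 fb, cf. 2124 +- 49 +- 47 fb

# Widths, using B_mumu ~ 5.88% and B_ee ~ 5.94% (approximate inputs)
B_MUMU, B_EE = 0.0588, 0.0594
print(f"Gamma_ee ~ {gee_bmm / B_MUMU * 1e6:.2f} keV")           # ~5.61 keV
print(f"Gamma    ~ {gee_bmm / (B_MUMU * B_EE) * 1e6:.1f} keV")  # ~94.5 keV
```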
The BABAR measurement was improved by CLEO. With an integrated luminosity of 281 pb⁻¹ collected at √s = 3.77 GeV, about 13 × 10³ ISR-produced J/ψ → µ+µ− events were selected (compared to 8 × 10³ J/ψ → µ+µ− events in the BABAR measurement based on an 88 fb⁻¹ data sample). Since CLEO used the untagged approach, the FSR contribution to the nonresonant cross section was significantly reduced. The second important improvement of the method was that the J/ψ line shape was extracted from data. To do this, the data collected by CLEO at the ψ(2S) resonance were used to select a clean sample of ψ(2S) → J/ψπ+π−, J/ψ → µ+µ− events. The CLEO result for the product of the J/ψ parameters is Γ_ee·B_µµ = 0.3384 ± 0.0058 ± 0.0071 keV. Despite the analysis improvements described above, the systematic error of the CLEO result was not reduced compared to the BABAR measurement. However, the sources of the systematic uncertainties are different for the two measurements, so the results can be considered completely independent.
The data collected at √s = 3.77 GeV were also used by CLEO to study ISR ψ(2S) production (Adam et al., 2006). ISR ψ(2S) events were selected in the decay modes ψ(2S) → π+π−J/ψ, π0π0J/ψ, and ηJ/ψ, with the J/ψ decaying to a lepton pair, e+e− or µ+µ−. From the number of ψ(2S) events the products Γ(ψ(2S) → e+e−)B(ψ(2S) → XJ/ψ), where X = π+π−, π0π0, and η, were obtained. Since the branching fractions for these decay modes are known with 1.5-2% accuracy (Amsler et al., 2008), the measurement of the products can be used to improve the accuracy of the ψ(2S) electronic width. The CLEO result dominates the current PDG value Γ(ψ(2S) → e+e−) = 2.36 ± 0.04 keV (Amsler et al., 2008).
B. Decays to light mesons and baryons
A systematic study of the J/ψ and ψ(2S) decays to light hadrons was performed in the BABAR experiment (Aubert et al., 2004b, 2005b, 2006a,b, 2007c). An example of the J/ψ signal for J/ψ → π+π−π0, one of the most probable J/ψ decay modes, is shown in Fig. 75 (Aubert et al., 2004b). It is seen that the nonresonant background is small. From the number of events at the peak, the product Γ(J/ψ → e+e−)B(J/ψ → 3π) = 0.122 ± 0.005 ± 0.008 keV was determined. Using the J/ψ electronic width value, known from the ISR study of the J/ψ → µ+µ− decay, the branching fraction B(J/ψ → 3π) = (2.18 ± 0.19)% was calculated, which differed by about 50% from the PDG value, (1.47 ± 0.13)%, available when the analysis (Aubert et al., 2004b) was carried out. A similar deviation was observed in the BES experiment (Bai et al., 2004), where B(J/ψ → 3π) = (2.10 ± 0.11)% was obtained.
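The branching fraction quoted above is just the measured product divided by the electronic width obtained in Sec. V.A:

$$B(J/\psi\to 3\pi) = \frac{\Gamma_{ee}B(J/\psi\to 3\pi)}{\Gamma_{ee}} = \frac{0.122\ \mathrm{keV}}{5.61\ \mathrm{keV}} \approx 2.18\%.$$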
Another example of a J/ψ decay mode with a rather high probability, which was studied using ISR, is shown in Fig. 76, where the signals of J/ψ → 2(π+π−)π0 and ψ(2S) → 2(π+π−)π0 are clearly seen (Aubert et al., 2007c). Again, by determining the number of peak events over the nonresonant background and using Eq. (16), the product Γ(J/ψ → e+e−)B(J/ψ → π+π−π+π−π0) = (3.03 ± 0.05 ± 0.18) × 10⁻⁴ keV was determined. The value of the J/ψ → π+π−π+π−π0 branching fraction obtained from this product, (5.46 ± 0.09 ± 0.34)%, differed by about 5σ from the PDG value, (3.37 ± 0.26)%, available when the analysis (Aubert et al., 2007c) was carried out.

As was shown in Sec. III.H, the five-pion final state includes production of many intermediate resonances. All of them are seen in the J/ψ → 5π decay. This may be a source of systematic error unaccounted for in previous measurements of the decay. The detection efficiency in the ISR method with a tagged photon is weakly sensitive to the dynamics of the J/ψ → 5π decay. The model uncertainty in the detection efficiency for the BABAR measurement (Aubert et al., 2007c) was estimated from the difference in efficiency values for phase-space generated five-pion events and events generated for the ωπ+π− or ηπ+π− final states. It was found to be less than 3%. A part of the events from the ψ(2S) peak comes from the decay chain ψ(2S) → J/ψπ+π− → 2(π+π−)π0 with the J/ψ decaying to three pions. To select these events, the π+π−π0 combination with the invariant mass closest to the J/ψ mass is chosen. Figure 77(a) shows the scatter plot of this three-pion mass versus the five-pion mass. A clear signal from the above decay chain is seen. The five-pion mass spectrum for events with the π+π−π0 mass within the ±0.05 GeV/c² window around the J/ψ mass is shown in Fig. 77(b). From the fit to the mass spectrum with a double-Gaussian function the number of detected ψ(2S) → J/ψπ+π− → 2(π+π−)π0 events was determined to be 256 ± 17, and the triple product B(ψ(2S) → J/ψπ+π−)B(J/ψ → π+π−π0)Γ(ψ(2S) → e+e−) = (1.86 ± 0.12 ± 0.11) × 10⁻² keV was obtained. By using the world-average Γ(ψ(2S) → e+e−) and B(ψ(2S) → J/ψπ+π−) values (Amsler et al., 2008), the branching fraction B(J/ψ → π+π−π0) = (2.36 ± 0.16 ± 0.16)% was obtained, which is in good agreement with the BABAR measurement in the 3π final state: B(J/ψ → π+π−π0) = (2.18 ± 0.19)% (Aubert et al., 2004b). This, in particular, confirms the correctness of the normalization procedure used for the measurement of B(J/ψ → 5π).

Table III presents measurements of the J/ψ and ψ(2S) decay rates performed with the BABAR detector via ISR for many multihadron final states. The current PDG values are shown in the last column (Nakamura et al., 2010). In most cases these values are close to those of BABAR, emphasizing their importance. Note also that in a few cases the scale factor is significantly higher than one, indicating a large difference between the BABAR measurement and previous results.

TABLE III Measured products Γ_ee·B (in eV) and the corresponding J/ψ or ψ(2S) branching fractions B (in units of 10⁻³) from the BABAR ISR measurements (Aubert et al., 2004b, 2005b, 2006a,b, 2007c), compared to the current world-average values (Nakamura et al., 2010). Γ_ee denotes the electronic width of the parent resonance.

Measured quantity | Value (eV) | B, BABAR (10⁻³) | B, PDG-2010 (10⁻³)
Γ_ee B(J/ψ→π+π−π0) | 122 ± 5 ± 8 | 21.8 ± 1.0 ± 1.6 | 20.7 ± 1.2 (S = 1.2)
Γ_ee B(J/ψ→2(π+π−)) | 19.5 ± 1.4 ± 1.3 | 3.70 ± 0.26 ± 0.37 | 3.55 ± 0.23
Γ_ee B(J/ψ→2(π+π−)π0) | 303 ± 5 ± 18 | 54.6 ± 0.9 ± 3.4 | 41 ± 5 (S = 2.4)
Γ_ee B(J/ψ→3(π+π−)) | 23.7 ± 1.6 ± 1.4 | 4.40 ± 0.29 ± 0.29 | 4.3 ± 0.4
Γ_ee B(J/ψ→2(π+π−π0)) | 89 ± 5 ± 10 | 16.5 ± 1.0 ± 1.8 | 16.2 ± 2.1
Γ_ee B(J/ψ→K+K−π+π−) | 37.9 ± 0.8 ± 1.1 | 6.84 ± 0.15 ± 0.27 | 6.6 ± 0.5
Γ_ee B(J/ψ→K+K−π0π0) | 11.8 ± 0.8 ± 0.9 | 2.12 ± 0.15 ± 0.18 | 2.45 ± 0.32
Γ_ee B(J/ψ→K+K−K+K−) | 4.00 ± 0.33 ± 0.29 | 0.72 ± 0.06 ± 0.05 | 0.76 ± 0.09
Γ_ee B(J/ψ→K+K−π+π−π0) | 107 ± 4 ± 6 | 19.2 ± 0.8 ± 1.5 | 17.9 ± 2.9 (S = 2.2)
Γ_ee B(J/ψ→K+K−2(π+π−)) | 27.5 ± 2.3 ± 1.7 | 5.09 ± 0.42 ± 0.35 | 4.7 ± 0.7 (S = 1.3)
Γ_ee B(J/ψ→ωπ+π−)B(ω→3π) | 47.8 ± 3.1 ± 3.2 | 9.7 ± 0.6 ± 0.6 | 8.6 ± 0.7 (S = 1.1)
Γ_ee B(J/ψ→ωπ+π−π0)B(ω→3π) | 22 ± 3 ± 2 | 4.1 ± 0.6 ± 0.4 | 4.0 ± 0.7
Γ_ee B(J/ψ→ηπ+π−)B(η→3π) | 0.51 ± 0.22 ± 0.03 | 0.40 ± 0.17 ± 0.03 | 0.40 ± 0.17
Γ_ee B(J/ψ→2(π+π−)η)B(η→γγ) | 5.16 ± 0.85 ± 0.39 | 2.35 ± 0.39 ± 0.20 | 2.29 ± 0.24
Γ_ee B(J/ψ→φη)B(φ→K+K−)B(η→3π) | 0.84 ± 0.37 ± 0.05 | 1.4 ± 0.6 ± 0.1 | 0.75 ± 0.08 (S = 1.5)
Γ_ee B(J/ψ→K+K−η)B(η→2γ) | 4.8 ± 0.7 ± 0.3 | 0.87 ± 0.13 ± 0.07 | 0.87 ± 0.15
Γ_ee B(J/ψ→ωK+K−)B(ω→3π) | 3.3 ± 1.3 ± 0.2 | 1.36 ± 0.50 ± 0.10 | 1.36 ± 0.51
Γ_ee B(J/ψ→K+K−π+π−η)B(η→γγ) | 10.2 ± 1.3 ± 0.8 | 4.7 ± 0.6 ± 0.4 | 4.67 ± 0.70
Γ_ee B(J/ψ→(K*0K̄*₂0+c.c.))B(K*0→Kπ)B(K̄*₂0→Kπ) | 8.59 ± 0.36 ± 0.27 | 6.98 ± 0.29 ± 0.21 | 6.0 ± 0.6
Γ_ee B(J/ψ→(K0K̄*(892)0+c.c.)) | 26.6 ± 2.5 ± 1.5 | 4.8 ± 0.5 ± 0.3 | 4.39 ± 0.31
Γ_ee B(J/ψ→(K+K̄*(892)−+c.c.)) | 29.0 ± 1.7 ± 1.3 | 5.2 ± 0.3 ± 0.2 | 5.12 ± 0.30
Γ_ee B(J/ψ→K*0K̄*0)B(K*0→K+π−)B(K̄*0→K−π+) | 0.57 ± 0.15 ± 0.03 | 0.23 ± 0.06 ± 0.01 | 0.23 ± 0.07
Γ_ee B(J/ψ→φπ+π−)B(φ→K+K−) | 2.19 ± 0.23 ± 0.07 | 0.81 ± 0.08 ± 0.03 | 0.94 ± 0.09 (S = 1.2)
Γ_ee B(J/ψ→φπ0π0)B(φ→K+K−) | 1.36 ± 0.27 ± 0.07 | 0.50 ± 0.10 ± 0.03 | 0.56 ± 0.16
Γ_ee B(J/ψ→φK+K−)B(φ→K+K−) | 2.26 ± 0.26 ± 0.16 | 1.67 ± 0.19 ± 0.12 | 1.83 ± 0.24
Γ_ee B(J/ψ→φf0)B(φ→K+K−)B(f0→π+π−) | 0.69 ± 0.11 ± 0.05 | 0.38 ± 0.06 ± 0.02 | 0.32 ± 0.09 (S = 1.9)
Γ_ee B(J/ψ→φf0)B(φ→K+K−)B(f0→π0π0) | 0.48 ± 0.12 ± 0.05 | 0.53 ± 0.13 ± 0.05 | 0.32 ± 0.09 (S = 1.9)
Γ_ee B(J/ψ→φ2(π+π−))B(φ→K+K−) | 4.7 ± 0.9 ± 0.9 | 1.77 ± 0.35 ± 0.12 | 1.66 ± 0.23
Γ_ee B(ψ(2S)→2(π+π−)π0) | 29.7 ± 2.2 ± 1.8 | 12.0 ± 0.9 ± 0.7 | 2.9 ± 1.0 (S = 4.6)
Γ_ee B(ψ(2S)→2(π+π−π0)) | 11.2 ± 3.3 ± 1.3 | 5.3 ± 1.6 ± 0.6 | 5.3 ± 1.7
Γ_ee B(ψ(2S)→K+K−2(π+π−)) | 4.4 ± 2.1 ± 0.3 | 2.1 ± 1.0 ± 0.2 | 1.9 ± 0.9
Γ_ee B(ψ(2S)→J/ψπ+π−)B(J/ψ→3π) | 18.6 ± 1.2 ± 1.1 | 23.6 ± 1.6 ± 1.6 | 20.7 ± 1.2 (S = 1.2)
Γ_ee B(ψ(2S)→ωπ+π−)B(ω→3π) | 2.69 ± 0.73 ± 0.16 | 1.22 ± 0.33 ± 0.07 | 0.73 ± 0.12
Γ_ee B(ψ(2S)→J/ψη)B(η→3π)B(J/ψ→µ+µ−) | 1.11 ± 0.33 ± 0.07 | 33.4 ± 9.9 ± 2.0 | 32.8 ± 0.7
Γ_ee B(ψ(2S)→2(π+π−)η)B(η→γγ) | 1.13 ± 0.55 ± 0.08 | 1.2 ± 0.6 ± 0.1 | 1.2 ± 0.6
Γ_ee B(ψ(2S)→K+K−π+π−π0) | 4.4 ± 1.3 ± 0.3 | 1.8 ± 0.5 ± 0.1 | 1.26 ± 0.09 (S = 1.2)
Γ_ee B(ψ(2S)→K+K−π+π−η)B(η→γγ) | 1.2 ± 0.7 ± 0.1 | 1.3 ± 0.7 ± 0.1 | 1.3 ± 0.7
Γ_ee B(ψ(2S)→K+K−π+π−) | 1.92 ± 0.30 ± 0.06 | 0.81 ± 0.13 ± 0.03 | 0.75 ± 0.09 (S = 1.9)
Γ_ee B(ψ(2S)→K+K−π0π0) | 0.60 ± 0.31 ± 0.03 | 0.25 ± 0.13 ± 0.02 | 0.25 ± 0.13
Γ_ee B(ψ(2S)→K+K−K+K−) | 0.22 ± 0.10 ± 0.02 | 0.09 ± 0.04 ± 0.01 | 0.060 ± 0.014
Γ_ee B(ψ(2S)→φπ+π−)B(φ→K+K−) | 0.27 ± 0.09 ± 0.02 | 0.35 ± 0.12 ± 0.01 | 0.117 ± 0.029 (S = 1.7)
Γ_ee B(ψ(2S)→φf0)B(φ→K+K−)B(f0→π+π−) | 0.17 ± … | … | …

FIG. 75 The 3π mass spectrum for selected e+e− → π+π−π0γ data events in the vicinity of the J/ψ resonance (Aubert et al., 2004b).

FIG. 76 The 2(π+π−)π0 mass distribution for ISR e+e− → 2(π+π−)π0γ events in the J/ψ-ψ(2S) mass region (Aubert et al., 2007c).
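For orientation, the branching fraction derived from the triple product above follows by dividing out the world-average values; taking Γ(ψ(2S) → e+e−) ≈ 2.36 keV (Sec. V.A) and B(ψ(2S) → J/ψπ+π−) ≈ 0.34 (an assumed value for the contemporaneous world average):

$$B(J/\psi\to\pi^+\pi^-\pi^0) \approx \frac{1.86\times 10^{-2}\ \mathrm{keV}}{2.36\ \mathrm{keV}\times 0.34} \approx 2.3\%,$$

consistent with the quoted (2.36 ± 0.16 ± 0.16)%.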
As can be seen from Table III, the J/ψ decay rates to even numbers of pions (4π, 6π, ...) are much smaller compared to the decays to odd numbers of pions. Indeed, a strong decay of the J/ψ to an even number of pions is forbidden by G-parity conservation. It is expected that this decay is dominated by a single-photon transition, J/ψ → γ* → nπ. No such suppression occurs for the strong J/ψ decays to other modes, such as to three or five pions, which mainly proceed through three gluons.

FIG. 78 The 2(π+π−) mass distribution for ISR-produced e+e− → 2(π+π−) events in the mass region around the J/ψ and ψ(2S) (Aubert et al., 2005b); there are clear signals at the J/ψ and ψ(2S) mass positions. The latter is dominated by the ψ(2S) → J/ψπ+π− → µ+µ−π+π− transition; selected events with two muons from the J/ψ decays are shown by the shaded histogram.

The 2(π+π−) and 3(π+π−) mass spectra for events of the ISR processes e+e− → 2(π+π−)γ and e+e− → 3(π+π−)γ, in the mass regions of the J/ψ and ψ(2S) resonances, are shown in Figs. 78 and 79, respectively. From the fits to the mass spectra the numbers of J/ψ and ψ(2S) events and also the level of the nonresonant background are determined. The latter is proportional to the value of the nonresonant e+e− → 2(π+π−) or e+e− → 3(π+π−) cross section. In the BABAR paper (Aubert et al., 2006b) the ratio
$$R_{J/\psi} = \frac{6\pi^2\,\Gamma(J/\psi\to e^+e^-)\,B(J/\psi\to f)/m^2_{J/\psi}}{\sigma_{e^+e^-\to f}(m_{J/\psi})} \tag{19}$$
is calculated, where σ_{e+e−→f} is the value of the nonresonant cross section to the final state f at the J/ψ mass. The numerator of the ratio represents the integral over the J/ψ excitation curve. The R_{J/ψ} values for the 4π, 6π, 2K2π, 2K4π, and 4K final states are listed in Table IV together with the R_{J/ψ} value obtained for the µ+µ− final state. The R_{J/ψ} values for the 4π and 6π final states are closer to that for µ+µ− compared to the final states with kaons and indicate that the single-photon exchange dominates for the J/ψ decays into these modes. For the J/ψ decays to the final states with kaons, which can contain a sizeable isoscalar component, the single-photon transition is expected to be less dominant, as indicated by the larger central values of the ratios.

FIG. 79 The 3(π+π−) mass distribution for ISR-produced e+e− → 3(π+π−) events in the mass region around the J/ψ and ψ(2S) (Aubert et al., 2006b); there are clear signals at the J/ψ and ψ(2S) mass positions.
TABLE IV Ratios of the J/ψ partial production rates to continuum cross sections, R_{J/ψ} (see Eq. (19)). The result for µ+µ− is from Ref. (Aubert et al., 2004a). The result for 3(π+π−) is from Ref. (Aubert et al., 2006b) and the results for 2(π+π−), K+K−π+π− and K+K−K+K− are from Ref. (Aubert et al., 2005b).
Final state | R_{J/ψ} (MeV)
2(π+π−) | 85.1 ± 7.9
3(π+π−) | 106 ± 10
2(π+π−π0) | 99.1 ± 6.5
K+K−2(π+π−) | 122 ± 10
K+K−π+π− | 166 ± 19
K+K−K+K− | 138 ± 32
µ+µ− | 84.12 ± 2.69
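The µ+µ− entry of Table IV can be reproduced, up to radiative corrections, from the Sec. V.A product Γ_ee·B_µµ and the lowest-order QED continuum cross section:

```python
import numpy as np

ALPHA = 1 / 137.035999
M_JPSI = 3.0969                       # GeV
gee_bmm = 0.3301e-6                   # Gamma_ee * B_mumu in GeV (Sec. V.A)

integral = 6 * np.pi**2 * gee_bmm / M_JPSI**2        # numerator of Eq. (19), GeV^-1
sigma_mumu = 4 * np.pi * ALPHA**2 / (3 * M_JPSI**2)  # lowest-order QED, GeV^-2
print(f"R_J/psi(mu mu) ~ {integral / sigma_mumu * 1e3:.1f} MeV")  # ~87.6 MeV
```

The few-percent difference from the 84.12 ± 2.69 MeV in Table IV is at the level of the radiative and vacuum-polarization corrections that this naive estimate ignores.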
VI. ISR STUDIES IN THE CHARMONIUM REGION
In this chapter we discuss recent progress in charmonium spectroscopy achieved mainly due to the application of the ISR method; see also the recent reviews (Brambilla et al., 2011; Pakhlova et al., 2010). We start with the description of the open-charm final states and later address the so-called charmonium-like states, presumably states with hidden charm.
A. Final states with open charm
For a quarter of a century our knowledge of the vector charmonia above the threshold of open charm production (throughout this section referred to as ψ states) was based on the pioneering experiments of Mark-I (Siegrist et al., 1976) and DASP (Brandelik et al., 1978). Even such basic parameters of the ψ mesons as the mass, width and leptonic width were known with large uncertainties, mainly determined by the low statistics of the old experiments. In Ref. (Seth, 2005) an attempt was made to use the updated information on the R values from Crystal Ball (Osterheld et al., 1986) and BES (Bai et al., 2002) to improve these parameters. Finally, the BES Collaboration performed a global fit of the data on R collected by BES in the energy range from 3.7 to 5 GeV (Ablikim et al., 2008). In some cases the obtained values of the mass, width and leptonic width for the ψ states differ significantly from the older values and still suffer from large uncertainties caused by insufficient statistics and by model dependence, primarily due to the numerous thresholds of charm production opening in this energy region. It became clear that serious progress would require detailed exclusive studies, which recently became possible due to the ISR analyses of BABAR and Belle based on very large integrated luminosities.
Exclusive e+e− cross sections for hadronic final states containing charm mesons in the √s = 3.7-5 GeV energy range were measured by BABAR (Aubert et al., 2007e, 2009b) and Belle (Pakhlova et al., 2007, 2008a,b,c, 2009) using ISR to reach the charmonium region. Note that in these analyses Belle systematically employs a partial reconstruction technique to increase the detection efficiency and suppress background.
The DD cross sections in the entire charm energy range from Belle (Pakhlova et al., 2008a) and BABAR (Aubert et al., 2007e) are shown in Figs. 80(a),(b) and are consistent with each other. Both exhibit clear evidence of structures near 4.1 and 4.4 GeV/c 2 . They also observe a structure (Figs. 80(a) and (b)) at 3900 MeV which must be taken into account to describe the DD cross section and R in the region between the ψ(3770) and ψ(4040). This enhancement is not considered as a new cc resonance, as it is qualitatively consistent with the energy dependence of the sum of the cross sections for various channels opening in this energy range predicted in a coupled-channel model (Eichten et al., 1980). The DD * cross sections from Belle (Pakhlova et al., 2007) and BABAR (Aubert et al., 2009b) shown in Figs. 80(c),(d) exhibit a single broad peak near threshold (close to the ψ(4040) position), whereas the D * D * results from Belle (Pakhlova et al., 2007) and BABAR (Aubert et al., 2009b) (Figs. 80(e),(f)) feature several local maxima and minima in this energy range.
BABAR (Aubert et al., 2009b) performed unbinned maximum likelihood fits to the DD, DD * , and D * D * spectra.
The expected ψ signals were parameterized by p-wave relativistic Breit-Wigner (RBW) functions with their parameters fixed to the PDG08 values (Amsler et al., 2008). An interference between the resonances and the non-resonant contributions was required in the fit. The computed ratios of the branching fractions for the ψ resonances and the quark model predictions are presented in Table V. The BABAR results deviate from some of the theoretical expectations, which often differ from each other. The e+e− → D0D−π+ cross section measured by Belle (Pakhlova et al., 2008b) is shown in Fig. 80(g) and exhibits an unambiguous ψ(4415) signal. A study of the resonant structure shows clear signals for the D̄*₂(2460)0 and D*₂(2460)+ mesons and constructive interference between the neutral D0D̄*₂(2460)0 and the charged D−D*₂(2460)+ decay amplitudes. Belle performed a likelihood fit to the D0D−π+ mass distribution with a ψ(4415) signal parameterized by an S-wave RBW function. The significance of the signal is ∼10σ and the peak mass and total width are in good agreement with the PDG06 (Yao et al., 2006) values and the BES fit results (Ablikim et al., 2008). The product of the branching fractions B(ψ(4415) → DD̄*₂(2460)) × B(D̄*₂(2460) → D̄π+) was found to be between 10% and 20%, depending on the fit model.

TABLE V Ratios of branching fractions for the ψ resonances (Aubert et al., 2009b). Theoretical expectations are from models denoted ³P₀ (Barnes et al., 2005), C³ (Eichten et al., 2006), and ρKρ (Swanson et al., 2006).

The Belle collaboration has also measured the cross section of the process e+e− → Λ+c Λ−c (Pakhlova et al., 2008c). Because of the large number of Λc decay channels with small branching fractions, full reconstruction of both Λc baryons is not effective. The strategy of the search for Λ+c Λ−c γ events at Belle (Pakhlova et al., 2008c) was the following: one of the Λc baryons was reconstructed using three decay modes, pK0S, pK−π+, and Λπ+. Then a peak at the Λ−c mass was searched for in the spectrum of masses recoiling against the Λ+c γ system. This peak presumably corresponds to the process e+e− → Λ+c Λ−c γ. The resulting exclusive cross section of the process e+e− → Λ+c Λ−c is shown in Fig. 80(i). The cross section is nearly flat from threshold up to 5.4 GeV/c² except for the region just above threshold, where a peak with mass M = 4634^{+10}_{-30} MeV/c², width Γ = 92^{+40}_{-27} MeV and significance of 8.2σ is observed. The state is denoted X(4630), and the product of the branching fractions measured for it is B(e+e−) × B(ΛcΛ̄c) = (0.68 ± 0.33) × 10⁻⁶. The nature of this enhancement remains unclear. Although both the mass and width of the X(4630) are consistent within errors with those of another Belle state, Y(4660), which was found in ψ(2S)ππ decays via ISR and is described in the next section (Wang et al., 2007), this could be a coincidence and does not exclude other interpretations.
Although in general the energy behavior of the exclusive cross sections from BABAR and Belle qualitatively follows the expectations of the coupled-channel model (Eichten et al., 1980), some features are not reproduced by theory. This is confirmed by the measurement of CLEO (Cronin-Hennessy et al., 2009), which scanned the energy range between 3.97 and 4.26 GeV and reported the cross sections for final states consisting of two charm mesons (DD̄, D*D̄, D*D̄*, D+sD−s, D*+sD−s, and D*+sD*−s) as well as for those in which the charm-meson pair is accompanied by a pion. The updated potential model predictions of Eichten (Eichten et al., 1980, 2006) fail to describe many features of the data.
B. New charmonium-like states
The first observation of an unexpected vector charmonium-like state was made by BABAR (Aubert et al., 2005a) in ISR production of Y (4260) → J/ψπ + π − , which was later updated (Aubert et al., 2008a) with twice the data, as shown in Fig. 81. CLEO (He et al., 2006) and Belle (Yuan et al., 2007) confirmed the BABAR result, but Belle also found a smaller, broader structure at 4008 MeV/c 2 , as seen in Fig. 82.
Aside from the lower mass state, for which the updated BABAR (Aubert et al., 2008a) analysis placed an upper limit, the three sets of measurements were quite consistent in mass and width, as shown in Table VI, but only roughly so in strength. BABAR (Aubert et al., 2007a) found one more apparent enhancement Y (4360) in ψ(2S)π + π − , which Belle (Wang et al., 2007) measured with somewhat larger mass and smaller width, as seen in Table VII. Belle also found a second structure near 4660 MeV/c 2 in the same final state, as seen in Fig. 83. (A combined fit (Liu et al., 2008) to Belle and BABAR ψ(2S)π + π − data found consistency between them.) Because dipion transitions between vector quarkonia are commonplace for charmonium and bottomonium, it was natural to ascribe the Y 's to excited vector charmonia. A number of additional features of these states are in conflict with this hypothesis. Only one, Y (4660), is remotely near a predicted 1 −− cc state (1 3 D 1 ). The Y (4260) and Y (4360) did not show up in inclusive hadronic cross section (R) measurements (Bai et al., 2002), as would be expected of such states (there is no fine-grained R-scan data near Y (4660)).
A comparison of the measured J/ψπ+π− and total hadronic cross sections in the √s ≃ 4260 MeV region yields a lower bound Γ(Y → J/ψπ+π−) > 508 keV at 90% C.L., an order of magnitude higher than expectations for conventional vector charmonium states (Mo et al., 2006). Charmonium would also feature dominant open-charm decays, exceeding those of dipion transitions by a factor expected to be ≳ 100, because such is the case for the ψ(3770) and ψ(4160). As summarized in Table VIII, no such evidence has been found, significantly narrowing any window for either charmonia or, in some cases, quark-gluon hybrid interpretations.
CLEO (Coan et al., 2006) studied direct production of the Y(4260) in e+e− collisions and identified the only non-J/ψπ+π− decay modes seen so far, J/ψπ0π0 and J/ψK+K−, occurring at roughly half and one-sixth, respectively, of the J/ψπ+π− rate. The J/ψK+K− decay mode was also observed by Belle. Any interpretation of these vector states will have to explain not only their masses, widths, and manifest reluctance to materialize in open charm or unflavored light meson final states, but also their dipion invariant mass spectra, which exhibit curious structures, as seen for the Y(4260) (Liu et al., 2008) in Fig. 84 (Aubert et al., 2008a), for the Y(4360) in Fig. 85(a) (Wang et al., 2007), and for the Y(4660) in Fig. 85(b) (Wang et al., 2007). The first shows a distinctly non-phase-space double-hump structure which is qualitatively confirmed by Belle (Yuan et al., 2007), the second exhibits a majority of events at higher masses, and the third indicates a quite dominant f0(980) component.

FIG. 82 The invariant mass of J/ψπ+π− candidates produced in initial-state radiation studied by Belle (Yuan et al., 2007), with J/ψ sidebands already subtracted, unlike Fig. 81. Points with error bars represent data, the solid curve shows the best fit to the data with two resonances including interference with a floating phase, and the dashed and dashed-dot curves show the two pairs of individual resonance contributions for the two equally probable best-fit phases.
VII. SOME IMPLICATIONS FOR THEORY AND PERSPECTIVES
The progress in precision of the low-energy data on e+e− → hadrons achieved recently due to ISR studies allows an update of the estimation of the hadronic contribution to the muon anomalous magnetic moment to be performed. It is well known that the precision of the Standard Model prediction of this quantity is limited by the contributions from strong interactions. These are conventionally separated into a theory-driven light-by-light contribution, see a recent review in (Prades et al., 2009), and two experiment-driven vacuum polarization contributions, the dominant lowest-order and higher-order parts. The lowest-order term can be calculated from a dispersion integral (Bouchiat and Michel, 1961; Gourdin and de Rafael, 1969) in which the integrand contains a combination of experimental data on the cross sections of e+e− → hadrons and perturbative QCD. The integral ranges from the threshold of hadron production, i.e., from the π0γ threshold, to infinity:
$$a_\mu^{\mathrm{had,LO}} = \left(\frac{\alpha m_\mu}{3\pi}\right)^2 \int_{m_\pi^2}^{\infty} ds\, \frac{R(s)\hat{K}(s)}{s^2}. \tag{20}$$
The function K̂(s) in the integration kernel is rather smooth, whereas the factor 1/s² emphasizes the low-energy part of the spectrum. Of particular importance is the process e+e− → π+π−(γ), which provides about 73% of the lowest-order hadronic contribution and about 62% of its total quadratic error.

TABLE IX Contributions to a_µ^{had,LO} integrated from threshold to 1.8 GeV for the measurements before BABAR and including the new BABAR results. For the π+π−π0 final state the contribution of the ω and φ mesons is excluded. All values are in units of 10⁻¹⁰.

Process | Before BABAR | With BABAR
π+π−π0 | 2.45 ± 0.26 | 3.25 ± 0.09
2π+2π− | 14.20 ± 0.90 | 13.09 ± 0.44
3π+3π− | 0.10 ± 0.10 | 0.11 ± 0.02
2π+2π−2π0 | 1.42 ± 0.30 | 0.89 ± 0.09

In most cases the new ISR results from BABAR are consistent with previous measurements and have comparable or better accuracy. However, these results do not always agree with the corresponding old datasets. For example, from Fig. 23 discussed in Ch. 3 it is clear that the cross section of the process e+e− → π+π−π0 obtained by BABAR (Aubert et al., 2004b) is consistent with that of SND (Achasov et al., 2002) below √s = 1.4 GeV, but is much higher than that of DM2 (Antonelli et al., 1992) above this energy. The energy dependence of the cross section observed by DM2 is also inconsistent with other measurements (see the discussion of this problem in Ref. (Achasov et al., 2002)) and with the existence of the rather well established ω(1420) and ω(1650) resonances. The contribution of this process to a_µ^{had,LO}, which was equal to (2.45 ± 0.26)·10⁻¹⁰ before BABAR (Davier et al., 2003b), becomes (3.25 ± 0.09)·10⁻¹⁰ after the new results are taken into account (Davier, 2007). For the process e+e− → 2π+2π−, whose cross section is one of the largest above 1 GeV, the new BABAR measurement (Aubert et al., 2005b) is in good agreement with the older results, and after taking them into account the precision of the corresponding contribution improves by a factor of two. Another example is the measurement of the two six-pion final states (Aubert et al., 2006b). In Figs. 86 and 87 we compare the cross sections from BABAR with those of older measurements. It is clear that the improvement is dramatic because the older measurements were too imprecise to make a reasonable prediction. We summarize the discussed contributions to a_µ^{had,LO} integrated from threshold to 1.8 GeV for the measurements before BABAR (see the references in Ref. (Davier et al., 2003b)) and with BABAR in Table IX (Davier, 2007).
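Returning to Eq. (20): a minimal numerical sketch of the dispersion integral, using the standard exact one-loop kernel (related to Eq. (20) via K̂(s) = 3sK(s)/m_µ²) and a crude ρ-dominance stand-in for the measured two-pion cross section; this is illustrative only, not the evaluations cited above:

```python
import numpy as np
from scipy.integrate import quad

ALPHA = 1 / 137.035999
M_MU, M_PI = 0.1056584, 0.1395704     # GeV

def kernel_K(s):
    # Exact one-loop kernel: K(s) = int_0^1 dx x^2(1-x) / (x^2 + (1-x) s/m_mu^2)
    f = lambda x: x**2 * (1 - x) / (x**2 + (1 - x) * s / M_MU**2)
    return quad(f, 0.0, 1.0)[0]

def R_2pi(s):
    # R(s) for e+e- -> pi+pi- with a toy rho-dominance form factor (illustrative)
    m_rho, g_rho = 0.775, 0.149
    f_pi_sq = m_rho**4 / ((s - m_rho**2)**2 + m_rho**2 * g_rho**2)
    beta = np.sqrt(1 - 4 * M_PI**2 / s)
    return 0.25 * beta**3 * f_pi_sq

# a_mu^{had,LO}[2pi] = (alpha^2 / 3 pi^2) * int ds K(s) R(s) / s, equivalent to Eq. (20)
value = (ALPHA**2 / (3 * np.pi**2)) * quad(
    lambda s: kernel_K(s) * R_2pi(s) / s, 4 * M_PI**2, 1.8**2)[0]
print(f"a_mu[2pi, <1.8 GeV] ~ {value:.1e}")   # of order 1e-8
```

The crude form factor reproduces only the scale of the quoted (503.5 ± 3.5)·10⁻¹⁰; the precision evaluations require the measured cross sections themselves.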
The calculation using all multibody modes measured by BABAR (Aubert et al., 2004b, 2005b, 2006b, 2007c) together with the relevant older measurements (for a complete list of references see Ref. (Davier et al., 2003b)) gives for the contribution of the hadronic continuum from threshold to 1.8 GeV the value (54.2 ± 1.9)·10⁻¹⁰. It is consistent with the result of the older calculation (Davier et al., 2003b), (55.0 ± 2.6)·10⁻¹⁰, and more precise.
A very important step in the calculations of a_µ^{had,LO} has recently been made in Ref. (Davier et al., 2010a), which for the first time took into account the BABAR measurement of the reaction e+e− → π+π− (Aubert et al., 2009a), and finally in Ref. (Davier et al., 2011), which also accounted for the KLOE results on the π+π− final state (Ambrosino et al., 2009, 2011). It uses the whole set of experimental data in the energy range up to 1.8 GeV, the region dominated by hadronic resonances, and perturbative QCD for the contribution of the quark continuum beyond that energy. In particular, they modify the treatment of the ω(782) and φ(1020) resonances using the data from Ref. (Aubert et al., 2004b), and include preliminary data of BABAR on e+e− → π+π−2π0 (Druzhinin, 2007). They also use isospin invariance to estimate the contributions from several unmeasured final states with six pions and KK̄(nπ), relating them to those of known channels. For other channels they refer to earlier calculations (Davier et al., 2003a,b; Davier, 2007).

FIG. 86 The cross section of the process e+e− → 2π+2π−2π0 (Aubert et al., 2006b; Bacci et al., 1981; Baldini et al., 1988; Cosme et al., 1979; Esposito et al., 1981).

FIG. 87 The cross section of the process e+e− → 3π+3π− (Aubert et al., 2006b; Baldini et al., 1988; Jean-Marie et al., 1976).

TABLE X Contributions to a_µ^{had,LO}[ππ] and the B_CVC predictions obtained from the e+e− data (Davier et al., 2011).
Contributions to a_µ^{had,LO}[ππ] from the individual π+π− cross sections measured at BABAR (Aubert et al., 2009a), KLOE (Ambrosino et al., 2009), CMD-2 (Akhmetshin et al., 2004a, 2007), and SND (Achasov et al., 2006) are given in the middle column of Table X (Davier et al., 2011). In the right column we list the corresponding CVC predictions for the τ± → π±π0ντ branching fraction corrected for isospin-breaking effects (Davier et al., 2010b). Here the first error is experimental and the second estimates the uncertainty in the isospin-breaking corrections. For each experiment, all available data in the energy range from threshold to 1.8 GeV (m_τ for B_CVC) are used, and the missing part is completed by the combined e+e− data.
The average of the four separate results gives a_µ^{had,LO}[ππ] = (503.5 ± 3.5_tot)·10⁻¹⁰ and shows that the inclusion of the new BABAR data significantly increases the central value of the integral. For the higher-order hadronic contributions, which are also estimated based on e+e− data, there is a slight gain in accuracy, from −9.79 ± 0.08_exp ± 0.03_rad (Hagiwara et al., 2007) to −9.79 ± 0.06_exp ± 0.03_rad (Hagiwara et al., 2011).
A compilation of recent results for a µ , from which the central value of the experimental average (Bennett et al., 2006) has been subtracted, is given in Fig. 88.
FIG. 88 Recent evaluations of a_µ: e+e−-based, with BABAR π+π− data (Davier et al., 2010a); HLMNT 11, e+e−-based, with BABAR and KLOE π+π− data (Hagiwara et al., 2011); DHMZ 10, τ and e+e− (Davier et al., 2011); BDDJ 11, τ and e+e− (Benayoun et al., 2011).
There have been only a few tests of CVC based on the new data. The most interesting final state is of course that with two pions, where a serious discrepancy between the e+e− and τ data was reported (Davier et al., 2003a,b). The results of the most recent tests in this channel (Davier et al., 2010b, 2011) are based on a reevaluation of the isospin-breaking corrections. The predictions for the branching fraction of τ± → π±π0ντ shown in Table X can be compared to the world average value of (25.51 ± 0.09)% (Nakamura et al., 2010). Although the difference between the CVC prediction and the experimental value is less significant than previously (Davier et al., 2003b; Davier, 2007), it is still substantial for all groups but BABAR. A new approach to the problem was suggested very recently in Ref. (Jegerlehner and Szafron, 2011), where the ρ-γ mixing was properly taken into account. The CVC prediction for the branching fraction to two pions is (25.20 ± 0.17 ± 0.28)%, in good agreement with the direct measurement. Finally, Ref. (Benayoun et al., 2011) reports consistent results on e+e− and τ in the 2π channel and obtains a theoretical prediction for a_µ which is lower than the experimental value by 4.1σ. One more test of CVC that included recent ISR results from BABAR has been performed in Ref. (Cherepanov and Eidelman, 2009). The authors use CVC together with the data on e+e− → ηπ+π− and e+e− → η′π+π− to estimate the branching fractions of the corresponding τ decays.
For the former final state the estimate based on the older data (Akhmetshin et al., 2000; Antonelli et al., 1988; Delcourt et al., 1982; Druzhinin et al., 1986) predicts a branching fraction of (0.132 ± 0.016)%, somewhat smaller than but not incompatible with the (0.165 ± 0.015)% obtained from the BABAR data (Aubert et al., 2007c). The average of the two gives the CVC prediction of (0.150 ± 0.016)%, in good agreement with the new world average B(τ− → ηπ−π0ντ) = (0.139 ± 0.010)%, which uses the new precise measurement at Belle (Inami et al., 2009). For B(τ− → η′π−π0ντ) they give an upper limit of < 3.2·10⁻⁵ at 90% CL, which is a factor of 2.5 more restrictive than the upper limit based on the only existing measurement, by CLEO: < 8·10⁻⁵ (Bergfeld et al., 1997).
There are two recent evaluations of Δα⁽⁵⁾_had(M²_Z), the hadronic contribution to the running of α from five flavors (Davier et al., 2011; Hagiwara et al., 2011). In Ref. (Hagiwara et al., 2011) the data set of e+e− cross sections includes multibody data from BABAR and 2π data from KLOE, and the calculation gives the value 0.02760 ± 0.00015, slightly higher and significantly more accurate than the previously accepted value 0.02758 ± 0.00035 (Burkhardt and Pietrzyk, 2005). The estimation performed in Ref. (Davier et al., 2011) additionally uses the BABAR data on the ππ final state and perturbative QCD between 1.8 and 3.7 GeV, and gives an even more precise value, 0.02749 ± 0.00010.
VIII. CONCLUSIONS
Successful experiments at high-luminosity e+e− colliders (φ- and B-factories) opened a new era in the study of e+e− annihilation into hadrons at low energies using a novel method based on initial-state radiation, usually referred to as ISR or radiative return.
Modern detectors operating at these factories, which have collected unprecedentedly high integrated luminosities, allow this method to compete with direct e+e− experiments.
A lot of new data on the cross sections of e+e− annihilation into hadrons were obtained using ISR, first of all on meson production from the threshold ∼2m_π up to a c.m. energy of about 4-5 GeV. More than 30 processes have been studied in which mesons and hadronic resonances were produced, many of them for the first time.
Valuable information on particles with masses of a few GeV has been obtained, primarily on excited vector mesons, radial and/or orbital excitations. Parameters of the vector charmonia were investigated, new data on more than 40 decay channels were obtained, and many decays were observed for the first time.
New data on production cross sections were obtained for various baryons: the proton, the Λ and Σ0 hyperons, and the Λc charmed baryon, opening new possibilities for testing form factor models.
New states (ρ(1900), Y (2175), Y (4260), Y (4320) . . .), some of them with presumably exotic quark structure, have been discovered. Their nature is not yet established and widely discussed.
New values of the cross sections obtained using ISR can be used for more precise predictions of the muon anomalous magnetic moment, running fine-structure constant at the Z boson mass, tests of CVC and many other theoretical models.
Only part of the available ISR data sample has been processed; e.g., for BABAR it is about 1/3, and the analysis is in progress. Belle has only started the corresponding data processing.
If the existing projects of Super B-factories are approved, there are prospects of reaching an integrated luminosity exceeding today's by a factor of 30-100. Such experiments will improve the accuracy for many processes whose studies are now statistically limited.
FIG. 1 The lowest-order Feynman diagram describing the process of e+e− annihilation into hadrons.
FIG. 2 The lowest-order Feynman diagram describing the initial-state radiation process e+e− → γ + hadrons.
FIG. 3 The relative probability for the ISR photon to be emitted into the polar angle range θ0 < θ < 180° − θ0 for three representative values of x.
FIG. 4 The relative probability for the ISR photon to be emitted into the polar angle range 30° < θ < 150° as a function of the e+e− c.m. energy for three representative values of x.
FIG. 5 The mass (m = 2E0√(1 − x)) dependence of the relative difference between the radiator function W(x) from Ref. (Kuraev and Fadin, 1985) and the lowest-order function W0(0, x) for 2E0 = 1.02 GeV.
FIG. 6 The detection efficiency for the process e+e− → π+π−γ at 2E0 = 10.58 GeV as a function of the 2π invariant mass for untagged (solid curve) and tagged (dotted curve) ISR photons.
FIG. 7 The cos θp dependence of the detection efficiency for the process e+e− → pp̄γ (Aubert et al., 2006a), where θp is the proton angle measured in the pp̄ rest frame with respect to the ISR photon direction. The horizontal line indicates the detection efficiency averaged over cos θp.
FIG. 9 The mass dependence of the pp̄ mass resolution obtained from MC simulation for the process e+e− → pp̄γ in Ref. (Aubert et al., 2006a). The curve represents the result of a polynomial fit.
FIG. 11 The mass dependence of the ISR differential luminosity multiplied by the detection efficiency for experiments at the B-factory (2E0 = 10.58 GeV, L = 500 fb−1, untagged ISR photon) in the charm production mass region.
FIG. 12 The detection efficiency for the process e+e− → pp̄γ (Aubert et al., 2006a) as a function of the pp̄ invariant mass. The curve represents the result of a polynomial fit.
…significantly exceeds the integrated luminosity collected in direct e+e− experiments, including the recent CLEO-c energy scan (Cronin-Hennessy et al., 2009), 60 pb−1 at twelve points between 3.97 and 4.26 GeV.
FIG. 13 View of the BABAR detector.
FIG. 14 Side view of the Belle detector.
FIG. 16 Top: the pion form factor obtained by KLOE in the reaction e+e− → π+π−γ with a tagged ISR photon (Muller, 2009). Bottom: relative difference between the KLOE result with an untagged ISR photon (Ambrosino et al., 2009) and the direct e+e− measurements by SND (Achasov et al., 2006) and CMD-2 (Akhmetshin et al., 2007). The dark (light) band indicates the KLOE uncertainty (statistical and systematic errors combined in quadrature). For the SND and CMD-2 data, the combined statistical and systematic errors are shown.
FIG. 17 Forward-backward asymmetry in the reaction e+e− → π+π−(γ) measured with the KLOE detector (Muller, 2009).
FIG. 18 Top: the QED test by the ratio of the e+e− → µ+µ− cross section in data to the theoretical one. Bottom: the e+e− → π+π− cross sections measured with the BABAR detector (Aubert et al., 2009a).
FIG. 19 The e+e− → π+π− cross section above 1 GeV measured with the BABAR detector (Wang, 2009). Comparison with the CMD-2 (Aulchenko et al., 2005) and DM2 (Bisello et al., 1989) measurements is shown.
FIG. 20 The relative difference between the KLOE (Ambrosino et al., 2009) and BABAR (Wang, 2009) measurements. The band corresponds to the BABAR statistical and systematic uncertainties combined in quadrature.
FIG. 21 The relative difference between the CMD-2 (Akhmetshin et al., 2004a, 2007) and BABAR (Wang, 2009) measurements. The band corresponds to the BABAR statistical and systematic uncertainties combined in quadrature.
…(Fig. 23) assuming the presence of two excited ω-like states, ω(1420) and ω(1650) (Amsler et al., 2008).
FIG. 23 The e+e− → π+π−π0 cross section measured with the BABAR detector (Aubert et al., 2004b) in the 1-3 GeV/c² range compared with the SND (Achasov et al., 2002) (open circles) and DM2 (Antonelli et al., 1992) (triangles) data. The inset shows the mass distribution fitted with two resonances.
FIG. 24 The e+e− → K0S K±π∓ cross section measured by BABAR (Aubert et al., 2008b) (top). Comparison of the BABAR measurement with the results of the previous DM1 (Buon et al., 1982) and DM2 (Bisello et al., 1991b) experiments (bottom).
FIG. 25 The e+e− → K+K−π0 cross section measured by BABAR (Aubert et al., 2008b) (top). Comparison of the BABAR measurement with the result of the DM2 experiment (Bisello et al., 1991b) (bottom).
FIG. 27 The e+e− → φη and φπ0 cross sections measured with the BABAR detector (Aubert et al., 2008b).
FIG. 30 Comparison of the BABAR results on the e+e− → π+π−π+π− cross section (Aubert et al., 2005b) with the previous direct e+e− measurements (Achasov et al., 2003; Akhmetshin et al., 2004b; Bacci et al., 1980; Bisello et al., 1991a; Cordier et al., 1982a; Cosme et al., 1979; Dolinsky et al., 1991; Kurdadze et al., 1988).
FIG. 31 Comparison of the BABAR results on the e+e− → π+π−2π0 cross section (Druzhinin, 2007) with the previous direct e+e− measurements (Achasov et al., 2003; Akhmetshin et al., 1999; Bacci et al., 1981; Bisello et al., 1991a; Cosme et al., 1979; Dolinsky et al., 1991; Kurdadze et al., 1986).
FIG. 32 The 4π invariant mass spectrum for selected e+e− → π+π−2π0 events (Druzhinin, 2007) (points with error bars) in comparison with the spectrum for non-ωπ0 events only (left) or with the spectrum for ωπ0 events only (right). In the left plot the lowest histogram shows the contribution of the ρ+ρ− intermediate state.
FIG. 34 The e+e− → K+K−π0π0 cross section measured with the BABAR detector (Lees et al., 2011).
FIG. 35 (a) Scatter plot of m(K−π+) vs. m(K+π−), and (b) projection plot of m(K±π∓) (two entries per event) for the reaction e+e− → K+K−π+π− (Lees et al., 2011). A sum over all accessible c.m. energies is given.
FIG. 36 The scatter plots of the reconstructed (a) m(π+π−) and (b) m(π0π0) versus m(K+K−) for selected events in the data. The vertical (horizontal) lines bound a φ (f0(980)) signal region (Lees et al., 2011).
FIG. 37 (a) The e+e− → K*(892)0Kπ and (b) K*2(1430)0Kπ cross sections (Lees et al., 2011) obtained from the K*(892)0 and K*2(1430)0 signals of Fig. 35(b).
FIG. 38 (a) The π+π− mass distribution for K+K−π+π− events (K*(892)Kπ events are excluded); the solid curve represents a fit using a signal Breit-Wigner function with ρ(770) parameters and a polynomial background (hatched area). (b) The e+e− → K+K−ρ(770) cross section obtained using the fitted numbers of ρ-meson events in each 25 MeV c.m. energy interval (Lees et al., 2011).
FIG. 40 The m(π+π−) distribution for the e+e− → φ(1020)π+π− reaction (Lees et al., 2011). The e+e− → φπ+π− cross sections measured with the BABAR (Lees et al., 2011) (circles) and Belle (Shen et al., 2009) (squares) detectors.
FIG. 41 The fit to the e+e− → φπ+π− cross section (Lees et al., 2011) in the two-resonance model described in the text (solid curve). The contribution of the first resonance (φ(1680)) is shown by the dashed line. The dotted line shows the first-resonance contribution in the φf0(980) decay mode only.
FIG. 42 The cross section for e+e− → φ(1020)f0(980) events selected with the cut 0.85 < m(ππ) < 1.1 GeV/c² (Lees et al., 2011). The solid curve is the result of the two-resonance fit. The dashed and dotted curves are the contributions of the φ(1680) → φf0(980) and φ(1680) → φf0(600) decay channels, respectively.
FIG. 43 The e+e− → φ(1020)f0(980) cross section measured in the K+K−π+π− (circles) and K+K−π0π0 (squares) final states by BABAR (Lees et al., 2011). The solid and dashed curves represent the results of the two-resonance fit described in the text.
FIG. 44 The e+e− → φ(1020)f0(980) cross section measured by Belle (Shen et al., 2009). The solid and dashed curves represent the results of the two-resonance fit described in the text.
The e+e− → 2(K+K−) cross section as a function of c.m. energy measured with the BABAR detector using ISR (Aubert et al., 2007b).
The K+K− invariant mass distribution for selected e+e− → 2(K+K−) events (Aubert et al., 2007b) (open histogram, four entries per event), and that for the combination in each event closest to the φ-meson mass (hatched histogram).
FIG. 47 The K+K− invariant mass distribution for φK+K− events (Aubert et al., 2007b) (a). Events from the J/ψ → φK+K− decay are excluded from the spectrum shown by the open histogram. The hatched histogram is for events from the J/ψ decay. The numbered regions of the K+K− mass spectrum are used to calculate the cross sections shown in plots (b), (c), and (d) for regions 1, 2, and 3, respectively.
FIG. 48 The m(π+π−π0) distribution for 2(π+π−)π0 events (Aubert et al., 2007c).
FIG. 49 The e+e− → ηπ+π− cross section measured by BABAR (Aubert et al., 2007c) in comparison with the direct e+e− measurements (Akhmetshin et al., 2000; Antonelli et al., 1992; Druzhinin et al., 1986).
The e+e− → ωπ+π− cross section measured by BABAR (Aubert et al., 2007c) in comparison with the direct e+e− measurements (Akhmetshin et al., 2000; Antonelli et al., 1988; …).
FIG. 53 The e+e− → ωf0(980) cross section measured by BABAR (Aubert et al., 2007c). The fit with two Breit-Wigner functions to the ωπ+π− cross section with the ωf0(980) contribution subtracted (Aubert et al., 2007c).
FIG. 54 The e+e− → 2(π+π−)π0 cross section (Aubert et al., 2007c) and the contributions from ωπ+π− (squares) and ηπ+π− (triangles).
FIG. 55 The m(π+π−) (points) and m(π±π0) (histogram) distributions for 2(π+π−)π0 events with the ωπ+π− and ηπ+π− contributions excluded (Aubert et al., 2007c).
FIG. 57 (a) The m(ηπ+π−) distribution for 2(π+π−)η events; (b) the e+e− → η′(958)π+π− cross section and the result of the Breit-Wigner fit (Aubert et al., 2007c).
FIG. 59 The e+e− → K+K−π+π−π0 cross section measured by BABAR (Aubert et al., 2007c).
FIG. 60 The m(π+π−π0) (a) and m(K+K−) (b) distributions for K+K−π+π−π0 events (Aubert et al., 2007c). The hatched histogram represents the estimated non-ISR background.
FIG. 61 (a) The e+e− → φη cross section measured by BABAR in the K+K−π+π−π0 (Aubert et al., 2007c) (circles) and K+K−γγ (squares) (Aubert et al., 2008b) final states. (b) The cross sections for the e+e− → ωK+K− process measured by BABAR (Aubert et al., 2007c).
FIG. 64 The cross sections of e+e− → hadrons measured with the BABAR detector via ISR.
FIG. 65 The e+e− → pp̄ cross section measured by BABAR (Aubert et al., 2006a).
FIG. 67 The proton form factor measured in different experiments (Ablikim et al., 2005; Ambrogiani et al., 1999; Antonelli et al., 1998; Armstrong et al., 1993; Aubert et al., 2006a; Bardin et al., 1994; Bisello et al., 1983; Delcourt et al., 1979; Pedlar et al., 2005). The solid line represents the QCD fit described in the text.
FIG. 68 The proton form factor in the near-threshold region (Antonelli et al., 1998; Aubert et al., 2006a; Bardin et al., 1994; Delcourt et al., 1979).
FIG. 69 The proton |GE/GM| ratio measured by BABAR (Aubert et al., 2006a) (black points) compared with LEAR data (Bardin et al., 1994) (open circles).
FIG. 70 The e+e− → ΛΛ̄ cross section measured by BABAR (Aubert et al., 2007d) in comparison with the DM2 measurement (Bisello et al., 1990).
Baryon form factors measured by BABAR (Aubert et al., 2006a, 2007d) versus the dibaryon invariant mass.
FIG. 72 The cos θpζ distribution in the e+e− → ΛΛ̄ process (Aubert et al., 2007d). The line represents the result of the fit to data with a first-order polynomial.
FIG. 73 A fit of the Λ form factor (Aubert et al., 2007d) with the power-law function F ∼ const/q^n and with the QCD-inspired function F ∼ const/q^4.
FIG. 74 The µ+µ− mass spectrum in the J/ψ region for selected events of the reaction e+e− → µ+µ−γ (Aubert et al., 2004a). The curve is the result of the fit described in the text.
(Table footnotes: a — S is a PDG scale factor; b — B(J/ψ → φKK̄); c — B(J/ψ → 3π).)
FIG. 76 The 2(π+π−)π0 mass distribution for ISR e+e− → 2(π+π−)π0γ events in the J/ψ–ψ(2S) mass region (Aubert et al., 2007c).
FIG. 77 (a) The three-pion combination closest to the J/ψ mass versus the five-pion mass. (b) The five-pion mass for events with the three-pion mass in the ±50 MeV window around the J/ψ mass (Aubert et al., 2007c).
FIG. 78 The 2(π+π−) mass distribution for ISR-produced e+e− → 2(π+π−) events in the mass region around the J/ψ and ψ(2S) (Aubert et al., 2005b); there are clear signals at the J/ψ and ψ(2S) mass positions. The latter is dominated by the ψ(2S) → J/ψπ+π− → µ+µ−π+π− transition; selected events with two muons from the J/ψ decays are shown by the shaded histogram.
TABLE V (fragment): ψ(4160): DD̄/D*D̄* 0.02±0.03±0.02 (0.46, 0.08), DD̄*/D*D̄* 0.34±0.14±0.05 (0.011, 0.16); ψ(4415): DD̄/D*D̄* 0.14±0.12±0.03 (0.025), DD̄*/D*D̄* 0.17±0.25±0.03 (0.14).
…depending on the ψ(4415) parameterization. The non-resonant D0D−π+ branching fraction was found to be <22% of B(ψ(4415) → DD̄₂*(2460) → D0D−π+). Similarly, the energy dependence of the cross section of the D0D*−π+ final state, shown in Fig. 80(h), has been measured by Belle (Pakhlova et al., 2009); a marginal signal of the ψ(4415) is found (3.1σ), and its branching fraction was limited to <10.6%. Very recently BABAR (del Amo Sanchez et al., 2010) and Belle (Pakhlova et al., 2011) reported consistent results on the cross sections of Ds+Ds−, Ds+Ds*−…
FIG. 81 The invariant mass of J/ψπ+π− candidates produced in initial-state radiation, e+e− → γISR J/ψπ+π−. Points with error bars represent data, and the curves show the fit (solid) to a signal plus a linear background (dashed) (Aubert et al., 2008a).
FIG. 83 From a binned maximum-likelihood fit (Liu et al., 2008) of combined Belle and BABAR data, the ψ(2S)π+π− invariant-mass cross section as a function of √s. The solid circles and stars show the Belle and BABAR data, respectively. The solid curve shows the best fit to the data with two resonances including interference with a floating phase, and the dashed curves show the contributions of the two pairs of individual resonances for the two equally probable best-fit phases.
FIG. 84 The dipion invariant mass distribution in ISR-produced Y(4260) → J/ψπ+π− decays, where points represent data and the line histogram a phase-space MC simulation (Aubert et al., 2008a).
TABLE VIII rows (see its caption below): DD̄ … (Cronin-Hennessy et al., 2009), 7.6 (Aubert et al., 2007e); DD̄* 45 (Cronin-Hennessy et al., 2009), 34 (Aubert et al., 2009b); D*D̄* 11 (Cronin-Hennessy et al., 2009), 40 (Aubert et al., 2009b); DD̄*π 15 (Cronin-Hennessy et al., 2009), 9 (Pakhlova et al., 2007); D*D̄*π 8.2 (Cronin-Hennessy et al., 2009); DsD̄s 1.3 (Cronin-Hennessy et al., 2009).
FIG. 85 The dipion invariant mass distribution in ISR-produced (a) Y(4360) → ψ(2S)π+π− and (b) Y(4660) → ψ(2S)π+π− decays, where points represent data and the line histogram a phase-space MC simulation (Wang et al., 2007).
…a_µ^{had,LO}[ππ] = (507.8 ± 2.8_tot) · 10^−10 (Davier et al., 2011). The comparison with their previous result (Davier et al., 2010b), a_µ^{had,LO}…
…The shaded vertical band indicates the experimental error. The SM predictions are taken from: HMNT 07 (Hagiwara et al., 2007), JN 09 (Jegerlehner and Nyffeler, 2009), Davier et al. 09/1 (τ-based) (Davier et al., 2010b), Davier et al. 09/2…
FIG. 8 The mass dependence of the detection efficiency for the process e+e− → π+π−γ at 2E0 = 1.02 GeV for two selections, untagged (θγ < 15° or θγ > 165°) and tagged (50° < θγ < 130°), shown by the solid and dashed curves, respectively. The pion polar angles range from 50° to 130°. (Vertical axis: detection efficiency.)
TABLE I Parameters of the isoscalar and isovector resonances obtained in Ref. …
TABLE III Measurements of the J/ψ and ψ(2S) branching fractions via ISR at BABAR.
TABLE V Ratios of branching fractions for the ψ(4040), ψ(4160) and ψ(4415) resonances from BABAR.
TABLE VI Measured properties of the Y(4260) → J/ψπ+π−. The Belle (Wang et al., 2007) single-resonance fit result is quoted to allow for comparison to the other two.

Quantity       | Value              | From (χ²/ndf)
M (MeV/c²)     | 4259 ± 8 +2−6      | BABAR (Aubert et al., 2008a)
               | 4263 ± 6           | Belle (Yuan et al., 2007)
               | 4284 +17−16 ± 4    | CLEO (He et al., 2006)
               | 4263 ± 5           | Avg (1.8/2)
Γ (MeV)        | 88 ± 23 +6−4       | BABAR (Aubert et al., 2008a)
               | 126 ± 18           | Belle (Yuan et al., 2007)
               | 73 +39−25 ± 5      | CLEO (He et al., 2006)
               | 108 ± 15           | Avg (2.4/2)
B × Γee (eV)   | 5.5 ± 1.0 +0.8−0.7 | BABAR (Aubert et al., 2008a)
               | 9.7 ± 1.1          | Belle (Yuan et al., 2007)
               | 8.9 +3.9−3.1 ± 1.8 | CLEO (He et al., 2006)
               | 8.0 ± 1.4          | Avg (6.1/2)
TABLE VII Measured properties of the two enhancements found in the ψ(2S)π+π− mass distribution, the Y(4360) and Y(4660). Liu et al. (Liu et al., 2008) performed a binned maximum-likelihood fit to the combined Belle and BABAR cross-section distributions (Fig. 83).

Quantity       | Value           | From (χ²/d.o.f.)
M (MeV/c²)     | 4324 ± 24       | BABAR (Aubert et al., 2007a)
               | 4361 ± 9 ± 9    | Belle (Wang et al., 2007)
               | 4353 ± 15       | Avg (1.8/1)
               | 4355 +9−10 ± 9  | Liu (Liu et al., 2008)
Γ (MeV)        | 172 ± 33        | BABAR (Aubert et al., 2007a)
               | 74 ± 15 ± 10    | Belle (Wang et al., 2007)
               | 96 ± 42         | Avg (6.8/1)
               | 103 +17−15 ± 11 | Liu (Liu et al., 2008)
M (MeV/c²)     | 4664 ± 11 ± 5   | Belle (Wang et al., 2007)
               | 4661 +9−8 ± 6   | Liu (Liu et al., 2008)
Γ (MeV)        | 48 ± 15 ± 3     | Belle (Wang et al., 2007)
               | 42 +17−12 ± 6   | Liu (Liu et al., 2008)
TABLE VIII Upper limits at 90% C.L. on the ratios σ(e+e− → Y → T)/σ(e+e− → Y → J/ψπ+π−) at √s = 4.26 GeV/c² (CLEO (Cronin-Hennessy et al., …)).

TABLE IX The contribution of some multipion processes to a_µ^{had,LO}.

TABLE X Estimated a_µ^{had,LO}[ππ] [10^−10] and B_CVC [%].

Experiment | a_µ^{had,LO}[ππ] [10^−10] | B_CVC [%]
BABAR      | 514.1 ± 3.8 (1.00)        | 25.15 ± 0.18 ± 0.22 (1.00)
KLOE       | 503.1 ± 7.1 (0.97)        | 24.56 ± 0.26 ± 0.22 (0.92)
CMD2       | 506.6 ± 3.9 (0.89)        | 24.96 ± 0.21 ± 0.22 (0.96)
SND        | 505.1 ± 6.7 (0.94)        | 24.82 ± 0.30 ± 0.22 (0.91)
FIG. 88 Compilation of recent results for a_µ, displayed as (a_µ − a_µ,exp) × 10^11: Davier et al. 09/1 (τ) −157 ± 52; HMNT 07 (e+e−) −285 ± 51; JN 09 (e+e−) −299 ± 65; Davier et al. 09/2 (e+e−) −255 ± 49; DHMZ 10 (τ) −195 ± 54; DHMZ 10 (e+e−) −287 ± 49; HLMNT 11 (e+e−) −261 ± 49; BDDJ 11 (e+e− + τ) −335 ± 53; BNL-E821 0 ± 63.
Acknowledgments

The authors are grateful to Vera Luth for the idea of this review. We are indebted to our colleagues from the BABAR and Belle Collaborations for many years of fruitful work. Thanks are also due to Brian Heltsley from CLEO and Graziano Venanzoni from KLOE for valuable comments. This work was supported in part by the grants RFBR 10-02-00695, 11-02-00112, 11-02-00558, and the DFG grant GZ 436 RUS 113/769/0-3.
References

Abashian, A., et al. [Belle Collaboration], 2002, Nucl. Instr. and Meth. A 479, 117.
Ablikim, M., et al. [BES Collaboration], 2005, Phys. Lett. B 630, 14.
Ablikim, M., et al. [BES Collaboration], 2008, Phys. Lett. B 660, 315.
Achasov, M.N., et al. [SND Collaboration], 2002, Phys. Rev. D 66, 032001.
Achasov, M.N., et al. [SND Collaboration], 2003, J. Exp. Theor. Phys. 96, 789.
Achasov, M.N., et al. [SND Collaboration], 2006, J. Exp. Theor. Phys. 103, 380.
Actis, S., et al., 2010, Eur. Phys. J. C 66, 585.
Adam, N.E., et al. [CLEO Collaboration], 2006, Phys. Rev. Lett. 96, 082004.
Adams, G.S., et al. [CLEO Collaboration], 2006, Phys. Rev. D 73, 051103.
Agostinelli, S., et al. [GEANT4 Collaboration], 2003, Nucl. Instr. Meth. A 506, 250.
Akhmetshin, R.R., et al. [CMD-2 Collaboration], 1999, Phys. Lett. B 466, 392.
Akhmetshin, R.R., et al. [CMD-2 Collaboration], 2000, Phys. Lett. B 489, 125.
Akhmetshin, R.R., et al. [CMD-2 Collaboration], 2004a, Phys. Lett. B 578, 285.
Akhmetshin, R.R., et al. [CMD-2 Collaboration], 2004b, Phys. Lett. B 595, 101.
Akhmetshin, R.R., et al. [CMD-2 Collaboration], 2007, Phys. Lett. B 648, 28.
Alemany, R., M. Davier, and A. Höcker, 1998, Eur. Phys. J. C 2, 123.
Aloisio, A., et al. [KLOE Collaboration], 2005, Phys. Lett. B 606, 12.
Ambrogiani, M., et al. [E835 Collaboration], 1999, Phys. Rev. D 60, 032002.
Ambrosino, F., et al. [KLOE Collaboration], 2009, Phys. Lett. B 670, 285.
Ambrosino, F., et al. [KLOE Collaboration], 2011, Phys. Lett. B 700, 102.
Amsler, C., et al. [Particle Data Group], 2008, Phys. Lett. B 667, 1.
Antonelli, A., et al. [DM2 Collaboration], 1988, Phys. Lett. B 212, 133.
Antonelli, A., et al. [DM2 Collaboration], 1992, Z. Phys. C 56, 15.
Antonelli, A., et al. [FENICE Collaboration], 1996, Phys. Lett. B 365, 427.
Antonelli, A., et al. [FENICE Collaboration], 1998, Nucl. Phys. B 517, 3.
Arbuzov, A.B., et al., 1998, JHEP 9812, 009.
Armstrong, T.A., et al. [E760 Collaboration], 1993, Phys. Rev. Lett. 70, 1212.
Aubert, B., et al. [BABAR Collaboration], 2002, Nucl. Instr. Meth. A 479, 1.
Aubert, B., et al. [BABAR Collaboration], 2004a, Phys. Rev. D 69, 011103.
Aubert, B., et al. [BABAR Collaboration], 2004b, Phys. Rev. D 70, 072004.
Aubert, B., et al. [BABAR Collaboration], 2005a, Phys. Rev. Lett. 95, 142001.
Aubert, B., et al. [BABAR Collaboration], 2005b, Phys. Rev. D 71, 052001.
Aubert, B., et al. [BABAR Collaboration], 2006a, Phys. Rev. D 73, 012005.
Aubert, B., et al. [BABAR Collaboration], 2006b, Phys. Rev. D 73, 052003.
Aubert, B., et al. [BABAR Collaboration], 2007a, Phys. Rev. Lett. 98, 212001.
Aubert, B., et al. [BABAR Collaboration], 2007b, Phys. Rev. D 76, 012008.
Aubert, B., et al. [BABAR Collaboration], 2007c, Phys. Rev. D 76, 092005; [Erratum-ibid. 2008, 77, 119902].
Aubert, B., et al. [BABAR Collaboration], 2007d, Phys. Rev. D 76, 092006.
Aubert, B., et al. [BABAR Collaboration], 2007e, Phys. Rev. D 76, 111105.
Aubert, B., et al. [BABAR Collaboration], 2008a, eprint arXiv:0808.1543 [hep-ex].
Aubert, B., et al. [BABAR Collaboration], 2008b, Phys. Rev. D 77, 092002.
Aubert, B., et al. [BABAR Collaboration], 2009a, Phys. Rev. Lett. 103, 231801.
Aubert, B., et al. [BABAR Collaboration], 2009b, Phys. Rev. D 79, 092001.
Aulchenko, V.M., et al. [CMD-2 Collaboration], 2005, JETP Lett. 82, 743.
Aulchenko, V.M., et al. [CMD-2 Collaboration], 2006, JETP Lett. 84, 413.
Bacci, C., et al. [γγ2 Collaboration], 1980, Phys. Lett. B 95, 139.
Bacci, C., et al. [γγ2 Collaboration], 1981, Nucl. Phys. B 184, 31.
Bai, J.Z., et al. [BES Collaboration], 1998, Phys. Rev. D 58, 092006.
Bai, J.Z., et al. [BES Collaboration], 2002, Phys. Rev. Lett. 88, 101802.
Bai, J.Z., et al. [BES Collaboration], 2004, Phys. Rev. D 70, 012005.
Baier, V.N., and V.S. Fadin, 1968, Phys. Lett. B 27, 223.
Baier, V.N., and V.A. Khoze, 1965, Sov. Phys. JETP 21, 1145.
Baldini, R., et al. [DM2 Collaboration], 1988, in Proceedings of the "Fenice" Workshop, Frascati.
Baldini, R., et al., 2009, Eur. Phys. J. A 39, 315.
Barberio, E., B. van Eijk, and Z. Was, 1991, Comput. Phys. Commun. 66, 115.
Bardin, G., et al. [PS170 Collaboration], 1994, Nucl. Phys. B 411, 3.
Barnes, T., S. Godfrey, and E.S. Swanson, 2005, Phys. Rev. D 72, 054026.
Benayoun, M., et al., 1999, Mod. Phys. Lett. 14, 2605.
Benayoun, M., et al., 2011, eprint arXiv:1106.1315 [hep-ph].
Bennett, G.W., et al. [E821 Collaboration], 2006, Phys. Rev. D 73, 072003.
Bergfeld, T., et al. [CLEO Collaboration], 1997, Phys. Rev. Lett. 79, 2406.
Binner, S., J.H. Kühn, and K. Melnikov, 1999, Phys. Lett. B 459, 279.
Bisello, D., et al. [DM1 Collaboration], 1981, Phys. Lett. B 107, 145.
Bisello, D., et al. [DM2 Collaboration], 1983, Nucl. Phys. B 224, 379.
Bisello, D., et al. [DM2 Collaboration], 1989, Phys. Lett. B 220, 321.
Bisello, D., et al. [DM2 Collaboration], 1990, Z. Phys. C 48, 23.
Bisello, D., et al. [DM2 Collaboration], 1991a, Nucl. Phys. Proc. Suppl. 21, 111.
Bisello, D., et al. [DM2 Collaboration], 1991b, Z. Phys. C 52, 227.
Bityukov, S.I., et al., 1987, Phys. Lett. B 188, 383.
Bonneau, G., and F. Martin, 1971, Nucl. Phys. B 27, 381.
Bouchiat, C., and L. Michel, 1961, J. Phys. Radium 22, 121.
Brambilla, N., et al. [QWG], 2011, Eur. Phys. J. C 71, 1515.
Brandelik, R., et al. [DASP Collaboration], 1978, Phys. Lett. B 76, 361.
Buon, J., et al. [DM1 Collaboration], 1982, Phys. Lett. B 118, 221.
Burkhardt, H., et al., 1989, Z. Phys. C 43, 497.
Burkhardt, H., and B. Pietrzyk, 2005, Phys. Rev. D 72, 057501.
Caffo, M., H. Czyż, and E. Remiddi, 1994, Phys. Lett. B 327, 369.
Caffo, M., H. Czyż, and E. Remiddi, 1997, Nuovo Cim. A 110, 515.
Castellano, M., et al., 1973, Nuovo Cim. A 14, 1.
Cherepanov, V.A., and S.I. Eidelman, 2009, JETP Lett. 89, 429.
Chernyak, V.L., and A.R. Zhitnitsky, 1977, JETP Lett. 25, 510.
Chernyak, V.L., et al., 1989, Z. Phys. C 42, 569.
Coan, T.E., et al. [CLEO Collaboration], 2006, Phys. Rev. Lett. 96, 162003.
Cordier, A., et al. [DM1 Collaboration], 1981, Phys. Lett. B 106, 155.
Cordier, A., et al. [DM1 Collaboration], 1982a, Phys. Lett. B 109, 129.
Cordier, A., et al. [DM1 Collaboration], 1982b, Phys. Lett. B 110, 335.
Cosme, G., et al. [M3N Collaboration], 1979, Nucl. Phys. B 152, 215.
Cronin-Hennessy, D., et al. [CLEO Collaboration], 2009, Phys. Rev. D 80, 072001.
Czyż, H., and J.H. Kühn, 2001, Eur. Phys. J. C 18, 497.
Czyż, H., et al., 2003, Eur. Phys. J. C 27, 563.
Czyż, H., et al., 2004, Eur. Phys. J. C 35, 527.
Czyż, H., A. Grzelinska, and J.H. Kühn, 2007, Phys. Rev. D 75, 074026.
D'Agostini, G., 1995, Nucl. Instrum. Meth. A 362, 487.
Datta, A., and P.J. O'Donnell, 2003, Phys. Lett. B 567, 263.
Davier, M., et al., 2003a, Eur. Phys. J. C 27, 497.
Davier, M., et al., 2003b, Eur. Phys. J. C 31, 503.
Davier, M., 2007, Nucl. Phys. B (Proc. Suppl.) 169, 288.
Davier, M., et al., 2010a, Eur. Phys. J. C 66, 1.
Davier, M., et al., 2010b, Eur. Phys. J. C 66, 127.
Davier, M., et al., 2011, Eur. Phys. J. C 71, 1515.
del Amo Sanchez, P., et al. [BABAR Collaboration], 2010, Phys. Rev. D 82, 052004.
Delcourt, B., et al. [DM1 Collaboration], 1979, Phys. Lett. B 86, 395.
Delcourt, B., et al. [DM1 Collaboration], 1982, Phys. Lett. B 113, 93.
Dmitriev, V.F., and A.I. Milstein, 2007, Phys. Lett. B 658, 13.
Dolinsky, S.I., et al. [ND Collaboration], 1991, Phys. Rept. 202, 99.
Druzhinin, V.P., et al. [ND Collaboration], 1986, Phys. Lett. B 174, 115.
Druzhinin, V.P., 2007, eprint arXiv:0710.3455 [hep-ex].
Dubnickova, A.Z., S. Dubnicka, and M.P. Rekalo, 1996, Nuovo Cim. A 109, 241.
Eichten, E.J., et al., 1980, Phys. Rev. D 21, 203.
Eichten, E.J., K. Lane, and C. Quigg, 2006, Phys. Rev. D 73, 014014; [Erratum-ibid. 2006, D 73, 079903].
Eichten, E.J., et al., 2008, Rev. Mod. Phys. 80, 1161.
Eidelman, S.I., and V.N. Ivanchenko, 1991, Phys. Lett. B 257, 437.
Eidelman, S., and F. Jegerlehner, 1995, Z. Phys. C 67, 585.
Eidelman, S., et al. [Particle Data Group], 2004, Phys. Lett. B 592, 1.
Esposito, B., et al. [MEA Collaboration], 1981, Lett. Nuovo Cim. 31, 445.
Frabetti, P.L., et al. [FOCUS Collaboration], 2000, Phys. Lett. B 514, 240.
Franzini, P., and M. Moulson, 2006, Annual Review of Nuclear and Particle Science 56, 207.
Gomez-Avila, S., M. Napsuciale, and E. Oset, 2009, Phys. Rev. D 79, 034018.
Gourdin, M., and E. de Rafael, 1969, Nucl. Phys. B 10, 667.
Hagiwara, K., et al., 2003, Phys. Lett. B 557, 69.
Hagiwara, K., et al., 2007, Phys. Lett. B 649, 173.
Hagiwara, K., et al., 2011, J. Phys. G 38, 085003.
He, Q., et al. [CLEO Collaboration], 2006, Phys. Rev. D 74, 091104.
Inami, K., et al. [Belle Collaboration], 2009, Phys. Lett. B 672, 209.
Jean-Marie, B., et al. [Mark-II Collaboration], 1976, Preprint SLAC-PUB-1711.
Jegerlehner, F., and A. Nyffeler, 2009, Phys. Rept. 477, 1.
Jegerlehner, F., and R. Szafron, 2011, Eur. Phys. J. C 71, 1332.
Kardapoltzev, L.V., 2007, Bachelor's thesis, Novosibirsk State University (unpublished).
Kawamoto, N., and A.I. Sanda, 1978, Phys. Lett. B 76, 446.
Khoze, V.A., et al., 2001, Eur. Phys. J. C 18, 481.
Khoze, V.A., et al., 2002, Eur. Phys. J. C 25, 199.
Konchatnij, M.I., and N.P. Merenkov, 1999, JETP Lett. 69, 811.
Kuraev, E.A., and V.S. Fadin, 1985, Sov. J. Nucl. Phys. 41, 466.
Kurdadze, L.M., et al. [Olya Collaboration], 1984, Sov. J. Nucl. Phys. 40, 286.
Kurdadze, L.M., et al. [Olya Collaboration], 1986, JETP Lett. 43, 643.
Kurdadze, L.M., et al. [Olya Collaboration], 1988, JETP Lett. 47, 512.
Kurokawa, S., and E. Kikutani, 2003, Nucl. Instr. and Meth. A 499, 1, and other papers included in this volume.
Lees, J.P., et al. [BABAR Collaboration], 2011, eprint arXiv:1103.3001 [hep-ex].
Liu, Z.Q., X.S. Qin, and C.Z. Yuan, 2008, Phys. Rev. D 78, 014032.
Malaescu, B., 2009, eprint arXiv:0907.3791 [physics.data-an].
Mo, X.H., et al., 2006, Phys. Lett. B 640, 18.
Muller, S.E. [KLOE Collaboration], 2009, talk given at the International Workshop on e+e− collisions from phi to psi (PHIPSI09), October 13-16, 2009, Beijing, China.
Nakamura, K., et al., 2010, J. Phys. G 37, 075021.
Napsuciale, M., et al., 2007, Phys. Rev. D 76, 074012.
Osterheld, A., et al. [Crystal Ball Collaboration], 1986, SLAC-PUB-4160.
Pakhlova, G., et al. [Belle Collaboration], 2007, Phys. Rev. Lett. 98, 092001.
Pakhlova, G., et al. [Belle Collaboration], 2008a, Phys. Rev. D 77, 011103.
Pakhlova, G., et al. [Belle Collaboration], 2008b, Phys. Rev. Lett. 100, 062001.
Pakhlova, G., et al. [Belle Collaboration], 2008c, Phys. Rev. Lett. 101, 172001.
Pakhlova, G., et al. [Belle Collaboration], 2009, Phys. Rev. D 80, 091101.
Pakhlova, G.V., P.N. Pakhlov, and S.I. Eidelman, 2010, Physics-Uspekhi 53, 219.
Pakhlova, G., et al. [Belle Collaboration], 2011, Phys. Rev. D 83, 011101.
Pedlar, T.K., et al. [CLEO Collaboration], 2005, Phys. Rev. Lett. 95, 261803.
PHOKHARA Monte Carlo generator: http://ific.uv.es/~rodrigo/phokhara/
Prades, J., E. de Rafael, and A. Vainshtein, 2009, eprint arXiv:0901.0306 [hep-ph].
Renard, F.M., 1981, Basics of Electron Positron Collisions, Editions Frontieres, Gif sur Yvette, France.
Rodrigo, G., et al., 2001, Eur. Phys. J. C 22, 81.
Rodrigo, G., et al., 2002, Eur. Phys. J. C 24, 71.
Seeman, J.T., et al., 2001, SLAC-PREPRINT-2001-040.
Seth, K.K., 2005, Phys. Rev. D 72, 017501.
Siegrist, J.L., et al. [MARK-I Collaboration], 1976, Phys. Rev. Lett. 36, 700.
Shen, C.P., et al. [Belle Collaboration], 2009, Phys. Rev. D 80, 031101.
Shifman, M.A., A.I. Vainshtein, and V.I. Zacharov, 1979, Nucl. Phys. B 147, 385.
Swanson, E.S., 2006, Phys. Rept. 429, 243.
Thacker, H.B., and J.J. Sakurai, 1971, Phys. Lett. B 36, 103.
Tsai, Y.S., 1971, Phys. Rev. D 4, 2821; [Erratum-ibid. 1976, D 13, 771].
Tzara, C., 1970, Nucl. Phys. B 18, 246.
Vasserman, I.B., et al. [ND Collaboration], 1988, Sov. J. Nucl. Phys. 47, 1035.
Wang, W.F. [BABAR Collaboration], 2009, talk given at the International Workshop on e+e− collisions from phi to psi (PHIPSI09), October 13-16, 2009, Beijing, China.
Wang, X.L., et al. [Belle Collaboration], 2007, Phys. Rev. Lett. 99, 142002.
Yao, W.M., et al. [Particle Data Group], 2006, J. Phys. G 33, 1.
Yuan, C.Z., et al. [Belle Collaboration], 2007, Phys. Rev. Lett. 99, 182004.
Yuan, C.Z., et al. [Belle Collaboration], 2008, Phys. Rev. D 77, 011105.
| []
|
[
"COMPLEXITY RESULTS FOR MCMC DERIVED FROM QUANTITATIVE BOUNDS",
"COMPLEXITY RESULTS FOR MCMC DERIVED FROM QUANTITATIVE BOUNDS"
]
| [
"Jun Yang \nDepartment of Statistics\nUniversity of Oxford\nUK\n",
"Jeffrey S Rosenthal \nDepartment of Statistical Sciences\nUniversity of Toronto\nCanada\n"
]
| [
"Department of Statistics\nUniversity of Oxford\nUK",
"Department of Statistical Sciences\nUniversity of Toronto\nCanada"
]
| []
| This paper considers how to obtain MCMC quantitative convergence bounds which can be translated into tight complexity bounds in highdimensional settings. We propose a modified drift-and-minorization approach, which establishes generalized drift conditions defined in subsets of the state space. The subsets are called the "large sets", and are chosen to rule out some "bad" states which have poor drift property when the dimension of the state space gets large. Using the "large sets" together with a "fitted family of drift functions", a quantitative bound can be obtained which can be translated into a tight complexity bound. As a demonstration, we analyze several Gibbs samplers and obtain complexity upper bounds for the mixing time. In particular, for one example of Gibbs sampler which is related to the James-Stein estimator, we show that the number of iterations required for the Gibbs sampler to converge is constant under certain conditions on the observed data and the initial state. It is our hope that this modified drift-andminorization approach can be employed in many other specific examples to obtain complexity bounds for high-dimensional Markov chains.MSC2020 subject classifications: Primary 60J20, 60J22; secondary 65C05. | 10.1214/22-aap1846 | [
"https://arxiv.org/pdf/1708.00829v6.pdf"
]
| 51,998,181 | 1708.00829 | 6a334caae360fce0369f1929f0a4e3b1ec28bd9c |
COMPLEXITY RESULTS FOR MCMC DERIVED FROM QUANTITATIVE BOUNDS
10 May 2022
Jun Yang
Department of Statistics
University of Oxford
UK
Jeffrey S Rosenthal
Department of Statistical Sciences
University of Toronto
Canada
COMPLEXITY RESULTS FOR MCMC DERIVED FROM QUANTITATIVE BOUNDS
10 May 2022. arXiv:1708.00829v6 [stat.CO]. Submitted to the Annals of Applied Probability.
This paper considers how to obtain MCMC quantitative convergence bounds which can be translated into tight complexity bounds in high-dimensional settings. We propose a modified drift-and-minorization approach, which establishes generalized drift conditions defined in subsets of the state space. The subsets are called the "large sets", and are chosen to rule out some "bad" states which have poor drift property when the dimension of the state space gets large. Using the "large sets" together with a "fitted family of drift functions", a quantitative bound can be obtained which can be translated into a tight complexity bound. As a demonstration, we analyze several Gibbs samplers and obtain complexity upper bounds for the mixing time. In particular, for one example of Gibbs sampler which is related to the James-Stein estimator, we show that the number of iterations required for the Gibbs sampler to converge is constant under certain conditions on the observed data and the initial state. It is our hope that this modified drift-and-minorization approach can be employed in many other specific examples to obtain complexity bounds for high-dimensional Markov chains. MSC2020 subject classifications: Primary 60J20, 60J22; secondary 65C05.
1. Introduction. Markov chain Monte Carlo (MCMC) algorithms are extremely widely used and studied in statistics, e.g. [5,19], and their running times are an extremely important practical issue. They have been studied from a variety of perspectives, including convergence "diagnostics" via the Markov chain output (e.g. [18]), proving weak convergence of sped-up versions of the algorithms to diffusion limits [39,40], and directly bounding the convergence in total variation distance [34,44,46,42,24,47,16,3,25]. Furthermore, there has been a recent trend focusing on quantitative mixing time bounds, in terms of either total variation distance or Wasserstein distance, for certain types of MCMC methods (such as Langevin Monte Carlo) and targets (such as strongly log-concave targets); see e.g. [9,11]. Among the work directly bounding the total variation distance, most of the quantitative convergence bounds proceed by establishing a drift condition and an associated minorization condition for the Markov chain in question (see e.g. [35]). One approach for finding quantitative bounds has been the drift-and-minorization method set forth by [44].
Computer scientists take a slightly different perspective, in terms of running-time complexity order as the "size" of the problem goes to infinity. Complexity results in computer science go back at least to [7], and took on greater focus with the pioneering NP-completeness work of [8]. In the Markov chain context, computer scientists have been bounding convergence times of Markov chain algorithms since at least [52], focusing largely on spectral gap bounds for Markov chains on finite state spaces. More recently, attention has turned to bounding the convergence of modern Markov chain algorithms on general state spaces, again primarily via spectral gaps, such as [29,53,30,54,55] and the references therein. These bounds often focus on the order of the convergence time in terms of some particular parameter, such as the dimension of the corresponding state space. In recent years, there has been much interest in the "large p, large n" or "large p, small n" high-dimensional settings, where p is the number of parameters and n is the sample size. [38] use the term convergence complexity to denote the ability of a high-dimensional MCMC scheme to draw samples from the posterior, and how that ability changes as the dimension of the parameter set grows.
Direct total variation bounds for MCMC are sometimes presented in terms of the convergence order, for example, the work by [45] for a Gibbs sampler for a variance components model. However, current methods for obtaining total variation bounds of such MCMCs typically proceed as if the dimension of the parameter, p, and the sample size, n, were fixed. It is thus important to bridge the gap between statistics-style convergence bounds and computer-science-style complexity results.
In one direction, [41] connect known results about diffusion limits of MCMC to the computer science notion of algorithm complexity. They show that any weak limit of a Markov process implies a corresponding complexity bound in an appropriate metric. For example, under appropriate assumptions, in p dimensions, the Random-Walk Metropolis algorithm takes O(p) iterations (see also [56]) and the Metropolis-adjusted Langevin algorithm (MALA) takes O(p^{1/3}) iterations to converge to stationarity. This paper considers how to obtain MCMC quantitative convergence bounds that can be translated into tight complexity bounds in high-dimensional settings. At first glance, it may seem that an approach to answering the question of convergence complexity is provided by the drift-and-minorization method of [44]. However, [38] demonstrate that, somewhat problematically, a few specific upper bounds in the literature obtained by the drift-and-minorization method tend to 1 as n or p tends to infinity. For example, by directly translating the existing work by [6,26], which are both based on the general approach of [44], [38] show that the "small set" gets large fast as the dimension p increases. And this seems to happen generally when the drift-and-minorization approach is applied to statistical problems. [38] also discuss special cases in which the method of [44] can still be used to obtain tight bounds on the convergence rate. However, the conditions proposed in [38] are very restrictive. First, they require that the MCMC algorithm to be analyzed is a Gibbs sampler. Second, the Gibbs sampler must have only one high-dimensional parameter, which must be drawn in the last step of the Gibbs sampling cycle. Unfortunately, other than some tailored examples [38], most realistic MCMC algorithms do not satisfy these conditions. It is unclear whether some particular drift functions lead to bad complexity bounds or whether the drift-and-minorization approach itself has some inherent limitations. It is therefore the hope of [38] that proposals and developments of new ideas analogous to those of [44], suitable for high-dimensional settings, can be motivated.
In this paper, we attempt to address concerns about obtaining quantitative bounds that can be translated into tight complexity bounds. We note that although [38] provide evidence for the claim that many published bounds have poor dependence on n and p, the statistics literature has not focused on controlling the complexity order in n and p. We give some intuition for why most directly translated complexity bounds are quite loose, and provide advice on how to obtain tight complexity bounds for high-dimensional Markov chains. The key ideas are that (1) the drift function should be small near the region of concentration of the posterior in high dimensions, and (2) "bad" states, which have poor drift properties when n and/or p gets large, should be ruled out when establishing the drift condition. In order to get tight complexity bounds, we propose a modified drift-and-minorization approach that establishes generalized drift conditions on subsets of the state space, called the "large sets", instead of on the whole state space; see Section 2. The "large sets" are chosen to rule out some "bad" states which have poor drift properties when the dimension of the state space gets large. By establishing the generalized drift condition, a new quantitative bound is obtained, which is composed of two parts. The first part is an upper bound on the probability that the Markov chain visits states outside of the "large set"; the second part is an upper bound on the total variation distance of a constructed restricted Markov chain defined only on the "large set". In order to obtain good complexity bounds in high-dimensional settings, as the dimension increases, the family of drift functions should be chosen such that the function values are small near the region of concentration of the posterior, which we will define formally as a "fitted family of drift functions", and the "large sets" should be adjusted depending on n and p to balance the complexity orders of the two parts.
As a demonstration, we analyze three Gibbs samplers and obtain complexity bounds. In the first two examples, we demonstrate how to choose the "fitted family of drift functions". In the third example, we demonstrate the use of a "fitted family of drift functions" together with "large sets". More specifically, we show in Section 3.3 that a certain realistic Gibbs sampler related to the James-Stein estimator converges in O(1) iterations; see Theorem 3.7. As far as we know, this is the first successful example of analyzing the convergence complexity of a non-trivial realistic MCMC algorithm using the (modified) drift-and-minorization approach. Several months after we uploaded this manuscript to arXiv, [37] successfully analyzed another realistic MCMC algorithm using the drift-and-minorization approach. Although the analysis by [37] does not make use of the "large set" technique proposed in this paper, they do make use of a "fitted family of drift functions", for which they use an informal concept called a "centered drift function". We explain in this paper that when there exist some "bad" states, using a "fitted family of drift functions" might not be enough to establish a tight complexity bound. For example, for the Gibbs sampler we successfully analyze in Section 3.3, it is unknown how to obtain a tight complexity bound by the traditional drift-and-minorization approach or other approaches. This is confirmed in a later study by [10]. To the best of our knowledge, our approach using the "large set" is the only successful approach so far for getting a tight complexity bound for this example. For another successful example using the "large set", we refer to recent work in [57] on high-dimensional Bayesian variable selection. An important message from the successful analysis of several MCMC examples using the "large set" together with a "fitted family of drift functions" is that complexity bounds can be obtained even without any particular form of non-deteriorating convergence bounds. Previous attempts in the literature to study how the geometric convergence rate behaves as a function of p and n are incomplete. It is our hope that our approach can be employed in many other specific examples to obtain quantitative bounds that can be translated into complexity bounds in high-dimensional settings.
Notation: We use →d (convergence in distribution) for weak convergence and π(·) to denote the stationary distribution of the Markov chain. The total variation distance is denoted by ‖·‖_var and the law of a random variable X by L(X). We adopt the Big-O, Little-o, Theta, and Omega notations. Formally, T(n) = O(f(n)) if and only if for some constants c and n_0, T(n) ≤ cf(n) for all n ≥ n_0; T(n) = Ω(f(n)) if and only if for some constants c and n_0, T(n) ≥ cf(n) for all n ≥ n_0; T(n) = Θ(f(n)) if and only if both T(n) = O(f(n)) and T(n) = Ω(f(n)); and T(n) = o(f(n)) if and only if T(n) = O(f(n)) and T(n) ≠ Ω(f(n)).
Generalized Geometric Drift Conditions and Large Sets.
Scaling classical MCMCs to very high dimensions can be problematic. Even if a chain is geometrically ergodic for fixed n and p, the convergence of Markov chains may still be quite slow as p → ∞ and n → ∞. Throughout the paper, we assume the Markov chain is positive Harris recurrent, aperiodic, and π-irreducible, where π denotes the unique stationary distribution. For a Markov chain {X^{(i)}, i = 0, 1, …} on a state space (X, B) with transition kernel P(x, ·), defined by
P(x, B) = P(X^{(i+1)} ∈ B | X^{(i)} = x),  ∀x ∈ X, B ∈ B,    (1)
the general method of [44] proceeds by establishing a drift condition
E[f(X^{(1)}) | X^{(0)} = x] ≤ λf(x) + b,  ∀x ∈ X,    (2)
where f : X → R_+ is the "drift function", for some 0 < λ < 1 and b < ∞; and an associated minorization condition
P(x, ·) ≥ ε Q(·),  ∀x ∈ R,    (3)
where R := {x ∈ X : f(x) ≤ d} is called the "small set", and d > 2b/(1 − λ), for some ε > 0 and some probability measure Q(·) on X. Then [44, Theorem 12] states that under both the drift and minorization conditions, if the Markov chain starts from an initial distribution ν, then for any 0 < r < 1, we have
‖L(X^{(k)}) − π‖_var ≤ (1 − ε)^{rk} + α^{−k}(αΛ)^{rk} (1 + E_ν[f(x)] + b/(1 − λ)),    (4)

where α^{−1} = (1 + 2b + λd)/(1 + d), Λ = 1 + 2(λd + b), and E_ν[f(x)] denotes the expectation of f(x) over x ∼ ν(·).
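To make these conditions and this bound concrete, consider a simple illustration (ours, not from [44]): for the Gaussian autoregression X^{(1)} = ρX^{(0)} + Z with Z ∼ N(0, 1) and |ρ| < 1, the drift function f(x) = x² gives E[f(X^{(1)}) | X^{(0)} = x] = ρ²x² + 1, so Eq. (2) holds with λ = ρ² and b = 1; and Eq. (3) holds on R = {x : x² ≤ d} with ε = ∫ min_{|x| ≤ √d} φ(y − ρx) dy > 0, where φ is the standard normal density. Given such constants, the right-hand side of Eq. (4) is straightforward to evaluate numerically. The following Python sketch (our code; all names are ours) computes it, optimizing over the free parameter r on a grid:

```python
import numpy as np

def rosenthal_bound(k, lam, b, d, eps, Ef0, r_grid=None):
    """Evaluate the right-hand side of Eq. (4) for given drift/minorization
    parameters, minimizing over the free parameter 0 < r < 1 on a grid.

    k   : number of iterations
    lam : drift rate lambda in Eq. (2)
    b   : drift constant in Eq. (2)
    d   : small-set level; Eq. (4) requires d > 2b/(1 - lambda)
    eps : minorization volume epsilon in Eq. (3)
    Ef0 : E_nu[f(x)] under the initial distribution nu
    """
    assert d > 2.0 * b / (1.0 - lam), "need d > 2b/(1 - lambda)"
    alpha_inv = (1.0 + 2.0 * b + lam * d) / (1.0 + d)  # alpha^{-1} < 1
    Lam = 1.0 + 2.0 * (lam * d + b)                    # Lambda
    const = 1.0 + Ef0 + b / (1.0 - lam)
    if r_grid is None:
        r_grid = np.linspace(0.01, 0.99, 99)
    # Note: alpha^{-k} (alpha * Lam)^{rk} = alpha_inv^{(1-r)k} * Lam^{rk}.
    vals = (1.0 - eps) ** (r_grid * k) \
        + alpha_inv ** ((1.0 - r_grid) * k) * Lam ** (r_grid * k) * const
    return float(vals.min())
```

For example, rosenthal_bound(k=200, lam=0.25, b=1.0, d=10.0, eps=0.1, Ef0=1.0) evaluates the bound after 200 iterations for the autoregression above with ρ = 0.5, d = 10, and a placeholder value ε = 0.1.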
However, it is observed, for example, in [38,37], that for many specific bounds obtained by the drift-and-minorization method, when the dimension gets larger, the typical scenario for the drift condition of Eq. (2) seems to be λ going to one, and/or b getting much larger. This makes the "size" of the small set R grow too fast, which in turn makes the minorization volume ε go to 0 exponentially fast. In the following, we give an intuitive explanation of what makes a "good" drift condition in high-dimensional settings.
2.1. Intuition. It is useful to think of the drift function f(x) as an energy function [24]. The drift condition in Eq. (2) then implies that the chain tends to "drift" toward states which have "lower energy" in expectation. It is well known that a "good" drift condition is established when both λ and b are small. Intuitively, λ being small implies that when the chain is in a "high-energy" state, it tends to "drift" back to "low-energy" states fast; and b being small implies that when the chain is in a "low-energy" state, it tends to remain in a "low-energy" state in the next iteration too. In a high-dimensional setting, as the dimension grows to infinity, for a collection of drift conditions to be "good" we would like them to satisfy the following two properties: P1. λ is small, in the sense that it converges to 1 slowly or is bounded away from 1;
P2. b is small, in the sense that it grows at a slower rate than do typical values of the drift function.
We explain the intuition behind the properties and define a new notion of "fitted family of drift functions" in this subsection and later demonstrate how to establish the properties using examples in Section 3. One way to understand this intuition is to think of it as controlling the complexity order of the size of the "small set", R = {x ∈ X : f (x) ≤ d}. Since d > 2b/(1 − λ), if λ converges to 1 slowly or is bounded away from 1, and if b is growing at a slower rate than typical values of f (x) (we will illustrate the meaning of "typical values" later in examples), then the size of the small set parameter d can be chosen to have a small complexity order on n and/or p. This in turn makes the minorization volume ǫ converge to 0 sufficiently slowly (or even remain bounded away from 0).
Next, we define the notion of "fitted family of drift functions", which is somewhat related to the informal concept of a "centered drift function" in [37]. DEFINITION 2.1. Let π_p be the target distribution when the dimension of the state space is p. We call a collection of non-negative functions,
{f_p(·)}_{p=1}^∞, a fitted family if lim_{p→∞} E_{π_p}[f_p(x)] = 0.    (5)
Then a fitted family of drift functions is just a fitted family of functions which also satisfies a family of (generalized) drift conditions. Note that the fitted family of functions can also depend on n if n is a function of p. In the rest of the paper, we may simply write π_p as π and f_p(x) as f(x) for simplicity. However, we should keep in mind that π and f(x) then actually denote a family of target distributions and a fitted family of drift functions when we study the behavior of the Markov chains as p → ∞.
Now we explain the intuition for why we should use a fitted family of drift functions in high-dimensional settings. For clarity, we first assume that λ is bounded away from 1, and focus on the conditions required for b to grow at a slower rate than typical values of f(x). Assume for definiteness that p is fixed and n → ∞, and that the drift function is scaled in such a way that f(x) = O(1) and there is a fixed typical state x̃ with f(x̃) = Θ(1) regardless of dimension. Then, to satisfy property P2 above, we require that b = o(1). On the other hand, taking expectations over x ∼ π(·) on both sides of Eq. (2) yields b ≥ (1 − λ)E_π[f(x)], so b = Ω(E_π[f(x)]). To make b = o(1), the drift function should therefore be chosen such that E_π[f(x)] → 0, which is exactly the definition of the fitted family of drift functions. Therefore, to get a small b in a high-dimensional setting, we require a (properly scaled) drift function f(·) whose values f(x), where x ∼ π(·), concentrate around 0, which is guaranteed by the fitted family of drift functions.
Note that the fitted family of drift functions for high-dimensional settings can be very different from traditional "good" drift functions. For example, to study a Markov chain {X^{(k)}} sampling a fixed-dimensional target π, one might think f(x) = π(x)^{−α} for some fixed number α > 0 is a good candidate for the drift function. However, this is not a good intuition for choosing the fitted family of drift functions for the high-dimensional settings. The following is a toy example.

EXAMPLE 2.2. Consider π being the standard multivariate Gaussian N(0, I_p). One choice for the drift function could be f(x) = exp(‖x‖²) − 1 or f(x) = ‖x‖²/p (which is similar to the one used in [44, Example 1]). However, a better fitted family of drift functions in high-dimensional settings could be

f(x) = (‖x‖²/p − 1)². (6)
This is because, under X ∼ N(0, I_p), ‖X‖²/p concentrates around 1. The family of drift functions {(‖x‖²/p − 1)²}_{p=1}^∞ exactly fits this concentration phenomenon. The traditional popular choices of drift functions do not have this property.
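As a quick numerical illustration (a minimal Python sketch; the exact value 2/p below follows from ‖X‖² ∼ χ²_p, and the sample sizes are arbitrary), one can compare the stationary expectations of the traditional and fitted drift functions:

```python
import numpy as np

rng = np.random.default_rng(0)
for p in (10, 100, 1000):
    X = rng.standard_normal((10_000, p))   # draws from pi = N(0, I_p)
    r = (X ** 2).sum(axis=1) / p           # ||x||^2 / p per draw
    # Traditional choice: E_pi[||x||^2 / p] = 1 for every p (does not shrink).
    # Fitted choice: E_pi[(||x||^2/p - 1)^2] = Var(chi^2_p)/p^2 = 2/p -> 0.
    print(p, r.mean(), ((r - 1) ** 2).mean(), 2 / p)
```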
Note that in the existing literature, the drift functions used to establish the drift condition usually don't satisfy the definition of fitted family of drift functions. This is because in the traditional setting where n and p are fixed, a "good" drift condition is established whenever λ and b are small enough for specific fixed values of n and p. The complexity orders of λ and b as functions of n and/or p are not essential, so fitting the concentration region of the posterior as dimension increases is not necessary. As a result, many existing quantitative bounds cannot be directly translated into tight complexity bounds, since the size of the small set does not have a small complexity order on n and/or p. At the very least, one has to re-analyze such MCMC algorithms using a fitted family of drift functions.
Next, we focus on establishing a λ that is either bounded away from 1 or converges to 1 slowly, assuming a fitted family of drift functions is already chosen. Intuitively, λ describes the behavior of the Markov chain when its current state has a "high energy". If λ goes to 1 very fast as n and/or p goes to infinity, this may suggest the existence of some "bad" states, i.e., states which have "high energy" but whose drift property becomes poor as n and/or p gets large. Therefore, in high dimensions, once the Markov chain visits one of these "bad" states, it only slowly drifts back toward the corresponding small set. Since the drift condition in Eq. (2) must hold for all x ∈ X, the existence of "bad" states forces λ to go to 1 very fast. And since the small set is defined as R = {x ∈ X : f(x) ≤ d} where d > 2b/(1 − λ), the scenario in which λ → 1 very fast forces R to become very large, and hence the minorization volume ǫ goes to zero very fast. One perspective on this problem is that the definition of the drift condition in Eq. (2) is too restrictive, since it must hold for all states x, even the bad ones.
In summary, we are able to establish a small b as in P2 above by using a fitted family of drift functions. However, the other difficulty in establishing a small λ as in P1 above is the existence of some "bad" states when n and/or p gets large. Since the traditional drift condition defined in Eq. (2) is restrictive, the traditional drift-and-minorization method is not flexible enough to deal with these "bad" states. In the following, we instead propose a modified drift-and-minorization approach using a generalized drift condition, where the drift function is defined only in a "large set". This allows us to rule out those "bad" states in high-dimensional cases.
2.2. New Quantitative Bound.
We first relax the traditional drift condition and define a generalized drift condition which is established only on a subset of the state space. Recall that {X^{(k)}} denotes the Markov chain on a state space (X, B) with a transition kernel P(x, ·), ∀x ∈ X. Let P^k(x, ·) be the k-step transition kernel. Denote R_0 as the "large set", i.e., R_0 ∈ B is a subset of X.

DEFINITION 2.3. We say the Markov chain satisfies a generalized drift condition on the "large set" R_0 if there exist 0 < λ < 1, 0 ≤ b < ∞, and a non-negative function f such that
E(f(X^{(1)}) | X^{(0)} = x) ≤ λ f(x) + b, ∀x ∈ R_0, (7)
and (C1) or (C1') holds.
(C1). The "large set" R_0 is defined by R_0 = {x ∈ X : f(x) ≤ d_0} for some d_0 > 0.
(C1'). The transition kernel P(x, ·) is a composition of reversible (with respect to π) steps P = ∏_{i=1}^I P_i, i.e.,

P(x, dy) = ∫_{(x_1, …, x_{I−1}) ∈ X×⋯×X} P_1(x, dx_1) P_2(x_1, dx_2) ⋯ P_I(x_{I−1}, dy),

where I ≥ 1 is a fixed integer, and
E(f(X̃^{(1)}) | X̃^{(0)} = x) ≤ E(f(X^{(1)}) | X^{(0)} = x), ∀x ∈ R_0, (8)
where {X̃^{(k)}} denotes a restricted Markov chain with transition kernel ∏_{i=1}^I P̃_i, where P̃_i(x, dy) := P_i(x, dy) for x, y ∈ R_0, x ≠ y, and P̃_i(x, x) := 1 − P_i(x, R_0\{x}), ∀x ∈ R_0.

REMARK 2.4. Note that only one of (C1) and (C1') is required. For (C1'), the Markov chain needs to be either reversible or expressible as a composition of reversible steps. This condition is very mild since it is satisfied by most realistic MCMC algorithms. For example, full-dimensional and random-scan Metropolis-Hastings algorithms and random-scan Gibbs samplers are reversible, and their deterministic-scan versions can be written as a composition of reversible steps. For (C1), it is required that the "large set" is constructed using the drift function in a certain way, but there is no restriction on the transition kernel P. If R_0 is constructed as in (C1), then Eq. (8) automatically holds. Therefore, one should verify (C1') if one hopes to have more flexibility for constructing R_0 than the particular way in (C1). In particular, if the drift function f(x) depends on all coordinates, it might be hard to control all the states in {x ∈ X : f(x) ≤ d_0} as the dimension increases. Then (C1') might be preferable.

REMARK 2.5. To verify (C1') in Definition 2.3, one has to check the new inequality
E(f(X̃^{(1)}) | X̃^{(0)} = x) ≤ E(f(X^{(1)}) | X^{(0)} = x).

This inequality in (C1') implies the "large set" R_0 should be chosen such that the states in R_0 have "lower energy" in expectation. This is intuitive since we assume the "bad" states all have "high energy" and poor drift properties when n and/or p gets large. One trick is to choose R_0 by ruling out some (but not too many) states with "high energy" even if those states are not "bad". In Section 3.3, we demonstrate the use of this trick to select the "large set" R_0 so that E(f(X̃^{(1)}) | X̃^{(0)} = x) ≤ E(f(X^{(1)}) | X^{(0)} = x) can be easily verified. The constructed R_0 in Section 3.3 satisfies (C1') but not (C1).
Next, we propose a new quantitative bound, which is based on the generalized drift condition on a "large set".

THEOREM 2.6. Suppose the Markov chain satisfies the generalized drift condition in Definition 2.3 on a "large set" R_0. Furthermore, for a "small set" R := {x ∈ X : f(x) ≤ d} where d > 2b/(1 − λ), the Markov chain also satisfies a minorization condition:
P(x, ·) ≥ ǫQ(·), ∀x ∈ R, (9)
for some ǫ > 0 and some probability measure Q(·) on X. Finally, suppose the Markov chain begins with an initial distribution ν such that ν(R_0) = 1. Then, for any 0 < r < 1, we have
‖L(X^{(k)}) − π‖_var ≤ (1 − ǫQ(R_0))^{rk} + [(αΛ)^{rk}(1 + E_ν[f(x)] + b/(1 − λ)) − α^{rk}] / (α^k − α^{rk}) + k π(R_0^c) + Σ_{i=1}^k νP^i(R_0^c), (10)

where α^{−1} = (1 + 2b + λd)/(1 + d), Λ = 1 + 2(λd + b), and νP^i(·) := ∫_X P^i(x, ·) ν(dx).
PROOF. See Appendix A.
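For concreteness, the right-hand side of Eq. (10) can be evaluated numerically once the drift and minorization quantities are available. The following Python sketch is only illustrative: the arguments pi_tail and nu_tail stand for π(R_0^c) and Σ_{i=1}^k νP^i(R_0^c), which must be bounded separately, rk is treated as a real exponent, and the choice of d is just one admissible value.

```python
def bound_eq10(k, r, lam, b, eps, qR0, E_nu_f, pi_tail, nu_tail):
    """Evaluate the upper bound of Eq. (10) for iteration k and 0 < r < 1."""
    d = 2.01 * b / (1 - lam)                 # any d > 2b/(1 - lam) is admissible
    alpha = (1 + d) / (1 + 2 * b + lam * d)  # so alpha > 1
    Lam = 1 + 2 * (lam * d + b)
    geometric = (1 - eps * qR0) ** (r * k)
    numer = (alpha * Lam) ** (r * k) * (1 + E_nu_f + b / (1 - lam)) - alpha ** (r * k)
    denom = alpha ** k - alpha ** (r * k)
    return geometric + numer / denom + k * pi_tail + nu_tail
```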
REMARK 2.7. Note that the new bound in Theorem 2.6 assumes the Markov chain begins with an initial distribution ν such that ν(R_0) = 1. This assumption is not very restrictive since the "large set" ideally should include all "good" states. In high-dimensional settings, the Markov chain is not expected to converge fast beginning from an arbitrary state (see Section 3.3.2 for discussions on initial states). Furthermore, the use of "warm starts" has become popular recently, see e.g. [12]. However, it does not directly relate to the large set: we only require that the initial distribution ν is supported in the large set. For example, ν can be a point mass. The term Q(R_0) in Eq. (10) can be replaced by any lower bound on Q(R_0). Since the "large set" is ideally chosen to include all "good" states, one can expect Q(R_0) to be at least bounded away from 0. In particular, if we have established an upper bound for P(x, R_0^c) with x ∈ R, then we can apply ǫQ(R_0^c) ≤ P(x, R_0^c) to get an upper bound on Q(R_0^c), which can be turned into a lower bound on Q(R_0).

REMARK 2.8. In the proof of Theorem 2.6, the generalized drift condition in Definition 2.3 essentially implies a traditional drift condition as in Eq. (2) for a constructed "restricted" Markov chain defined only on the "large set" R_0. The first two terms in the upper bound Eq. (10) are indeed an upper bound on the total variation distance of this constructed "restricted" Markov chain. Note that the general idea of studying the restriction of a Markov chain to some "good" subset of the state space has appeared in the literature, such as [32, 13, 21, 15, 31, 51, 33] and the references therein, in which different ways of restriction have been considered for different reasons. For example, [4] studied the rate of convergence of the MALA algorithm by a similar argument, which was later extended in [14] to study the contraction rate in Wasserstein distance w.r.t. a Gaussian reference measure. However, the argument in [4] is only for the MALA algorithm and the proof technique is based on constructing a restricted chain. Compared with [4], our Theorem 2.6 is for general MCMC algorithms with the weaker conditions (C1) and (C1').
In the proof, we use either a trace chain or a restricted chain depending on which condition is satisfied. Most importantly, the motivation of this work is to obtain tight complexity bounds, which is quite different from [4]. In Theorem 2.6, the goal of considering a "good" subset of the state space is to obtain better control of the dependence on n and p in the upper bound.

REMARK 2.9. The last two terms in the upper bound Eq. (10) give an upper bound on the probability that the Markov chain visits R_0^c starting from either the initial distribution ν or the stationary distribution π. Therefore, the proposed method in Theorem 2.6 is a generalized version of the classic drift-and-minorization method [44], obtained by allowing the drift condition to be established on a chosen "large set". Indeed, if we choose R_0 = X, then Eq. (10) is almost the same as Eq. (4), except slightly tighter due to the terms α^{rk}.

REMARK 2.10. One more note about Eq. (10) is that the new bound does not decrease exponentially with k. For example, the term k π(R_0^c) increases linearly with k for fixed n and p. We emphasize that we do not aim to prove a Markov chain is geometrically ergodic here. An upper bound which decreases exponentially with k for fixed n and p is not guaranteed to have a tight complexity order on n and/or p, as discussed in [38]. Instead, our new bound in Eq. (10) is designed for controlling complexity orders in n and/or p for high-dimensional Markov chains. In Section 3.3, we obtain a tight complexity bound for a Gibbs sampler of a simple random effect model related to the James-Stein estimator. Previous unsuccessful attempts for the same Gibbs sampler (see [10]) focused on how to obtain convergence bounds with geometric/polynomial rates as functions of p and n. The successful analysis of the Gibbs sampler in the current paper implies that complexity bounds can be obtained even without any particular form of non-deteriorating convergence bounds.
2.3. Complexity Bound.
Note that mixing time is often defined uniformly over initial states, which is difficult to extend to general state spaces. In this paper, the term "mixing time" is defined depending on the initial state. The formal definition is given in the following.

DEFINITION 2.11. For any 0 < c < 1, we define the mixing time K_{c,x} of a Markov chain {X^{(k)}} with initial state x by

K_{c,x} := arg min_k {‖L(X^{(k)}) − π‖_var ≤ c} subject to X^{(0)} = x. (11)
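Operationally, Definition 2.11 is just a first-passage time of the (estimated) total variation distance below the level c; a minimal helper, assuming one already has a sequence of distance estimates:

```python
def mixing_time(tv_distances, c):
    """Return the first k with ||L(X^(k)) - pi||_var <= c, or None if
    the level c is never reached within the provided sequence."""
    for k, dist in enumerate(tv_distances):
        if dist <= c:
            return k
    return None
```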
The proposed new bound in Theorem 2.6 can be used to obtain complexity bounds in high-dimensional settings. The key is to balance the complexity orders of k in n and/or p required for both the first two terms and the last two terms of the upper bound in Eq. (10) to be small.
The complexity order of k in n and/or p required for the first two terms to be small can be controlled by adjusting the "large set". The "large set" should be kept as large as possible provided that the "bad" states have been ruled out. For the last two terms to be small, we should determine the growth rate of k as a function of n and p so that

k π(R_0^c) + Σ_{i=1}^k νP^i(R_0^c) → 0. (12)
This may involve (carefully) bounding the tail probability of the transition kernel, depending on the definition of the "large set" and the complexity order one aims to establish.
We give a direct corollary of Theorem 2.6 on mixing time in terms of p. In general, mixing time in terms of both n and p can be obtained using Theorem 2.6.

COROLLARY 2.12. Suppose Theorem 2.6 has been established for every dimension p. Let k̄_p and k_p be sequences of positive integers as functions of p such that both k̄_p → ∞ and k_p → ∞ as p → ∞. Furthermore, lim_{p→∞} (k̄_p − k_p) ≥ 0 and

k̄_p π(R_0^c) + Σ_{i=1}^{k̄_p} νP^i(R_0^c) → 0, (13)

[log(2 + E_ν[f(x)] + b/(1 − λ)) / (−log(1 − ǫQ(R_0)))] · [log(αΛ/(1 − ǫQ(R_0))) / log(α)] = O(k_p). (14)

Then the mixing time of the MCMC starting from ν has the complexity order O(k_p).
Using Corollary 2.12, one can plug in the orders of b, 1 − λ, and ǫ to get the complexity bound. The following result follows directly from Corollary 2.12.

COROLLARY 2.13. Suppose Theorem 2.6 has been established for every dimension p and c_1, …, c_5 are non-negative constants such that 1/ǫ = O(p^{c_1}), 1/(1 − λ) = O(p^{c_2}), and b = O(p^{c_3}). Also, c_4 > c_1 and

p^{c_4} π(R_0^c) + Σ_{i=1}^{p^{c_4}} νP^i(R_0^c) → 0. (15)

Furthermore, if Q(R_0^c) = o(1) and E_ν[f(x)] = O(p^{c_5}), then the mixing time starting from ν has the complexity order O(p^{c_1} log(p^{c_5} + p^{c_2+c_3}) log(p^{c_2+c_3})) = O(p^{c_1} (log p)²).
We will discuss several MCMC examples in Section 3 to demonstrate the use of the fitted family of drift functions and "large sets" to get complexity bounds.
2.4. Discussions. We finish this section by giving a few more remarks and discussions on our main results.
• Geometric ergodicity: The Markov chain to be analyzed in Theorem 2.6 does not have to be geometrically ergodic. The proof of Eq. (10) only implies that, after ruling out "bad" states, a constructed "restricted" Markov chain defined on the "large set" is geometrically ergodic. Therefore, the new bound in Eq. (10) can be used to analyze non-geometrically ergodic high-dimensional Markov chains.
• Relation to spectral gaps: Many approaches in MCMC literature bound the spectral gap of the corresponding Markov operator [29,53,30,54,55]. However, on general state spaces, the spectral gap is zero for Markov chains which are not geometrically ergodic, even if they do converge to stationarity. Our results do not require the Markov chain to be geometrically ergodic. Instead, we only require the constructed "restricted" chain on the "large set" in our proof is geometrically ergodic. Therefore, we cannot connect our results to bounds on spectral gaps. Furthermore, we do not require the Markov chain to be reversible. So our results apply even in the non-reversible cases, which makes spectral gaps harder to study or interpret. For these reasons, we do not present the main results in terms of spectral gaps. • Other types of drift condition: In this paper, we use the drift condition of the type in [44].
There is another popular drift condition (e.g., in [43]) and the connection between the two is well-known (see [25,Lemma 3.1]). Therefore, it is straightforward to establish our main result using the other drift condition in [43]. • Complexity of MCMC estimators: It would be nice to obtain rate of convergence (or nonasymptotic bounds) for general MCMC estimators. The proof techniques in the existing literature on establishing rate of convergence of MCMC estimators [2,1,36,28,27,48,49,50] requires certain conditions such as geometric/polynomial drift conditions, or spectral gaps. However, our result doesn't require establishing a geometric/polynomial drift condition or a spectral gap. Therefore, it is not clear how to connect our complexity results to complexity of other MCMC estimators. This is certainly an interesting direction for future work.
3. Gibbs Sampler Convergence Bound.
In this section, we study several examples of Gibbs sampling to analyze the convergence complexity using the proposed approach. In Section 3.1 and Section 3.2, we consider a simple Gaussian example and a hierarchical Poisson example. Simplified versions of both examples for fixed dimensions were originally studied in [44, Example 1 and Example 2], and the original mixing times have poor complexity orders in terms of dimension. We study their extensions in the high-dimensional setting and obtain tight complexity bounds by choosing fitted families of drift functions. In Section 3.3, we study the MCMC model in [46] which is related to the James-Stein estimator. We demonstrate how to use both the fitted family of drift functions and the "large sets" to obtain a tight complexity bound.
Note that although the bound in Theorem 2.6 admits different "admissible" growth combinations of b, 1/(1 − λ), and 1/ǫ (see also Corollary 2.13), the minorization volume ǫ relies on the small set, which is determined by both b and λ. Furthermore, if b is fairly large, it is not surprising that λ can be bounded away from 1. Therefore, we can summarize our general principle in analyzing all three examples as follows.
1. We first focus on choosing a fitted family of drift functions so that E[f(x)] ≤ b, where b has a small order.
2. Next, we establish the drift condition. If the λ from the drift condition goes to 1 too fast, we apply the "large set" to rule out certain states. After the first two steps, we get a generalized drift condition which leads to a small set of reasonable size.
3. Finally, we focus on establishing a (potentially multi-step) minorization condition to obtain an ǫ which goes to zero slowly (or remains bounded away from 0).
3.1. A Gaussian Toy Example.
A bivariate Gaussian model was studied in [44, Example 1] as a demonstration of the drift-and-minorization approach. In this subsection, we study an extension of this example to the high-dimensional setting. Suppose our target π is N(µ, Σ), a 2p-dimensional multivariate Gaussian, where µ = (µ_1ᵀ, µ_2ᵀ)ᵀ and Σ = [Σ_{11}, Σ_{12}; Σ_{21}, Σ_{22}] in block form.
To sample from the target distribution, we use a two-step Gibbs sampler as in [44, Example 1]. Writing X = (X_1ᵀ, X_2ᵀ)ᵀ, the conditional distribution can be written as

X_1 | X_2 = x_2 ∼ N(µ_1 + Σ_{12}Σ_{22}^{−1}(x_2 − µ_2), Σ_{11} − Σ_{12}Σ_{22}^{−1}Σ_{21}), (16)

and similarly for X_2 | X_1.
For simplicity, we only consider the setting with µ_1 = µ_2 = 0, Σ_{11} = Σ_{22} = I_p, and Σ_{12} = Σ_{21} = (1/2) I_p. It is straightforward to extend our analysis to general cases of µ and Σ. The corresponding Gibbs sampler is
X_1^{(1)} ∼ N((1/2) X_2^{(0)}, (3/4) I_p), (17)
X_2^{(1)} ∼ N((1/2) X_1^{(1)}, (3/4) I_p). (18)
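A minimal Python simulation of this two-step sampler (Eqs. (17)-(18)) is sketched below; under stationarity, ‖X_2‖²/p concentrates around 1, which motivates the fitted drift function introduced next.

```python
import numpy as np

def gaussian_gibbs(p, iters, seed=0):
    """Two-step Gibbs sampler of Eqs. (17)-(18) for the 2p-dimensional
    Gaussian target with Sigma_11 = Sigma_22 = I_p, Sigma_12 = I_p / 2."""
    rng = np.random.default_rng(seed)
    x2 = np.zeros(p)  # initial X_2^{(0)}
    for _ in range(iters):
        x1 = 0.5 * x2 + np.sqrt(0.75) * rng.standard_normal(p)  # Eq. (17)
        x2 = 0.5 * x1 + np.sqrt(0.75) * rng.standard_normal(p)  # Eq. (18)
    return x1, x2
```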
Note that X_1^{(0)} is not used in the updates. If we choose a drift function similar to the one used in [44], such as

f_old(X) := ‖X_2‖²/p, (19)
then, as X_2^{(1)} ∼ N((1/4) X_2^{(0)}, (3/4)(1 + 1/4) I_p), it can be easily verified that the following drift condition can be established:

E[f_old(X^{(1)})] ≤ (1/16) f_old(X^{(0)}) + 1. (20)
However, as ‖X_2‖²/p concentrates around 1 under stationarity, this drift condition leads to a small set {X : ‖X_2‖²/p = O(1)}, which includes states where ‖X_2‖²/p is much smaller than 1.
In our analysis, we instead choose a fitted family of drift functions which leads to a small set of much smaller size:

f_new(X) := (‖X_2‖²/p − 1)². (21)
We can establish the following drift condition:

E[f_new(X^{(1)})] ≤ (1/4) f_new(X^{(0)}) + O(1/p). (22)
The corresponding small set {X : 1 − C/√p ≤ ‖X_2‖²/p ≤ 1 + C/√p}, for some constant C, fits exactly the concentration region of the target as p → ∞. Using the above drift condition and a multi-step minorization condition, we can show that the mixing time is O(log(p)). Our main result is the following.
THEOREM 3.1. For the Gibbs sampler defined above, there exist constants C > 0, C_2 > 0, and a fixed constant γ < 1 such that

‖L(X^{(n)}) − π‖_var ≤ C [1 + (‖X_2^{(0)}‖²/p − 1)²] γ^k, (23)

where the number of steps is n = ⌊k C_2 log(p)⌋ + 1 for any positive integer k.
PROOF. See Appendix F.
This implies the following complexity bound directly. COROLLARY 3.2. Under the assumptions of Theorem 3.1, the mixing time of the Gibbs sampler is O(log(p)).
3.2. A Hierarchical Poisson Model.
We study a hierarchical Poisson model originally used for analyzing a realistic data set in [17]. A Gibbs sampler for this model was studied by [17], and a (numerical) quantitative bound was obtained using the drift-and-minorization approach in [44, Example 2]. In this subsection, we study the Gibbs sampler in the high-dimensional setting.
Suppose the data has the form {Y_i, t_i}_{i=1}^n,
where Y_i represents the number of failures over a time interval t_i for each of n nuclear pumps. One can model the failures as a Poisson process with parameter λ_i. Thus, during an observation period of length t_i, the number of failures Y_i follows a Poisson distribution with parameter λ_i t_i. We are interested in inferring the parameters λ = (λ_1, …, λ_n) from the data {Y_i, t_i}. We follow a hierarchical Bayesian approach where we assume that λ_1, …, λ_n are conditionally independent given a hyperparameter β and follow a gamma distribution with density
π(λ_i | β) = (β^α / Γ(α)) λ_i^{α−1} exp(−βλ_i), (24)
where α is a constant. We assume further that the hyperparameter β itself follows a gamma prior distribution Ga(ρ, δ), where ρ and δ are fixed constants:
π(β) = (δ^ρ / Γ(ρ)) β^{ρ−1} exp(−δβ). (25)
For simplicity, in this example we assume the time intervals are unit, that is, t i = 1 for all i. It is straightforward to extend our analysis to general cases of time intervals.
Overall, the model can be written as

Y_i | λ, β ∼ Poisson(λ_i), i = 1, …, n, (26)
λ_i | β ∼ Ga(α, β) independently, i = 1, …, n, (27)
β ∼ Ga(ρ, δ). (28)

In this example, we have p = n + 1 and x = (λ_1, …, λ_n, β). The posterior satisfies
π(x | Y_1, …, Y_n) ∝ π(β) ∏_{i=1}^n π(λ_i | β) (λ_i^{Y_i} / Y_i!) exp(−λ_i). (29)
Note that this multidimensional distribution is rather complicated, and it is not obvious how rejection sampling or importance sampling could be used efficiently in this context. As the conditional distributions π(λ_1, …, λ_n | β, {Y_i}) and π(β | {λ_i}, {Y_i}) admit standard parametric forms, we can write a Gibbs sampler with the following updating order:
π(λ_1^{(k+1)}, …, λ_n^{(k+1)} | β^{(k)}, {Y_i}) = ∏_{i=1}^n π(λ_i^{(k+1)} | β^{(k)}, Y_i), (30)
λ_i^{(k+1)} | β^{(k)}, Y_i ∼ Ga(Y_i + α, 1 + β^{(k)}), i = 1, …, n, (31)
β^{(k+1)} | {λ_i^{(k+1)}}, {Y_i} ∼ Ga(ρ + nα, δ + Σ_{i=1}^n λ_i^{(k+1)}). (32)
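A minimal Python sketch of this Gibbs sampler (Eqs. (30)-(32)) is given below, assuming unit observation times t_i = 1 as in the text; the hyperparameter values in the signature are illustrative, not taken from the paper.

```python
import numpy as np

def pump_gibbs(Y, alpha=1.8, rho=0.1, delta=1.0, iters=1000, seed=0):
    """Gibbs sampler for the hierarchical Poisson model, Eqs. (30)-(32).
    Y is an integer array of failure counts; returns the final state."""
    rng = np.random.default_rng(seed)
    n = len(Y)
    beta = 1.0  # arbitrary initial value of the hyperparameter
    for _ in range(iters):
        # Eq. (31): lambda_i | beta, Y_i ~ Ga(Y_i + alpha, 1 + beta);
        # numpy's gamma uses shape and *scale* = 1 / rate.
        lam = rng.gamma(shape=Y + alpha, scale=1.0 / (1.0 + beta))
        # Eq. (32): beta | {lambda_i} ~ Ga(rho + n*alpha, delta + sum(lambda_i))
        beta = rng.gamma(shape=rho + n * alpha, scale=1.0 / (delta + lam.sum()))
    return lam, beta
```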
Next, we present the main result for this Gibbs sampler. The key step is to use a fitted family of drift functions:

f_n(x) := (Σ_i λ_i/(nα) − 1/β)². (33)
Our main result for this Gibbs sampler is as follows.
THEOREM 3.3. Suppose there exists a constant N such that for all n ≥ N the data satisfies Ȳ := (1/n) Σ_i Y_i ∈ [l, u], where l and u are two fixed constants such that 0 < l < u < ∞. Then there exists a constant C such that for large enough n and for all k, we have
‖L(X^{(k)}) − π‖_var ≤ [C + ((1/n) Σ_{i=1}^n λ_i^{(0)}/α − 1/β^{(0)})²] γ^k, (34)
where γ < 1 is a constant.
PROOF. See Appendix G.
Note that it is natural to make some assumptions on the observed data, since the posterior depends on the observed data and we are in fact studying a sequence of posteriors for the convergence complexity. In Theorem 3.3, we assume there exists a constant N such that for all n ≥ N the data satisfies Ȳ := (1/n) Σ_i Y_i ∈ [l, u], where l and u are two fixed constants with 0 < l < u < ∞. This assumption is quite weak. For example, it holds if the data is indeed generated from the model with some "true" parameters. Theorem 3.3 implies the following complexity bound directly.

COROLLARY 3.4. Under the assumptions of Theorem 3.3, for any initial state with f_n(x^{(0)}) = O(1), the mixing time of the Gibbs sampler is O(1).
3.3. A Random Effect Model Related to the James-Stein Estimator.
In this subsection, we concentrate on a particular MCMC model, which is related to the James-Stein estimator [46]:
Y_i | θ_i ∼ N(θ_i, σ_V²), 1 ≤ i ≤ n,
θ_i | µ, σ_A² ∼ N(µ, σ_A²), 1 ≤ i ≤ n,
µ ∼ flat prior on ℝ,
σ_A² ∼ IG(a, b), (35)
where σ_V² is assumed to be known, (Y_1, …, Y_n) is the observed data, and x = (σ_A², µ, θ_1, …, θ_n) are the parameters. Note that the number of parameters is p = n + 2 in this example. For simplicity, we will not mention p but only refer to n for this model. The posterior distribution satisfies
L(σ_A², µ, θ_1, …, θ_n | Y_1, …, Y_n) ∝ (b^a/Γ(a)) (σ_A²)^{−a−1} e^{−b/σ_A²} ∏_{i=1}^n (1/√(2πσ_A²)) e^{−(θ_i−µ)²/(2σ_A²)} (1/√(2πσ_V²)) e^{−(Y_i−θ_i)²/(2σ_V²)}. (36)
A Gibbs sampler for the posterior distribution of this model was originally analyzed in [46]. A quantitative bound was derived by [46] using the drift-and-minorization method with the drift function f(x) = Σ_{i=1}^n (θ_i − Ȳ)², where Ȳ = (1/n) Σ_{i=1}^n Y_i.
We first observe that this drift function does not lead to a fitted family of drift functions in the high-dimensional setting. For example, selecting a "typical" state x̄ = (σ̄_A², µ̄, θ̄_1, …, θ̄_n) with θ̄_i = Y_i, we get f(x̄) = Σ_{i=1}^n (Y_i − Ȳ)². Under reasonable assumptions on the observed data {Y_i}, the properly scaled drift function satisfies (1/n) f(x̄) = (1/n) Σ_{i=1}^n (Y_i − Ȳ)² = Θ(1), while b/n = (1/n) Σ_{i=1}^n (Y_i − Ȳ)² + ((n + 1/4)/n) σ_V² = Θ(1) in [46]. Therefore, the definition of a fitted family of drift functions is not satisfied. Furthermore, the λ established in [46] converges to 1 very fast, satisfying 1/(1 − λ) = Ω(n). Therefore, if we translate the quantitative bound in [46] into complexity orders, it requires the size of the "small set" to be Ω(n²), which makes the minorization volume ǫ exponentially small. This leads to upper bounds on the distance to stationarity which require an exponentially large number of iterations to become small. This result also coincides with the observations of [38] when translating the work of [26, 6]. We demonstrate the use of the modified drift-and-minorization approach by analyzing a Gibbs sampler for this MCMC model. Defining x^{(k)} = ((σ_A²)^{(k)}, µ^{(k)}, θ_1^{(k)}, …, θ_n^{(k)}) to be the state of the Markov chain at the k-th iteration, we consider the following order of Gibbs sampling for computing the posterior distribution:
µ^{(k+1)} ∼ N(θ̄^{(k)}, (σ_A²)^{(k)}/n),
θ_i^{(k+1)} ∼ N((µ^{(k+1)} σ_V² + Y_i (σ_A²)^{(k)})/(σ_V² + (σ_A²)^{(k)}), (σ_A²)^{(k)} σ_V²/(σ_V² + (σ_A²)^{(k)})), i = 1, …, n,
(σ_A²)^{(k+1)} ∼ IG(a + (n − 1)/2, b + (1/2) Σ_{i=1}^n (θ_i^{(k+1)} − θ̄^{(k+1)})²). (37)
Note that, in the language of [22], this is an "out-of-order" block Gibbs sampler, so inferences for the posterior distribution should be based on a "shifted" output sample ((σ_A²)^{(k)}, µ^{(k+1)}, {θ_i^{(k+1)}}).
In any case, it still has the same rate of convergence [22, Proposition 3] so our convergence analysis applies to both our version and the original block Gibbs version of [46].
We prove that convergence of the Gibbs sampler is actually very fast: the number of iterations required is O(1). More precisely, we first make the following assumption on the observed data {Y_i}: there exist δ > 0, σ̄_V² < ∞, and a positive integer N_0 such that, almost surely with respect to the randomness of {Y_i},

σ_V² + δ ≤ Σ_{i=1}^n (Y_i − Ȳ)²/(n − 1) ≤ σ̄_V², ∀n ≥ N_0. (38)
The assumption in Eq. (38) is quite natural. For example, if the data is indeed generated from the model with a "true" variance σ_A² > 0, then Eq. (38) obviously holds. More generally, the upper bound is just to ensure Σ_{i=1}^n (Y_i − Ȳ)² = O(n). For the lower bound, note that our MCMC model implies that the variance of Y_i is larger than σ_V² because of the uncertainty of θ_i. Indeed, under the MCMC model, conditional on the parameter σ_A², the variance of the data {Y_i} equals σ_V² + σ_A². Therefore, the assumption in Eq. (38) just says that the observed data is not abnormal under the MCMC model when n is large enough. Note that only the existence of δ is required for establishing our main results. More precisely, the existence of δ is needed to obtain an upper bound for π(R_0^c). If such a δ does not exist, the MCMC model is (seriously) misspecified, so the posterior distribution of the parameter σ_A², which corresponds to the variance of a Normal distribution, may concentrate on 0. In that case, our upper bound on π(R_0^c) does not hold. We then show that, under the assumption Eq. (38), with initial state

θ̄^{(0)} = Ȳ, (σ_A²)^{(0)} = { Σ_{i=1}^n (Y_i − Ȳ)²/(n − 1) − σ_V², if Σ_{i=1}^n (Y_i − Ȳ)²/(n − 1) > σ_V²; Σ_{i=1}^n (Y_i − Ȳ)²/(n − 1), otherwise }, (39)
and µ^{(0)} arbitrary (since µ^{(0)} is updated in the first step of the Gibbs sampler), the mixing time of the Gibbs sampler needed to guarantee a small total variation distance to stationarity is bounded by some constant when n is large enough.
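For reference, a minimal Python sketch of the sampler in Eq. (37) with the initialization of Eq. (39) is given below (the prior hyperparameters a and b are illustrative placeholders; the inverse-gamma draw is implemented as the reciprocal of a gamma draw):

```python
import numpy as np

def james_stein_gibbs(Y, sigma2_V, a=2.0, b=1.0, iters=1000, seed=0):
    """Gibbs sampler of Eq. (37) for the random effect model of Eq. (35)."""
    rng = np.random.default_rng(seed)
    n = len(Y)
    theta = np.full(n, Y.mean())  # so theta_bar^{(0)} = Y_bar, as in Eq. (39)
    s2 = ((Y - Y.mean()) ** 2).sum() / (n - 1)
    sigma2_A = s2 - sigma2_V if s2 > sigma2_V else s2   # Eq. (39)
    mu = 0.0  # arbitrary; updated in the first step
    for _ in range(iters):
        # mu | theta, sigma2_A
        mu = rng.normal(theta.mean(), np.sqrt(sigma2_A / n))
        # theta_i | mu, sigma2_A: shrinkage of Y_i toward mu
        post_mean = (mu * sigma2_V + Y * sigma2_A) / (sigma2_V + sigma2_A)
        post_var = sigma2_A * sigma2_V / (sigma2_V + sigma2_A)
        theta = rng.normal(post_mean, np.sqrt(post_var))
        # sigma2_A | theta ~ IG(a + (n-1)/2, b + sum (theta_i - theta_bar)^2 / 2)
        shape = a + (n - 1) / 2
        rate = b + 0.5 * ((theta - theta.mean()) ** 2).sum()
        sigma2_A = 1.0 / rng.gamma(shape=shape, scale=1.0 / rate)
    return mu, theta, sigma2_A
```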
3.3.1. Main Results. First, we obtain a quantitative bound for large enough n, which is given in the following theorem.

THEOREM 3.6. Under the assumption Eq. (38), with initial state Eq. (39), there exist a positive integer N which does not depend on k, constants C_1 > 0, C_2 > 0, C_3 > 0, and 0 < γ < 1, such that for all n ≥ N and for all k, we have

‖L(X^{(k)}) − π‖_var ≤ C_1 γ^k + C_2 k(1 + k)/n + C_3 k/√n. (40)
PROOF. Let ∆ = Σ_{i=1}^n (Y_i − Ȳ)² and x = (σ_A², µ, θ_1, …, θ_n). Define the fitted family of drift functions {f_n(x)} by

f_n(x) := n(θ̄ − Ȳ)² + n(∆/(n − 1) − σ_V² − σ_A²)². (41)

Let x^{(k)} = ((σ_A²)^{(k)}, µ^{(k)}, θ_1^{(k)}, …, θ_n^{(k)}) be the state of the Markov chain at the k-th iteration. Then we show in Lemma C.1 (see Appendix C) that
E[f_n(x^{(k+1)}) | x^{(k)}] ≤ [((σ_V²)² + 2σ_V²(σ_A²)^{(k)}) / ((σ_V²)² + 2σ_V²(σ_A²)^{(k)} + ((σ_A²)^{(k)})²)]² f_n(x^{(k)}) + b, ∀x^{(k)} ∈ X, (42)
where b = O(1).
Note that in Eq. (42), the factor [((σ_V²)² + 2σ_V²(σ_A²)^{(k)}) / ((σ_V²)² + 2σ_V²(σ_A²)^{(k)} + ((σ_A²)^{(k)})²)]² depends on the coordinate (σ_A²)^{(k)} of the state x^{(k)} and is not bounded away from 1, since (σ_A²)^{(k)} can be arbitrarily close to 0. Therefore, this factor cannot be bounded by some λ such that 0 < λ < 1, and we cannot directly establish the traditional drift condition Eq. (2) from Eq. (42). In the following, we establish the generalized drift condition of Definition 2.3 using a "large set". According to Eq. (38), for large enough n, we have ∆/(n − 1) > σ_V². Then, we choose a threshold T such that, for large enough n, we have 0 < T < ∆/(n − 1) − σ_V². Defining λ_T := [((σ_V²)² + 2σ_V²T) / ((σ_V²)² + 2σ_V²T + T²)]² < 1, we get

E[f_n(x^{(k+1)}) | x^{(k)}] ≤ λ_T f_n(x^{(k)}) + b, ∀x ∈ R_T, (43)
where the "large set", R T , is defined by
R T := x ∈ X : ∆ n − 1 − σ 2 V − σ 2 A 2 ≤ ∆ n − 1 − σ 2 V − T 2 .(44)
In order to satisfy the new drift condition in Definition 2.3, we verify (C1'). Note that in our example the transition kernel of the Gibbs sampler can be written as a composition of reversible steps, and only the last step of the Gibbs sampler updates the parameter σ_A², which is used for defining the "large set" R_T. Therefore, in order to verify Eq. (8), it suffices to check whether the value of the drift function increases in the last step when updating x^{(k)} ∈ R_T to x^{(k+1)} ∉ R_T. By the definition of R_T, we have

(∆/(n − 1) − σ_V² − (σ_A²)^{(k)})² ≤ (∆/(n − 1) − σ_V² − T)², ∀x^{(k)} ∈ R_T,
(∆/(n − 1) − σ_V² − (σ_A²)^{(k+1)})² > (∆/(n − 1) − σ_V² − T)², ∀x^{(k+1)} ∉ R_T. (45)
This implies that the value of f_n(x) increases if the Markov chain exits the "large set" after updating σ_A². Therefore, the generalized drift condition in Definition 2.3 is satisfied. Now we can use Theorem 2.6 to derive a quantitative bound for the Gibbs sampler. We first show in Lemma D.1 (see Appendix D) that if T = Θ(1), by choosing the size of the "small set" R = {x ∈ X : f_n(x) ≤ d} to satisfy d = O(1) and d > b/(1 − λ_T), there exists a probability measure Q(·) such that the Markov chain satisfies the minorization condition in Eq. (9) with minorization volume ǫ = Θ(1).
Next, we show in Lemma E.1 (see Appendix E) that with the initial state given by Eq. (39), there exists a positive integer N , which does not depend on k, such that for all n ≥ N , we have
k π(R_T^c) + Σ_{i=1}^k P^i(x^{(0)}, R_T^c) ≤ (k/√n) · √b (2σ_V²/δ + 1)/(∆/(n − 1) − σ_V² − T) + (k(1 + k)/(2n)) · b/(∆/(n − 1) − σ_V² − T)². (46)
Now we derive a quantitative bound for the Gibbs sampler for large enough n by combining the results. First, from Eq. (42), we have b = O(1). Recall that λ_T = [((σ_V²)² + 2σ_V²T) / ((σ_V²)² + 2σ_V²T + T²)]². We obtain b/(1 − λ_T) = O(1) by choosing T = Θ(1). Since d > b/(1 − λ_T), we can choose the size of the small set to be d = O(1). Then we have shown that the minorization volume ǫ = Θ(1). For Q(R_T), we know that P(x^{(0)}, R_T^c) = O(1/n), where x^{(0)} ∈ R. This implies that ǫQ(R_T^c) = O(1/n). Since ǫ = Θ(1), we have ǫQ(R_T) = ǫ − ǫQ(R_T^c) = Θ(1). Furthermore, by the definition α^{−1} = (1 + 2b + λ_T d)/(1 + d) < 1, it can be verified that α^{−1} is bounded away from 0 when T = Θ(1) and d = O(1). Next, since Λ = 1 + 2(λ_T d + b) = Θ(1), ignoring the term α^{rk} in Eq. (10), we choose r = log(α)/log(αΛ/(1 − ǫQ(R_T))) to balance the orders of (1 − ǫQ(R_T))^r and α^{−1}(αΛ)^r, and define γ := (1 − ǫQ(R_T))^r = α^{−1}(αΛ)^r. Then we have γ = Θ(1) and 0 < γ < 1. Furthermore, since f_n(x^{(0)}) = 0 for large enough n and b/(1 − λ_T) = O(1), we can pick a constant C_1 such that C_1 ≥ 2 + b/(1 − λ_T) for large enough n. Finally, we have k π(R_T^c) + Σ_{i=1}^k P^i(x^{(0)}, R_T^c) ≤ C_2 k(1 + k)/n + C_3 k/√n by Eq. (46), and Theorem 3.6 then follows from Theorem 2.6.
Next, we translate the quantitative bound in Theorem 3.6 into convergence complexity in terms of mixing time, using similar arguments as in Corollary 2.12 and Corollary 2.13. We show that the convergence complexity is O(1). Intuitively, to make the term C_1 γ^k in Eq. (40) arbitrarily small, k only needs to have a complexity order of O(1), since γ does not depend on n. The residual terms C_2 k(1 + k)/n + C_3 k/√n → 0 when k = o(√n). Therefore, the complexity bound on the mixing time of the Gibbs sampler equals the smaller complexity order between O(1) and o(√n), which is O(1). The formal result is given in the following.
THEOREM 3.7. For any 0 < c < 1, recall the definition of the mixing time K_{c,x} in Definition 2.11. We write K_{c,x} as K_{c,x}(n) to emphasize its dependence on n. Under the assumptions of Theorem 3.6, with initial state x^{(0)} given by Eq. (39), there exist N_c = Θ(1) and K̄_c = Θ(1) such that

K_{c,x^{(0)}}(n) ≤ K̄_c, ∀n ≥ N_c. (47)
PROOF. See Appendix B.
3.3.2. Initial state.
The main results in Theorem 3.6 and Theorem 3.7 hold for the particular initial state given in Eq. (39). We now discuss initial states other than the one given in Eq. (39). Note that the new bound in Lemma C.1 holds for any initial state in the "large set". Therefore, we can extend the results in Theorem 3.6 to get bounds when the Markov chain starts from some other initial state in the "large set". Recall the assumption on the observed data {Y_i} in Eq. (38): we have assumed there exists δ > 0 such that Σ_{i=1}^n (Y_i − Ȳ)²/(n − 1) ≥ σ_V² + δ for large enough n. Note that the existence of such a δ is sufficient to obtain the results in Theorem 3.6 and Theorem 3.7. In order to get bounds when the MCMC algorithm starts from other initial states, we assume δ is known and establish upper bounds using δ explicitly. We define the "large set" Eq. (44) using T = δ, and the extension of Theorem 3.6 is given in the following.
THEOREM 3.8. Let ∆ = Σ_{i=1}^n (Y_i − Ȳ)². Under the assumption Eq. (38), if the Markov chain starts from any initial state x^{(0)} ∈ R_δ (defined in Eq. (44) with T = δ), there exist a positive integer N, which does not depend on k, constants C_1 > 0, C_2 > 0, C_3 > 0, C_4 > 0, and 0 < γ < 1, such that for all n ≥ N and for all k, we have
‖L(X^{(k)}) − π‖_var ≤ [C_1 + f_n(x^{(0)})] γ^k + C_2 k(1 + k)/n + C_3 k/√n + C_4 f_n(x^{(0)}) k/n, (48)
where f_n(·) is the fitted family of drift functions defined in Eq. (41).
PROOF. Following the same proof as Theorem 3.6 while keeping the term f_n(x^{(0)}), the first two terms of the upper bound given in Eq. (10) can be replaced by [C_1 + f_n(x^{(0)})] γ^k, and the last term of the upper bound in Eq. (10) can be replaced by Σ_{i=1}^k P^i(x^{(0)}, R_δ^c) ≤ C_2 k(1 + k)/n + C_4 f_n(x^{(0)}) k/n.
From Theorem 3.8, using similar arguments as in Corollary 2.12, we immediately obtain a complexity bound when the Markov chain starts within a subset of the "large set", which is given in the following.

COROLLARY 3.9. Under the assumptions of Theorem 3.8, if the Markov chain starts from any initial state x^{(0)} ∈ R_δ with f_n(x^{(0)}) = o(n/log n), then the mixing time K_{c,x^{(0)}}(n) = O(log n).

This result suggests that if the Markov chain starts from an initial state which is not "too far" from the state given in Eq. (39), the mixing time is O(log n). Note that {x ∈ R_δ : f_n(x) = o(n/log n)} defines a subset of the "large set" R_δ, and the above result shows that the mixing time is O(log n) if the initial state is in this subset. The order o(n/log n) comes from a balance between f_n(x^{(0)}) γ^k and f_n(x^{(0)}) k/n. We conjecture that the same complexity order of O(log n) on the mixing time may hold even if the initial state is in a larger subset, for example {x^{(0)} ∈ R_δ : f_n(x^{(0)}) = Θ(n)}. However, in order to prove this, we would need to derive a tighter upper bound on Σ_{i=1}^k P^i(x^{(0)}, R_δ^c), which is a non-trivial task. We therefore leave it as an open problem.
Finally, we do not have upper bounds for the Markov chain when the initial state is outside of the "large set", since the new bound in Theorem 2.6 requires the Markov chain to start within the "large set". For this particular Gibbs sampler example, numerical experiments suggest that, if the Markov chain starts from a "bad" state, the number of iterations required for the Markov chain to mix can be much larger than O(log n). In high-dimensional settings, when the dimension of the state space goes to infinity, the Markov chain may not mix fast starting from an arbitrary state. This observation is loosely consistent with various observations in [20].

• The choice of the "large set": Eq. (42) implies that those states whose value of σ_A² is close to zero are "bad" states. Therefore, the goal of choosing the "large set" in Eq. (44) is to rule out those states. Note that we have applied the trick that ruling out more states with "high energy" can make Eq. (8) easier to establish. In the "large set" R_T defined by Eq. (44), we have also ruled out the states x whose value of σ_A² is larger than (∆/(n − 1) − σ_V² − T) + (∆/(n − 1) − σ_V²). Note that these states are not "bad" states. However, by ruling them out, it is easy to establish Eq. (8).

• Tightness of the bound: Although the upper bound on k π(R_T^c) + Σ_{i=1}^k P^i(x^{(0)}, R_T^c) shown in Eq. (46) is loose, it is already enough for showing that the mixing time of the Gibbs sampler is O(1). The proof of Lemma E.1 only makes use of the form of the drift function and the definition of the "large set", and does not depend on the particular form of the transition kernel of the Gibbs sampler. We expect that, in general, tighter upper bounds on k π(R_T^c) + Σ_{i=1}^k P^i(x^{(0)}, R_T^c) could be obtained, depending on the choice of "large set" and the MCMC algorithm to be analyzed. This may involve carefully bounding the tail probability of the transition kernel.
• The constants in Theorem 3.6: In Theorem 3.6, we do not compute the constants N, C_1, C_2, and C_3 explicitly. Actually, C_2 is given explicitly in Lemma E.1. C_3 is also given in Lemma E.1, but it depends on the unknown constant δ > 0 from the assumption Eq. (38). Furthermore, C_1 could be computed explicitly with considerably more tedious computations. Finally, N depends on the unknown constant N_0 in Eq. (38) and the resulting concentration property of the posterior distribution of the parameter σ_A² implied by Eq. (38). Therefore, if we make stronger assumptions on the observed data {Y_i}, it is possible to compute all the constants in Theorem 3.6 explicitly through tedious computations, though we do not pursue that here.
APPENDIX A: PROOF OF THEOREM 2.6
Recall that R denotes the "small set" and R_0 denotes the "large set". We first construct a transition kernel P̃(x, ·), ∀x ∈ R_0, for a "restricted" chain defined on R_0. One goal of this construction is that the stationary distribution of the kernel P̃ equals π(·) restricted to the "large set" R_0, i.e., π′(dx) := π(dx)/π(R_0), ∀x ∈ R_0. We consider two different constructions depending on whether (C1) or (C1') in Definition 2.3 holds.
• If (C1) in Definition 2.3 holds, then we define the kernel P̃ as the transition kernel of the "trace chain" constructed as follows. Let X^{(m)} be a Markov chain with kernel P; we define a sequence of random entrance times {m_i}_{i∈N} by m_0 := min{m ≥ 0 : X^{(m)} ∈ R_0}, m_i := min{m > m_{i−1} : X^{(m)} ∈ R_0}. Then {X^{(m_i)}}_{i∈N} is the "trace chain" with transition kernel P̃(x, B) := P(X^{(m_1)} ∈ B | X^{(m_0)} = x), ∀x ∈ R_0. Since the "trace chain" is obtained by "stopping the clock" when the original chain is outside R_0, the constructed P̃ is a valid transition kernel. It can be verified that the stationary distribution of this "trace chain" is π′.

• If (C1') in Definition 2.3 holds, then we construct the "restricted chain" using the kernel P̃ = ∏_{i=1}^I P̃_i, where P̃_i(x, dy) := P_i(x, dy) for x, y ∈ R_0, x ≠ y, and P̃_i(x, x) := 1 − P_i(x, R_0\{x}), ∀x ∈ R_0. Note that since each P_i is reversible, one can easily verify that each P̃_i is also reversible and the stationary distribution of P̃ is π′.
Suppose that X^{(m)} and Y^{(m)} are two realizations of the Markov chain, where X^{(m)} starts with the initial distribution ν(·) and Y^{(m)} starts with the stationary distribution π(·). We define X̃^{(m)} and Ỹ^{(m)} to be two realizations of the constructed "restricted" Markov chain on the "large set" with transition kernel P̃(x, ·), ∀x ∈ R_0. We assume X̃^{(m)} starts with the same initial distribution ν(·) as X^{(m)} and Ỹ^{(m)} starts with π′(·). Since ν(R_0) = 1, we may assume X^{(0)} = X̃^{(0)}. The rest of the proof is a modification of the original coupling proof of the drift-and-minorization method in [44].
We define the hitting times of (X̃^{(m)}, Ỹ^{(m)}) to R × R as follows:

t_1 := inf{m ≥ 0 : (X̃^{(m)}, Ỹ^{(m)}) ∈ R × R},
t_i := inf{m ≥ t_{i−1} + 1 : (X̃^{(m)}, Ỹ^{(m)}) ∈ R × R}, ∀i > 1. (49)
Let N_k := max{i : t_i < k}. Then N_k denotes the number of times (X̃^{(m)}, Ỹ^{(m)}) hits R × R in the first k iterations. The following result gives an upper bound for ‖L(X^{(k)}) − L(Y^{(k)})‖_var.
LEMMA A.1. When the Markov chain satisfies the minorization condition in Eq. (9), for any j > 0, we have

‖L(X^{(k)}) − L(Y^{(k)})‖_var ≤ (1 − ǫQ(R_0))^j + P(N_k < j) + k π(R_0^c) + Σ_{i=1}^k νP^i(R_0^c). (50)
PROOF. First, by the triangle inequality,

‖L(X^{(k)}) − L(Y^{(k)})‖_var ≤ ‖L(X̃^{(k)}) − L(Ỹ^{(k)})‖_var + ‖L(X^{(k)}) − L(X̃^{(k)})‖_var + ‖L(Y^{(k)}) − L(Ỹ^{(k)})‖_var. (51)
By the coupling inequality ‖L(X^{(k)}) − L(X̃^{(k)})‖_var ≤ P(X^{(k)} ≠ X̃^{(k)}) ≤ Σ_{m=1}^k P(X^{(m)} ∉ R_0), we have

‖L(Y^{(k)}) − L(Ỹ^{(k)})‖_var + ‖L(X^{(k)}) − L(X̃^{(k)})‖_var ≤ Σ_{m=1}^k P(Y^{(m)} ∉ R_0) + Σ_{m=1}^k P(X^{(m)} ∉ R_0) ≤ k π(R_0^c) + Σ_{i=1}^k νP^i(R_0^c). (52)
Finally, the Markov chain with kernel P̃(x, ·) satisfies both the drift condition

E(f(X̃^{(1)}) | X̃^{(0)} = x) ≤ λf(x) + b, ∀x ∈ R_0, (53)
and the minorization condition

P̃(x, dy) ≥ [ǫQ(R_0)] Q(dy)/Q(R_0), ∀x, y ∈ R_0. (54)
Using the result from [44, Theorem 1], we have

‖L(X̃^{(k)}) − L(Ỹ^{(k)})‖_var ≤ (1 − ǫQ(R_0))^j + P(N_k < j). (55)
Next, we upper bound the term P(N_k < j) slightly more tightly than [44]. Define the i-th gap of return times by r_i := t_i − t_{i−1}, ∀i > 1, with r_1 := t_1. Then:

LEMMA A.2. For any α > 1, j > 0, and k > j,

P(N_k < j) ≤ (E[∏_{i=1}^j α^{r_i}] − α^j) / (α^k − α^j). (56)
PROOF. Note that {N_k < j} = {t_j ≥ k} = {r_1 + ⋯ + r_j ≥ k} and r_1 + ⋯ + r_j ≥ j by definition. Then the result follows from Markov's inequality:

P(N_k < j) = P(r_1 + ⋯ + r_j ≥ k) = P(α^{r_1+⋯+r_j} − α^j ≥ α^k − α^j) ≤ (E[∏_{i=1}^j α^{r_i}] − α^j) / (α^k − α^j). (57)
Next, we bound E[∏_{i=1}^j α^{r_i}] following the exact same arguments as in [44, Proof of Lemma 4 and Theorem 12], which gives

E[∏_{i=1}^j α^{r_i}] ≤ (αΛ)^{j−1} [1 + E_ν(f(x)) + E_{π′}(f(x))]. (58)
By the drift condition for P̃(x, ·) in Eq. (53), taking expectations on both sides of Eq. (53) leads to E_{π′}(f(x)) ≤ b/(1 − λ). Therefore, setting j = rk + 1 and combining all the results yields

‖L(X^{(k)}) − π‖_var ≤ (1 − ǫQ(R_0))^{rk+1} + [(αΛ)^{rk}(1 + E_ν(f(x)) + b/(1 − λ)) − α^{rk+1}] / (α^k − α^{rk+1}) + k π(R_0^c) + Σ_{i=1}^k νP^i(R_0^c). (59)
Finally, we slightly relax the upper bound by replacing α^{rk+1} with α^{rk} in both the denominator and the numerator. Then Theorem 2.6 is proved by further relaxing (1 − ǫQ(R_0))^{rk+1} to (1 − ǫQ(R_0))^{rk}.
APPENDIX B: PROOF OF THEOREM 3.7
Using Theorem 3.6, one sufficient condition for

‖L(X^{(k)}) − π‖_var ≤ c (60)

is that n ≥ N and

C_1 γ^k ≤ c/3, C_2 (1 + k)²/n ≤ c/3, C_3 k/√n ≤ c/3. (61)
This requires that the number of iterations k satisfies

(log(C_1) − log(c/3))/log(1/γ) ≤ k ≤ min{√(c n/(3C_2)) − 1, (c/(3C_3)) √n}. (62)

Note that any k (if one exists) satisfying the above equation provides an upper bound for the mixing time K_{c,x^{(0)}}(n). That is, for any n ≥ N such that

(log(C_1) − log(c/3))/log(1/γ) ≤ min{√(c n/(3C_2)) − 1, (c/(3C_3)) √n}, (63)

which is equivalent to

n ≥ max{N, (3C_3 K̄_c/c)², 3C_2 (K̄_c + 1)²/c} =: N_c, (64)

where K̄_c := ⌈(log(C_1) − log(c/3))/log(1/γ)⌉ = Θ(1), we have K_{c,x^{(0)}}(n) ≤ K̄_c. This proves the claim with N_c = Θ(1).

APPENDIX C: LEMMA C.1 AND ITS PROOF

LEMMA C.1. Let ∆ = Σ_{i=1}^n (Y_i − Ȳ)², x = (σ_A², µ, θ_1, …, θ_n), and define the fitted family of drift functions {f_n(x)} by

f_n(x) := n(θ̄ − Ȳ)² + n(∆/(n − 1) − σ_V² − σ_A²)². (65)
Let x^{(k)} = ((σ_A²)^{(k)}, µ^{(k)}, θ_1^{(k)}, …, θ_n^{(k)}) be the state of the Markov chain at the k-th iteration. Then we have

E[f_n(x^{(k+1)}) | x^{(k)}] ≤ [((σ_V²)² + 2σ_V²(σ_A²)^{(k)}) / ((σ_V²)² + 2σ_V²(σ_A²)^{(k)} + ((σ_A²)^{(k)})²)]² f_n(x^{(k)}) + b, ∀x^{(k)} ∈ X, (66)

where b = O(1).
PROOF. In this proof, we write f_n(x) as f(x) for simplicity. Recall that the order of Gibbs sampling for the first scan is:

µ^{(1)} ∼ N(θ̄^{(0)}, (σ_A²)^{(0)}/n),
θ_i^{(1)} ∼ N((µ^{(1)} σ_V² + Y_i (σ_A²)^{(0)})/(σ_V² + (σ_A²)^{(0)}), (σ_A²)^{(0)} σ_V²/(σ_V² + (σ_A²)^{(0)})), i = 1, …, n,
(σ_A²)^{(1)} ∼ IG(a + (n − 1)/2, b + (1/2) Σ_{i=1}^n (θ_i^{(1)} − θ̄^{(1)})²). (67)
It suffices to show that, for

∆ = Σ_{i=1}^n (Y_i − Ȳ)² and f(x) = n(θ̄ − Ȳ)² + n(∆/(n − 1) − σ_V² − σ_A²)², (68)
we have

E[f(x^{(1)}) | x^{(0)}] ≤ [((σ_V²)² + 2σ_V²(σ_A²)^{(0)}) / ((σ_V²)² + 2σ_V²(σ_A²)^{(0)} + ((σ_A²)^{(0)})²)]² f(x^{(0)}) + b, (69)

where b = O(1).
Note that we can compute the expectation E[f(x^{(1)}) | x^{(0)}] in three steps, according to the reverse order of the Gibbs sampling. To simplify the notation, we define the σ-algebras that we condition on:

G_A := σ((σ_A²)^{(0)}, {θ_i^{(1)}}, µ^{(1)}),
G_θ := σ((σ_A²)^{(0)}, {θ_i^{(0)}}, µ^{(1)}),
G_µ := σ((σ_A²)^{(0)}, {θ_i^{(0)}}, µ^{(0)}). (70)
Then we have

E[f(x^{(1)}) | x^{(0)}] = E[f(x^{(1)}) | G_µ] = E[E[E[f(x^{(1)}) | G_A] | G_θ] | G_µ]. (71)
The three steps are as follows:

1. Compute the expectation over (σ_A²)^{(1)} given {θ_i^{(1)}} and µ^{(1)}. This is to compute the conditional expectation

f′(x^{(1)}) := E[f(x^{(1)}) | G_A], (72)

where we write E[· | G_A] to denote that the expectation is over (recall that a and b are constants from the prior IG(a, b))

(σ_A²)^{(1)} ∼ IG(a + (n − 1)/2, b + (1/2) Σ_{i=1}^n (θ_i^{(1)} − θ̄^{(1)})²) (73)

for given {θ_i^{(1)}} and µ^{(1)}.

2. Compute the expectation over {θ_i^{(1)}} given µ^{(1)}. This is to compute the conditional expectation

f″(x^{(1)}) := E[f′(x^{(1)}) | G_θ], (74)

where we use E[· | G_θ] to denote that the expectation is over

θ_i^{(1)} ∼ N((µ^{(1)} σ_V² + Y_i (σ_A²)^{(0)})/(σ_V² + (σ_A²)^{(0)}), (σ_A²)^{(0)} σ_V²/(σ_V² + (σ_A²)^{(0)})), i = 1, …, n, (75)

for given µ^{(1)} and (σ_A²)^{(0)}.

3. Compute the expectation over µ^{(1)}. This is to compute the conditional expectation

E[f(x^{(1)}) | x^{(0)}] = E[f″(x^{(1)}) | G_µ], (76)

where we use E[· | G_µ] to denote that the expectation is over

µ^{(1)} ∼ N(θ̄^{(0)}, (σ_A²)^{(0)}/n) (77)

for given {θ_i^{(0)}} and (σ_A²)^{(0)}.

In the following, we compute the three steps, respectively. We use O(1) to denote terms that can be upper bounded by some constant that does not depend on the state.
C.1. Compute f′(x^{(1)}) = E[f(x^{(1)}) | G_A]. The first term of f(x^{(1)}) is n(θ̄^{(1)} − Ȳ)², which is G_A-measurable by construction. Thus, E[n(θ̄^{(1)} − Ȳ)² | G_A] = n(θ̄^{(1)} − Ȳ)². Then

f′(x^{(1)}) = E[f(x^{(1)}) | G_A] = n(θ̄^{(1)} − Ȳ)² + nE[(∆/(n − 1) − σ_V² − (σ_A²)^{(1)})² | G_A]. (78)
Note that

nE[(∆/(n − 1) − σ_V² − (σ_A²)^{(1)})² | G_A] = n(∆/(n − 1) − σ_V²)² + nE[((σ_A²)^{(1)})² | G_A] − 2n(∆/(n − 1) − σ_V²) E[(σ_A²)^{(1)} | G_A]. (79)
Recall that E[· | G_A] denotes that the expectation is over

(σ_A²)^{(1)} ∼ IG(a + (n − 1)/2, b + (1/2) Σ_{i=1}^n (θ_i^{(1)} − θ̄^{(1)})²), (80)
where a and b are constants from the prior IG(a, b). The mean and variance of (σ_A²)^{(1)} can be written in closed form since (σ_A²)^{(1)} follows an inverse gamma distribution. Denoting S := Σ_i (θ_i^{(1)} − θ̄^{(1)})²/(n − 1), we can write the mean of (σ_A²)^{(1)} using S as follows:

E[(σ_A²)^{(1)} | G_A] = (Σ_i (θ_i^{(1)} − θ̄^{(1)})² + 2b)/(n − 1 + 2(a − 1))
= Σ_i (θ_i^{(1)} − θ̄^{(1)})²/(n − 1) + 2b/(n − 1 + 2(a − 1)) − [Σ_i (θ_i^{(1)} − θ̄^{(1)})²/(n − 1)] · [2(a − 1)/(n − 1 + 2(a − 1))]
= S + O(1/n) + O(1/n) S. (81)
Similarly, the variance of (σ_A²)^{(1)} can be written in terms of S as well:

var[(σ_A²)^{(1)} | G_A] = (Σ_i (θ_i^{(1)} − θ̄^{(1)})²/2 + b)² / ([(n − 1)/2 + (a − 1)]² [(n − 1)/2 + (a − 2)])
= [1/((n − 1)/2 + (a − 2))] (E[(σ_A²)^{(1)} | G_A])²
= O(1/n)(S + O(1/n) + O(1/n)S)²
= O(1/n) S² + O(1/n²) S + O(1/n³). (82)
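These closed-form moments can be checked numerically; a small Monte Carlo sanity check (all numbers here are arbitrary illustrations), using the fact that an IG(shape, rate) draw is the reciprocal of a Ga(shape, rate) draw:

```python
import numpy as np

rng = np.random.default_rng(1)
n, a, b = 500, 3.0, 2.0        # illustrative constants
ssq = 4.7                      # stands in for sum_i (theta_i - theta_bar)^2
shape, rate = a + (n - 1) / 2, b + ssq / 2
draws = 1.0 / rng.gamma(shape, 1.0 / rate, size=200_000)
print(draws.mean(), rate / (shape - 1))                           # Eq. (81)
print(draws.var(), rate ** 2 / ((shape - 1) ** 2 * (shape - 2)))  # Eq. (82)
```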
Substituting the mean and variance of (σ_A²)^{(1)} in terms of S, we have

f′(x^{(1)}) = E[f(x^{(1)}) | G_A] = n(θ̄^{(1)} − Ȳ)² + n(∆/(n − 1) − σ_V²)² + nS² − 2n(∆/(n − 1) − σ_V²) S + O(1) + O(1) S + O(1) S². (83)
C.2. Compute f″(x^{(1)}) = E[f′(x^{(1)}) | G_θ]. Note that the terms in f′(x^{(1)}) involving {θ_i^{(1)}} are (θ̄^{(1)} − Ȳ)² and S = Σ_i (θ_i^{(1)} − θ̄^{(1)})²/(n − 1). Then

f″(x^{(1)}) = E[f′(x^{(1)}) | G_θ] = nE[(θ̄^{(1)} − Ȳ)² | G_θ] + n(∆/(n − 1) − σ_V²)² + nE[S² | G_θ] − 2n(∆/(n − 1) − σ_V²) E[S | G_θ] + O(1) + O(1) E[S | G_θ] + O(1) E[S² | G_θ]. (84)
Therefore, it suffices to compute the following terms:

E[(θ̄^{(1)} − Ȳ)² | G_θ], E[S | G_θ], E[S² | G_θ]. (85)
Note that {θ_i^{(1)}} are independent (but not identically distributed) conditional on G_θ. For the first term E[(θ̄^{(1)} − Ȳ)² | G_θ], writing m := (µ^{(1)} σ_V² + Ȳ (σ_A²)^{(0)})/(σ_V² + (σ_A²)^{(0)}) for brevity, we have

E[(θ̄^{(1)} − Ȳ)² | G_θ] = E[(θ̄^{(1)} − m + m − Ȳ)² | G_θ]
= E[(θ̄^{(1)} − m)² | G_θ] + (m − Ȳ)² + 2(m − Ȳ) E[θ̄^{(1)} − m | G_θ]
= var[θ̄^{(1)} | G_θ] + [σ_V²/(σ_V² + (σ_A²)^{(0)})]² (µ^{(1)} − Ȳ)²
= (1/n) (σ_A²)^{(0)} σ_V²/(σ_V² + (σ_A²)^{(0)}) + [σ_V²/(σ_V² + (σ_A²)^{(0)})]² (µ^{(1)} − Ȳ)². (86)
For the other two terms involving S, we have the following lemma.
LEMMA C.2. For S = Σ_i (θ_i^{(1)} − θ̄^{(1)})²/(n − 1), we have

E[S | G_θ] = (σ_A²)^{(0)} σ_V²/(σ_V² + (σ_A²)^{(0)}) + [(σ_A²)^{(0)}/(σ_V² + (σ_A²)^{(0)})]² ∆/(n − 1), var[S | G_θ] = O(1/n). (87)

PROOF. Define η_i := θ_i^{(1)} − Y_i (σ_A²)^{(0)}/(σ_V² + (σ_A²)^{(0)}); then η̄ = θ̄^{(1)} − Ȳ (σ_A²)^{(0)}/(σ_V² + (σ_A²)^{(0)}). Note that {η_i} are i.i.d. conditional on G_θ with

η_i ∼ N(µ^{(1)} σ_V²/(σ_V² + (σ_A²)^{(0)}), (σ_A²)^{(0)} σ_V²/(σ_V² + (σ_A²)^{(0)})),
η̄ ∼ N(µ^{(1)} σ_V²/(σ_V² + (σ_A²)^{(0)}), (1/n) (σ_A²)^{(0)} σ_V²/(σ_V² + (σ_A²)^{(0)})). (88)
Next, we decompose Σ_{i=1}^n (θ_i^{(1)} − θ̄^{(1)})² by

Σ_{i=1}^n (θ_i^{(1)} − θ̄^{(1)})² = Σ_{i=1}^n [η_i + Y_i (σ_A²)^{(0)}/(σ_V² + (σ_A²)^{(0)}) − η̄ − Ȳ (σ_A²)^{(0)}/(σ_V² + (σ_A²)^{(0)})]²
= Σ_{i=1}^n [(η_i − η̄)² + ((σ_A²)^{(0)}/(σ_V² + (σ_A²)^{(0)}))² (Y_i − Ȳ)² + 2(η_i − η̄)(Y_i − Ȳ) (σ_A²)^{(0)}/(σ_V² + (σ_A²)^{(0)})]. (89)
Then we can obtain E[S | G_θ] by

E[S | G_θ] = E[Σ_i (θ_i^{(1)} − θ̄^{(1)})²/(n − 1) | G_θ] = E[Σ_i (η_i − η̄)²/(n − 1) | G_θ] + [(σ_A²)^{(0)}/(σ_V² + (σ_A²)^{(0)})]² Σ_{i=1}^n (Y_i − Ȳ)²/(n − 1)
= (σ_A²)^{(0)} σ_V²/(σ_V² + (σ_A²)^{(0)}) + [(σ_A²)^{(0)}/(σ_V² + (σ_A²)^{(0)})]² ∆/(n − 1). (90)
For var[S | G_θ], using the Cauchy-Schwarz inequality,

var[S | G_θ] = E[(S − E[S | G_θ])² | G_θ]
= E[(Σ_{i=1}^n (η_i − η̄)²/(n − 1) − E[Σ_i (η_i − η̄)²/(n − 1) | G_θ] + 2 (σ_A²)^{(0)}/(σ_V² + (σ_A²)^{(0)}) Σ_{i=1}^n (η_i − η̄)(Y_i − Ȳ)/(n − 1))² | G_θ]
≤ 2 var[Σ_i (η_i − η̄)²/(n − 1) | G_θ] + 8 [(σ_A²)^{(0)}/(σ_V² + (σ_A²)^{(0)})]² E[(Σ_i (η_i − η̄)(Y_i − Ȳ))² | G_θ]/(n − 1)². (91)
Since {η_i} are i.i.d. conditional on G_θ, we know

E[(Σ_i (η_i − η̄)²/(n − 1))² | G_θ] = (E[Σ_i (η_i − η̄)²/(n − 1) | G_θ])² + O(1/n). (92)
That is, var[Σ_i (η_i − η̄)²/(n − 1) | G_θ] = O(1/n). Finally, for the remaining term,

E[(Σ_i (η_i − η̄)(Y_i − Ȳ))² | G_θ]/(n − 1)² = [Σ_i E[(η_i − η̄)²(Y_i − Ȳ)² | G_θ] + E[(η_1 − η̄)(η_2 − η̄) | G_θ] Σ_{i≠j} (Y_i − Ȳ)(Y_j − Ȳ)]/(n − 1)²
= [Σ_i (Y_i − Ȳ)²/(n − 1)²] E[(η_1 − η̄)² | G_θ] + O(1/n)
= [∆/(n − 1)²] · (n − 1) [(σ_A²)^{(0)} σ_V²/(σ_V² + (σ_A²)^{(0)})]/n + O(1/n) = O(1/n). (93)
Therefore, we have var[S | G_θ] = O(1/n).
Next, using the results

E[S | G_θ] = (σ_A²)^{(0)} σ_V²/(σ_V² + (σ_A²)^{(0)}) + [(σ_A²)^{(0)}/(σ_V² + (σ_A²)^{(0)})]² ∆/(n − 1) ≤ σ_V² + [(σ_A²)^{(0)}/(σ_V² + (σ_A²)^{(0)})]² ∆/(n − 1) = O(1),
E[S² | G_θ] = (E[S | G_θ])² + O(1/n) = O(1), (94)
we can first write f″(x^{(1)}) as

f″(x^{(1)}) = nE[(θ̄^{(1)} − Ȳ)² | G_θ] + n(∆/(n − 1) − σ_V²)² + nE[S² | G_θ] − 2n(∆/(n − 1) − σ_V²) E[S | G_θ] + O(1). (95)
Then, using

nE[(θ̄^{(1)} − Ȳ)² | G_θ] = (σ_A²)^{(0)} σ_V²/(σ_V² + (σ_A²)^{(0)}) + n [σ_V²/(σ_V² + (σ_A²)^{(0)})]² (µ^{(1)} − Ȳ)² ≤ σ_V² + n(σ_V²)² (µ^{(1)} − Ȳ)²/(σ_V² + (σ_A²)^{(0)})², (96)
we can further bound the terms:

nE[(θ̄^{(1)} − Ȳ)² | G_θ] + n(∆/(n − 1) − σ_V²)² + nE[S² | G_θ] − 2n(∆/(n − 1) − σ_V²) E[S | G_θ]
≤ n(σ_V²)² (µ^{(1)} − Ȳ)²/(σ_V² + (σ_A²)^{(0)})² + n(∆/(n − 1) − σ_V² − E[S | G_θ])² + O(1)
= n(σ_V²)² (µ^{(1)} − Ȳ)²/(σ_V² + (σ_A²)^{(0)})² + n[(σ_A²)^{(0)} σ_V²/(σ_V² + (σ_A²)^{(0)}) + ((σ_A²)^{(0)}/(σ_V² + (σ_A²)^{(0)}))² ∆/(n − 1) − ∆/(n − 1) + σ_V²]² + O(1)
= n(σ_V²)² (µ^{(1)} − Ȳ)²/(σ_V² + (σ_A²)^{(0)})² + n[(∆/(n − 1))(((σ_A²)^{(0)}/(σ_V² + (σ_A²)^{(0)}))² − 1) + (σ_A²)^{(0)} σ_V²/(σ_V² + (σ_A²)^{(0)}) + σ_V²]² + O(1)
= n(σ_V²)² (µ^{(1)} − Ȳ)²/(σ_V² + (σ_A²)^{(0)})² + n[((σ_A²)^{(0)}/(σ_V² + (σ_A²)^{(0)}) + 1)]² [∆/(n − 1) · (−σ_V²)/(σ_V² + (σ_A²)^{(0)}) + σ_V²]² + O(1)
= n(σ_V²)² (µ^{(1)} − Ȳ)²/(σ_V² + (σ_A²)^{(0)})² + n(σ_V²)² (σ_V² + 2(σ_A²)^{(0)})²/(σ_V² + (σ_A²)^{(0)})⁴ (∆/(n − 1) − ((σ_A²)^{(0)} + σ_V²))² + O(1). (97)
Finally, combining all the results yields

f″(x^{(1)}) ≤ n(σ_V²)² (µ^{(1)} − Ȳ)²/(σ_V² + (σ_A²)^{(0)})² + n(σ_V²)² (σ_V² + 2(σ_A²)^{(0)})²/(σ_V² + (σ_A²)^{(0)})⁴ (∆/(n − 1) − ((σ_A²)^{(0)} + σ_V²))² + O(1). (98)
C.3. Compute E[f(x^{(1)}) | x^{(0)}] = E[f″(x^{(1)}) | G_µ].
Recall that the expectation E[· | G_µ] is over

µ^{(1)} ∼ N(θ̄^{(0)}, (σ_A²)^{(0)}/n). (99)

In the expression of f″(x^{(1)}) obtained in the previous step, the only term involving µ^{(1)} is n(σ_V²)² (µ^{(1)} − Ȳ)²/(σ_V² + (σ_A²)^{(0)})². Since

E[(µ^{(1)} − Ȳ)² | G_µ] = (θ̄^{(0)} − Ȳ)² + (σ_A²)^{(0)}/n, (100)
we have

E[f(x^{(1)}) | x^{(0)}] = E[f″(x^{(1)}) | G_µ]
≤ [n(σ_V²)²/(σ_V² + (σ_A²)^{(0)})²] [(θ̄^{(0)} − Ȳ)² + (σ_A²)^{(0)}/n] + n(σ_V²)² (σ_V² + 2(σ_A²)^{(0)})²/(σ_V² + (σ_A²)^{(0)})⁴ (∆/(n − 1) − ((σ_A²)^{(0)} + σ_V²))² + O(1)
= n(σ_V²)² (θ̄^{(0)} − Ȳ)²/(σ_V² + (σ_A²)^{(0)})² + n(σ_V²)² (σ_V² + 2(σ_A²)^{(0)})²/(σ_V² + (σ_A²)^{(0)})⁴ (∆/(n − 1) − ((σ_A²)^{(0)} + σ_V²))² + O(1). (101)
Finally, we complete the proof by

n(σ_V²)² (θ̄^{(0)} − Ȳ)²/(σ_V² + (σ_A²)^{(0)})² + n(σ_V²)² (σ_V² + 2(σ_A²)^{(0)})²/(σ_V² + (σ_A²)^{(0)})⁴ (∆/(n − 1) − ((σ_A²)^{(0)} + σ_V²))² + O(1)
= n(σ_V²)² (σ_V² + 2(σ_A²)^{(0)})²/(σ_V² + (σ_A²)^{(0)})⁴ [ (σ_V² + (σ_A²)^{(0)})²/(σ_V² + 2(σ_A²)^{(0)})² (θ̄^{(0)} − Ȳ)² + (∆/(n − 1) − ((σ_A²)^{(0)} + σ_V²))² ] + O(1)
≤ (σ_V²)² (σ_V² + 2(σ_A²)^{(0)})²/(σ_V² + (σ_A²)^{(0)})⁴ [ n(θ̄^{(0)} − Ȳ)² + n(∆/(n − 1) − ((σ_A²)^{(0)} + σ_V²))² ] + O(1)
= [((σ_V²)² + 2σ_V²(σ_A²)^{(0)}) / ((σ_V²)² + 2σ_V²(σ_A²)^{(0)} + ((σ_A²)^{(0)})²)]² f(x^{(0)}) + O(1). (102)

APPENDIX D: PROOF OF LEMMA D.1

In this appendix, we show that the minorization volume ǫ satisfying

P(x, ·) ≥ ǫQ(·), ∀x ∈ R, (103)
is asymptotically bounded away from 0.
Denoting σ̃_A² := ∆/(n − 1) − σ_V², we have

R = {x ∈ X : n(θ̄ − Ȳ)² + n(∆/(n − 1) − σ_V² − σ_A²)² ≤ d} ⊆ {x ∈ X : |θ̄ − Ȳ| ≤ √(d/n)} ∩ {x ∈ X : |σ_A² − σ̃_A²| ≤ √(d/n)}. (104)
Denoting

R′ := {x ∈ X : |θ̄ − Ȳ| ≤ √(d/n), |σ_A² − σ̃_A²| ≤ √(d/n)}, (105)
since R ⊆ R′, it suffices to show that the minorization volume ǫ satisfying

P(x^{(0)}, ·) ≥ ǫQ(·), ∀x^{(0)} ∈ R′, (106)
is asymptotically bounded away from 0. One common technique to obtain ǫ is to integrate the infimum of the densities of P(x^{(0)}, ·), where in our case the infimum is over all θ̄^{(0)} and (σ_A²)^{(0)} such that |θ̄^{(0)} − Ȳ| ≤ √(d/n) and |(σ_A²)^{(0)} − σ̃_A²| ≤ √(d/n). The intuition behind the proof is the following: since R′ is determined by |θ̄^{(0)} − Ȳ| ≤ √(d/n) and |(σ_A²)^{(0)} − σ̃_A²| ≤ √(d/n), the size of the uncertainties of the initial θ̄^{(0)} and (σ_A²)^{(0)} is of order O(1/√n). Therefore, for any fixed initial state x^{(0)} ∈ R′, if the transition kernel P(x^{(0)}, ·) concentrates at a rate of Ω(1/√n), then ǫ is bounded away from 0.
For the density function of the Markov transition kernel P(x^{(0)}, ·), recall the order of the Gibbs sampler:
\[
\begin{aligned}
\mu^{(1)} &\sim N\!\left(\bar\theta^{(0)},\ \frac{(\sigma_A^2)^{(0)}}{n}\right), \\
\theta_i^{(1)} &\sim N\!\left(\frac{\mu^{(1)}\sigma_V^2 + Y_i(\sigma_A^2)^{(0)}}{\sigma_V^2+(\sigma_A^2)^{(0)}},\ \frac{(\sigma_A^2)^{(0)}\sigma_V^2}{\sigma_V^2+(\sigma_A^2)^{(0)}}\right), \quad i=1,\dots,n, \\
(\sigma_A^2)^{(1)} &\sim \mathrm{IG}\!\left(a+\frac{n-1}{2},\ b+\frac12\sum_{i=1}^n\big(\theta_i^{(1)}-\bar\theta^{(1)}\big)^2\right). \tag{107}
\end{aligned}
\]
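To make the transition kernel concrete, the following Python sketch simulates one sweep of this Gibbs sampler. This is our own illustration, not code from the paper: the data, the hyperparameters a, b, σ_V², and the helper names are arbitrary choices, with the initial state chosen to mimic Eq. (39).

```python
import numpy as np

def gibbs_step(theta_bar0, sigma2_A0, Y, sigma2_V, a, b, rng):
    """One sweep of the Gibbs sampler in Eq. (107) (illustrative sketch)."""
    n = len(Y)
    # Step 1: draw the global mean mu given the previous state.
    mu = rng.normal(theta_bar0, np.sqrt(sigma2_A0 / n))
    # Step 2: draw each theta_i given mu and the previous variance.
    post_mean = (mu * sigma2_V + Y * sigma2_A0) / (sigma2_V + sigma2_A0)
    post_var = sigma2_A0 * sigma2_V / (sigma2_V + sigma2_A0)
    theta = rng.normal(post_mean, np.sqrt(post_var))
    # Step 3: draw sigma2_A from its inverse-gamma full conditional,
    # using 1/Gamma(shape, rate=1/scale) ~ IG(shape, scale).
    shape = a + (n - 1) / 2
    scale = b + 0.5 * np.sum((theta - theta.mean()) ** 2)
    sigma2_A = scale / rng.gamma(shape)
    return theta.mean(), sigma2_A, theta

rng = np.random.default_rng(0)
Y = rng.normal(2.0, 2.0, size=500)
theta_bar, s2A = Y.mean(), Y.var(ddof=1) - 1.0  # initial state in the spirit of Eq. (39)
for _ in range(100):
    theta_bar, s2A, _ = gibbs_step(theta_bar, s2A, Y, 1.0, 2.0, 2.0, rng)
```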
Then ε can be computed using three steps of integration, according to the reverse order of the Gibbs sampler:
1. For given µ^{(1)} and {θ_i^{(1)}}, integrate the infimum of the density of (σ_A²)^{(1)}. Note that the infimum is over a subset of θ̄^{(0)} and (σ_A²)^{(0)}. However,
\[
(\sigma_A^2)^{(1)} \sim \mathrm{IG}\!\left(a+\frac{n-1}{2},\ b+\frac12\sum_{i=1}^n\big(\theta_i^{(1)}-\bar\theta^{(1)}\big)^2\right) \tag{108}
\]
does not depend on θ̄^{(0)} and (σ_A²)^{(0)}. Therefore, the integration of the infimum of the density in this step always equals one.
2. For given µ^{(1)}, integrate the infimum of the densities of {θ_i^{(1)}}. We first note that {θ_i^{(1)}} appear in the densities only in the forms of θ̄^{(1)} and S = Σ_i(θ_i^{(1)} − θ̄^{(1)})²/(n−1). Therefore, instead of integrating over (θ_1^{(1)}, ..., θ_n^{(1)}), we can integrate over θ̄^{(1)} and S. Furthermore, we have shown that θ̄^{(1)} is conditionally independent of S given (σ_A²)^{(0)} in the proof of Lemma C.2, so we can integrate them separately. Finally, we note that the infimum is over
\[
\left\{(\sigma_A^2)^{(0)} : \left|(\sigma_A^2)^{(0)} - \tilde\sigma_A^2\right| \le \sqrt{d/n}\right\}.
\]
Overall, we need to show that g̃_n(µ^{(1)}) is lower bounded away from 0, where
\[
\begin{aligned}
\tilde g_n(\mu^{(1)}) &:= \int dS\,d\bar\theta \inf_{x^{(0)}\in R'} f_S\big((\sigma_A^2)^{(0)}, n; S\big)\, N\!\left(\frac{\mu^{(1)}\sigma_V^2+\bar Y(\sigma_A^2)^{(0)}}{\sigma_V^2+(\sigma_A^2)^{(0)}},\ \frac{(\sigma_A^2)^{(0)}\sigma_V^2}{n(\sigma_V^2+(\sigma_A^2)^{(0)})};\ \bar\theta\right) \\
&\ge \int dS \inf_{x^{(0)}\in R'} f_S\big((\sigma_A^2)^{(0)}, n; S\big) \cdot \int d\bar\theta \inf_{x^{(0)}\in R'} N\!\left(\frac{\mu^{(1)}\sigma_V^2+\bar Y(\sigma_A^2)^{(0)}}{\sigma_V^2+(\sigma_A^2)^{(0)}},\ \frac{(\sigma_A^2)^{(0)}\sigma_V^2}{n(\sigma_V^2+(\sigma_A^2)^{(0)})};\ \bar\theta\right), \tag{109}
\end{aligned}
\]
where f_S((σ_A²)^{(0)}, n; S) denotes the density function of S = Σ_i(θ_i − θ̄)²/(n−1) for given (σ_A²)^{(0)}, with
\[
\theta_i \sim N\!\left(\frac{\mu^{(1)}\sigma_V^2+Y_i(\sigma_A^2)^{(0)}}{\sigma_V^2+(\sigma_A^2)^{(0)}},\ \frac{(\sigma_A^2)^{(0)}\sigma_V^2}{\sigma_V^2+(\sigma_A^2)^{(0)}}\right), \quad i=1,\dots,n, \tag{110}
\]
and N(·; θ̄) denotes the density function of
\[
\bar\theta \sim N\!\left(\frac{\mu^{(1)}\sigma_V^2+\bar Y(\sigma_A^2)^{(0)}}{\sigma_V^2+(\sigma_A^2)^{(0)}},\ \frac{(\sigma_A^2)^{(0)}\sigma_V^2}{n(\sigma_V^2+(\sigma_A^2)^{(0)})}\right). \tag{111}
\]
3. Finally, we integrate the infimum of the densities of µ^{(1)} to get ε. That is,
\[
\varepsilon = \int d\mu\ \tilde g_n(\mu) \inf_{x^{(0)}\in R'} N\!\left(\bar\theta^{(0)},\ \frac{(\sigma_A^2)^{(0)}}{n};\ \mu\right). \tag{112}
\]
In the following, we show that ε is lower bounded away from 0 in three steps. First, it is easy to see that the density of S does not depend on µ^{(1)}. We show
\[
\int dS \inf_{x^{(0)}\in R'} f_S\big((\sigma_A^2)^{(0)}, n; S\big) = \Theta(1). \tag{113}
\]
Second, we show
\[
\int d\bar\theta \inf_{x^{(0)}\in R'} N\!\left(\frac{\mu^{(1)}\sigma_V^2+\bar Y(\sigma_A^2)^{(0)}}{\sigma_V^2+(\sigma_A^2)^{(0)}},\ \frac{(\sigma_A^2)^{(0)}\sigma_V^2}{n(\sigma_V^2+(\sigma_A^2)^{(0)})};\ \bar\theta\right) \ge 1-\operatorname{erf}\!\left(\frac{C|\mu|+C'}{\sqrt2}\right), \tag{114}
\]
where erf(z) := (2/√π) ∫₀^z e^{−t²} dt and C and C′ are some constants. Finally, we complete the proof by showing
\[
\int d\mu\ \left(1-\operatorname{erf}\!\left(\frac{C|\mu|+C'}{\sqrt2}\right)\right) \inf_{x^{(0)}\in R'} N\!\left(\bar\theta^{(0)},\ \frac{(\sigma_A^2)^{(0)}}{n};\ \mu\right) = \Theta(1). \tag{115}
\]
D.1. Proof of Eq. (113). We omit the superscripts for simplicity. That is, we show
\[
\int dS \inf_{\sigma_A^2 : |\sigma_A^2-\tilde\sigma_A^2|\le\sqrt{d/n}} f_S(\sigma_A^2, n; S) = \Theta(1). \tag{116}
\]
Following the proof of Lemma C.2 from Eq. (89) to Eq. (93), defining
\[
\eta_i := \theta_i - Y_i\frac{\sigma_A^2}{\sigma_V^2+\sigma_A^2} \sim N\!\left(\frac{\mu\sigma_V^2}{\sigma_V^2+\sigma_A^2},\ \frac{\sigma_A^2\sigma_V^2}{\sigma_V^2+\sigma_A^2}\right), \tag{117}
\]
we know
\[
\mathbb{E}\!\left[\left(S - \frac{\sum_i(\eta_i-\bar\eta)^2}{n-1} - \left(\frac{\sigma_A^2}{\sigma_V^2+\sigma_A^2}\right)^2\frac{\Delta}{n-1}\right)^2\right] = O(1/n). \tag{118}
\]
Therefore, defining
\[
S' := \frac{\sum_i(\eta_i-\bar\eta)^2}{n-1} + \left(\frac{\sigma_A^2}{\sigma_V^2+\sigma_A^2}\right)^2\frac{\Delta}{n-1} \tag{119}
\]
and denoting f'_{S'}(σ_A², n; S′) as the density of S′, it suffices to show
\[
\int dS' \inf_{\sigma_A^2 : |\sigma_A^2-\tilde\sigma_A^2|\le\sqrt{d/n}} f'_{S'}(\sigma_A^2, n; S') = \Theta(1). \tag{120}
\]
Furthermore, note that under |σ_A² − σ̃_A²| ≤ √(d/n) we have (σ_V²+σ_A²)/(σ_A²σ_V²) = (σ_V²+σ̃_A²)/(σ̃_A²σ_V²) + O(1/√n) = Θ(1). Then it suffices to show
\[
\int dS'' \inf_{\sigma_A^2 : |\sigma_A^2-\tilde\sigma_A^2|\le\sqrt{d/n}} f''_{S''}(\sigma_A^2, n; S'') = \Theta(1), \tag{121}
\]
where
\[
S'' := \frac{\sigma_V^2+\sigma_A^2}{\sigma_A^2\sigma_V^2} S' = \frac{\sigma_V^2+\sigma_A^2}{\sigma_A^2\sigma_V^2}\frac{\sum_i(\eta_i-\bar\eta)^2}{n-1} + \frac{1}{\sigma_V^2}\frac{\sigma_A^2}{\sigma_V^2+\sigma_A^2}\frac{\Delta}{n-1} \tag{122}
\]
and f''_{S''}(σ_A², n; S″) is the density function of S″. Next, note that \(\frac{\sigma_V^2+\sigma_A^2}{\sigma_A^2\sigma_V^2}\sum_i(\eta_i-\bar\eta)^2 \sim \chi^2_{n-1}\), so
\[
\frac{\frac{\sigma_V^2+\sigma_A^2}{\sigma_A^2\sigma_V^2}\sum_i(\eta_i-\bar\eta)^2-(n-1)}{\sqrt{2(n-1)}} \xrightarrow{d} N(0,1), \tag{123}
\]
which does not depend on n. We define f̃(z, σ_A²; x), for all z ∈ R, as the density function of a random variable
\[
\tilde X_{z,\sigma_A^2} := z + \frac{\frac{\sigma_V^2+\sigma_A^2}{\sigma_A^2\sigma_V^2}\sum_i(\eta_i-\bar\eta)^2-(n-1)}{\sqrt{2(n-1)}}, \tag{124}
\]
so that we know X̃_{z,σ_A²} →_d N(z, 1). The rest of the proof is first to lower bound \(\int dS'' \inf_{\sigma_A^2:|\sigma_A^2-\tilde\sigma_A^2|\le\sqrt{d/n}} f''_{S''}(\sigma_A^2, n; S'')\) using the density function f̃(z, σ_A²; x), and then to show that it is asymptotically lower bounded away from 0.
Notice that \(\frac{1}{\sigma_V^2}\frac{\sigma_A^2}{\sigma_V^2+\sigma_A^2}\frac{\Delta}{n-1}\) is not random, and there exists a constant C₀ such that
\[
\left(\max_{\{\sigma_A^2:|\sigma_A^2-\tilde\sigma_A^2|\le\sqrt{d/n}\}}\frac{\sigma_A^2}{\sigma_V^2+\sigma_A^2} - \min_{\{\sigma_A^2:|\sigma_A^2-\tilde\sigma_A^2|\le\sqrt{d/n}\}}\frac{\sigma_A^2}{\sigma_V^2+\sigma_A^2}\right)\frac{\Delta/\sigma_V^2}{n-1} \le \frac{C_0}{\sqrt{n-1}}. \tag{125}
\]
Finally, we have
\[
\begin{aligned}
\int dS'' \inf_{\sigma_A^2:|\sigma_A^2-\tilde\sigma_A^2|\le\sqrt{d/n}} f''_{S''}(\sigma_A^2, n; S'')
&\ge \inf_{\sigma_A^2:|\sigma_A^2-\tilde\sigma_A^2|\le\sqrt{d/n}} \int dx\ \min\!\left\{\tilde f\!\left(-\frac{C_0}{\sqrt2},\sigma_A^2;x\right),\ \tilde f\!\left(+\frac{C_0}{\sqrt2},\sigma_A^2;x\right)\right\} \\
&= 1 - \sup_{\{\sigma_A^2:|\sigma_A^2-\tilde\sigma_A^2|\le\sqrt{d/n}\}} \int_{-\sqrt2 C_0}^{\sqrt2 C_0} dx\, \tilde f(0,\sigma_A^2;x) \\
&= 1 - \sup_{\{\sigma_A^2:|\sigma_A^2-\tilde\sigma_A^2|\le\sqrt{d/n}\}} \mathbb{P}\big(-\sqrt2 C_0 \le \tilde X_{0,\sigma_A^2} \le \sqrt2 C_0\big) \\
&\to 1 - \int_{-\sqrt2 C_0}^{\sqrt2 C_0} dx\, N(0,1;x) = \Theta(1). \tag{126}
\end{aligned}
\]
D.2. Proof of Eq. (114). We again omit the subscripts for simplicity. The goal is to lower bound
\[
\int d\bar\theta \inf_{\sigma_A^2:|\sigma_A^2-\tilde\sigma_A^2|\le\sqrt{d/n}} N\!\left(\frac{\mu\sigma_V^2+\bar Y\sigma_A^2}{\sigma_V^2+\sigma_A^2},\ \frac{\sigma_A^2\sigma_V^2}{n(\sigma_V^2+\sigma_A^2)};\ \bar\theta\right). \tag{127}
\]
Note that there exist some constants C₁ and C₂ such that
\[
\max_{\sigma_A^2:|\sigma_A^2-\tilde\sigma_A^2|\le\sqrt{d/n}} \frac{\mu\sigma_V^2+\bar Y\sigma_A^2}{\sigma_V^2+\sigma_A^2} - \min_{\sigma_A^2:|\sigma_A^2-\tilde\sigma_A^2|\le\sqrt{d/n}} \frac{\mu\sigma_V^2+\bar Y\sigma_A^2}{\sigma_V^2+\sigma_A^2} \le \frac{C_1|\mu|+C_2}{\sqrt n}, \tag{128}
\]
and another constant C₃ such that
\[
\min_{\sigma_A^2:|\sigma_A^2-\tilde\sigma_A^2|\le\sqrt{d/n}} \frac{\sigma_A^2\sigma_V^2}{n(\sigma_V^2+\sigma_A^2)} \ge \frac{C_3}{n}. \tag{129}
\]
Therefore, we have
\[
\int d\bar\theta \inf_{\sigma_A^2:|\sigma_A^2-\tilde\sigma_A^2|\le\sqrt{d/n}} N\!\left(\frac{\mu\sigma_V^2+\bar Y\sigma_A^2}{\sigma_V^2+\sigma_A^2},\ \frac{\sigma_A^2\sigma_V^2}{n(\sigma_V^2+\sigma_A^2)};\ \bar\theta\right)
\ge 2\int_{(C_1|\mu|+C_2)/\sqrt n}^{\infty} dx\, N(0, C_3/n; x)
= 2\int_{C_4|\mu|+C_5}^{\infty} dx\, N(0,1;x)
= 1-\operatorname{erf}\!\left(\frac{C_4|\mu|+C_5}{\sqrt2}\right), \tag{130}
\]
where C₄ := C₁/√C₃ and C₅ := C₂/√C₃.
D.3. Proof of Eq. (115). We omit the subscripts for simplicity. We show that the following is asymptotically bounded away from 0:
\[
\int d\mu\ \left(1-\operatorname{erf}\!\left(\frac{C_4|\mu|+C_5}{\sqrt2}\right)\right) \inf_{x\in R'} N\!\left(\bar\theta,\ \frac{\sigma_A^2}{n};\ \mu\right). \tag{131}
\]
Note that there exists (σ_A²)'_n ∈ [σ̃_A² − √(d/n), σ̃_A² + √(d/n)] such that
\[
\inf_{(\bar\theta,\sigma_A^2):|\bar\theta-\bar Y|\le\sqrt{d/n},\ |\sigma_A^2-\tilde\sigma_A^2|\le\sqrt{d/n}} N\!\left(\bar\theta,\frac{\sigma_A^2}{n};\mu\right)
= \min\!\left\{N\!\left(\bar Y-\sqrt{\tfrac dn},\frac{(\sigma_A^2)'_n}{n};\mu\right),\ N\!\left(\bar Y+\sqrt{\tfrac dn},\frac{(\sigma_A^2)'_n}{n};\mu\right)\right\}. \tag{132}
\]
Therefore, we have
\[
\begin{aligned}
&\int_{-\infty}^{\infty} d\mu\ \left(1-\operatorname{erf}\!\left(\frac{C_4|\mu|+C_5}{\sqrt2}\right)\right) \inf_{(\bar\theta,\sigma_A^2)} N\!\left(\bar\theta,\frac{\sigma_A^2}{n};\mu\right)
\ \ge\ \int_0^{2\bar Y} d\mu\ \left(1-\operatorname{erf}\!\left(\frac{C_4|\mu|+C_5}{\sqrt2}\right)\right) \inf_{(\bar\theta,\sigma_A^2)} N\!\left(\bar\theta,\frac{\sigma_A^2}{n};\mu\right) \\
&\quad\ge \left(1-\operatorname{erf}\!\left(\frac{C_4|2\bar Y|+C_5}{\sqrt2}\right)\right)\int_0^{2\bar Y} d\mu \inf_{(\bar\theta,\sigma_A^2)} N\!\left(\bar\theta,\frac{\sigma_A^2}{n};\mu\right) \\
&\quad= \left(1-\operatorname{erf}\!\left(\frac{C_4|2\bar Y|+C_5}{\sqrt2}\right)\right)\left[\int_0^{\bar Y} d\mu\, N\!\left(\bar Y+\sqrt{\tfrac dn},\frac{(\sigma_A^2)'_n}{n};\mu\right) + \int_{\bar Y}^{2\bar Y} d\mu\, N\!\left(\bar Y-\sqrt{\tfrac dn},\frac{(\sigma_A^2)'_n}{n};\mu\right)\right] \\
&\quad= \left(1-\operatorname{erf}\!\left(\frac{C_4|2\bar Y|+C_5}{\sqrt2}\right)\right)\left[\int_{-\bar Y}^{0} d\mu\, N\!\left(\sqrt{\tfrac dn},\frac{(\sigma_A^2)'_n}{n};\mu\right) + \int_{0}^{\bar Y} d\mu\, N\!\left(-\sqrt{\tfrac dn},\frac{(\sigma_A^2)'_n}{n};\mu\right)\right]. \tag{133}
\end{aligned}
\]
Finally, we show that
\[
\int_{-\bar Y}^{0} d\mu\, N\!\left(\sqrt{\tfrac dn},\frac{(\sigma_A^2)'_n}{n};\mu\right) + \int_{0}^{\bar Y} d\mu\, N\!\left(-\sqrt{\tfrac dn},\frac{(\sigma_A^2)'_n}{n};\mu\right) \tag{134}
\]
is asymptotically bounded away from 0. Note that when n → ∞, we have (σ_A²)'_n → σ̃_A², so the density functions N(±√(d/n), (σ_A²)'_n/n; µ) concentrate at 0. Therefore
\[
\int_{-\bar Y}^{0} d\mu\, N\!\left(\sqrt{\tfrac dn},\frac{(\sigma_A^2)'_n}{n};\mu\right) + \int_{0}^{\bar Y} d\mu\, N\!\left(-\sqrt{\tfrac dn},\frac{(\sigma_A^2)'_n}{n};\mu\right)
\to 1 - \int_{-\sqrt{d/n}}^{\sqrt{d/n}} dx\, N\!\left(0,\frac{\tilde\sigma_A^2}{n};x\right)
= 1 - \int_{-\sqrt d}^{\sqrt d} dx\, N(0,\tilde\sigma_A^2;x) = \Theta(1). \tag{135}
\]
APPENDIX E: PROOF OF LEMMA E.1
LEMMA E.1. Under the assumptions of Theorem 3.6, recall the definition of the drift function and the "large set" in the proof of Theorem 3.6. With the initial state x^{(0)} given by Eq. (39), there exists a positive integer N, which does not depend on k, such that for all n ≥ N we have
\[
k\,\pi(R_T^c) + \sum_{i=1}^k P^i(x^{(0)}, R_T^c) \le \frac{k}{\sqrt n}\frac{\sqrt b\,(2\sigma_V^2/\delta+1)}{\frac{\Delta}{n-1}-\sigma_V^2-T} + \frac{k(1+k)}{2n}\frac{b}{\big(\frac{\Delta}{n-1}-\sigma_V^2-T\big)^2}. \tag{136}
\]
PROOF. In this proof, we write f_n(x) as f(x) for simplicity. We first consider a Markov chain starting from the initial state x^{(0)} defined by Eq. (39). By Eq. (38), we have (σ_A²)^{(0)} = Σ_{i=1}^n(Y_i − Ȳ)²/(n−1) − σ_V² for large enough n, which implies f(x^{(0)}) = 0. Therefore, for large enough n, we have E(f(x^{(1)})) ≤ b from Lemma C.1. Furthermore, we can continue to get the upper bounds E(f(x^{(i)})) ≤ ib for all i = 1, ..., k. This implies
\[
\mathbb{E}\!\left[\left(\frac{\Delta}{n-1}-\sigma_V^2-(\sigma_A^2)^{(i)}\right)^2\right] \le \frac{ib}{n}, \quad i=1,\dots,k. \tag{137}
\]
By Markov's inequality, we have
\[
\mathbb{P}\!\left(\left|(\sigma_A^2)^{(i)} - \left(\frac{\Delta}{n-1}-\sigma_V^2\right)\right| \ge \left|T-\left(\frac{\Delta}{n-1}-\sigma_V^2\right)\right|\right) \le \frac{ib/n}{\big(T-(\frac{\Delta}{n-1}-\sigma_V^2)\big)^2}, \tag{138}
\]
for i = 1, ..., k. Therefore, we have
\[
\sum_{i=1}^k P^i(x^{(0)}, R_T^c) \le \frac{b}{\big(T-(\frac{\Delta}{n-1}-\sigma_V^2)\big)^2}\sum_{i=1}^k \frac{i}{n} = \frac{k(1+k)}{2n}\frac{b}{\big(T-(\frac{\Delta}{n-1}-\sigma_V^2)\big)^2}. \tag{139}
\]
Next, we consider a Markov chain starting from π. Writing γ(x) := ((σ_V²)² + 2σ_V²σ_A²)/((σ_V²)² + 2σ_V²σ_A² + (σ_A²)²) for brevity, stationarity of π combined with Lemma C.1 gives E_π[f(x)] ≤ E_π[γ(x)² f(x)] + b, and hence
\[
\mathbb{E}_\pi\!\left[\big(1-\gamma(x)^2\big)f(x)\right]
= \mathbb{E}_\pi\!\left[\big(1+\gamma(x)\big)\big(1-\gamma(x)\big)f(x)\right]
= \mathbb{E}_\pi\!\left[\big(1+\gamma(x)\big)\left(\frac{\sigma_A^2}{\sigma_V^2+\sigma_A^2}\right)^2 f(x)\right] \le b, \tag{140}
\]
where E_π[·] denotes that the expectation is over x ∼ π(·). Note that by Hölder's inequality (in the reverse way)
\[
\mathbb{E}_\pi\!\left[\big(1+\gamma(x)\big)\left(\frac{\sigma_A^2}{\sigma_V^2+\sigma_A^2}\right)^2 f(x)\right]
\ge \mathbb{E}_\pi\!\left[\left(\frac{\sigma_A^2}{\sigma_V^2+\sigma_A^2}\right)^2 f(x)\right]
\ge \frac{\big[\mathbb{E}_\pi(f(x)^{1/2})\big]^2}{\mathbb{E}_\pi\!\left[\left(\frac{\sigma_A^2}{\sigma_V^2+\sigma_A^2}\right)^{-2}\right]}
= \frac{\big[\mathbb{E}_\pi(f(x)^{1/2})\big]^2}{\mathbb{E}_\pi\!\left[(1+\sigma_V^2/\sigma_A^2)^2\right]}. \tag{141}
\]
Therefore, we have
\[
\mathbb{E}_\pi\big(f(x)^{1/2}\big) \le \sqrt b\,\sqrt{1+2\sigma_V^2\,\mathbb{E}_\pi(1/\sigma_A^2)+(\sigma_V^2)^2\,\mathbb{E}_\pi\big(1/(\sigma_A^2)^2\big)}. \tag{142}
\]
Next, according to Lemma E.2, we know that E_π(1/σ_A²) ≤ 2/δ and E_π(1/(σ_A²)²) ≤ 2/δ² for large enough n. More specifically, by Lemma E.2 we have
\[
\sqrt{1+2\sigma_V^2\,\mathbb{E}_\pi(1/\sigma_A^2)+(\sigma_V^2)^2\,\mathbb{E}_\pi\big(1/(\sigma_A^2)^2\big)} \le 1+2\sigma_V^2/\delta
\]
for large enough n. Therefore, we get
\[
\mathbb{E}_\pi\left|\frac{\Delta}{n-1}-\sigma_V^2-\sigma_A^2\right| \le \sqrt{\frac bn}\,\big(2\sigma_V^2/\delta+1\big). \tag{143}
\]
Thus, by Markov's inequality,
\[
\pi(R_T^c) = \mathbb{P}_\pi\!\left(\frac{\Delta}{n-1}-\sigma_V^2-\sigma_A^2 \ge \frac{\Delta}{n-1}-\sigma_V^2-T\right) \le \frac{\sqrt{b/n}\,(2\sigma_V^2/\delta+1)}{\frac{\Delta}{n-1}-\sigma_V^2-T}. \tag{144}
\]
Finally, we have
\[
k\,\pi(R_T^c) + \sum_{i=1}^k P^i(x^{(0)}, R_T^c) \le \frac{k}{\sqrt n}\frac{\sqrt b\,(2\sigma_V^2/\delta+1)}{\frac{\Delta}{n-1}-\sigma_V^2-T} + \frac{k(1+k)}{2n}\frac{b}{\big(T-(\frac{\Delta}{n-1}-\sigma_V^2)\big)^2}. \tag{145}
\]
LEMMA E.2. There exists a positive integer N, which only depends on a, b, σ_V², and δ, such that for all n ≥ N we have
\[
\mathbb{E}_\pi(1/\sigma_A^2) \le 2/\delta, \qquad \mathbb{E}_\pi\big(1/(\sigma_A^2)^2\big) \le 2/\delta^2. \tag{146}
\]
PROOF. The posterior distribution can be written as
\[
\pi(x \mid Y_1,\dots,Y_n) = \frac{f_a(x, Y_1,\dots,Y_n)}{\int f_a(x, Y_1,\dots,Y_n)\,dx}, \tag{147}
\]
where we use f_a(x, Y₁, ..., Y_n) to denote the joint distribution of x and {Y_i} when IG(a, b) is used as the prior for σ_A². That is,
\[
f_a(x, Y_1,\dots,Y_n) = \frac{b^a}{\Gamma(a)}(\sigma_A^2)^{-a-1}e^{-b/\sigma_A^2}\prod_{i=1}^n \frac{1}{\sqrt{2\pi\sigma_A^2}}e^{-\frac{(\theta_i-\mu)^2}{2\sigma_A^2}}\frac{1}{\sqrt{2\pi}}e^{-\frac{(Y_i-\theta_i)^2}{2\sigma_V^2}}
= \frac{1}{(2\pi)^n}\frac{b^a}{\Gamma(a)}(\sigma_A^2)^{-a-1-\frac n2}e^{-b/\sigma_A^2}\exp\!\left(-\sum_{i=1}^n\left[\frac{(\theta_i-\mu)^2}{2\sigma_A^2}+\frac{(Y_i-\theta_i)^2}{2\sigma_V^2}\right]\right). \tag{148}
\]
Now using \(\frac{1}{\sigma_A^2}f_a(x,Y_1,\dots,Y_n) = \frac ab f_{a+1}(x,Y_1,\dots,Y_n)\), we have
\[
\mathbb{E}_\pi(1/\sigma_A^2) = \frac ab \frac{\int f_{a+1}(x,Y_1,\dots,Y_n)\,dx}{\int f_a(x,Y_1,\dots,Y_n)\,dx}, \qquad
\mathbb{E}_\pi\big(1/(\sigma_A^2)^2\big) = \frac{a(a+1)}{b^2}\frac{\int f_{a+2}(x,Y_1,\dots,Y_n)\,dx}{\int f_a(x,Y_1,\dots,Y_n)\,dx}. \tag{149}
\]
Therefore, it suffices to show that the ratios ∫f_{a+1}(x, Y₁, ..., Y_n)dx / ∫f_a(x, Y₁, ..., Y_n)dx and ∫f_{a+2}(x, Y₁, ..., Y_n)dx / ∫f_a(x, Y₁, ..., Y_n)dx are (asymptotically) bounded. Next, we focus on the first ratio; the second ratio can be proved using a similar argument.
Using the fact that
\[
\int \exp\!\left(-\frac{\sigma_V^2(\theta_i-\mu)^2+\sigma_A^2(Y_i-\theta_i)^2}{2\sigma_A^2\sigma_V^2}\right) d\theta_i
= \int \exp\!\left(-\frac{\Big(\theta_i-\frac{\sigma_V^2\mu+Y_i\sigma_A^2}{\sigma_A^2+\sigma_V^2}\Big)^2}{2\frac{\sigma_A^2\sigma_V^2}{\sigma_A^2+\sigma_V^2}}\right) d\theta_i\ \exp\!\left(-\frac{(Y_i-\mu)^2}{2(\sigma_V^2+\sigma_A^2)}\right)
= \sqrt{2\pi\frac{\sigma_A^2\sigma_V^2}{\sigma_V^2+\sigma_A^2}}\,\exp\!\left(-\frac{(Y_i-\mu)^2}{2(\sigma_V^2+\sigma_A^2)}\right) \tag{150}
\]
and
\[
\int \exp\!\left(-\sum_{i=1}^n\frac{(Y_i-\mu)^2}{2(\sigma_V^2+\sigma_A^2)}\right) d\mu
= \int \exp\!\left(-\frac{(\mu-\bar Y)^2}{2(\sigma_V^2+\sigma_A^2)/n}\right) d\mu\ \exp\!\left(-\frac{\sum_i Y_i^2-n\bar Y^2}{2(\sigma_V^2+\sigma_A^2)}\right)
= \exp\!\left(-\frac{\sum_{i=1}^n(Y_i-\bar Y)^2}{2(\sigma_V^2+\sigma_A^2)}\right)\sqrt{\frac{2\pi(\sigma_V^2+\sigma_A^2)}{n}}, \tag{151}
\]
we can write E_π(1/σ_A²) as a function of ∆ = Σ_i(Y_i − Ȳ)². Denote h_n(∆) := E_π(1/σ_A²); then we have
\[
h_n(\Delta) := \frac{\int (\sigma_A^2)^{-a-2}e^{-b/\sigma_A^2}(\sigma_V^2+\sigma_A^2)^{-\frac{n-1}2}\exp\!\big(-\frac{\Delta}{2(\sigma_V^2+\sigma_A^2)}\big)\,d\sigma_A^2}{\int (\sigma_A^2)^{-a-1}e^{-b/\sigma_A^2}(\sigma_V^2+\sigma_A^2)^{-\frac{n-1}2}\exp\!\big(-\frac{\Delta}{2(\sigma_V^2+\sigma_A^2)}\big)\,d\sigma_A^2}. \tag{152}
\]
Next, we show that h_n((n−1)(c+σ_V²)) is (asymptotically) bounded for any fixed c > 0. Note that
\[
\int (\sigma_A^2)^{-a-1}e^{-b/\sigma_A^2}(\sigma_V^2+\sigma_A^2)^{-\frac{n-1}2}\exp\!\left(-\frac{\Delta}{2(\sigma_V^2+\sigma_A^2)}\right) d\sigma_A^2
= \int (\sigma_A^2)^{-a-1}e^{-b/\sigma_A^2}\left[\frac{1}{\sqrt{\sigma_V^2+\sigma_A^2}}\exp\!\left(-\frac{\Delta/(n-1)}{2(\sigma_V^2+\sigma_A^2)}\right)\right]^{n-1} d\sigma_A^2. \tag{153}
\]
We change variable y = 1/(σ_V² + σ_A²) and apply the Laplace approximation: for any c > 0, with ∆ = (n−1)(c+σ_V²), letting y₀ = arg max_y y exp(−(c+σ_V²)y), both integrals concentrate around the maximizer y₀ = 1/(c+σ_V²), and the ratio evaluates to h_n((n−1)(c+σ_V²)) ≤ (1/c)(1 + O(n^{−1/2})), where the term O(n^{−1/2}) only depends on the constants a, b, and σ_V². Finally, since for all n ≥ N₀ we have ∆ ≥ (n−1)(σ_V² + δ), this implies h_n(∆) ≤ (1/δ)(1 + O(n^{−1/2})) for all n ≥ N₀. Therefore, there exists a large enough positive integer N₀, which only depends on a, b, σ_V², and δ, such that for all n ≥ N₀ we have E_π(1/σ_A²) = h_n(∆) ≤ (1/δ)(1 + O(n^{−1/2})) ≤ 2/δ. For E_π(1/(σ_A²)²), we can follow a similar argument to show that E_π(1/(σ_A²)²) ≤ 2/δ² for large enough n. Therefore, we can conclude that there exists a large enough positive integer N, which only depends on a, b, σ_V², and δ, such that for all n ≥ N we have both E_π(1/σ_A²) ≤ 2/δ and E_π(1/(σ_A²)²) ≤ 2/δ², which completes the proof.
APPENDIX F: PROOF OF THEOREM 3.1 (CONCLUSION OF THE PROOF)
To complete the proof, we still need to show a multi-step minorization condition with ε bounded away from zero. Note that the 1-step drift condition directly implies a k-step drift condition with λ = 1/2^k and b = O(1/p). Next, note that
\[
X^{(k+1)} \mid x \sim N\!\left(\frac{1}{4^k}\frac{x}{2},\ \left(1-\frac{1}{4^{k+1}}\right)I_p\right). \tag{163}
\]
Therefore, according to the k-step drift condition, for all states x in the small set we have c√p ≤ ‖x‖₂ ≤ C√p for some positive constants c < 1 and C > 1. We then choose k such that ‖x‖₂/4^k = O(1/p), so that the integral of the minimum of the two one-dimensional densities N((1/4^k)C√p, 1 − 1/4^{k+1}) and N(−(1/4^k)C√p, 1 − 1/4^{k+1}) is 1 − O(1/p). Then, by writing the multivariate Gaussian density as a product of one-dimensional densities, the total minorization volume can be controlled so that ε = (1 − O(1/p))^p > 0 is bounded away from zero as p → ∞. Therefore, we can choose k = ⌊C log(p)⌋ + 1 for a large enough constant C. Overall, we have proven a k-step drift condition and a corresponding minorization condition whose ε is asymptotically bounded away from zero, which completes the proof.
APPENDIX G: PROOF OF THEOREM 3.3
The key step of the proof is to show the following drift condition:
\[
\mathbb{E}\big[f_n(X^{(k+1)}) \mid x^{(k)}\big] \le b, \tag{165}
\]
where b = O(1/n). For simplicity of notation, we omit the index k in the rest of the proof. The computation of E[f_n(X^{(k+1)}) | x^{(k)}] has two steps. We first compute the conditional expectation over β | λ ∼ Ga(ρ + nα, δ + nλ̄). Using the fact that 1/β has an inverse gamma distribution, we have
\[
\mathbb{E}_{\beta\mid\lambda}[1/\beta] = \frac{\delta+n\bar\lambda}{\rho+n\alpha-1}, \qquad
\mathbb{E}_{\beta\mid\lambda}\big[1/\beta^2\big] = \frac{(\delta+n\bar\lambda)^2}{(\rho+n\alpha-1)(\rho+n\alpha-2)}. \tag{166}
\]
Next, we compute the conditional expectation over λ̄ given β. Note that by summing (conditionally) independent Gamma distributions, we know
\[
n\bar\lambda \mid \beta \sim \mathrm{Ga}\big(n(\bar Y+\alpha),\ 1+\beta\big), \tag{167}
\]
which gives
\[
\mathbb{E}_{\bar\lambda\mid\beta}[\bar\lambda] = \frac{\bar Y+\alpha}{1+\beta}, \qquad
\mathbb{E}_{\bar\lambda\mid\beta}[\bar\lambda^2] = \frac{(\bar Y+\alpha)\big(\bar Y+\alpha+\frac1n\big)}{(1+\beta)^2}. \tag{168}
\]
Using the assumption on Ȳ and the fact that 1/(1+β) ∈ (0, 1], the desired bound follows, so that E[f_n(X^{(k+1)}) | x^{(k)}] ≤ b with b = O(1/n). Now the proof can be completed by verifying that the Gibbs sampler satisfies the minorization condition P(x, ·) ≥ εQ(·) for all x in the small set {λ̄ − α/β = O(1/√n)}. We only need to show that ε is asymptotically bounded away from 0 as n → ∞. Note that the last step of updating β in the Gibbs sampler does not depend on the previous state; it then suffices to derive the minorization condition for the step nλ̄ | β ∼ Ga(n(Ȳ + α), 1 + β) for all β in the small set. Let β_max and β_min be the maximum and minimum values of β in the small set. Then, from the explicit form of the density of λ̄, one can see that ε must be asymptotically bounded away from 0 if 1/(1 + β_min) − 1/(1 + β_max) = O(1/√n), which is satisfied by the small set.
This completes the proof.
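To fix ideas, the two conditionals above can be simulated directly. The Python sketch below assumes the underlying hierarchy is the Poisson-Gamma model Y_i | λ_i ∼ Poisson(λ_i), λ_i | β ∼ Ga(α, β), β ∼ Ga(ρ, δ) — a reading that is consistent with the stated full conditionals but is our own assumption; all numeric values are illustrative.

```python
import numpy as np

def gibbs_sweep(beta, Y, alpha, rho, delta, rng):
    """One sweep of the two-step Gibbs sampler under the assumed hierarchy."""
    n = len(Y)
    # lam_i | Y_i, beta ~ Ga(Y_i + alpha, rate = 1 + beta)
    lam = rng.gamma(Y + alpha) / (1.0 + beta)
    # beta | lam ~ Ga(rho + n*alpha, rate = delta + sum(lam))
    beta = rng.gamma(rho + n * alpha) / (delta + lam.sum())
    return lam, beta

rng = np.random.default_rng(1)
Y = rng.poisson(3.0, size=1000)
alpha, rho, delta, beta = 2.0, 2.0, 1.0, 1.0
for _ in range(50):
    lam, beta = gibbs_sweep(beta, Y, alpha, rho, delta, rng)
# The small-set statistic lam.mean() - alpha/beta should be O(1/sqrt(n)).
print(lam.mean() - alpha / beta)
```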
APPENDIX H: PROOF OF REMARK 3.5
[23, Appendix C] states another way to obtain samples from the posterior of the MCMC model related to the James–Stein estimator. More specifically, recall the model
\[
Y_i \mid \theta_i \sim N(\theta_i, \sigma_V^2),\ 1\le i\le n; \quad
\theta_i \mid \mu, \sigma_A^2 \sim N(\mu, \sigma_A^2),\ 1\le i\le n; \quad
\mu \sim \text{flat prior on } \mathbb R; \quad
\sigma_A^2 \sim \mathrm{IG}(a, b), \tag{171}
\]
where σ_V² is assumed to be known, Y = (Y₁, ..., Y_n) is the observed data, and x = (σ_A², µ, θ₁, ..., θ_n) are the parameters. Then the posterior can be written as
\[
\pi(\theta,\mu,\sigma_A^2 \mid Y) = \pi(\theta \mid \mu,\sigma_A^2,Y)\,\pi(\mu \mid \sigma_A^2,Y)\,\pi(\sigma_A^2 \mid Y), \tag{172}
\]
where π(θ | µ, σ_A², Y) is a product of independent univariate normal densities
\[
\theta_i \sim N\!\left(\frac{\sigma_A^2 Y_i+\sigma_V^2\mu}{\sigma_V^2+\sigma_A^2},\ \frac{\sigma_A^2\sigma_V^2}{\sigma_A^2+\sigma_V^2}\right) \tag{173}
\]
and π(µ | σ_A², Y) is a normal distribution
\[
\mu \mid \sigma_A^2, Y \sim N\!\left(\bar Y,\ \frac{\sigma_A^2+\sigma_V^2}{n}\right). \tag{174}
\]
Therefore, one can use a rejection sampler with proposal from IG(a, b) to obtain independent samples from π(σ 2 A | Y ). However, we show that the acceptance probability of this rejection sampler decreases (typically exponentially) fast with n. To see this, note that
π(σ 2 A | Y ) ∝ 1 (σ 2 A ) a+1 (σ 2 A + σ 2 V ) (n−1)/2 exp(− 1 b − n i=1 (Y i −Ȳ ) 2 2(σ 2 A + σ 2 V )
).
We let g(σ 2 A ) be the density of IG(a, b), then using the fact
π(σ 2 A | Y ) g(σ 2 A ) ∝ 1 (σ 2 A ) a+1 (σ 2 A + σ 2 V ) (n−1)/2 exp(− 1 b − n i=1 (Y i −Ȳ ) 2 2(σ 2 A + σ 2 V ) )/g(σ 2 A ) (176) = (σ 2 A + σ 2 V ) (1−n)/2 exp(− n i=1 (Y i −Ȳ ) 2 /2(σ 2 A + σ 2 V )) (177) ≤ M := n i=1 (Y i −Ȳ ) 2 n − 1 (1−n)/2 e − n−1 2(178)
where the upper bound M is achieved when
σ 2 A = n i=1 (Yi−Ȳ ) 2 n−1 − σ 2 V .
Then the acceptance probability of the rejection sampler is
\[
\mathbb{E}_{\sigma_A^2\sim \mathrm{IG}(a,b)}\!\left[\frac{(\sigma_A^2+\sigma_V^2)^{(1-n)/2}\exp\!\big(-\sum_{i=1}^n(Y_i-\bar Y)^2/(2(\sigma_A^2+\sigma_V^2))\big)}{M}\right] \tag{179}
\]
\[
= \mathbb{E}_{\sigma_A^2\sim \mathrm{IG}(a,b)}\!\left[\left(\frac{\sigma_A^2+\sigma_V^2}{\sum_i(Y_i-\bar Y)^2/(n-1)}\right)^{(1-n)/2}\exp\!\left(\frac{n-1}{2}\left(1-\frac{\sum_i(Y_i-\bar Y)^2/(n-1)}{\sigma_A^2+\sigma_V^2}\right)\right)\right] \tag{180}
\]
\[
= \mathbb{E}_{\sigma_A^2\sim \mathrm{IG}(a,b)}\big[Z^{(n-1)/2}\big], \qquad Z := \frac{\sum_i(Y_i-\bar Y)^2/(n-1)}{\sigma_A^2+\sigma_V^2}\exp\!\left(1-\frac{\sum_i(Y_i-\bar Y)^2/(n-1)}{\sigma_A^2+\sigma_V^2}\right), \tag{181}
\]
\[
\le 1, \tag{182}
\]
where the last inequality comes from exp(x − 1) ≥ x. We can see that, under mild conditions such that Σ_{i=1}^n(Y_i − Ȳ)²/(n−1) converges to a constant, the acceptance probability of the rejection sampler, E[Z^{(n−1)/2}], goes to zero very fast.
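The exponential collapse is easy to see numerically. The following Monte Carlo sketch is our own illustration (all parameter values are arbitrary choices); it estimates E[Z^{(n−1)/2}] for increasing n:

```python
import numpy as np

# Estimate the acceptance probability E[Z^((n-1)/2)] with
# Z = x * exp(1 - x), x = (Delta/(n-1)) / (sigma2_A + sigma2_V),
# for sigma2_A ~ IG(a, b). All numbers below are illustrative.
rng = np.random.default_rng(0)
a, b, sigma2_V = 2.0, 2.0, 1.0
for n in [10, 50, 100, 500]:
    Y = rng.normal(0.0, np.sqrt(sigma2_V + 1.0), size=n)
    ratio = np.sum((Y - Y.mean()) ** 2) / (n - 1)      # Delta / (n-1)
    sigma2_A = b / rng.gamma(a, size=200_000)           # IG(a, b) draws
    x = ratio / (sigma2_A + sigma2_V)
    accept = np.mean((x * np.exp(1.0 - x)) ** ((n - 1) / 2))
    print(n, accept)   # the estimate shrinks rapidly as n grows
```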
DEFINITION 2.3 (Generalized drift condition on a large set). There exists a drift function f : X → R⁺ such that, for some λ < 1 and b < ∞, E[f(X^{(1)}) | X^{(0)} = x] ≤ λ f(x) + b for all x in the large set R.

THEOREM 3.1. For the two-step Gibbs sampler for our multivariate Gaussian model, suppose the initial state satisfies ‖x^{(0)}‖₂² = O(p); then there exist positive constants C₁ and C₂ such that the mixing time is at most C₁ log(p) + C₂.

COROLLARY 3.4. Under the assumptions of Theorem 3.3, if the initial state satisfies β^{(0)} = O(1) and 1/β^{(0)} = O(1), the mixing time of the Gibbs sampler is O(1).

REMARK 3.5. When the dimension of the state space is fixed, [23, Appendix C] states a way to use a rejection sampler to obtain samples from the posterior of this model. However, it is easy to show that the acceptance probability of the rejection sampler in [23, Appendix C] decreases very fast as the dimension increases; see Appendix H for more details. As we show that the mixing time of our Gibbs sampler for this model is O(1), the rejection sampler in [23, Appendix C] is not as efficient in high dimensions as our Gibbs sampler.

Even if the initial state does not satisfy Eq. (39), the Markov chain still mixes fast: the mixing time becomes O(log n) instead of O(1).

COROLLARY 3.9. Under the assumption in Eq. (38), if the initial state of the Markov chain satisfies x^{(0)} ∈ {x ∈ R_δ : f_n(x) = o(n/log n)}, the mixing time of the Gibbs sampler satisfies K_{c,x^{(0)}} = O(log n) for any given 0 < c < 1.

Discussions. We end this section by giving some further remarks and comments on the analysis of the Gibbs sampler.
• Drift function: In the proof of Theorem 3.6, we actually used a fitted family of drift functions, obtained by scaling the drift functions in Eq. (41) by 1/n. To check this, we select a "typical" state x̄ = (σ̄_A², µ̄, θ̄₁, ..., θ̄_n) such that θ̄ᵢ = Yᵢ and σ̄_A² = ∆/(n−1), for which the scaled drift function satisfies f_n(x̄)/n = n(σ_V²)²/n = Θ(1). We then hope to establish b such that b/n = o(1), or equivalently b = o(n). Indeed, the established generalized drift condition has b = O(1) = o(n), which implies that the definition of a fitted family of drift functions is satisfied for {f_n(x)/n}.
• "Large set": The result in Eq. (8) … as shown in the proof of Theorem 3.6.
• The upper bound in Eq. (46): Although the upper bound of k π(R_T^c) + Σ_{i=1}^k P^i(x^{(0)}, R_T^c) … we have K̄_c := (log(C₁) − log(c) + log(3))/log(1/γ) as an upper bound of the mixing time. Finally, it can be seen that both K̄_c = Θ(1) and N_c = Θ(1).

LEMMA C.1 (restated). Under the assumptions of Theorem 3.6, let ∆ = Σ_i(Y_i − Ȳ)² and x = (σ_A², µ, θ₁, ..., θ_n). Define the fitted family of drift functions {f_n(x)} by f_n(x) = n(θ̄ − Ȳ)² + n(∆/(n−1) − σ_V² − σ_A²)². Then E[f_n(x^{(1)}) | x^{(0)}] ≤ (((σ_V²)² + 2σ_V²(σ_A²)^{(0)})/((σ_V²)² + 2σ_V²(σ_A²)^{(0)} + ((σ_A²)^{(0)})²))² f_n(x^{(0)}) + O(1).
Acknowledgments. The authors thank Jim Hobert and Gareth Roberts for helpful discussions, and two referees for their valuable comments which have significantly improved the quality of the paper. J.Y. also thanks Quan Zhou and Aaron Smith for helpful comments on the proof. This research is supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada.
ADAMCZAK, R. (2008). A tail inequality for suprema of unbounded empirical processes with applications to Markov chains. Electronic Journal of Probability 13 1000-1034.
ADAMCZAK, R. and BEDNORZ, W. (2015). Exponential concentration inequalities for additive functionals of Markov chains. ESAIM: Probability and Statistics 19 440-481.
BAXENDALE, P. H. (2005). Renewal theory and computable convergence rates for geometrically ergodic Markov chains. The Annals of Applied Probability 15 700-738.
BOU-RABEE, N. and HAIRER, M. (2013). Nonasymptotic mixing of the MALA algorithm. IMA Journal of Numerical Analysis 33 80-110.
BROOKS, S., GELMAN, A., JONES, G. and MENG, X.-L. (2011). Handbook of Markov Chain Monte Carlo. CRC Press.
CHOI, H. M. and HOBERT, J. P. (2013). The Polya-Gamma Gibbs sampler for Bayesian logistic regression is uniformly ergodic. Electronic Journal of Statistics 7 2054-2064.
COBHAM, A. (1965). The intrinsic computational difficulty of functions. In Logic, Methodology and Philosophy of Science: Proceedings of the 1964 International Congress (Y. Bar-Hillel, ed.) 24-30. North-Holland Publishing.
COOK, S. A. (1971). The complexity of theorem-proving procedures. In Proceedings of the Third Annual ACM Symposium on Theory of Computing 151-158. ACM.
DALALYAN, A. S. (2017). Theoretical guarantees for approximate sampling from smooth and log-concave densities. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 79 651-676.
DAVIS, B. and HOBERT, J. P. (2020). On the convergence complexity of Gibbs samplers for a family of simple Bayesian random effects models. Methodology and Computing in Applied Probability 1-29.
DURMUS, A. and MOULINES, E. (2017). Nonasymptotic convergence analysis for the unadjusted Langevin algorithm. The Annals of Applied Probability 27 1551-1587.
DWIVEDI, R., CHEN, Y., WAINWRIGHT, M. J. and YU, B. (2019). Log-concave sampling: Metropolis-Hastings algorithms are fast. Journal of Machine Learning Research 20 1-42.
DYER, M. and FRIEZE, A. (2003). Randomly coloring graphs with lower bounds on girth and maximum degree. Random Structures & Algorithms 23 167-179.
EBERLE, A. (2014). Error bounds for Metropolis-Hastings algorithms applied to perturbations of Gaussian measures in high dimensions. The Annals of Applied Probability 24 337-377.
EFTHYMIOU, C., HAYES, T. P., ŠTEFANKOVIC, D., VIGODA, E. and YIN, Y. (2016). Convergence of MCMC and loopy BP in the tree uniqueness region for the hard-core model. In Foundations of Computer Science (FOCS), 2016 IEEE 57th Annual Symposium on 704-713. IEEE.
FLEGAL, J. M., HARAN, M. and JONES, G. L. (2008). Markov chain Monte Carlo: Can we trust the third significant figure? Statistical Science 250-260.
GELFAND, A. E. and SMITH, A. F. (1990). Sampling-based approaches to calculating marginal densities. Journal of the American Statistical Association 85 398-409.
GELMAN, A. and RUBIN, D. B. (1992). Inference from iterative simulation using multiple sequences. Statistical Science 457-472.
GILKS, W. R., RICHARDSON, S. and SPIEGELHALTER, D. (1995). Markov Chain Monte Carlo in Practice. CRC Press.
HAIRER, M., MATTINGLY, J. C. and SCHEUTZOW, M. (2011). Asymptotic coupling and a general form of Harris' theorem with applications to stochastic delay equations. Probability Theory and Related Fields 149 223-259.
JERRUM, M., SON, J.-B., TETALI, P. and VIGODA, E. (2004). Elementary bounds on Poincaré and log-Sobolev constants for decomposable Markov chains. Annals of Applied Probability 1741-1765.
JIN, Z. and HOBERT, J. P. (2021). On the convergence rate of the "out-of-order" block Gibbs sampler. arXiv preprint arXiv:2110.14611.
JONES, G. L., HARAN, M., CAFFO, B. S. and NEATH, R. (2006). Fixed-width output analysis for Markov chain Monte Carlo. Journal of the American Statistical Association 101 1537-1547.
JONES, G. L. and HOBERT, J. P. (2001). Honest exploration of intractable probability distributions via Markov chain Monte Carlo. Statistical Science 312-334.
JONES, G. L. and HOBERT, J. P. (2004). Sufficient burn-in for Gibbs samplers for a hierarchical random effects model. The Annals of Statistics 32 784-817.
KHARE, K. and HOBERT, J. P. (2013). Geometric ergodicity of the Bayesian lasso. Electronic Journal of Statistics 7 2150-2163.
ŁATUSZYŃSKI, K., MIASOJEDOW, B. and NIEMIRO, W. (2013). Nonasymptotic bounds on the estimation error of MCMC algorithms. Bernoulli 19 2033-2066.
ŁATUSZYŃSKI, K. and NIEMIRO, W. (2011). Rigorous confidence bounds for MCMC under a geometric drift condition. Journal of Complexity 27 23-38.
LOVÁSZ, L. and VEMPALA, S. (2003). Hit-and-run is fast and fun. Preprint, Microsoft Research.
LOVÁSZ, L. and VEMPALA, S. (2006). Hit-and-run from a corner. SIAM Journal on Computing 35 985-1005.
MANGOUBI, O. and SMITH, A. (2017). Rapid mixing of Hamiltonian Monte Carlo on strongly log-concave distributions. arXiv preprint arXiv:1708.07114.
MARTIN, R. A. and RANDALL, D. (2000). Sampling adsorbing staircase walks using a new Markov chain decomposition method. In Foundations of Computer Science, 2000. Proceedings. 41st Annual Symposium on 492-502. IEEE.
MEDINA-AGUAYO, F., RUDOLF, D. and SCHWEIZER, N. (2019). Perturbation bounds for Monte Carlo within Metropolis via restricted approximations. Stochastic Processes and their Applications.
MEYN, S. P. and TWEEDIE, R. L. (1994). Computable bounds for geometric convergence rates of Markov chains. The Annals of Applied Probability 981-1011.
MEYN, S. P. and TWEEDIE, R. L. (2012). Markov Chains and Stochastic Stability. Springer Science & Business Media.
PAULIN, D. (2015). Concentration inequalities for Markov chains by Marton couplings and spectral methods. Electronic Journal of Probability 20 1-32.
QIN, Q. and HOBERT, J. P. (2019). Convergence complexity analysis of Albert and Chib's algorithm for Bayesian probit regression. The Annals of Statistics 47 2320-2347.
RAJARATNAM, B. and SPARKS, D. (2015). MCMC-based inference in the era of big data: A fundamental analysis of the convergence complexity of high-dimensional chains. arXiv preprint arXiv:1508.00947.
ROBERTS, G. O., GELMAN, A. and GILKS, W. R. (1997). Weak convergence and optimal scaling of random walk Metropolis algorithms. The Annals of Applied Probability 7 110-120.
ROBERTS, G. O. and ROSENTHAL, J. S. (1998). Optimal scaling of discrete approximations to Langevin diffusions. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 60 255-268.
ROBERTS, G. O. and ROSENTHAL, J. S. (2016). Complexity bounds for Markov chain Monte Carlo algorithms via diffusion limits. Journal of Applied Probability 53 410-420.
ROBERTS, G. O. and TWEEDIE, R. L. (1999). Bounds on regeneration times and convergence rates for Markov chains. Stochastic Processes and their Applications 80 211-229.
ROBERTS, G. O. and TWEEDIE, R. L. (2000). Rates of convergence of stochastically monotone and continuous time Markov models. Journal of Applied Probability 37 359-373.
ROSENTHAL, J. S. (1995). Minorization conditions and convergence rates for Markov chain Monte Carlo. Journal of the American Statistical Association 90 558-566.
ROSENTHAL, J. S. (1995). Rates of convergence for Gibbs sampling for variance component models. The Annals of Statistics 740-761.
ROSENTHAL, J. S. (1996). Analysis of the Gibbs sampler for a model related to James-Stein estimators. Statistics and Computing 6 269-275.
ROSENTHAL, J. S. (2002). Quantitative convergence rates of Markov chains: A simple account. Electronic Communications in Probability 7 123-128.
RUDOLF, D. (2009). Explicit error bounds for lazy reversible Markov chain Monte Carlo. Journal of Complexity 25 11-24.
RUDOLF, D. (2010). Error bounds for computing the expectation by Markov chain Monte Carlo.
RUDOLF, D. (2011). Explicit error bounds for Markov chain Monte Carlo. arXiv preprint arXiv:1108.3201.
RUDOLF, D. and SPRUNGK, B. (2018). On a generalization of the preconditioned Crank-Nicolson Metropolis algorithm. Foundations of Computational Mathematics 18 309-343.
SINCLAIR, A. and JERRUM, M. (1989). Approximate counting, uniform generation and rapidly mixing Markov chains. Information and Computation 82 93-133.
VEMPALA, S. (2005). Geometric random walk: a survey. Combinatorial and Computational Geometry 52 577-616.
WOODARD, D., SCHMIDLER, S. and HUBER, M. (2009). Sufficient conditions for torpid mixing of parallel and simulated tempering. Electronic Journal of Probability 14 780-804.
WOODARD, D. B., SCHMIDLER, S. C. and HUBER, M. (2009). Conditions for rapid mixing of parallel and simulated tempering on multimodal distributions. The Annals of Applied Probability 617-640.
YANG, J., ROBERTS, G. O. and ROSENTHAL, J. S. (2020). Optimal scaling of random-walk Metropolis algorithms on general target distributions. Stochastic Processes and their Applications 130 6094-6132.
ZHOU, Q., YANG, J., VATS, D., ROBERTS, G. O. and ROSENTHAL, J. S. (2021). Dimension-free mixing for high-dimensional Bayesian variable selection. arXiv preprint arXiv:2105.05719.
ZORICH, V. A. and COOKE, R. (2004). Mathematical Analysis II. Springer Science & Business Media.
| []
|
[
"Reconfigurable Co-Processor Architecture with Limited Numerical Precision to Accelerate Deep Convolutional Neural Networks",
"Reconfigurable Co-Processor Architecture with Limited Numerical Precision to Accelerate Deep Convolutional Neural Networks"
]
| [
"Sasindu Wijeratne \nDept. of Electronic and Telecommunication Engineering\nUniversity of Moratuwa\nSri Lanka\n",
"Sandaruwan Jayaweera \nDept. of Electronic and Telecommunication Engineering\nUniversity of Moratuwa\nSri Lanka\n",
"Mahesh Dananjaya \nDept. of Electronic and Telecommunication Engineering\nUniversity of Moratuwa\nSri Lanka\n",
"Ajith Pasqual [email protected] \nDept. of Electronic and Telecommunication Engineering\nUniversity of Moratuwa\nSri Lanka\n"
]
| [
"Dept. of Electronic and Telecommunication Engineering\nUniversity of Moratuwa\nSri Lanka",
"Dept. of Electronic and Telecommunication Engineering\nUniversity of Moratuwa\nSri Lanka",
"Dept. of Electronic and Telecommunication Engineering\nUniversity of Moratuwa\nSri Lanka",
"Dept. of Electronic and Telecommunication Engineering\nUniversity of Moratuwa\nSri Lanka"
]
| []
| Convolutional Neural Networks (CNNs) are widely used in deep learning applications, e.g. visual systems, robotics etc. However, existing software solutions are not efficient. Therefore, many hardware accelerators have been proposed optimizing performance, power and resource utilization of the implementation. Amongst existing solutions, Field Programmable Gate Array (FPGA) based architecture provides better cost-energyperformance trade-offs as well as scalability and minimizing development time. In this paper, we present a model-independent reconfigurable co-processing architecture to accelerate CNNs. Our architecture consists of parallel Multiply and Accumulate (MAC) units with caching techniques and interconnection networks to exploit maximum data parallelism. In contrast to existing solutions, we introduce limited precision 32 bit Q-format fixed point quantization for arithmetic representations and operations. As a result, our architecture achieved significant reduction in resource utilization with competitive accuracy. Furthermore, we developed an assembly-type microinstructions to access the co-processing fabric to manage layer-wise parallelism, thereby making re-use of limited resources. Finally, we have tested our architecture up to 9x9 kernel size on Xilinx Virtex 7 FPGA, achieving a throughput of up to 226.2 GOp/S for 3x3 kernel size. | 10.1109/asap.2018.8445087 | [
"https://arxiv.org/pdf/2109.03040v1.pdf"
]
| 52,112,126 | 2109.03040 | b3bbb699f57d8e24a227f5ede0ec62c3d341dffa |
Reconfigurable Co-Processor Architecture with Limited Numerical Precision to Accelerate Deep Convolutional Neural Networks
Sasindu Wijeratne
Dept. of Electronic and Telecommunication Engineering
University of Moratuwa
Sri Lanka
Sandaruwan Jayaweera
Dept. of Electronic and Telecommunication Engineering
University of Moratuwa
Sri Lanka
Mahesh Dananjaya
Dept. of Electronic and Telecommunication Engineering
University of Moratuwa
Sri Lanka
Ajith Pasqual [email protected]
Dept. of Electronic and Telecommunication Engineering
University of Moratuwa
Sri Lanka
Reconfigurable Co-Processor Architecture with Limited Numerical Precision to Accelerate Deep Convolutional Neural Networks
Index Terms: CNN, Reconfigurable Co-Processor, High-Throughput Architecture, Hardware Acceleration, Programmable Processing Fabric, Q-Point Fixed Precision
Convolutional Neural Networks (CNNs) are widely used in deep learning applications, e.g., visual systems and robotics. However, existing software solutions are not efficient. Therefore, many hardware accelerators have been proposed, optimizing the performance, power and resource utilization of the implementation. Amongst existing solutions, Field Programmable Gate Array (FPGA) based architectures provide better cost-energy-performance trade-offs, as well as scalability and reduced development time. In this paper, we present a model-independent reconfigurable co-processing architecture to accelerate CNNs. Our architecture consists of parallel Multiply and Accumulate (MAC) units with caching techniques and interconnection networks to exploit maximum data parallelism. In contrast to existing solutions, we introduce limited precision 32 bit Q-format fixed point quantization for arithmetic representations and operations. As a result, our architecture achieved a significant reduction in resource utilization with competitive accuracy. Furthermore, we developed assembly-type microinstructions to access the co-processing fabric and manage layer-wise parallelism, thereby enabling re-use of limited resources. Finally, we have tested our architecture up to a 9x9 kernel size on a Xilinx Virtex 7 FPGA, achieving a throughput of up to 226.2 GOp/S for a 3x3 kernel size.
I. INTRODUCTION
Convolutional Neural Networks (CNNs or ConvNets) are perhaps the most widely used neural network model for Deep Learning (DL) applications, e.g., image classification, speech recognition and language processing. As examples, LeCun et al. in [1][2] and Hinton et al. in [3] provided details of such applications with record accuracy. ConvNets have simple kernel-based computational structures which significantly reduce their computational time and resource requirements. As a result, ConvNets are algorithmically simpler and more accurate compared to other neural network models [3].
CNNs are very computationally intensive, and the convolution layers account for the largest part by far, because convolution requires a large number of multiply-accumulate operations. The computational complexity of ConvNets is also increasing with today's complex learning models, representations and dimensionality of data. Therefore, it is increasingly challenging to train DL models and run inference with them. However, existing software solutions are not efficient. Therefore, many accelerators have been proposed over the years to carry out CNNs efficiently in hardware. In particular, a number of FPGA-based hardware accelerators [4][5][6][7][8] have been explored, taking advantage of their reconfigurability, programmability and low power. Among FPGA accelerators, co-processing architectures [7][8] offer significant flexibility and scalability. Also, well-designed FPGA architectures can be used to exploit a high level of data parallelism.
Advancements in today's ConvNet accelerators focus on optimizing the cost-energy-performance trade-off. However, the complexity of deep learning applications has also created a demand for better performance together with high accuracy. In the recent past, single and double precision floating-point arithmetic was mostly used. An architecture based on high precision, e.g., floating-point arithmetic, is relatively resource-intensive and power-consuming, but provides higher accuracy to the applications. Therefore, a number of studies [9][10][11] have highlighted the importance of precision in ConvNet implementations. Sakr et al. in [12] and Gupta et al. in [9] have demonstrated competitive results for deep neural networks with limited numerical precision. Moreover, the precision or representation of numerical values is directly associated with resource utilization, and thus with cost. More importantly, with the diverse range of deep learning applications, the cost-performance-energy-accuracy trade-off has become increasingly important. It is therefore desirable to achieve high performance and accuracy with limited, reduced precision. However, most previous implementations relied heavily on floating point, either single or double precision. This approach might not be best suited for resource-limited environments, e.g., embedded applications.
In this paper, we present a novel reconfigurable co-processing architecture to accelerate ConvNets. We also introduce Q-format fixed-point quantization arithmetic to reduce the resource utilization of the FPGA hardware while maintaining accuracy at a competitive level. This approach significantly reduces resource utilization and processing time, enabling the use of this architecture for a range of embedded applications. The proposed architecture is also model-independent and efficient, yet reconfigurable and programmable. In addition, we introduce CISC-like microinstructions to control the hardware operations at run time, which are used to gain layer-wise parallelism by reusing limited resources.
Section II briefly describes the background of our research, including Q-point arithmetic. In Section III, we explain the design space exploration for our architecture, including data parallelism, caching and pipelined operations. In Section IV, details of the implementation along with the results obtained for this new architecture are provided with a comparison. In our implementation, we used the ImageNet [13] dataset of the 2012 ILSVRC competition. For comparison purposes, we used AlexNet [3] and ZyncNet [8]. Section V provides the conclusion with possible future developments.
II. BACKGROUND
A ConvNet is a multi-layer feed-forward neural network with convolution filters and nonlinearities [1][2]. Over the past few years, many different CNN architectures have been proposed to address the efficiency and accuracy of various learning tasks. As a result, approaches such as LeNet [1], AlexNet [3], VGG [14], GoogLeNet [15], ResNet [16] and SqueezeNet [17] have been proposed to accelerate learning and inference with better performance. In general, ConvNet architectures consist of several layers, i.e., convolution, activation and pooling layers [3], organized in different configurations.
In CNNs, the output feature map is obtained by the following steps. For each ConvNet filter, an input feature map of length l_in, width w_in and depth D_in is convolved with a sliding k×k kernel of the same depth D_in. The convolved data is then passed through an activation function; sigmoid, tanh and ReLU are commonly used as activation functions for ConvNets. In order to reduce the spatial size, computational complexity and number of parameters, pooling layers are used between successive convolutional layers.
The mathematical representation of a convolution layer and activation function is shown in equation 1:

ϑ_t = F( Σ_{r=0}^{D_in − 1} Σ_{i=0}^{k−1} Σ_{j=0}^{k−1} weight(i, j, r, t) × local_input(i, j, r, t) + b_t )   (1)

Here, ϑ_t, D_in, k, F and b represent the corresponding output feature, the depth of the input feature, the kernel length, the activation function and the bias, respectively.
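For concreteness, the following Python sketch evaluates equation 1 directly with nested loops. This is our own illustration; the channel-first tensor layout, unit stride and absence of padding are assumptions, not details fixed by the paper.

```python
import numpy as np

def conv_layer(x, w, b, F):
    """Direct evaluation of Eq. (1): one output value per filter t and
    spatial position; x is (D_in, H, W), w is (T, D_in, k, k), b is (T,)."""
    D_in, H, W = x.shape
    T, _, k, _ = w.shape
    out = np.empty((T, H - k + 1, W - k + 1))
    for t in range(T):
        for y in range(H - k + 1):
            for z in range(W - k + 1):
                acc = b[t]
                for r in range(D_in):           # depth
                    for i in range(k):          # kernel rows
                        for j in range(k):      # kernel cols
                            acc += w[t, r, i, j] * x[r, y + i, z + j]
                out[t, y, z] = F(acc)
    return out

relu = lambda v: max(v, 0.0)
y = conv_layer(np.random.rand(3, 8, 8), np.random.rand(4, 3, 3, 3),
               np.zeros(4), relu)
```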
DiCecco et al. in [7] proposed an end-to-end FPGA-accelerated co-processing framework for Caffe CNNs in which an FPGA layer can be used as a co-processor alongside other layers running on a host processor. In such an approach, however, memory becomes a bottleneck as the size of the layers and parameters increases. Hence, users are given the flexibility to decide which part of the program should run on the FPGA, either a specific portion or the complete program. Flexibility and programmability are therefore key areas of focus in FPGA-accelerated architectures; this problem has been addressed by techniques such as an instruction set to program CNN hardware [3].
Several other studies explored the implementation of CNNs on FPGAs [4][5][6] to make use of their low power, reconfigurability and programmability. A more comprehensive, state-of-the-art FPGA-based accelerator was proposed by David Gschwend in his thesis [8], ZyncNet: An FPGA-Accelerated Embedded Convolutional Neural Network. This architecture demonstrated competitive results in performance, accuracy and memory management compared to existing architectures [3][14][15][16][17]. Even though this approach produced highly competitive results, the design used a model-specific approach and High Level Synthesis (HLS). The authors also used single precision floating point for representations and arithmetic operations, causing high resource utilization.
The main reason behind application-specific designs is to support parallelism [18][7]. In existing techniques, data parallelism is heavily exploited in custom hardware designs alongside model parallelism, layer parallelism and pipeline parallelism; one or more of these techniques can be found in almost all existing accelerators, e.g., ZyncNet [8]. In particular, inherently parallel pixel operations can be carried out concurrently when CNNs are used in image processing applications. Pipeline parallelism is applied when different dependent steps of an operation execute concurrently on parallel threads, which is well suited to the feed-forward computations of CNNs. Most accelerators have used these techniques in the past.
The Q-point representation is a fixed-point format in which the numbers of fractional and integer bits are specified prior to use. Depending on the number of bits in its representation, the Q format limits the range of numbers it can represent with an acceptable degree of accuracy [19]. A Q-format number is represented as Q(n−m−1, m), where n is the total number of bits, m is the number of fractional bits, and one bit is the sign bit. Overflow is avoided by choosing a proper split between fractional and integer bits based on the weights and the input feature data. This simple representation makes the arithmetic operators for the Q-point representation hardware-efficient. Furthermore, fixed-point Q-format arithmetic significantly reduces resource utilization and power consumption, at the cost of a trade-off in numerical precision and accuracy. However, recent research suggests that high numerical precision may not be necessary for CNNs [12][9][19].
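The following minimal Python sketch illustrates Q-format quantization and a single multiply-accumulate step. The concrete Q(16,15) split (1 sign bit, 16 integer bits, 15 fractional bits) is our assumption for illustration, since the paper fixes only the 32-bit total width.

```python
# Minimal sketch of signed Q-format fixed-point arithmetic (illustrative;
# the chosen split Q(16,15) is an assumption, not the paper's exact format).
M = 15                      # fractional bits
SCALE = 1 << M

def to_q(x):
    """Quantize a real number to Q(16,15), stored as a plain integer."""
    return int(round(x * SCALE))

def q_mul(a, b):
    """Multiply two Q(16,15) values: the product carries 2*M fractional
    bits, so shift right by M to renormalize. In hardware this would be a
    64-bit product; Python's right shift on negatives floors toward -inf."""
    return (a * b) >> M

def q_add(a, b):
    return a + b            # same format: plain integer addition

w, x = to_q(0.75), to_q(-2.5)
acc = q_add(q_mul(w, x), to_q(0.125))   # one MAC step: w*x + bias
print(acc / SCALE)                       # -> -1.75
```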
III. ARCHITECTURE
The proposed architecture directly targets Field Programmable Gate Arrays (FPGAs). The accelerator exposes reconfigurable parameters that allow it to be re-configured according to the CNN architecture that runs on top of the accelerator. The reconfigurable parameters used in this system are as follows:
• The depth of input features (D_in)
• Filter Size (S)
• Number of filters (N)
• Pooling Filter Size (P)
• Activation Function Selection (Sel_AF)
• Pooling Layer Configuration (Conf_p)
The Processing Fabric (PF) is the core logic processing area of the design. It is highly parallel and can be reused for multiple layers by accessing the parameters of each layer from the internal instruction memory.
The parallel execution of independent operations is the state of the art for accelerating neural networks. In this architecture, we have identified such independent processes and implemented them in parallel in order to exploit a high level of data parallelism. However, such a design is hardware-intensive and consumes a large amount of power. This problem is mitigated by reusing limited resources with layer-wise parallelism. The architecture also uses parallel multi-channels to accelerate processing of feature data along the depth (D_in). Before running a CNN on top of this processing fabric, D_in should be set to the input feature depth of the CNN architecture being processed.
Q-format fixed-point arithmetic is used throughout the architecture; its arithmetic operations are less hardware-intensive and less time-consuming, which benefits the overall architecture as well.
The instruction set architecture provides the ability to process different input feature sizes and to enable zero padding. This gives the architecture more flexibility to run different CNN architectures on top of this hardware acceleration platform.
The conceptual design of the architecture is shown in figure 1. It contains separate data and instruction flow paths, as in a Harvard architecture. As shown in figure 1, there are two major units in this system: the Process Controller (PC) and the Matrix Web (MW). The main responsibility of the Process Controller is to fetch and execute instructions; instruction execution is mainly focused on memory addressing. The MW consists of the arithmetic and logic units which are the main functional elements of the processing architecture. The input feature data, weights and biases are cached within the Matrix Web; this caching system and interconnection is shown in detail in figure 2. Moreover, using Direct Memory Access (DMA) controllers, data is transferred in bulk between main memory and the processing fabric in order to reduce the total number of executed instructions and the execution time per layer. After instructions are fetched into the system, the weights and bias for each kernel are first loaded into dedicated caches through the data lines. Input data is then fetched into the MW and processed. The processed data is passed on to the data buffer and, finally, the output data is moved to main memory through DMA transactions.
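A behavioural stand-in for this dataflow is sketched below in Python. The class and method names are placeholders we invented for illustration; the paper defines hardware, not a software API, and the arithmetic here is deliberately simplified.

```python
class ProcessingFabricModel:
    """Minimal behavioural model of the PF dataflow described above:
    weights first, then streamed inputs, then buffered outputs."""
    def __init__(self):
        self.weights, self.bias, self.out_buffer = None, None, []

    def load_weights(self, w, b):          # bulk DMA transfer into caches
        self.weights, self.bias = w, b

    def stream(self, feature):             # MW processes one input datum
        y = sum(wi * feature for wi in self.weights) + self.bias
        self.out_buffer.append(y)          # data buffer before DMA out

    def drain(self):                       # DMA transaction to main memory
        out, self.out_buffer = self.out_buffer, []
        return out

pf = ProcessingFabricModel()
pf.load_weights([0.5, -1.0], 0.1)
for x in [1.0, 2.0, 3.0]:
    pf.stream(x)
print(pf.drain())
```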
The Matrix Web (MW) is the major arithmetic and logic unit in the system. Figure 2 shows a detailed structure of MW. The size of the MW depends on the number of filters (N ), filter As shown in the figure 1, Cell Body Units (CBU) are the basic processing elements to the processing fabric. It simulates the process of convolution filter. The CBU calculates the output feature data according to the equation 1. Each CBU has internal weight caches, internal MAC units, an activation layer and a pooling layer as shown in figure 3. pooling layer can be enabled while configuring processing fabric by setting the Conf p reconfigurable parameter. Conf p parameter expects width of pooling kernel. Activation function to the CBU unit is also a reconfigurable parameter (Sel AF ). For each MAC unit there is a dedicated Weight cache. The weight caches are filled before feeding the input data. The input data is first buffered into a pre-fetch data buffer and pass into the caches through a simple interconnect. Each CBU is synchronized to operate in the same input data in same clock cycle. Therefore, data cache is shared between each CBU, using a Crossbar interconnection. The output data of different kernels can be processed massively parallel by using more CBUs. The final calculated data is buffered and forwarded in to the main memory as shown in the figure 3. This massively parallel architecture reduces the input feature data re-usage. The number of MAC Units per Cell Body depend on the depth of input features and the number of CBUs depend on number Fig. 3. The Cell Body Unit of filters that can be parallel processed depend on the hardware availability of FPGA as shown in figure 2. The instruction set gives the flexibility to use number of parallel CBUs in this manner. Using high number of parallel CBUs we can minimize the input feature data re-usage.
In a CBU, after the weight cache, bias cache and input feature cache are filled with data, the data is forwarded to the MAC units according to the instructions from the PC. In the MAC units, the input features are multiplied and added together. In the first layer of the Addition Plane, 2^(k−1) × D_in addition units are implemented. If the kernel size is not a power of 2, the unoccupied Addition Units (AU) are zero-padded. This process is pipelined in order to increase the throughput; such an implementation allows the system to work at a high frequency. The number of multipliers and adders depends on S and D_in, the reconfigurable parameters of the Processing Fabric.
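A software sketch of this Addition Plane behavior (the function name is ours, not from the design):

```python
def adder_tree_sum(products):
    """Pairwise adder-tree reduction mirroring the Addition Plane: inputs
    are zero-padded up to the next power of two, and each loop iteration
    corresponds to one pipelined adder layer."""
    vals = list(products)
    n = 1
    while n < len(vals):
        n <<= 1
    vals += [0] * (n - len(vals))      # pad unoccupied adder inputs with zeros
    while len(vals) > 1:               # one adder layer per iteration
        vals = [vals[i] + vals[i + 1] for i in range(0, len(vals), 2)]
    return vals[0]

# A 3x3 kernel produces 9 products, padded to 16 adder inputs.
print(adder_tree_sum(range(9)))  # 36
```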
After processing through the MACs, the results of the MAC units are forwarded to the Bias Adder (BA). The number of MAC units connected to the BA depends on D_in, as shown in figure 3; they are connected using a series of adder layers, as in the MACs. After the bias adder, the result is forwarded to the activation function (AF). The activation function is enabled and selected at the configuration level, so it is a reconfigurable parameter; the available activation functions include Sigmoid, tanh, ReLU and Max. After the AF, the data is stored in a temporary cache called the pooling cache, and then proceeds to the pooling layer. The size of the pooling cache depends on the pooling kernel size. In this implementation MAX pooling is used, and the scalability of the pooling layer is maintained through the width of the pooling kernel (Conf_p); setting the pooling kernel size to 1 effectively deactivates the pooling layer. Finally, the processed data is passed to the data buffer. Here, k and D_in stand for the filter width and the input feature data depth.
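The selectable activation and the width-configurable MAX pooling stage can be sketched in software as follows; sel_af and conf_p mirror the Sel_AF and Conf_p parameters described above (an illustration of the behavior, not the RTL, and the one-dimensional pooling arrangement is our simplifying assumption):

```python
import numpy as np

def activation(x, sel_af="relu"):
    """Selectable activation, standing in for the Sel_AF parameter
    (the hardware additionally offers a Max function)."""
    if sel_af == "relu":
        return np.maximum(x, 0.0)
    if sel_af == "sigmoid":
        return 1.0 / (1.0 + np.exp(-x))
    if sel_af == "tanh":
        return np.tanh(x)
    raise ValueError(f"unknown activation: {sel_af}")

def max_pool(row, conf_p):
    """MAX pooling over windows of width conf_p; conf_p == 1 is a no-op,
    matching the paper's way of deactivating the pooling layer."""
    if conf_p == 1:
        return list(row)
    return [max(row[i:i + conf_p]) for i in range(0, len(row), conf_p)]

print(max_pool(activation(np.array([-1.0, 2.0, 0.5, 3.0])).tolist(), 2))  # [2.0, 3.0]
```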
The Process Controller (PC) is responsible for instruction execution. Scalability at the configuration level is achieved through a scalable instruction set that scales with the number of kernels in the system, the number of weights per kernel and the depth of the input. The instructions fetched into the PC follow the CISC (complex instruction set computer) style. In the Process Controller unit, there are two basic types of instructions, as shown below. The memory locations of the weights and biases for each layer are injected into the processing fabric with Memory Control Instructions, while MatrixWeb Control Instructions configure the Matrix Web according to the CNN network to be processed on the accelerator. The instruction set gives the flexibility to use a dynamic number of parallel Cell Bodies, which depends on the hardware limitations of the FPGA.
As shown in figure 4, MatrixWeb Control Instructions and Memory Control Instructions are the main instruction types in our system, and each type contains several fields. The TYPE field is common to both instruction types; as the name suggests, it is used to identify whether an instruction is a MatrixWeb Control Instruction or a Memory Control Instruction.
In the MatrixWeb Control Instruction, the CONFIG field instructs the system to start convolution, to clear (or flush) the weights and biases cached within the MAC units, or to stop convolution. The Input Feature Detail (IFD) field identifies the depth and width of the input feature data, giving the system the flexibility to process different sizes of input data. The Stride Len. (SL) field specifies the stride with which the filter is slid. The Zero-Pad (ZP) field is used to zero-pad the input features using internal logic, which reduces the stored input feature size; enabling zero padding through the instruction therefore decreases the required input feature memory space. To configure them, MatrixWeb Control Instructions are issued to each Cell Body Unit. The total number of cycles needed to load the weights and biases depends on the number of kernels, the depth of the input data set, and the number of weights and biases per kernel. The number of cycles needed to process the input feature data corresponds directly to the width and depth of the data set.
C = γ + (D_in + 2) × γ + 1
C is the total number of cycles required to fetch all instructions. Here γ and D_in represent the number of cell bodies and the input feature data depth, respectively. A small helper that evaluates this formula is sketched below.
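```python
def fetch_cycles(gamma: int, d_in: int) -> int:
    """Instruction-fetch cycles per the formula above:
    C = gamma + (D_in + 2) * gamma + 1."""
    return gamma + (d_in + 2) * gamma + 1

print(fetch_cycles(gamma=16, d_in=3))  # 97 cycles for 16 cell bodies, depth-3 input
```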
Data is transferred between the main memory of the processing system and hardware accelerator unit using DMAs over PCIe. Multiple channels are used in order to minimize the data traffic.
IV. PERFORMANCE EVALUATION
We developed a software model identical to our hardware architecture using C/C++. It is capable of handling fixed-point as well as single- and double-precision floating-point arithmetic. Furthermore, the architecture was implemented in Verilog, which was later used with the software simulation to verify results. The software and hardware implementations are combined with SystemVerilog and the Direct Programming Interface (DPI) to create the hardware simulation. The final design was synthesized and implemented on a Xilinx Virtex-7 FPGA XC7VX485T using Vivado 2015.4, and our co-processor was designed to be connected to the host machine via a PCIe interface.
With the provided reconfigurability and programmability, our framework is capable of handling different CNN architectures, e.g., AlexNet, SqueezeNet, ZynqNet, etc. For the accuracy comparison in this section, we present results based on accelerating two different CNN architectures on two base designs: in Case A, the AlexNet CNN architecture, and in Case B, the ZynqNet CNN architecture are processed on top of our processing fabric by setting the reconfigurable parameters accordingly.
For each CNN architecture, we use both software and hardware simulation to train the model on the ImageNet dataset. Thereafter, we use the same setup with the trained weights to predict on the ImageNet validation data set, using the Q(16,15) fixed-point representation in hardware. Since our software simulation is capable of handling both fixed and floating precision, we calculated the absolute difference (error) for each data point and operation; a similar method is carried out to find the accuracy of the MAC units. As shown in figure 6, the accuracy and its variation with respect to different input value ranges and different kernel sizes are presented and evaluated in this comparison. Since CNNs often operate on only a limited range of input values, this shows that the proposed design is sufficiently capable of providing accurate outputs for the requirement.
In this performance evaluation, the Q(16,15) fixed-point representation is used. Accumulated error readings were obtained while varying the kernel size and the input data values, and the resulting error values are plotted as functions of kernel size and input value. A sketch of this measurement procedure is given below.
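This sketch reuses the Q(16,15) helpers from the earlier snippet; we keep input magnitudes small enough to fit the format's representable range, which is an assumption on our part, while the original test sweeps much larger input ranges:

```python
import numpy as np

def conv_error(values, weights):
    """Absolute difference between a double-precision dot product and the
    same computation performed step by step in Q(16,15)."""
    ref = float(np.dot(values, weights))
    acc = 0
    for v, w in zip(values, weights):
        acc = q_add(acc, q_mul(to_q(v), to_q(w)))
    return abs(ref - from_q(acc))

rng = np.random.default_rng(0)
for k in (3, 5, 9):  # kernel widths, as swept in figure 6
    errs = [conv_error(rng.uniform(0.0, 0.1, k * k),
                       rng.uniform(-0.1, 0.1, k * k)) for _ in range(100)]
    print(k, float(np.mean(errs)))
```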
The test is carried out for different value ranges and different kernel sizes, as shown in figure 5. The error for each constant kernel size increases exponentially with the input data values; as figure 5 shows, the average error for input values 0 to 50 stays well below 0.1 for any kernel width from 3 to 9. A kernel width of 3 is used as the initial parameter value and is increased up to 9 while maintaining the input values in the range 0 to 1. The error increases exponentially with kernel size, as shown in figure 6, yet the error values remain extremely small compared to the input values even for a kernel width of 9. The results from figures 5 and 6 explain how accuracy drops as the model gets bigger: increasing the kernel sizes as well as the parameter count can significantly change the accuracy, which causes the much larger Case A model to lose more accuracy than the much smaller Case B model. However, the observed accuracy drops, i.e., 0.6% and 0.3% for A and B respectively, lie within an acceptable level.
CNN architectures have different requirements; for example, AlexNet needs up to 11x11 convolutions, whereas the ZynqNet CNN [16] uses only up to 3x3 convolutions. For the performance and resource comparison, we therefore used our Case B hardware design (3x3 kernel size with 16 Cell Body Units) and the ZynqNet hardware. Table III illustrates the resource utilization of a single Cell Body Unit with input data depth (D_in) = 1, whereas Table IV provides the resource utilization of a single Cell Body Unit with input data depth (D_in) = 3. To compare with ZynqNet, we configured our co-processor with 16 Cell Body Units, a 3x3 kernel size and D_in = 1, and obtained the results shown in Table V. According to the results, our approach shows a significant reduction (about 22%) in DSP utilization. Moreover, our proposed architecture provides 226.2 GOPS at a frequency of 200 MHz; compared to 16-bit fixed-precision implementations like [6], our approach with 32-bit fixed precision produces 22.5% more GOPS. More details and a comparison can be found in Table II. According to Table V, our approach uses fewer resources than the previous approach, a direct result of using fixed precision and developing a ground-up architecture that avoids resource-intensive High-Level Synthesis (HLS).
V. CONCLUSION
In this paper, we presented a novel co-processing architecture for Convolutional Neural Networks (CNNs), suitable for reconfigurable devices such as FPGAs. It is developed as a co-processor to accelerate existing software frameworks and CNNs. As the results showed, our approach is scalable and achieves high throughput. Targeted at embedded applications, the architecture demonstrates significant design flexibility: the programmable processing fabric can be reused for multiple layers by accessing and storing the hyper-parameters of each layer, and the Instruction Set Architecture (ISA) is capable of handling convolutional layer operations. Moreover, our common instruction set can be used to map different high-level programs onto the FPGA. Fixed-point Q-format precision provided results that are sufficient for CNN computation with reduced time and resource consumption and less hardware complexity. However, our results show that large models with a high number of parameters suffer a more significant accuracy deviation than small models; therefore, fixed precision can be used most efficiently to accelerate more compressed models in hardware. It is also possible to further accelerate the overall system by moving additional layers, including the pooling layer, into the hardware layer. We also showed that 32-bit fixed precision significantly reduces operation time and resource utilization while maintaining both accuracy and throughput at a high level. These results encourage future work on further reductions in precision, e.g., 16-bit and 8-bit fixed point, in hardware accelerators.
Fig. 1. The Overall Conceptual Architecture

Fig. 2. The internal structure of the Matrix Web
No. of mul. units per MAC Unit = k²    (2)

No. of mul. units per Cell Body = D_in × k²
Fig. 4. Basic Instructions of the Architecture

Filter Memory Control Instructions are used to point out the memory space of the weights, the memory space of the biases and the memory space of the output features for each corresponding Cell Body. There are three Filter Memory Control Instructions for each cell body, and one Input Memory Control Instruction for the system per input feature data set.
Fig. 5. Average error as a function of input values for different kernel width

Fig. 6. Average error as a function of kernel width for input values between 0 and 1
MatrixWeb Control Instructions: feature width, feature depth, Stride Length, Zero Padding enable, Convolve.
Memory Control Instructions: Address space of Input features, Address space of Weights, Address space of Biases, Address space of Outputs.
TABLE I
Comparison of the proposed architecture to existing CNN architectures. 'Layers' is the number of convolution layers; MACCs, Parameters and Activations are in millions.

Network      Layers   MACCs   Params   Activs.   Top-5 Error
Case B*      18       530     2.5      8.8       15.7%
ZynqNet      18       530     2.5      8.8       15.4%
Case A*      5        1140    62.4     2.4       20.3%
AlexNet      5        1140    62.4     2.4       19.7%
VGG-16       16       15470   138.3    29.0      8.1%
GoogLeNet    22       1600    7.0      10.4      9.2%
ResNet-50    50       3870    25.6     46.9      7.0%
SqueezeNet   18       860     1.2      12.7      19.7%
Table I provides details of the final top-5 accuracy obtained from the results. As shown in the table, the final accuracy dropped by 0.6% in Case A and by 0.3% in Case B when using 32-bit fixed precision, which is a tolerable amount.
TABLE II
Existing FPGA-based CNN accelerators compared to the proposed architecture

Metric            Zhang [4]          Suda [5]         Qiu [6]         DiCecco [7]             ZynqNet [8]       Proposed
Frequency (MHz)   100                120              150             200                     200               200
Precision         32 bit float       8/16 bit fixed   16 bit fixed    32 bit float            32 bit float      32 bit fixed
FPGA Version      Virtex 7 VX485T    Stratix-V GSD8   Zynq XC7Z045    Virtex 7 XC7VX690T-2    Zynq XC7Z045      Virtex 7 XC7VX485T-2
DSP Utilization   2,240              (Not specified)  780             1,307                   739               576
Host Connection   on-chip            PCIe             on-chip         PCIe                    on-chip           PCIe
GFLOPS/GOPS       61.62              136.5            187.8           50                      (Not specified)   226.2
TABLE III
Resource utilization of a single Cell Body Unit (CBU) with DSP for input data depth (D_in) = 1

Kernel Size   LUT     FF     DSP
3x3           2416    1299   36
4x4           3374    2013   64
5x5           5683    3157   100
6x6           7913    4433   144
7x7           10536   5978   196
8x8           13143   7587   256
9x9           17338   9784   324
TABLE IV
Resource utilization of a single Cell Body Unit (CBU) with DSP for input data depth (D_in) = 3

Kernel Size   LUT     FF      DSP
3x3           6721    3159    108
4x4           9360    4857    192
5x5           15039   7713    300
6x6           22169   10837   432
7x7           29487   14626   588
8x8           36812   18503   768
9x9           48684   23992   972
TABLE V
Resource comparison between ZynqNet and the proposed design

Method        LUT    FF     DSP
ZynqNet [8]   154K   137K   739
Proposed      117K   21K    576
REFERENCES

[1] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, Nov. 1998.
[2] M. Ranzato, F. J. Huang, Y. L. Boureau, and Y. LeCun, "Unsupervised learning of invariant feature hierarchies with applications to object recognition," in 2007 IEEE Conference on Computer Vision and Pattern Recognition, June 2007, pp. 1-8.
[3] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proceedings of the 25th International Conference on Neural Information Processing Systems (NIPS'12), 2012, pp. 1097-1105.
[4] C. Zhang, P. Li, G. Sun, Y. Guan, B. Xiao, and J. Cong, "Optimizing FPGA-based accelerator design for deep convolutional neural networks," in Proceedings of the 2015 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (FPGA '15), 2015, pp. 161-170.
[5] N. Suda, V. Chandra, G. Dasika, A. Mohanty, Y. Ma, S. Vrudhula, J.-S. Seo, and Y. Cao, "Throughput-optimized OpenCL-based FPGA accelerator for large-scale convolutional neural networks," in Proceedings of the 2016 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (FPGA '16), 2016, pp. 16-25.
[6] J. Qiu, J. Wang, S. Yao, K. Guo, B. Li, E. Zhou, J. Yu, T. Tang, N. Xu, S. Song, Y. Wang, and H. Yang, "Going deeper with embedded FPGA platform for convolutional neural network," in Proceedings of the 2016 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (FPGA '16), 2016, pp. 26-35.
[7] R. DiCecco, G. Lacey, J. Vasiljevic, P. Chow, G. W. Taylor, and S. Areibi, "Caffeinated FPGAs: FPGA framework for convolutional neural networks," CoRR, vol. abs/1609.09671, 2016.
[8] D. Gschwend, "ZynqNet: An FPGA-accelerated embedded convolutional neural network," Master's thesis, Swiss Federal Institute of Technology Zurich (ETH Zurich), Switzerland, 2016.
[9] S. Gupta, A. Agrawal, K. Gopalakrishnan, and P. Narayanan, "Deep learning with limited numerical precision," CoRR, vol. abs/1502.02551, 2015.
[10] P. Gysel, M. Motamedi, and S. Ghiasi, "Hardware-oriented approximation of convolutional neural networks," CoRR, vol. abs/1604.03168, 2016.
[11] J. Cong and B. Xiao, "Minimizing computation in convolutional neural networks," in Artificial Neural Networks and Machine Learning - ICANN 2014. Cham: Springer International Publishing, 2014, pp. 281-290.
[12] C. Sakr, Y. Kim, and N. Shanbhag, "Analytical guarantees on numerical precision of deep neural networks," in Proceedings of the 34th International Conference on Machine Learning (PMLR vol. 70), Sydney, Australia, 2017, pp. 3007-3016.
[13] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, "ImageNet: A large-scale hierarchical image database," in CVPR, 2009.
[14] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," CoRR, vol. abs/1409.1556, 2014.
[15] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. E. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," CoRR, vol. abs/1409.4842, 2014.
[16] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," CoRR, vol. abs/1512.03385, 2015.
[17] F. N. Iandola, M. W. Moskewicz, K. Ashraf, S. Han, W. J. Dally, and K. Keutzer, "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size," CoRR, vol. abs/1602.07360, 2016.
[18] A. Krizhevsky, "One weird trick for parallelizing convolutional neural networks," CoRR, vol. abs/1404.5997, 2014.
[19] M. Courbariaux, Y. Bengio, and J. David, "Low precision arithmetic for deep learning," CoRR, vol. abs/1412.7024, 2014.
| []
|
[
"Improving Variational Autoencoders with Density Gap-based Regularization",
"Improving Variational Autoencoders with Density Gap-based Regularization"
]
| [
"Jianfei Zhang \nState Key Laboratory of Software Development Environment\nBeihang University\nChina\n\nSchool of Computer Science and Engineering\nBeihang University\nChina\n",
"Jun Bai \nState Key Laboratory of Software Development Environment\nBeihang University\nChina\n\nSchool of Computer Science and Engineering\nBeihang University\nChina\n",
"Chenghua Lin \nDepartment of Computer Science\nUniversity of Sheffield\nUnited Kingdom\n",
"Yanmeng Wang [email protected] \nPing An Technology\nChina\n",
"Wenge Rong [email protected]@sheffield.ac.uk \nState Key Laboratory of Software Development Environment\nBeihang University\nChina\n\nSchool of Computer Science and Engineering\nBeihang University\nChina\n"
]
| [
"State Key Laboratory of Software Development Environment\nBeihang University\nChina",
"School of Computer Science and Engineering\nBeihang University\nChina",
"State Key Laboratory of Software Development Environment\nBeihang University\nChina",
"School of Computer Science and Engineering\nBeihang University\nChina",
"Department of Computer Science\nUniversity of Sheffield\nUnited Kingdom",
"Ping An Technology\nChina",
"State Key Laboratory of Software Development Environment\nBeihang University\nChina",
"School of Computer Science and Engineering\nBeihang University\nChina"
]
| []
| Variational autoencoders (VAEs) are one of the most powerful unsupervised learning frameworks in NLP for latent representation learning and latent-directed generation. The classic optimization goal of VAEs is to maximize the Evidence Lower Bound (ELBo), which consists of a conditional likelihood for generation and a negative Kullback-Leibler (KL) divergence for regularization. In practice, optimizing ELBo often leads the posterior distribution of all samples converging to the same degenerated local optimum, namely posterior collapse or KL vanishing. There are effective ways proposed to prevent posterior collapse in VAEs, but we observe that they in essence make trade-offs between posterior collapse and the hole problem, i.e., the mismatch between the aggregated posterior distribution and the prior distribution. To this end, we introduce new training objectives to tackle both problems through a novel regularization based on the probabilistic density gap between the aggregated posterior distribution and the prior distribution. Through experiments on language modeling, latent space visualization, and interpolation, we show that our proposed method can solve both problems effectively and thus outperforms the existing methods in latent-directed generation. To the best of our knowledge, we are the first to jointly solve the hole problem and posterior collapse.Preprint. Under review. | 10.48550/arxiv.2211.00321 | [
"https://export.arxiv.org/pdf/2211.00321v1.pdf"
]
| 253,244,340 | 2211.00321 | e16ff7dbe63795f92c30eeaf6a45674b09bb16eb |
Improving Variational Autoencoders with Density Gap-based Regularization
Jianfei Zhang
State Key Laboratory of Software Development Environment
Beihang University
China
School of Computer Science and Engineering
Beihang University
China
Jun Bai
State Key Laboratory of Software Development Environment
Beihang University
China
School of Computer Science and Engineering
Beihang University
China
Chenghua Lin
Department of Computer Science
University of Sheffield
United Kingdom
Yanmeng Wang [email protected]
Ping An Technology
China
Wenge Rong [email protected]@sheffield.ac.uk
State Key Laboratory of Software Development Environment
Beihang University
China
School of Computer Science and Engineering
Beihang University
China
Improving Variational Autoencoders with Density Gap-based Regularization
Variational autoencoders (VAEs) are one of the most powerful unsupervised learning frameworks in NLP for latent representation learning and latent-directed generation. The classic optimization goal of VAEs is to maximize the Evidence Lower Bound (ELBo), which consists of a conditional likelihood for generation and a negative Kullback-Leibler (KL) divergence for regularization. In practice, optimizing ELBo often leads the posterior distribution of all samples converging to the same degenerated local optimum, namely posterior collapse or KL vanishing. There are effective ways proposed to prevent posterior collapse in VAEs, but we observe that they in essence make trade-offs between posterior collapse and the hole problem, i.e., the mismatch between the aggregated posterior distribution and the prior distribution. To this end, we introduce new training objectives to tackle both problems through a novel regularization based on the probabilistic density gap between the aggregated posterior distribution and the prior distribution. Through experiments on language modeling, latent space visualization, and interpolation, we show that our proposed method can solve both problems effectively and thus outperforms the existing methods in latent-directed generation. To the best of our knowledge, we are the first to jointly solve the hole problem and posterior collapse.Preprint. Under review.
Introduction
As one of the most powerful likelihood-based generative models, variational autoencoders (VAEs) [21,32] are designed for probabilistic modeling directed by continuous latent variables, which are successfully applied in many NLP tasks, e.g., dialogue generation [45,14], machine translation [34,12], recommendation [10], and data augmentation [43,39]. One of the major advantages of VAEs is the flexible latent representation space, which enables easy manipulation of high-level semantics on corresponding representations, e.g., guided sentence generation with interpretable vector operators.
Despite the attractive theoretical strengths, VAEs are observed to suffer from a well-known problem named posterior collapse or KL vanishing [21,24], an optimum state of VAEs when the posterior distribution contains little information about the corresponding datapoint, which is particularly obvious when strong auto-regressive decoders are implemented [46,4].
Another challenge for VAEs is the hole problem, the state in which the aggregated (approximate) posterior fails to fit the prior distribution, so that samples drawn from the prior are no longer suitable for describing the global data distribution [33], which can lead to poor generation quality in VAEs [1,25].
In this work, we perform systematic experiments on VAEs for text generation to study posterior collapse and the hole problem in existing methods. We demonstrate that VAEs with specific network structures [9,45] or modified training strategies [6,13] have limited effect on solving posterior collapse, while VAEs with hard restrictions [8,41,46] or weakened KL regularization [18,20] on the posterior distribution can solve posterior collapse effectively at the expense of the hole problem, as illustrated in Figure 1.
On that basis, we hypothesize that these two problems stem from the conflict between the KL regularization in ELBo and the function definition of the prior distribution. As such, we propose a novel regularization to substitute the KL regularization in ELBo for VAEs, which is based on the density gap between the aggregated posterior distribution and the prior distribution. We provide theoretical proof that our method in essence maximizes the ELBo as well as the mutual information between the input and the latent variable.
In terms of Gaussian distribution-based VAEs, we further propose the corresponding marginal regularization on each dimension respectively, and we prove it in essence maximizes the ELBo as well as the sum of mutual information between the input and the latent variable on all dimensions.
To validate our methods in practice, we take experiments on language modeling, latent-guided generation and latent space visualization. We demonstrate that our methods form latent spaces that are both active and consistent with the prior, and thus generate smoother sentences from latent interpolation. The code and data are available at https://github.com/zhangjf-nlp/DG-VAEs.
Background and Related Work
VAEs and ELBo
VAEs are proposed to perform efficient inference and learning in directed probabilistic models [21], where the random generation process consists of two steps: (1) sample a latent value z from the prior p θ (z); (2) generate a datapoint x from the conditional distribution p θ (x|z). As the true posterior p θ (z|x) is intractable, a recognition model q φ (z|x) is introduced to approximate the true posterior [21].
Specifically, VAEs represent an observation x as a latent distribution q φ (z|x), from which latent variables are sampled to direct the reconstruction of x. As the optimization goal of VAEs, the Evidence Lower Bound (ELBo) is composed of a Kullback-Leibler (KL) divergence for regularization on the posterior and a log likelihood for reconstruction conditioned on posterior. The ELBo in fact forms a lower bound on the marginal likelihood given prior latent variables [21], as illustrated in Eq. 1.
L_ELBo(θ, φ; x) = E_{q_φ(z|x)}[log p_θ(x|z)] − D_KL(q_φ(z|x) ‖ p_θ(z)) ≤ log p_θ(x) = log ∫ p_θ(x|z) p_θ(z) dz    (1)
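As a concrete illustration, the following is a minimal PyTorch-style sketch of Eq. 1 for a Gaussian-posterior VAE with a standard normal prior; `encoder` and `decoder.log_prob` are placeholders for the recognition and generation networks, and the reconstruction expectation is estimated with a single reparameterized sample, as is common practice.

```python
import torch

def elbo(x, encoder, decoder):
    """Single-sample Monte Carlo estimate of Eq. 1 for a Gaussian posterior
    and a standard normal prior; returns one ELBo value per datapoint."""
    mu, logvar = encoder(x)                               # q_phi(z|x) = N(mu, diag(sigma^2))
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
    rec = decoder.log_prob(x, z)                          # E_q[log p_theta(x|z)], 1 sample
    # Closed-form KL(N(mu, sigma^2) || N(0, I)), summed over latent dimensions
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(-1)
    return rec - kl
```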
Posterior Collapse
Intuitively, the KL divergence in ELBo (i.e., D_KL(q_φ(z|x) ‖ p_θ(z))) encourages the approximate posterior distribution of every single datapoint to be close to the prior [21]. This intends to ensure the prior distribution can depict the latent variable distribution over the data distribution, but it can also lead to posterior collapse when D_KL(q_φ(z|x) ‖ p_θ(z)) exerts a much stronger force on q_φ(z|x) than E_{q_φ(z|x)}[log p_θ(x|z)] does, which leads to q_φ(z|x) ≈ p_θ(z) for all x. In such a condition, the sampled latent variable z ∼ q_φ(z|x) contains much more noise than useful information about x [27], and thus the decoder p_θ(x|z) becomes insensitive to z [46].
Early works to solve posterior collapse attribute it to the difficulty in optimizing ELBo, so their methods mainly focus on the training strategies [6,13,23]. Some works put emphasis on the semantic learning of the latent variable through specific model structures [9,45,29,17]. Instead of a Gaussian distribution, vMF-VAE [8,41] adopts the von Mises-Fisher distribution for latent variables, which restricts the posterior latent to a hyperspherical space and forms a constant KL divergence. BN-VAE [46] restricts the posterior distribution through a batch normalization layer with a fixed scale γ, so as to guarantee a positive lower bound of the KL divergence. β-VAE [18] directly changes the weight (denoted as β) of the KL term inside ELBo, while free-bits [20] changes the KL term inside ELBo to a hinge loss term.
In contrast, we hypothesize posterior collapse is due to the conflict between the KL regularization in ELBo and the function definition of the prior distribution, and tackle it through replacing the KL regularization in ELBo with a novel regularization on the aggregated posterior distribution.
Hole Problem
The aggregated (approximate) posterior q φ (z) refers to the expectation of the approximate posterior distribution on the data distribution, as defined in Eq. 2, where the distribution of observation x is represented by the discrete distribution of datapoints in the dataset, i.e.,
X = {x_n}_{n=1}^{N}, q_φ(n) ≡ 1/N, in practice:

q_φ(z) = E_x[q_φ(z|x)] = (1/N) Σ_{n=1}^{N} q_φ(z|x_n)    (2)
Formally, the hole problem refers to the phenomenon that the aggregated posterior distribution q φ (z) fails to fit the prior distribution p θ (z). Inferences located in the holes (i.e., areas with mismatch between density in q φ (z) and p θ (z)) are observed to generate images that are obscure and corrupted [1], or sentences with incorrect syntax and abnormal semantics [25].
The hole problem of VAEs for image generation is observed in several studies, commonly ascribed to the limited expressivity of the prior distribution and tackled by increasing the flexibility of the prior distribution via hierarchical priors [22], auto-regressive models [16], a mixture of encoders [37], normalizing flows [40], resampled priors [3], and energy-based models [1]. In contrast, we observe that the vanilla VAEs (with standard prior distributions) for text generation have no hole problem, but it arises when existing methods are applied to solve posterior collapse. Therefore, our work is targeted at solving posterior collapse and avoiding the hole problem at the same time, for VAEs with standard prior distributions.
Regularization on Aggregated Posterior
As the approximate posterior distribution is introduced to approximate the true posterior, i.e., q φ (z|x) ≈ p θ (z|x), the aggregated posterior should be close to the prior as a result, i.e., q φ (z) ≈ p θ (z). From this point of view, several works are proposed to replace KL regularization (on the posterior distribution of each datapoint separately) in ELBo with a regularization on the aggregated posterior distribution, which can be summarized as Eq. 3, where D is the divergence (or discrepancy) between two distributions.
L_D(θ, φ; x) = E_{q_φ(z|x)}[log p_θ(x|z)] − D(q_φ(z) ‖ p_θ(z))    (3)
Among them, Adversarial Auto-Encoder (AAE) [28] adopts the Generative Adversarial Network (GAN) [15] framework to regularize the aggregated posterior distribution through the Jensen-Shannon divergence D_JS; Wasserstein Auto-Encoder (WAE) [36,2] regularizes the aggregated posterior distribution through minimizing Maximum Mean Discrepancy (MMD); Implicit VAE with Mutual Information regularization (iVAE_MI) regularizes the aggregated posterior through a dual form of the KL divergence D_KL on the basis of Implicit VAE (iVAE) [11]. These methods share the same weakness: their approximations of the divergence between two continuous distributions are depicted by merely sampling sets from the distributions, which involves noise from random sampling and can hardly be zero, even for identical distributions.
In contrast, our method approximates the divergence between two continuous distributions in the perspective of their mismatch in PDFs, which we quantify through the density gap that can be zero if and only if they are the same, which we describe in section 3.
We validate our method against the aforementioned methods through experiments on a synthetic dataset, the details and results of which are presented in Appendix A.
Methodology
Density Gap-based Discrepancy One of the most direct manifestations of holes in latent space is the mismatch of probabilistic density between q_φ(z) and p_θ(z). We quantify this mismatch at a specific position z in the latent space through DG(θ, φ; z), which we refer to as the Density Gap. Here we only consider z ∈ {z | q_φ(z) > 0}.¹ We assume q_φ(z) and p_θ(z) are differentiable and p_θ(z) > 0.
DG(θ, φ; z) = log (q_φ(z) / p_θ(z)) = log ((1/N) Σ_{n=1}^{N} q_φ(z|x_n) / p_θ(z))    (4)
It can be inferred that the expectation of DG(θ, φ; z) under q_φ(z) equals the KL divergence between q_φ(z) and p_θ(z), as illustrated in Eq. 5, which is a strict divergence, i.e., E_{z∼q_φ(z)}[DG(θ, φ; z)] = 0 iff q_φ(z) = p_θ(z):

E_{z∼q_φ(z)}[DG(θ, φ; z)] = E_{z∼q_φ(z)}[log (q_φ(z) / p_θ(z))] = D_KL(q_φ(z) ‖ p_θ(z)) ≥ 0    (5)
So, we can approximate and optimize D_KL(q_φ(z) ‖ p_θ(z)) via Monte Carlo, as illustrated in Eq. 6, where z_s ∼ q_φ(z), sampled i.i.d., denotes the s-th random sample from the aggregated posterior distribution.
D_KL(q_φ(z) ‖ p_θ(z)) ≈ (1/S) Σ_{s=1}^{S} DG(θ, φ; z_s)    (6)
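A sketch of the estimator in Eq. 6, assuming Gaussian posteriors and a standard normal prior (an illustrative implementation, not the authors' released code):

```python
import math
import torch

def density_gap_kl(mu, logvar, m=1):
    """Monte Carlo estimate of D_KL(q_phi(z) || p_theta(z)) via the density
    gap, aggregating the Gaussian posteriors of a batch. mu, logvar: (N, D);
    the prior is assumed standard normal."""
    n, d = mu.shape
    std = (0.5 * logvar).exp()
    # Stratified sampling: m reparameterized samples per datapoint, S = N * m.
    z = (mu.unsqueeze(1) + torch.randn(n, m, d) * std.unsqueeze(1)).reshape(-1, 1, d)
    const = 0.5 * math.log(2 * math.pi)
    # log q_phi(z_s | x_j) for every sample s against every posterior j: (S, N)
    log_q_zx = (-0.5 * ((z - mu) / std) ** 2 - 0.5 * logvar - const).sum(-1)
    log_q_z = torch.logsumexp(log_q_zx, dim=1) - math.log(n)  # log of the mixture q_phi(z_s)
    log_p_z = (-0.5 * z.squeeze(1) ** 2 - const).sum(-1)      # standard normal prior
    return (log_q_z - log_p_z).mean()                         # average density gap
```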
It should be noted that D_KL(q_φ(z) ‖ p_θ(z)) approximated in this way is an overall divergence, as it considers the posterior distribution of all datapoints as a whole, instead of averaging D_KL(q_φ(z|x) ‖ p_θ(z)) across all datapoints as ELBo does.

On that basis, we can implement L_D(θ, φ; x) with D = D_KL, which is equivalent to replacing the KL term in ELBo with D_KL(q_φ(z) ‖ p_θ(z)) approximated by Eq. 6. According to the decomposition (illustrated in Eq. 7) of the KL term in ELBo given by Hoffman et al. [19], maximizing
L_{D_KL}(θ, φ; x_n) on the whole dataset, X = {x_n}_{n=1}^{N}, q_φ(n) ≡ 1/N, is equivalent to maximizing ELBo as well as I_{q_φ(n,z)}[n, z],² the mutual information of z and n in their joint distribution q_φ(n, z), as illustrated in Eq. 8.

(1/N) Σ_{n=1}^{N} D_KL(q_φ(z|x_n) ‖ p_θ(z)) = D_KL(q_φ(z) ‖ p_θ(z)) + I_{q_φ(n,z)}[n, z],
    where I_{q_φ(n,z)}[n, z] = E_{q_φ(n,z)}[log (q_φ(n, z) / (q_φ(n) q_φ(z)))]
    and q_φ(n, z) = q_φ(n) q_φ(z|n) = (1/N) q_φ(z|x_n)    (7)

(1/N) Σ_{n=1}^{N} L_{D_KL}(θ, φ; x_n) = (1/N) Σ_{n=1}^{N} E_{q_φ(z|x_n)}[log p_θ(x_n|z)] − D_KL(q_φ(z) ‖ p_θ(z)) = (1/N) Σ_{n=1}^{N} L_ELBo(θ, φ; x_n) + I_{q_φ(n,z)}[n, z]    (8)
Optimization on Mini-Batch Theoretically attractive as it is, maximizing L_{D_KL}(θ, φ; x_n) approximated through DG(θ, φ; z_s) is undesirable for training VAEs on large datasets, because the probabilistic density of q_φ(z) at z requires computation across the whole dataset and changes at every training step. In practice, training deep networks such as VAEs commonly adopts mini-batch gradient descent, where only a small subset of the dataset is used for calculating gradients and updating parameters in each iteration step.

So, a practicable way is to aggregate the posterior of datapoints inside a mini-batch B = {x_n}_{n=1}^{|B|}, q_φ(n) ≡ 1/|B|, as stated in Eq. 9, where z_{n,m} is the m-th sample from the posterior of datapoint x_n. Here, stratified sampling is used to ensure a steady Monte Carlo approximation,³ and the reparameterization trick [21] is applied to ensure a differentiable output.
DG(θ, φ, B; z) = log (q_{φ,B}(z) / p_θ(z)) = log ((1/|B|) Σ_{n=1}^{|B|} q_φ(z|x_n) / p_θ(z))

D_KL(q_{φ,B}(z) ‖ p_θ(z)) ≈ (1/|B|) Σ_{n=1}^{|B|} (1/M) Σ_{m=1}^{M} DG(θ, φ, B; z_{n,m}), where z_{n,m} ∼ q_φ(z|x_n) i.i.d.    (9)
Through this approximation, we can implement L_{D_KL}(θ, φ, B; x_n), which regularizes q_{φ,B}(z) towards p_θ(z) for a mini-batch B, as stated in Eq. 10.

(1/|B|) Σ_{n=1}^{|B|} L_{D_KL}(θ, φ, B; x_n) = (1/|B|) Σ_{n=1}^{|B|} E_{q_φ(z|x_n)}[log p_θ(x_n|z)] − D_KL(q_{φ,B}(z) ‖ p_θ(z)) = (1/|B|) Σ_{n=1}^{|B|} L_ELBo(θ, φ; x_n) + I_{q_φ(n,z)}[n, z]    (10)
It should be noticed that the mutual information term I_{q_φ(n,z)}[n, z] is different in Eq. 8 (on the whole dataset) and Eq. 10 (on a mini-batch), because the range of the discrete variable n is from 1 to N in Eq. 8, but 1 to |B| in Eq. 10. Consequently, I_{q_φ(n,z)}[n, z] has an upper bound of H(n) = log N in Eq. 8, but H(n) = log |B| in Eq. 10. In other words, maximizing I_{q_φ(n,z)}[n, z] in Eq. 8 intends to distinguish the z of x_n from that of N − 1 other datapoints, while it is limited to |B| − 1 other datapoints in Eq. 10.
Marginal Regularization for More Mutual Information As described above, approximating and optimizing D_KL(q_{φ,B}(z) ‖ p_θ(z)) is practicable but has limited effect. Empirically, Gaussian distribution-based VAEs trained by Eq. 10 still have limited active units, which means the encoded latent variable z still collapses to the prior on most dimensions, where it provides little information.

To activate z on all dimensions, we propose to regularize q_{φ,B}(z) towards p_θ(z) on each dimension respectively, i.e., to regularize the marginal distribution of q_{φ,B}(z) on each dimension, as illustrated in Eq. 11, where z_i ∈ R is the i-th component of z ∈ R^Dim and the corresponding probability density functions are those of the marginal distributions on the i-th dimension; z_{n,m,i} is the i-th component of the m-th sample from the posterior of datapoint x_n.
DG_mrg(θ, φ, B; z_i) = log (q_{φ,B}(z_i) / p_θ(z_i)) = log ((1/|B|) Σ_{n=1}^{|B|} q_φ(z_i|x_n) / p_θ(z_i))

D_{KL,mrg}(q_{φ,B}(z) ‖ p_θ(z)) = Σ_{i=1}^{Dim} D_KL(q_{φ,B}(z_i) ‖ p_θ(z_i)) ≈ Σ_{i=1}^{Dim} (1/|B|) Σ_{n=1}^{|B|} (1/M) Σ_{m=1}^{M} DG_mrg(θ, φ, B; z_{n,m,i})    (11)
In Gaussian distribution-based VAEs, the marginal distributions of z on different dimensions are independent, i.e., q_φ(z|x_n) = Π_{i=1}^{Dim} q_φ(z_i|x_n) and p_θ(z) = Π_{i=1}^{Dim} p_θ(z_i), so their KL divergence can be decomposed as D_KL(q_φ(z|x_n) ‖ p_θ(z)) = Σ_{i=1}^{Dim} D_KL(q_φ(z_i|x_n) ‖ p_θ(z_i)). Thus, we can infer the decomposition of D_{KL,mrg}(q_{φ,B}(z) ‖ p_θ(z)) through Eq. 12.
D_{KL,mrg}(q_{φ,B}(z) ‖ p_θ(z)) = Σ_{i=1}^{Dim} D_KL(q_{φ,B}(z_i) ‖ p_θ(z_i)) = Σ_{i=1}^{Dim} [(1/|B|) Σ_{n=1}^{|B|} D_KL(q_φ(z_i|x_n) ‖ p_θ(z_i)) − I_{q_φ(n,z_i)}[n, z_i]] = (1/|B|) Σ_{n=1}^{|B|} D_KL(q_φ(z|x_n) ‖ p_θ(z)) − Σ_{i=1}^{Dim} I_{q_φ(n,z_i)}[n, z_i]    (12)
So, maximizing L_{D_{KL,mrg}}(θ, φ, B; x_n) derived from Eq. 11 is equivalent to maximizing ELBo as well as the mutual information of n and z_i for each dimension i respectively, as stated in Eq. 13. In other words, it intends to distinguish the z_i of x_n from that of |B| − 1 other datapoints for each dimension i respectively. We refer to our models based on this Density Gap-based regularization as DG-VAEs.
(1/|B|) Σ_{n=1}^{|B|} L_{D_{KL,mrg}}(θ, φ, B; x_n) = (1/|B|) Σ_{n=1}^{|B|} E_{q_φ(z|x_n)}[log p_θ(x_n|z)] − D_{KL,mrg}(q_{φ,B}(z) ‖ p_θ(z)) = (1/|B|) Σ_{n=1}^{|B|} L_ELBo(θ, φ; x_n) + Σ_{i=1}^{Dim} I_{q_φ(n,z_i)}[n, z_i]    (13)
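The per-dimension variant of Eq. 11 changes only the reduction axis of the previous estimator sketch; reusing its imports and conventions:

```python
def marginal_density_gap_kl(mu, logvar, m=1):
    """Per-dimension variant of the estimator above (Eq. 11): aggregate the
    1-D marginal posteriors on each latent dimension separately and sum the
    resulting KL estimates over dimensions."""
    n, d = mu.shape
    std = (0.5 * logvar).exp()
    z = (mu.unsqueeze(1) + torch.randn(n, m, d) * std.unsqueeze(1)).reshape(-1, 1, d)
    const = 0.5 * math.log(2 * math.pi)
    log_q_zx = -0.5 * ((z - mu) / std) ** 2 - 0.5 * logvar - const  # (S, N, D)
    log_q_zi = torch.logsumexp(log_q_zx, dim=1) - math.log(n)       # (S, D) marginal mixtures
    log_p_zi = -0.5 * z.squeeze(1) ** 2 - const                     # (S, D) prior marginals
    return (log_q_zi - log_p_zi).mean(0).sum()                      # sum per-dim KLs
```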
Aggregation Size for Ablation As discussed above, the size of the mini-batch |B| sets an upper bound on the mutual information term I_{q_φ(n,z)}[n, z] (or I_{q_φ(n,z_i)}[n, z_i]). To validate this impact, we further extend DG-VAEs through dividing the mini-batch into non-overlapping subsets B = ∪_{i=1}^{C} b_i, s.t. b_j ∩ b_i = ∅ if i ≠ j, and calculating and optimizing the KL divergence over each subset, i.e., (1/C) Σ_{i=1}^{C} (1/|b_i|) Σ_{n=1}^{|b_i|} L_{D_{KL,mrg}}(θ, φ, b_i; x_n), where C denotes the number of subsets and |b_i| = |B|/C is the size of those subsets, which we refer to as the aggregation size and denote as |b| for simplification. It can be inferred that the DG-VAE with |b| = 1 is equivalent to the vanilla VAE trained by ELBo, except that it approximates the KL term through Monte Carlo instead of integration.
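The subset-wise objective can then be sketched by chunking the mini-batch before applying the marginal estimator above (a hypothetical helper; with agg_size = 1 this reduces to a Monte Carlo estimate of the per-datapoint KL in ELBo):

```python
def dg_loss_with_aggregation_size(mu, logvar, agg_size):
    """The |b| ablation: split the mini-batch into non-overlapping subsets of
    size agg_size and average the marginal density-gap KL over subsets."""
    chunks = [(mu[i:i + agg_size], logvar[i:i + agg_size])
              for i in range(0, mu.shape[0], agg_size)]
    kls = [marginal_density_gap_kl(m_, lv_) for m_, lv_ in chunks]
    return sum(kls) / len(kls)
```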
Extension to von Mises-Fisher Distribution-based VAEs Besides the commonly used Gaussian distribution-based VAEs, we also consider von Mises-Fisher (vMF) distribution-based VAEs. As the decomposition q_φ(z) = Π_{i=1}^{Dim} q_φ(z_i) does not hold for latent variables following vMF distributions (i.e., z ∼ vMF(µ, κ)), marginal regularization for vMF-VAEs may not be interpretable, so we only implement Eq. 10 in vMF-VAEs. We refer to those extensions as DG-vMF-VAEs.
Baselines We consider a wide range of VAEs for solving posterior collapse in text generation, where the hyperparameters are set according to Zhu et al. [46]:
• VAEs with modified training strategies (i.e., KL annealing): VAE with linear KL annealing in the first 10 epochs (default) [6]; VAE with linear KL annealing for 10 epochs at the start of every 20 epochs (cyclic-VAE) [13];
• VAEs with specific model structures: VAE with additional Bag-of-Words loss (bow-VAE) [45], and VAE with skip connection from the latent variable z to the vocabulary classifier for generation (skip-VAE) [9];
• VAEs with hard restrictions on the posterior distribution: δ-VAE with the committed rate δ = 0.15 [31]; BN-VAEs with the scale of the BN layer γ ∈ {0.6, 0.7, 0.9, 1.2, 1.5, 1.8} [46]; vMF-VAEs with the distribution's concentration κ ∈ {13, 25, 50, 100, 200} [8,41];
• VAEs with weakened KL regularization: FB-VAEs (free-bits) with the target KL λ_KL ∈ {4, 9, 16, 25, 36, 49} [20]; β-VAEs with the weight of the KL term in ELBo β ∈ {0.0, 0.1, 0.2, 0.4, 0.8} [18].
Configurations We completely follow Zhu et al. [46] in the models' backbone structures, data pre-processing, and training procedure, which we describe in detail in Appendix B.
Language Modeling
We evaluate the performance of our methods and the baselines on language modeling, where the following metrics are reported: the prior log likelihood priorLL(θ) and the posterior log likelihood postLL(θ, φ) for generation quality; the KL term in ELBo KL(φ), the mutual information MI(φ) of z and n, and the number of active units AU(φ) [7] for posterior collapse; and the number of consistent units CU(φ) (which we propose) for the hole problem. The corresponding expressions and explanations are presented in Appendix C.
We illustrate part of the results on Yahoo in Table 2 and all results on all datasets in Appendix D. It can be observed that: (1) models with modified training strategies or specific model structures can alleviate the problem of posterior collapse but have limited effect, according to MI(φ) and AU(φ); (2) models with hard restrictions or weakened KL regularization on the posterior can solve posterior collapse better through harder restrictions or further weakening, according to the increase of KL(φ), MI(φ), and AU(φ), but the decrease of CU(φ) indicates that their posterior latent spaces tend to be increasingly inconsistent with that of the prior; (3) in contrast, our proposed DG-VAE has similar performance to the vanilla VAE when |b| = 1, and with the increase of |b|, it can solve posterior collapse effectively and avoid the hole problem at the same time.⁴ It can also be viewed that with the increase of MI(φ) or KL(φ), postLL(θ, φ) tends to increase, while priorLL(θ) tends to decrease, as the decoder θ becomes more dependent on the encoder φ. We further plot the curves of priorLL(θ) and postLL(θ, φ) for models with different hyperparameters in Figure 2, where we can observe that DG-VAEs make better trade-offs than BN-VAEs and β-VAEs do on short datasets and perform competitively to BN-VAEs and FB-VAEs on long datasets. We also compare the performance of DG-vMF-VAEs with vMF-VAEs under different settings of κ. As they have the same KL(φ), while AU(φ) and CU(φ) are inappropriate to report for vMF distributions, we only plot their curves of priorLL(θ) and postLL(θ, φ) in Figure 3. It can be observed that DG-vMF-VAEs outperform vMF-VAEs in most cases.
Visualization of the Posterior
To further investigate the posterior distribution in the latent space of those models, we visualize the aggregated posterior distributions and the posterior centers distributions on the 2 most active dimensions, i.e., the two dimensions with the highest Var_{x∼X}[E_{q_φ(z|x)}[z]], as depicted in Figure 4. Here, we can observe that BN-VAEs, FB-VAEs and β-VAEs can better solve posterior collapse with harder restrictions or further weakening, but meanwhile they are faced with different kinds of mismatch between the aggregated posterior distribution and the prior distribution, i.e., the hole problem. In contrast, with the increase of the aggregation size |b|, DG-VAE can better solve posterior collapse and avoid the hole problem in the meantime.
Interpolation Study
One of the main advantages of VAEs over normal language models (e.g., GPT-2 [30]) is that VAEs embed datapoints into a continuous latent space and thus enable latent-guided generation. We evaluate this ability through interpolation, where the models encode two sentences x_a and x_b as their posterior centers, i.e., z_a = E_{q_φ(z|x_a)}[z] and z_b = E_{q_φ(z|x_b)}[z], and decode the variables between them, i.e., z_λ = z_a · (1 − λ) + z_b · λ, λ ∈ {0.0, 0.1, ..., 1.0}.⁵ The interpolated sentences are wished to be semantically smooth and meaningful, which we evaluate through the average Rouge-L F1-score [26], as stated in Eq. 14, where F_lcs denotes the F1-score of the Longest Common Subsequence (LCS). We plot the curves of Rouge-L F1-score and λ for models on the Yahoo dataset in Figure 5 and those curves on other datasets in Appendix E.
RougeL_F1(x_a, x_b, x_λ) = (1/2)(F_lcs(x_a, x_λ) + F_lcs(x_b, x_λ))    (14)

Figure 5: The curves of Rouge-L F1-score and λ for models' interpolation performance on Yahoo.
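Eq. 14 can be sketched directly from the LCS definition of Rouge-L (token-list inputs; an illustration of the metric rather than the authors' evaluation script):

```python
def lcs_len(a, b):
    """Length of the longest common subsequence between two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ta in enumerate(a):
        for j, tb in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if ta == tb else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def rouge_l_f1(ref, hyp):
    """Rouge-L F1 between a reference and a hypothesis token list."""
    l = lcs_len(ref, hyp)
    if l == 0:
        return 0.0
    p, r = l / len(hyp), l / len(ref)
    return 2 * p * r / (p + r)

def interpolation_score(x_a, x_b, x_lam):
    """Average Rouge-L F1 of the interpolated sentence against both endpoints."""
    return 0.5 * (rouge_l_f1(x_a, x_lam) + rouge_l_f1(x_b, x_lam))

print(interpolation_score("a b c d".split(), "c d e f".split(), "b c d e".split()))  # 0.75
```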
As shown in Figure 5, the average F1-score of LCS tends to be lower in the middle than at the ends, which indicates that generated sentences tend not to be smooth in the middle, corresponding to the phenomenon of generation near holes observed in previous work [25].⁶ The vanilla VAE performs the worst as it suffers from posterior collapse and only generates the same plain sentence; meanwhile, DG-VAE outperforms BN-VAEs, FB-VAEs and β-VAEs in the quality of interpolation on the Yahoo dataset as it can solve posterior collapse and avoid the hole problem at the same time.
In summary, the existing methods for solving posterior collapse in VAEs either have limited effect or can effectively solve posterior collapse at the cost of bringing the hole problem. In contrast, our proposed DG-VAE can effectively solve posterior collapse and avoid the hole problem at the same time, which is demonstrated by the posterior centers spread in latent space and the aggregated posterior distribution consistent with the prior distribution. Furthermore, our proposed DG-VAE outperforms the existing methods in the quality of latent-guided generation due to these improvements in latent space.
Discussion
Conclusion In this work, we perform systematic experiments to demonstrate posterior collapse and the hole problem in existing continuous VAEs for text generation. To solve both problems at the same time, we propose a density gap-based regularization on the aggregated posterior distribution to replace the KL regularization in ELBo, and prove it in essence maximizes the ELBo as well as the mutual information between the latent and the input. Experiments on real-world datasets prove the effectiveness of our method in solving both problems and its improvement in latent-guided generation.
Limitation & Future work Both the theory and the ablation study show that the effectiveness of our proposed method depends on the aggregation size |b|, which is still limited by the batch size during training. Therefore, a promising future direction is to find a solution to break this limit, such as the memory bank mechanism in contrastive learning [38].
Figure 1: The visualization of the aggregated posterior distributions (the first line) and the posterior centers distributions (the second line) for models on the Yahoo test-set. The vanilla VAEs suffer from posterior collapse, i.e., the posterior centers collapse to the same position. Meanwhile, BN-VAEs [46], β-VAEs [18] and FB-VAEs [20] can solve posterior collapse effectively at the cost of bringing the hole problem, i.e., mismatch between the aggregated posterior and the prior. Our proposed DG-VAE intends to solve both problems through a novel regularization based on the density gap. Illustrations for more datasets, more models, and more dimensions, are shown in Appendix G.
Figure 2: The curves of priorLL(θ) and postLL(θ, φ) in Gaussian distribution-based VAEs.

Figure 3: The curves of priorLL(θ) and postLL(θ, φ) in vMF distribution-based VAEs.

Figure 4: The visualization of the aggregated posterior distributions (red-in-black) and the posterior centers distributions (blue-in-white) for BN-VAEs, FB-VAEs, β-VAEs, and DG-VAEs on the Yahoo test-set. Illustrations for more datasets, more models, and more dimensions, are shown in Appendix G.
Table 1: Statistics of sentences in the datasets

Dataset     Train    Valid   Test    Vocab size  Length (avg ± std)
Yelp        100,000  10,000  10,000  19997       98.01 ± 48.86
Yahoo       100,000  10,000  10,000  20001       80.76 ± 46.21
Short-Yelp  100,000  10,000  10,000  8411        10.96 ± 3.60
SNLI        100,000  10,000  10,000  9990        11.73 ± 4.33
4 Experiments
4.1 Experimental Setup
Datasets We consider four publicly available datasets commonly used for VAE-based language modeling tasks in our experiments: Yelp [42], Yahoo [42,44], a downsampled version of Yelp [35] (we denote this as Short-Yelp), and a downsampled version of SNLI [5,23]. The statistics of these datasets are illustrated in Table 1. It can be viewed that Yelp and Yahoo contain long sentences while Short-Yelp and SNLI contain short sentences.
Table 2: Results of Language Modeling on the Yahoo dataset. We bold up MI(φ) ≥ 9.0, AU(φ) ≥ 30,
¹ Although we can have q_φ(z) = 0, z ∈ R^Dim, when the latent variable follows a von Mises-Fisher (vMF) distribution, we do not need to consider such points in regularization.
² Posterior collapse (or KL vanishing) can be solved effectively by maximizing this mutual information term, as it is a lower bound of the vanished KL divergence term in ELBo according to Eq. 7.
³ In other words, we sample S = |B| × M samples from q_{φ,B}(z) through sampling M samples from q_φ(z|x_n) for each datapoint x_n ∈ B.
⁴ There is a small difference between DG-VAE (|b| = 32) and DG-VAE (default): DG-VAE (|b| = 32) ignores a data batch B if |B| < 32, while DG-VAE (default) accepts it by adapting to its batch size.
⁵ We only consider greedy search for generation in this work.
⁶ For further illustration of this phenomenon, we provide a case study in Appendix F.
Acknowledgments and Disclosure of Funding
Checklist

1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] We propose a novel regularization for VAEs to solve both posterior collapse and the hole problem.
(b) Did you describe the limitations of your work? [Yes] See Section 5.
(c) Did you discuss any potential negative societal impacts of your work? [No]
(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]

2. If you are including theoretical results...

(a) Did you state the full set of assumptions of all theoretical results? [Yes] We state the full set of assumptions in the first paragraph of Section 3.
(b) Did you include complete proofs of all theoretical results? [Yes] We provide complete proofs for all theoretical results in Section 3.

3. If you ran experiments...

(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] We include the code, scripts for downloading data and instructions for reproduction in the supplemental material.
(b) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [No]

4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...

(a) If your work uses existing assets, did you cite the creators? [Yes] See Section 4.
(b) Did you mention the license of the assets? [No]
(c) Did you include any new assets either in the supplemental material or as a URL? [No]
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [No]

5. If you used crowdsourcing or conducted research with human subjects...

(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
| [
"https://github.com/zhangjf-nlp/DG-VAEs."
]
|
[
"BINARY SEQUENCE SET OPTIMIZATION FOR CDMA APPLICATIONS VIA MIXED-INTEGER QUADRATIC PROGRAMMING",
"BINARY SEQUENCE SET OPTIMIZATION FOR CDMA APPLICATIONS VIA MIXED-INTEGER QUADRATIC PROGRAMMING"
]
| [
"Alan Yang \nDepartment of Electrical Engineering\nStanford University\n\n",
"Tara Mina [email protected] \nDepartment of Electrical Engineering\nStanford University\n\n",
"Grace Gao [email protected] \nDepartment of Aeronautics and Astronautics\nStanford University\n\n"
]
| [
"Department of Electrical Engineering\nStanford University\n",
"Department of Electrical Engineering\nStanford University\n",
"Department of Aeronautics and Astronautics\nStanford University\n"
]
| []
| Finding sets of binary sequences with low auto-and cross-correlation properties is a hard combinatorial optimization problem with numerous applications, including multiple-input-multiple-output (MIMO) radar and global navigation satellite systems (GNSS). The sum of squared correlations, sometimes referred to as the integrated sidelobe level (ISL), is a quartic function in the variables and is a commonly-used metric of sequence set quality. In this paper, we show that the ISL minimization problem may be formulated as a mixed-integer quadratic program (MIQP). We then present a block coordinate descent (BCD) algorithm that iteratively optimizes over subsets of variables. The subset optimization subproblems are also MIQPs which may be handled more efficiently using specialized solvers than using exhaustive search; this allows us to perform BCD over larger variable subsets than previously possible. Our approach was used to find sets of four binary sequences of lengths up to 1023 with better ISL performance than Gold codes and sequence sets found using existing BCD methods. | 10.1109/icassp49357.2023.10095359 | [
"https://export.arxiv.org/pdf/2211.00285v2.pdf"
]
| 253,244,344 | 2211.00285 | b6a26180fe6f66c5894a7ab454ae3ef9e6d17c30 |
BINARY SEQUENCE SET OPTIMIZATION FOR CDMA APPLICATIONS VIA MIXED-INTEGER QUADRATIC PROGRAMMING
Alan Yang
Department of Electrical Engineering
Stanford University
Tara Mina [email protected]
Department of Electrical Engineering
Stanford University
Grace Gao [email protected]
Department of Aeronautics and Astronautics
Stanford University
BINARY SEQUENCE SET OPTIMIZATION FOR CDMA APPLICATIONS VIA MIXED-INTEGER QUADRATIC PROGRAMMING
Index Terms - Auto- and cross-correlation, binary sequence sets, code division multiple access (CDMA), integrated sidelobe level (ISL), spreading codes
INTRODUCTION
Code division multiple access (CDMA) is a multiple access technique that allows multiple signals to occupy a common communication channel [1,2]. In CDMA, all transmitters broadcast at the same carrier frequency, but each transmitter modulates the data signal with a unique and pre-determined (typically binary-valued) spreading code sequence. A wide variety of applications currently utilize CDMA, including wireless and cellular network communications [3], multi-input multi-output (MIMO) radar systems [4,5], and Global Navigation Satellite Systems (GNSS) [6].
The choice of spreading code sequences directly influences the performance of the CDMA system. In particular, to extract the signal from a particular transmitter, the received signal is correlated with a local replica of the spreading code. A strong correlation peak indicates the presence of the corresponding transmitter's signal, and by aligning the local replica to the transmitter's received signal, the corresponding transmitted data can be recovered. In order to reduce interference from different transmitters during signal extraction, it is desirable for the transmitters' spreading codes to have low cross-correlations with one another. In addition, it is desirable for the spreading codes to have low autocorrelation at non-zero relative temporal shifts. If a spreading code correlates strongly with a temporally shifted version of itself, receivers are susceptible to false temporal alignment as a result of multipath interference. Therefore, it is desirable for the set of spreading codes to have both good cross-correlation and autocorrelation properties.

Footnote: The views expressed are those of the authors and do not reflect the official guidance or position of the United States Government, the Department of Defense or of the United States Air Force. Statement from DoD: The appearance of external hyperlinks does not constitute endorsement by the United States Department of Defense (DoD) of the linked websites, or the information, products, or services contained therein. The DoD does not exercise any editorial, security, or other control over the information you may find at these locations.
Algorithmically generable, or algebraic, spreading codes such as Gold codes [7], Kasami codes [8], and m-sequences [9] have been widely used in wireless communications applications including MIMO radar and GNSS. The main advantage of algorithmically generable codes is that they can be produced on-the-fly (for example, using shift registers) and do not need to be stored in memory. However, the aforementioned codes are limited to only certain lengths $2^n - 1$, where n is a natural number, and their autocorrelation and cross-correlation performance, as measured by the integrated sidelobe level (ISL) metric [1,4], is suboptimal, in particular when the number of code sequences is much smaller than the sequence length.
Over the years, decreasing memory storage costs have relaxed the requirement that spreading codes be algorithmically generable. It has become practical to store entire sets of spreading codes in memory, and there has been increasing interest in optimizing sequence sets of specific sizes and lengths to fit specific applications [2]. In this work we consider binary spreading codes, that is, we search for sets of binary sequences with desirable correlation properties.
Block coordinate descent (BCD) is an approach that has been successfully applied to various binary sequence set design problems [4,10,5,11]. BCD iteratively solves subproblems in which subsets of N binary variables are optimized with the others held fixed; the subproblems are frequently solved via exhaustive search. However, since the subproblem search space grows exponentially with N , there is a practical limit on how large N can be, if the subproblems are solved by exhaustive search.
In this paper, we introduce an approach that allows us to perform BCD-based binary sequence set optimization with larger variable subset sizes than would otherwise be possible with exhaustive search. We first show that the cross-correlation function may be expressed as a linear function of the variables by adding auxiliary variables and linear inequality constraints. This allows us to formulate the binary sequence set design problem as a mixed-integer quadratic program (MIQP). In our approach, we minimize a version of the integrated sidelobe level (ISL) objective, which is a common metric for evaluating binary sequence sets [1,4,11,12]. The ISL consists of a sum of squared correlation values, and is typically expressed as a non-convex quartic function.
In our formulation, the BCD subproblems are also MIQPs. This structure allows specialized solvers based on branch-and-bound, such as Gurobi [13], to quickly solve subproblems involving larger variable subset sizes than previously possible with exhaustive search. We used our approach to find sets of four binary sequences of lengths up to 1023 that have better ISL than possible using existing BCD methods, by optimizing over subsets of 20 variables at a time. Our approach is closely related to the class of methods proposed by Yuan et al., which combines block coordinate descent with exact combinatorial search, for solving discrete optimization problems [14].
The rest of the paper is organized as follows. Sections 2 and 3 review prior work and develop the binary sequence set optimization problem, respectively. Our representation of cross-correlation and proposed formulation for the ISL minimization problem are presented in Section 4, BCD is discussed in Section 5, and experimental results are presented in Section 6.
PRIOR WORK
Continuous optimization techniques, such as penalty methods [15] and semidefinite relaxations [16] have been proposed to design sets of complex-valued, continuous-phase sequences with constant magnitude. Continuous-valued sequences need to be discretized in practice, and binary sequences are often preferred due to ease of implementation [4,10]. Since the discretization of continuous sequences has been found to give poor performance, BCD methods have been proposed to directly optimize binary sequence sets for various applications [4,10,5,11]. Our work enables BCD methods to optimize over larger variable subset sizes at a time, which can lead to improved performance.
Both Bose and Soltanalian [17] and Boukerma et al. [18] construct new sequences and sequence sets by combining pre-existing binary sequences with desirable correlation properties, such as Gold codes or optimized sequence sets; those approaches may be directly combined with BCD methods. Population-based methods, such as genetic algorithms [19] and natural evolution strategies [20] have also been developed, although these methods do not consider the structure in the objective, instead treating it as a general nonlinear function.
DEFINITIONS AND NOTATION
We represent a set of K length-L binary sequences using a matrix $X \in \{\pm 1\}^{L \times K}$. Each column of X represents one of the K sequences in the set. In what follows, we use the notation $X_i$ to denote the $i$-th column of X, and we refer to "columns," "sequences," and "codes" interchangeably. Furthermore, we use the notation $X_{m,i}$ to denote the $m$-th entry of the $i$-th sequence, or the $m$-th row of the $i$-th column. For convenience, the indices are defined to start at zero. We denote the cross-correlation between columns i and j of X at shift k by

$$(X_i \star X_j)_k = \sum_{m=0}^{L-1} X_{m,i} \cdot X_{(m+k) \bmod L,\, j}. \tag{1}$$
We refer to the cross-correlation of column i with itself as the autocorrelation of column i. In this work, we exclusively consider the periodic autocorrelation and cross-correlation functions, although the extension to the aperiodic case is straightforward.
An ideal sequence set X has correlation values $(X_i \star X_j)_k$ that are simultaneously close to zero for all values of i, j, and k, except for autocorrelations at shift k = 0. From (1), we see that $(X_i \star X_i)_0 = L$ for any $X_i \in \{\pm 1\}^L$. In this work, we minimize a sum of squared correlation values

$$f(X) = \sum_{i=0}^{K-1} \sum_{j=i}^{K-1} \sum_{k=0}^{L-1} (X_i \star X_j)_k^2 \, \mathbb{1}_{\{i \neq j \text{ or } k \neq 0\}}, \tag{2}$$
where $\mathbb{1}_{\{\cdot\}}$ denotes the indicator function. We refer to this objective function as the integrated sidelobe level (ISL), due to its close similarity with functions of the same name defined in prior works [1,4,11]. The indicator function ensures that zero-shift autocorrelations $(X_i \star X_i)_0$ are not included in the objective. The sequence set design problem is to find an $X \in \{\pm 1\}^{L \times K}$ such that f(X) is small. Note that f is a quartic, non-convex function of X, and the problem is NP-hard.
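To make definitions (1) and (2) concrete, the following short numpy sketch computes the periodic correlations and the ISL of a random sequence set. The function and variable names are ours, chosen for illustration; this is not the authors' released code.

```python
# A short numpy sketch of the periodic correlation (1) and ISL objective (2).
import numpy as np

def periodic_corr(a, b):
    """(a ⋆ b)_k = sum_m a[m] * b[(m + k) mod L], for k = 0, ..., L-1."""
    L = len(a)
    return np.array([np.dot(a, np.roll(b, -k)) for k in range(L)])

def isl(X):
    """Objective (2): squared correlations, excluding zero-shift autocorrelations."""
    L, K = X.shape
    total = 0
    for i in range(K):
        for j in range(i, K):
            c = periodic_corr(X[:, i], X[:, j])
            if i == j:
                c = c[1:]  # the indicator drops (X_i ⋆ X_i)_0 = L
            total += int(np.sum(c ** 2))
    return total

rng = np.random.default_rng(0)
X = rng.choice((-1, 1), size=(63, 4))            # K = 4 random length-63 sequences
assert periodic_corr(X[:, 0], X[:, 0])[0] == 63  # zero-shift autocorrelation is L
print("ISL of a random set:", isl(X))
```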
REFORMULATING ISL MINIMIZATION AS A MIQP
In Subsection 4.1, we show that, by adding auxiliary variables, the cross-correlation (1) may be replaced by a linear function of the variables, subject to linear inequality constraints. In Subsection 4.2, this fact is used to reformulate the ISL minimization problem as a mixedinteger quadratic program (MIQP). In Subsection 4.4, we introduce branch-and-bound methods, which may in principle be used to solve MIQPs.
A linearization of cross-correlation
Consider two binary variables a and b, which may take values in {±1}. The product a · b is bilinear in the variables, and is therefore not convex in a and b. However, since a and b are binary, we may represent their product using an auxiliary variable z if we impose a set of linking constraints:

$$z \le b - a + 1, \tag{3a}$$
$$z \le a - b + 1, \tag{3b}$$
$$z \ge -1 - a - b, \tag{3c}$$
$$z \ge -1 + a + b. \tag{3d}$$
One may verify that z = a · b for each of the four combinations of values for a and b, if and only if z satisfies (3a)-(3d) [21]. We now introduce a set of auxiliary variables $Z^{i,j}_{m,k}$ that satisfy the linking constraints

$$Z^{i,j}_{m,k} \le X_{(m+k) \bmod L,\, j} - X_{m,i} + 1,$$
$$Z^{i,j}_{m,k} \le X_{m,i} - X_{(m+k) \bmod L,\, j} + 1,$$
$$Z^{i,j}_{m,k} \ge -1 - X_{m,i} - X_{(m+k) \bmod L,\, j},$$
$$Z^{i,j}_{m,k} \ge -1 + X_{m,i} + X_{(m+k) \bmod L,\, j}, \tag{4}$$

which ensure that $Z^{i,j}_{m,k} = X_{m,i} \cdot X_{(m+k) \bmod L,\, j}$. Subject to those constraints, the cross-correlation (1) may therefore be written as

$$(X_i \star X_j)_k = \sum_{m=0}^{L-1} Z^{i,j}_{m,k}, \tag{5}$$
which is linear in the (auxiliary) variables.
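A quick numerical check of this linearization is sketched below: for fixed binary sequences, setting each auxiliary variable to the product it represents satisfies all four linking constraints and makes the linear sum in (5) reproduce the correlation (1) at every shift. This enumeration is ours, for illustration only.

```python
# A quick consistency check (ours): with X fixed, Z^{i,j}_{m,k} set to the
# corresponding product satisfies the linking constraints, and (5) matches (1).
import numpy as np

rng = np.random.default_rng(1)
L = 16
Xi = rng.choice((-1, 1), size=L)   # column i
Xj = rng.choice((-1, 1), size=L)   # column j

for k in range(L):
    a = Xi                   # entries X_{m,i}
    c = np.roll(Xj, -k)      # entries X_{(m+k) mod L, j}
    z = a * c                # intended values of Z^{i,j}_{m,k}
    assert np.all(z <= c - a + 1) and np.all(z <= a - c + 1)
    assert np.all(z >= -1 - a - c) and np.all(z >= -1 + a + c)
    assert z.sum() == np.dot(Xi, np.roll(Xj, -k))  # (5) agrees with (1)
print("linearization is consistent at every shift k")
```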
ISL minimization
Inserting (5) into the ISL objective (2) leads to the problem

$$\begin{aligned} \text{minimize} \quad & \sum_{i=0}^{K-1} \sum_{j=i}^{K-1} \sum_{k=0}^{L-1} \Big( \sum_{m=0}^{L-1} Z^{i,j}_{m,k} \Big)^2 \mathbb{1}_{\{i \neq j \text{ or } k \neq 0\}} && \text{(6a)} \\ \text{subject to} \quad & X \in \{\pm 1\}^{L \times K}, && \text{(6b)} \\ & Z^{i,j}_{m,k} \le X_{(m+k) \bmod L,\, j} - X_{m,i} + 1, && \text{(6c)} \\ & Z^{i,j}_{m,k} \le X_{m,i} - X_{(m+k) \bmod L,\, j} + 1, && \text{(6d)} \\ & Z^{i,j}_{m,k} \ge -1 - X_{m,i} - X_{(m+k) \bmod L,\, j}, && \text{(6e)} \\ & Z^{i,j}_{m,k} \ge -1 + X_{m,i} + X_{(m+k) \bmod L,\, j}, && \text{(6f)} \end{aligned}$$

for all i, j, m, k.
This is a MIQP, since it involves the minimization of a convex quadratic function, subject to linear inequality constraints and binary constraints on X. The auxiliary variables $Z^{i,j}_{m,k}$ do not have explicit integer constraints, but are binary-valued in the feasible set. Note that when the binary constraints (6b) are relaxed, the problem (6) becomes a convex quadratic program (QP), which may be efficiently solved to give a lower bound on the optimal objective value [22]. Adding additional linear inequality constraints, or performing partial minimization over some of the variables with the others held fixed, also leads to MIQPs, for which lower bounds are also available by solving QPs.
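For readers who want to experiment, the sketch below assembles problem (6) for a tiny instance using Gurobi's Python interface (rather than the JuMP/Julia setup used later in the paper). It assumes gurobipy is installed and licensed, maps {0,1} binaries to {±1} via X = 2b − 1, and is practical only for very small L and K, for the reasons discussed in Subsection 4.5.

```python
# A sketch of problem (6) in gurobipy, for illustration only; variable
# names are ours, and this is not the code used for the paper's results.
import gurobipy as gp
from gurobipy import GRB

L, K = 8, 2  # tiny instance; (6) has O(L^2 K^2) variables and constraints

m = gp.Model("isl_miqp")
b = m.addVars(L, K, vtype=GRB.BINARY)  # {0,1} binaries; X = 2b - 1 in {-1,+1}
X = {(r, i): 2 * b[r, i] - 1 for r in range(L) for i in range(K)}

obj = gp.QuadExpr()
for i in range(K):
    for j in range(i, K):
        for k in range(L):
            corr = gp.LinExpr()
            for r in range(L):
                s = (r + k) % L
                z = m.addVar(lb=-1.0, ub=1.0)  # auxiliary Z^{i,j}_{r,k}
                m.addConstr(z <= X[s, j] - X[r, i] + 1)   # (6c)
                m.addConstr(z <= X[r, i] - X[s, j] + 1)   # (6d)
                m.addConstr(z >= -1 - X[r, i] - X[s, j])  # (6e)
                m.addConstr(z >= -1 + X[r, i] + X[s, j])  # (6f)
                corr += z                                  # eq. (5)
            if i != j or k != 0:
                obj += corr * corr                         # (6a)

m.setObjective(obj, GRB.MINIMIZE)
m.optimize()
print("optimal ISL for (L, K) = (8, 2):", m.ObjVal)
```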
Extension to general convex objectives
In Subsection 4.1, we showed that the correlation terms $(X_i \star X_j)_k$ may be replaced by linear functions of the variables. Using that approach, we may replace the ISL objective f(X) given in (2) with any other objective function g(X) that is also convex in the correlation values $(X_i \star X_j)_k$. By the affine pre-composition rule, g will also be convex with respect to X.

For example, we may replace the square terms $(X_i \star X_j)_k^2$ in (2) with another convex function of $(X_i \star X_j)_k$, such as $|(X_i \star X_j)_k|$. Another example is the function $g(X) = \max_{i,j,k} |(X_i \star X_j)_k|$, which is referred to as the peak sidelobe level (PSL) [1,4].
Branch-and-Bound
Branch-and-bound algorithms are commonly used for solving MIQPs [23]. During optimization, they maintain lower and upper bounds on the optimal objective value and return a solution that is provably optimal, up to a specified tolerance level. The bounds are used to rule out suboptimal regions in the search space and potentially solve the problem to optimality faster than exhaustive search. In this subsection, we give a brief sketch of the intuition behind the method; for more details see, for example, the references [23,24].
As mentioned in Subsection 4.2, lower bounds may be obtained by relaxing integer constraints and solving the resulting QPs. Suppose that the relaxed solution has a variable $X_{i,j}$ that violates the integer constraints. If no such variable exists, then the relaxed solution is optimal. We proceed by choosing $X_{i,j}$ to be a branching variable, and form two new subproblems: one with $X_{i,j} = -1$, and another with $X_{i,j} = 1$. If we can solve the two subproblems to optimality, then the better of the two resulting solutions will be optimal for the original problem. In this way, we have replaced the original problem with two more tightly constrained (and therefore easier to solve) subproblems. This procedure may be repeated starting from each subproblem to form a search tree, where each node corresponds to a subproblem associated with a different branching variable.
An exhaustive search traverses all $2^{L \times K}$ nodes in the search tree. The idea in branch-and-bound is to leverage the bounds to prune the search tree. For example, if the lower bound at a node has objective value larger than the best feasible solution found so far, the entire sub-tree starting from that node may be eliminated from the search tree.
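The following generic skeleton (ours, not from the paper) makes the pruning idea concrete. The lower_bound callable stands in for the QP relaxation of (6); the toy demo at the end uses a trivial bound, so it only prunes once an incumbent with the bound's value is found.

```python
# A generic branch-and-bound skeleton (ours) over binary {-1,+1} variables.
def branch_and_bound(n_vars, objective, lower_bound):
    best_x, best_f = None, float("inf")
    stack = [dict()]                        # nodes = partial assignments
    while stack:
        partial = stack.pop()
        if lower_bound(partial) >= best_f:
            continue                        # prune: subtree cannot improve
        if len(partial) == n_vars:          # leaf: full assignment
            f = objective(partial)
            if f < best_f:
                best_x, best_f = dict(partial), f
            continue
        i = len(partial)                    # next branching variable
        for v in (-1, 1):
            child = dict(partial)
            child[i] = v
            stack.append(child)
    return best_x, best_f

# toy demo: minimize (sum_i x_i)^2 over x in {-1,+1}^4
obj = lambda p: sum(p.values()) ** 2
trivial_lb = lambda p: 0.0                  # trivial bound, little pruning
print(branch_and_bound(4, obj, trivial_lb))
```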
Commercial [13] and open-source [25] solvers implement many sophisticated techniques and heuristics to accelerate branch-andbound, and have been used to solve many real-world problems to optimality with reasonable speed [24]. Although the time complexity of branch-and-bound is exponential in the worst case, we expect the tightness of the lower bound obtained by convex relaxation to be a strong indicator of its performance in practice.
Challenges with branch-and-bound
Directly solving the MIQP (6) using a commercial branch-and-bound solver such as Gurobi [13] is only practical for small problem instances. The first challenge is symmetry [26]; permuting the columns of X or circularly shifting the entries of a given column does not change the objective value. Second, note that if we relax the binary constraint (6b), the resulting QP gives a trivial lower bound of zero. Due to the aforementioned issues, it may take a very large number of branching steps to arrive at a subproblem with a useful lower bound obtained by convex relaxation. Moreover, the time needed to perform each branching step may become prohibitive, since the number of variables, linear inequality constraints, and terms in the objective function grows as $O(L^2 K^2)$.
In the following section, we present a block coordinate descent algorithm that iteratively optimizes over subsets of the binary variables, while keeping the rest of the sequence set fixed. Indeed, the two aforementioned issues are in part circumvented if we settle for solving (6) only over a subset of the variables. Fixing some of the variables to constant values can break symmetries and lead to more useful lower bounds for branch-and-bound.
BLOCK COORDINATE DESCENT
Block coordinate descent (BCD) repeatedly solves the optimization problem (6) over only a subset of the variables at a time, while keeping the others fixed. In each iteration, we choose a subset of variable indices S and solve (6) with $X_{i,j}$ held fixed if $(i, j) \notin S$. Note that BCD is a descent method, since the objective value cannot increase from one iteration to the next.
The BiST coordinate descent algorithm [4] optimizes a single entry $X_{i,j}$ at a time, with the others held fixed. The row i is incremented at every iteration, and the column j is incremented when column j reaches a local optimum, that is, when the objective cannot be improved by changing any single row in the column. Algorithm 1 illustrates the BiST algorithm when do_BCD is false. When do_BCD is set to true in Algorithm 1, we extend BiST to the BCD case.

In BCD, we optimize over a total of N > 1 indices, including (i, j). The additional indices are randomly selected from two columns j and j' ≠ j, where j' is also randomly chosen. We solve the BCD subproblem in line 12 of Algorithm 1 by instead solving (6), using a branch-and-bound method. When the subset size is small, exhaustive search may be used instead; Cui et al. considered four variables at a time [10]. The relevant steps of Algorithm 1 are:

11: Choose $S = \{(u_n, v_n) \mid n = 1, \ldots, N-1\}$ at random, where each $u_n \in \{0, \ldots, L-1\}$ and $v_n \in \{j, j'\}$
12: Set $X^{(t)}$ to be the solution of the minimization
        minimize $f(X)$ subject to $X \in \{\pm 1\}^{L \times K}$, $X_{u,v} = X^{(t-1)}_{u,v}$ for all $(u, v) \notin S \cup \{(i, j)\}$
13: if $f(X^{(t)})$ has not improved in LK steps then
14:     break
15: else if $f(X^{(t)})$ has not improved in L steps then
16:     $j \leftarrow (j + 1) \bmod K$
17: end if
18: $i \leftarrow (i + 1) \bmod L$
19: until convergence
20: return X
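A minimal numpy sketch of a single exhaustive-search BCD step is given below. It simplifies the index-selection and stopping logic of Algorithm 1, and all names are ours; with larger subsets such as N = 20, one would instead hand the subproblem to an MIQP solver as described above.

```python
# One exhaustive-search BCD step (ours, illustrative); small N only.
import itertools
import numpy as np

def isl(X):
    """Objective (2): squared correlations, minus the zero-shift autocorr peaks."""
    L, K = X.shape
    total = 0
    for i in range(K):
        for j in range(i, K):
            c = np.array([np.dot(X[:, i], np.roll(X[:, j], -k)) for k in range(L)])
            total += int(np.sum(c ** 2))
            if i == j:
                total -= L * L  # remove the (X_i ⋆ X_i)_0 = L peak term
    return total

def bcd_step(X, subset):
    """Exhaustively optimize the entries listed in `subset`, others held fixed."""
    best_X, best_f = X.copy(), isl(X)
    for signs in itertools.product((-1, 1), repeat=len(subset)):
        Y = X.copy()
        for (r, c), s in zip(subset, signs):
            Y[r, c] = s
        f = isl(Y)
        if f < best_f:
            best_X, best_f = Y, f
    return best_X, best_f

rng = np.random.default_rng(0)
X = rng.choice((-1, 1), size=(31, 4))                 # K = 4 sequences, L = 31
subset = [(r, c) for r in range(3) for c in (0, 1)]   # N = 6 entries, two columns
X, f = bcd_step(X, subset)
print("ISL after one exhaustive BCD step:", f)
```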
Choosing the BCD subset size
In general, increasing the variable subset size N can lead to better performance, but leads to more expensive BCD steps. Figure 1 compares the median time taken per BCD iteration as the variable subset size N is increased, for varying L and K = 4, over ten random variable subsets each. When the variable subset sizes are small (N ≤ 15), the iteration time increases with sequence length, as expected. However, for N ≥ 15 the situation is reversed; choosing a variable subset size of 30 is more practical for L = 1023 than it is for L = 127. This may be explained by the issues discussed in Subsection 4.5. As the variable subset size approaches the sequence length, the quality of the relaxed lower bound is expected to decrease, which means that the number of branching steps is expected to increase.
The results in Figure 1, as well as the results in the following section, were obtained using the Gurobi solver [13] along with JuMP [27], which is implemented in the Julia programming language. Our code has been made publicly available at https://github.com/Stanford-NavLab/binary_seq_opt.
PERFORMANCE COMPARISON
We compared the performance of BCD using different variable subset sizes N with BiST [4] and Gold codes [7]. Note that BiST is equivalent to BCD with N = 1. First, we ran BiST for L = 63, 127, 511, and 1023 all with K = 4 until convergence, in each case starting from ten different randomly generated initial codes. For each L, the ten BiST solutions were then used as initial conditions for BCD, which we tested with variable subset sizes N = 4 and N = 20.
When N = 4, the BCD subproblem was solved using exhaustive search, similar to the approach taken by Cui et al. [10]. The Gurobi optimizer [13] was used for the N = 20 case. We also compared with Gold codes [7], which are used by the Global Positioning System (GPS). For each L, we sampled one million random subsets of K = 4 Gold codes, and chose the subset with the best ISL. Table 1 compares the performance of the Gold codes, BiST, and BCD methods. For BiST and the BCD methods, the table shows the best ISL achieved out of the ten runs. In each case, increasing the variable subset size led to improved solutions. Since BiST cannot improve after the objective has not decreased for LK steps, and the BCD methods were initialized from the outputs of BiST, the results indicate that increasing N can consistently improve the performance of BCD. For the BCD methods, it is possible that better solutions could have been found by running the methods for more iterations.
CONCLUSION
In this paper, we showed that the cross-correlation function may be expressed as a linear function of the variables, subject to linear inequality constraints. Using this approach, we formulated the binary sequence set optimization problem as a MIQP. Our formulation allowed us to perform BCD over larger variable subsets than previously possible by using an MIQP solver. Finally, we demonstrated that our approach outperforms Gold codes and existing BCD methods on several binary sequence set optimization problems, relevant to MIMO radar, GNSS, and other CDMA applications. Possible directions for future work include alternate variable subset selection schemes and convex objective functions other than ISL.
ACKNOWLEDGMENTS
This material is based upon work supported by the Air Force Research Lab (AFRL) under grant number FA9453-20-1-0002.
Fig. 1: Median BCD iteration time vs. variable subset size, for sets of K = 4 sequences and varying sequence lengths L.
Table 1: ISL minimization performance comparison for K = 4 sets of sequences with varying L. BiST [12] is equivalent to BCD with N = 1.

Sequence length L    63       127       511         1023
Gold                 27,506   123,538   2,053,810   8,784,498
BiST                 26,018   104,418   1,692,626   6,778,098
BCD (N = 4)          25,826   104,418   1,692,626   6,778,098
BCD (N = 20)         25,386   103,930   1,690,906   6,769,906
REFERENCES

[1] Hao He, Jian Li, and Petre Stoica, Waveform design for active sensing systems: a computational approach, Cambridge University Press, 2012.
[2] Juan M. Velazquez-Gutierrez and Cesar Vargas-Rosales, "Sequence sets in wireless communication systems: A survey," IEEE Communications Surveys & Tutorials, vol. 19, no. 2, pp. 1225-1248, 2016.
[3] Mosa Ali Abu-Rgheff, Introduction to CDMA wireless communications, Academic Press, 2007.
[4] Mohammad Alaee-Kerahroodi, Mahmoud Modarres-Hashemi, and Mohammad Mahdi Naghsh, "Designing sets of binary sequences for MIMO radar systems," IEEE Transactions on Signal Processing, vol. 67, no. 13, pp. 3347-3360, 2019.
[5] Ronghao Lin and Jian Li, "On binary sequence set design with applications to automotive radar," in ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020, pp. 8639-8643.
[6] Y. Jade Morton, Frank van Diggelen, James J. Spilker Jr, Bradford W. Parkinson, Sherman Lo, and Grace Gao, Position, Navigation, and Timing Technologies in the 21st Century: Integrated Satellite Navigation, Sensor Systems, and Civil Applications, John Wiley & Sons, 2021.
[7] Robert Gold, "Optimal binary sequences for spread spectrum multiplexing," IEEE Transactions on Information Theory, vol. 13, no. 4, pp. 619-621, 1967.
[8] Tadao Kasami, "Weight distribution formula for some class of cyclic codes," Coordinated Science Laboratory Report no. R-285, 1966.
[9] John G. Proakis, Digital communications, McGraw-Hill Companies, Inc., New York, NY, 2001.
[10] Guolong Cui, Xianxiang Yu, Goffredo Foglia, Yongwei Huang, and Jian Li, "Quadratic optimization with similarity constraint for unimodular sequence synthesis," IEEE Transactions on Signal Processing, vol. 65, no. 18, pp. 4756-4769, 2017.
[11] Wenjie Huang and Ronghao Lin, "Efficient design of Doppler sensitive long discrete-phase periodic sequence sets for automotive radars," in 2020 IEEE 11th Sensor Array and Multichannel Signal Processing Workshop (SAM). IEEE, 2020, pp. 1-5.
[12] Mohammad Alaee-Kerahroodi, Mahmoud Modarres-Hashemi, Mohammad Mahdi Naghsh, Bhavani Shankar, and Björn Ottersten, "Binary sequences set with small ISL for MIMO radar systems," in 2018 26th European Signal Processing Conference (EUSIPCO). IEEE, 2018, pp. 2395-2399.
[13] Gurobi Optimization, LLC, "Gurobi Optimizer Reference Manual," 2022.
[14] Ganzhao Yuan, Li Shen, and Wei-Shi Zheng, "A hybrid method of combinatorial search and coordinate descent for discrete optimization," arXiv preprint arXiv:1706.06493, 2017.
[15] Xianxiang Yu, Guolong Cui, Jing Yang, Jian Li, and Lingjiang Kong, "Quadratic optimization for unimodular sequence design via an ADPM framework," IEEE Transactions on Signal Processing, vol. 68, pp. 3619-3634, 2020.
[16] Antonio De Maio, Silvio De Nicola, Yongwei Huang, Shuzhong Zhang, and Alfonso Farina, "Code design to optimize radar detection performance under accuracy and similarity constraints," IEEE Transactions on Signal Processing, vol. 56, no. 11, pp. 5618-5629, 2008.
[17] Arindam Bose and Mojtaba Soltanalian, "Constructing binary sequences with good correlation properties: An efficient analytical-computational interplay," IEEE Transactions on Signal Processing, vol. 66, no. 11, pp. 2998-3007, 2018.
[18] Sabrina Boukerma, Khaled Rouabah, SalahEddine Mezaache, and Salim Atia, "Efficient method for constructing optimized long binary spreading sequences," International Journal of Communication Systems, vol. 34, no. 4, p. e4719, 2021.
[19] Tianqu Liu, Jinping Sun, Guohua Wang, and Yilong Lu, "A multi-objective quantum genetic algorithm for MIMO radar waveform design," Remote Sensing, vol. 14, no. 10, p. 2387, 2022.
[20] Tara Yasmin Mina and Grace Xingxin Gao, "Designing low-correlation GPS spreading codes with a natural evolution strategy machine-learning algorithm," NAVIGATION, Journal of the Institute of Navigation, vol. 69, no. 1, 2022.
[21] Fred Glover and Eugene Woolsey, "Converting the 0-1 polynomial programming problem to a 0-1 linear program," Operations Research, vol. 22, no. 1, pp. 180-182, 1974.
[22] Stephen Boyd and Lieven Vandenberghe, Convex optimization, Cambridge University Press, 2004.
[23] Eugene L. Lawler and David E. Wood, "Branch-and-bound methods: A survey," Operations Research, vol. 14, no. 4, pp. 699-719, 1966.
[24] Michele Conforti, Gérard Cornuéjols, and Giacomo Zambelli, Integer programming, vol. 271, Springer, 2014.
[25] Ole Kröger, Carleton Coffrin, Hassan Hijazi, and Harsha Nagarajan, "Juniper: An open-source nonlinear branch-and-bound solver in Julia," in Integration of Constraint Programming, Artificial Intelligence, and Operations Research. Springer International Publishing, 2018, pp. 377-386.
[26] François Margot, "Symmetry in integer linear programming," 50 Years of Integer Programming 1958-2008, pp. 647-686, 2010.
[27] Iain Dunning, Joey Huchette, and Miles Lubin, "JuMP: A modeling language for mathematical optimization," SIAM Review, vol. 59, no. 2, pp. 295-320, 2017.
| [
"https://github.com/Stanford-NavLab/binary_seq_optSequence"
]
|
[
"Extreme values and the level-crossing problem. An application to the Feller process",
"Extreme values and the level-crossing problem. An application to the Feller process"
]
| [
"Jaume Masoliver \nDepartament de Física Fonamental\nUniversitat de Barcelona, Diagonal\n647, E-08028BarcelonaSpain\n"
]
| [
"Departament de Física Fonamental\nUniversitat de Barcelona, Diagonal\n647, E-08028BarcelonaSpain"
]
| []
| We review the question of the extreme values attained by a random process. We relate it to level crossings either to one boundary (first-passage problems) and two boundaries (escape problems). The extremes studied are the maximum, the minimum, the maximum absolute value and the range or span. We specialize in diffusion processes and present detailed results for the Wiener and Feller processes. | 10.1103/physreve.89.042106 | [
"https://arxiv.org/pdf/1401.4939v1.pdf"
]
| 45,243,749 | 1401.4939 | 2d15b8c9cde26ba2148b51f53d165dadda54b43c |
Extreme values and the level-crossing problem. An application to the Feller process
20 Jan 2014
Jaume Masoliver
Departament de Física Fonamental
Universitat de Barcelona, Diagonal
647, E-08028BarcelonaSpain
Extreme values and the level-crossing problem. An application to the Feller process
20 Jan 2014 (Dated: January 21, 2014)
PACS numbers: 89.65.Gh, 02.50.Ey, 05.40.Jc, 05.45.Tp
We review the question of the extreme values attained by a random process. We relate it to level crossings either to one boundary (first-passage problems) and two boundaries (escape problems). The extremes studied are the maximum, the minimum, the maximum absolute value and the range or span. We specialize in diffusion processes and present detailed results for the Wiener and Feller processes.
I. INTRODUCTION
Level-crossing problems (including first-passage and escape problems) have a long-standing tradition in physics, engineering and the natural sciences, with great theoretical interest in, for instance, bistability and phase transitions, and countless practical applications ranging from meteorology, seismology, reliability theory, structural and electrical engineering and finance, just to name a few [1-15].
The level-crossing problem is closely related to the theory of extremes, the latter initiated in the late nineteen twenties by the works of Fréchet, Fisher and Tippett and subsequently developed by Gnedenko and Gumbel in the forties and early fifties [16]. It applied to series of independent random variables, and the central result is the Fisher-Tippett theorem, which states that under suitable conditions the asymptotic distributions of extremes are restricted to be of three types (Gumbel, Fréchet and Weibull) [4,16,17]. As remarked in Refs. [4] and [6], when extreme events are rare (which is often the case) they can be approximately treated as independent variables for which the Fisher-Tippett theorem holds. This approximation, however, reduces the question to a problem of statistics and time-series analysis and neglects the underlying dynamics and the correlations induced by it.
The extreme-value problem basically includes the maximum and minimum values attained by a given random process during a certain time interval. It also encompasses the range or span, defined as the difference between the maximum and the minimum. In physics this problem has traditionally been related to level crossings and first-passage times and has basically been restricted to diffusion processes [4,18,19] (see also [20] for similar developments, aimed also at diffusion processes but oriented to the pure mathematician). This is a complicated business because obtaining first-passage probabilities is essentially difficult. This is one of the reasons why, to my knowledge, few exact analytical approaches have appeared except for the Wiener process and, to a lesser extent, for the Ornstein-Uhlenbeck process [4,18,19]. Despite the intrinsic difficulty there are, however, recent works investigating these kinds of problems in subdiffusions and other anomalous diffusion processes as well (see [21] and references therein).
* Electronic address: [email protected]
In a recent paper [22] we have studied the first-passage problem for the Feller process and presented a complete solution of it, including first-passage and exit probabilities and mean first-passage and mean exit times. One of our goals here is to apply those results to obtain the extreme values attained by the Feller process. Another objective is to review the link between level crossings and extremes by presenting a complete account of the results involved (some of them in a new and simpler form), because the connection between the two problems is not widely known in the current physics literature.
In level-crossing problems the issue of primary interest is to ascertain the statistical information on the time taken by a random process to reach, or return to, a given boundary for the first time. If the boundary consists of only one point (which we usually call the critical value or threshold) one deals with a first-passage or hitting problem. If the boundary consists of two points we have an escape or exit problem out of the interval spanned by the boundary points. As we will see, the maximum and the minimum are the extremes related to the hitting problem, while the maximum absolute value and the span are related to the exit problem.
The paper is organized as follows. In Sec. II we review the relationship between first-passage and extreme-value problems. In Sec. III we review the link between the escape problem and, both, the maximum absolute value and the span. In Secs. IV and V we explicitly obtain these results for the Wiener and Feller processes respectively. A short summary of main results is presented in the last section. Some mathematical proofs and more technical details are in appendices.
II. FIRST PASSAGE AND EXTREMES
The hitting problem of a random process $X(t)$ is solved if we know the first-passage probability, $W_c(t|x)$, of reaching for the first time the threshold $x_c$ when the process starts at $x = X(t_0)$ at some initial time $t_0$ (in what follows we deal with time-homogeneous processes, so that $t_0 = 0$). In terms of the hitting probability, the survival probability (i.e., the probability $S_c(t|x)$ that at time $t$, or during any previous time, the process has not reached $x_c$) is simply given by

$$S_c(t|x) = 1 - W_c(t|x). \qquad (1)$$

For one-dimensional diffusion processes characterized by drift $f(x)$ and diffusion coefficient $D(x)$, the hitting probability satisfies the Fokker-Planck equation (FPE) [1,22]

$$\partial_t W_c(t|x) = f(x)\,\partial_x W_c(t|x) + \tfrac{1}{2} D(x)\,\partial^2_{xx} W_c(t|x), \qquad (2)$$

with initial and boundary conditions given by

$$W_c(0|x) = 0, \qquad W_c(t|x_c) = 1. \qquad (3)$$
Equation (1) shows that the survival probability obeys the same FPE but with initial and boundary conditions reversed.
We will now relate the first-passage problem with the extreme values (the maximum and the minimum) reached by the process during a given interval of time. There are other extremes, such as the range or span, which will be discussed in the next section.
A. The maximum
We denote by $M(t)$ the maximum value reached by $X(t)$ over the time span $(0,t)$. Formally,

$$M(t) = \max\{X(\tau);\ 0 \le \tau \le t\}.$$

Note that $M(t)$ is a random quantity whose value depends on the particular trajectory of $X(t)$, and its distribution function is defined by

$$\Phi_{\max}(\xi,t|x) = \mathrm{Prob}\{M(t) < \xi\,|\,X(0) = x\}. \qquad (4)$$

In order to relate this function to the hitting probability we distinguish two cases: $\xi > x$ and $\xi < x$. Suppose first that the value of the maximum $\xi$ is greater than the initial value, $\xi > x$; in this case the process $X(t)$ has not crossed threshold $\xi$ at time $t$ and the probability of the event $\{M(t) < \xi | X(0) = x\}$ equals the survival probability $S_\xi(t|x)$. That is,

$$\Phi_{\max}(\xi,t|x) = S_\xi(t|x), \qquad (\xi > x).$$

If, on the other hand, the value of the maximum is lower than the initial point, $\xi < x$, the event $\{M(t) < \xi | X(0) = x\}$ is impossible and has zero probability. In other words, $\Phi_{\max}(\xi,t|x) = 0$ if $\xi < x$. We summarize both cases in the single expression

$$\Phi_{\max}(\xi,t|x) = S_\xi(t|x)\,\Theta(\xi - x), \qquad (5)$$

where $\Theta(x)$ is the Heaviside step function. By taking the derivative with respect to $\xi$ and recalling that $S_x(t|x) = 0$ (survival is impossible starting at the boundary), we get the following expression for the probability density function (PDF) $\varphi_{\max}(\xi,t|x)$ of the maximum:

$$\varphi_{\max}(\xi,t|x) = \frac{\partial S_\xi(t|x)}{\partial \xi}\,\Theta(\xi - x). \qquad (6)$$

Let us denote by $\langle M(t)\rangle_x$ the mean maximum value,

$$\langle M(t)\rangle_x = \int_{-\infty}^{\infty} \xi\,\varphi_{\max}(\xi,t|x)\,d\xi. \qquad (7)$$

We have

$$\langle M(t)\rangle_x = \int_x^{\infty} \xi\,\frac{\partial S_\xi(t|x)}{\partial \xi}\,d\xi. \qquad (8)$$

At first sight this expression could be simplified by an integration by parts. This is, however, not possible because $S_\xi \to 1$ as $\xi \to \infty$, leading to a divergent result. The situation can be amended by using $W_\xi$ instead of $S_\xi$. Substituting Eq. (1) into Eq. (8), followed by an integration by parts, then yields

$$\langle M(t)\rangle_x = x + \int_x^{\infty} W_\xi(t|x)\,d\xi, \qquad (9)$$

where we have assumed that $W_\xi$ decreases faster than $1/\xi$ (i.e., $\xi W_\xi \to 0$ as $\xi \to \infty$). Since $W_\xi$ is always positive, this equation shows, the otherwise obvious result, that the mean maximum is greater than the initial value.

Following an analogous reasoning we can easily see that the moments of the maximum, defined by

$$\langle M^n(t)\rangle_x = \int_{-\infty}^{\infty} \xi^n\,\varphi_{\max}(\xi,t|x)\,d\xi, \qquad (10)$$

are given by

$$\langle M^n(t)\rangle_x = x^n + n\int_x^{\infty} \xi^{n-1}\,W_\xi(t|x)\,d\xi, \qquad (11)$$

($n = 1, 2, 3, \dots$). In writing this equation we have assumed that $\xi^n W_\xi \to 0$ as $\xi \to \infty$, which is the condition imposed on $W_\xi$ for the moments to exist.
B. The minimum
We denote by $m(t) = \min\{X(\tau);\ 0 \le \tau \le t\}$ the minimum value attained by $X(t)$ during the time interval $(0,t)$, and let $\Phi_{\min}(\xi,t|x) = \mathrm{Prob}\{m(t) < \xi\,|\,X(0) = x\}$ be its distribution function. Note that if $\xi < x$ the event $\{m(t) < \xi | X(0) = x\}$ implies that the process has crossed threshold $\xi$ at time $t$ or before. Hence the distribution function agrees with the hitting probability to level $\xi$, i.e., $\Phi_{\min}(\xi,t|x) = W_\xi(t|x)$. On the other hand, when $\xi > x$ the event $\{m(t) < \xi | X(0) = x\}$ is certain and $\Phi_{\min}(\xi,t|x) = 1$. Summing up,

$$\Phi_{\min}(\xi,t|x) = \Theta(\xi - x) + W_\xi(t|x)\,\Theta(x - \xi). \qquad (12)$$

Let us denote by $\varphi_{\min}(\xi,t|x)$ the PDF of the minimum $m(t)$. Taking the derivative of $\Phi_{\min}$ with respect to $\xi$ and noting that $W_\xi(t|x)\,\delta(x-\xi) = \delta(x-\xi)$ (recall that $W_\xi(t|\xi) = 1$), we get

$$\varphi_{\min}(\xi,t|x) = \frac{\partial W_\xi(t|x)}{\partial \xi}\,\Theta(x - \xi). \qquad (13)$$

The mean minimum value, defined as

$$\langle m(t)\rangle_x = \int_{-\infty}^{\infty} \xi\,\varphi_{\min}(\xi,t|x)\,d\xi, \qquad (14)$$

is then given by

$$\langle m(t)\rangle_x = \int_{-\infty}^{x} \xi\,\frac{\partial W_\xi(t|x)}{\partial \xi}\,d\xi. \qquad (15)$$

An integration by parts yields

$$\langle m(t)\rangle_x = x - \lim_{\xi\to-\infty}\big[\xi\,W_\xi(t|x)\big] - \int_{-\infty}^{x} W_\xi(t|x)\,d\xi.$$

Because $W_{-\infty}(t|x) = 0$ (i.e., hitting an infinite threshold is impossible), then, if we also assume that $W_\xi$ decreases faster than $1/|\xi|$, we have $\xi W_\xi \to 0$ as $\xi \to -\infty$ and

$$\langle m(t)\rangle_x = x - \int_{-\infty}^{x} W_\xi(t|x)\,d\xi, \qquad (16)$$

which shows that the mean minimum value is indeed lower than the initial value.

Analogously to the maximum value, the moments of the minimum are given by

$$\langle m^n(t)\rangle_x = x^n - n\int_{-\infty}^{x} \xi^{n-1}\,W_\xi(t|x)\,d\xi, \qquad (17)$$

as long as $W_\xi$ decreases faster than $|\xi|^{-n}$ as $\xi \to -\infty$.
III. EXTREMES AND THE ESCAPE PROBLEM
The escape, or exit, problem addresses the question of whether or not a given process $X(t)$ starting inside an interval $(a,b)$ has departed from it for the first time. The problem is solved when one knows the escape probability $W_{a,b}(t|x)$, defined as the probability of leaving $(a,b)$ at time $t$ (or before) for the first time, starting at $x \in (a,b)$. Closely related to $W_{a,b}$ is the survival probability,

$$S_{a,b}(t|x) = 1 - W_{a,b}(t|x), \qquad (18)$$

giving the probability that, starting inside $(a,b)$, the process has not exited this interval at time $t$ or before. For one-dimensional diffusion processes, the escape probability satisfies the FPE [1,22]

$$\partial_t W_{a,b}(t|x) = f(x)\,\partial_x W_{a,b}(t|x) + \tfrac{1}{2} D(x)\,\partial^2_{xx} W_{a,b}(t|x), \qquad (19)$$

with initial and boundary conditions given by

$$W_{a,b}(0|x) = 0, \qquad W_{a,b}(t|a) = W_{a,b}(t|b) = 1. \qquad (20)$$

Note that $S_{a,b}(t|x)$ also obeys Eq. (19) but with initial and boundary conditions reversed; that is,

$$S_{a,b}(0|x) = 1, \qquad S_{a,b}(t|a) = S_{a,b}(t|b) = 0.$$
Extreme values related to the escape probability are essentially two: the maximum absolute value and the span. Let us next address them.
A. The maximum absolute value
We now consider the maximum absolute value attained by $X(t)$ during the time span $(0,t)$. Denote by $G_{\max}(\xi,t|x)$ its distribution function,

$$G_{\max}(\xi,t|x) = \mathrm{Prob}\big\{\max|X(\tau)| < \xi \,\big|\, X(0) = x\big\}, \qquad (21)$$

where $0 \le \tau \le t$ and $\xi > 0$. Certainly $\xi$ cannot be negative and hence $G_{\max}(\xi,t|x) = 0$ for $\xi < 0$.

In order to connect this distribution function with the escape problem we must distinguish two cases, according to whether the initial point is inside or outside the interval $(-\xi,\xi)$ spanned by the level $\xi > 0$ of the absolute maximum. For the first case, where $-\xi < x < \xi$, we have

$$\big\{\max|X(\tau)| < \xi;\ 0 \le \tau \le t \,\big|\, X(0) = x\big\} = \big\{-\xi < X(\tau) < \xi;\ 0 \le \tau \le t \,\big|\, X(0) = x\big\},$$

meaning that during the time span $(0,t)$ the process $X(t)$ has not left the interval $(-\xi,\xi)$. Hence, the distribution function (21) coincides with the survival probability:

$$G_{\max}(\xi,t|x) = S_{-\xi,\xi}(t|x), \qquad (|x| < \xi).$$

Note that when the initial value is outside the interval $(-\xi,\xi)$, the event $\{\max|X(\tau)| < \xi\,|\,X(0) = x\}$ ($0 \le \tau \le t$) is impossible and $G_{\max}(\xi,t|x) = 0$ for $|x| > \xi$. Therefore,

$$G_{\max}(\xi,t|x) = S_{-\xi,\xi}(t|x)\,\Theta(\xi - |x|), \qquad (22)$$

($\xi > 0$). The PDF of the absolute maximum is defined by

$$g_{\max}(\xi,t|x) = \frac{\partial}{\partial\xi} G_{\max}(\xi,t|x).$$

Substituting Eq. (22) and noting that $S_{-\xi,\xi}(t|x)\,\delta(\xi - |x|) = S_{-|x|,|x|}(t|x)\,\delta(\xi - |x|) = 0$, we get

$$g_{\max}(\xi,t|x) = \frac{\partial S_{-\xi,\xi}(t|x)}{\partial\xi}\,\Theta(\xi - |x|), \qquad (23)$$

($\xi > 0$). In terms of the escape probability $W_{-\xi,\xi}$ this PDF can be written as

$$g_{\max}(\xi,t|x) = -\frac{\partial W_{-\xi,\xi}(t|x)}{\partial\xi}\,\Theta(\xi - |x|). \qquad (24)$$

Let us next evaluate the mean value of the absolute maximum, defined by

$$\langle \max|X(t)|\rangle_x = \int_0^\infty \xi\,g_{\max}(\xi,t|x)\,d\xi.$$

From Eq. (24) we have

$$\langle \max|X(t)|\rangle_x = -\int_{|x|}^\infty \xi\,\frac{\partial W_{-\xi,\xi}(t|x)}{\partial\xi}\,d\xi.$$

Integration by parts yields

$$\langle \max|X(t)|\rangle_x = |x| + \int_{|x|}^\infty W_{-\xi,\xi}(t|x)\,d\xi, \qquad (25)$$

where we have taken into account that $W_{-|x|,|x|}(t|x) = 1$ and made the reasonable assumption that the escape probability $W_{-\xi,\xi}$ decreases faster than $1/\xi$, that is, $\xi\,W_{-\xi,\xi} \to 0$ as $\xi \to \infty$.

Again, the moments of the maximum absolute value can be written as

$$\langle \max|X(t)|^n\rangle_x = |x|^n + n\int_{|x|}^\infty \xi^{n-1}\,W_{-\xi,\xi}(t|x)\,d\xi, \qquad (26)$$

($n = 1, 2, 3, \dots$). These moments exist as long as $W_{-\xi,\xi}$ decreases faster than $|\xi|^{-n}$ as $|\xi| \to \infty$.
We finally remark that obtaining the minimum absolute value is meaningless, for this value is not a random variable: it is always zero.
B. The range or span
The range or span (also termed "the oscillation") of a random process $X(t)$ over the time interval $(0,t)$ is defined as the difference between the maximum and the minimum:

$$R(t) = M(t) - m(t). \qquad (27)$$

This random quantity is characterized either by the distribution function,

$$F_R(r,t|x) = \mathrm{Prob}\{R(t) < r\,|\,X(0) = x\},$$

or by the PDF

$$f_R(r,t|x) = \frac{\partial}{\partial r} F_R(r,t|x). \qquad (28)$$

We can relate the span distribution to the escape problem out of a variable interval. This connection is a bit convoluted, and we show in Appendix A that

$$f_R(r,t|x) = \int_{x-r}^{x} \frac{\partial^2 S_{v,r+v}(t|x)}{\partial r^2}\,dv, \qquad (29)$$

($r > 0$), where $S_{v,r+v}(t|x)$ is the survival probability in the (variable) interval $(v, r+v)$.

Having the expression for the span PDF, we next address the mean span:

$$\langle R(t)\rangle_x = \int_0^\infty r\,f_R(r,t|x)\,dr. \qquad (30)$$

Unfortunately, the introduction of Eq. (29) into this definition leads to indeterminate boundary terms, as the reader can easily check. In Appendix B we present a way of avoiding these inconsistencies, and the final result reads

$$\langle R(t)\rangle_x = \int_{-\infty}^{\infty} \xi\,\frac{\partial S_\xi(t|x)}{\partial\xi}\,d\xi, \qquad (31)$$

where $S_\xi(t|x)$ is the survival probability up to threshold $\xi$. Let us incidentally note the curious fact that the complete probability distribution of the span is determined by the escape problem out of the variable interval $(v, v+r)$, where $x - r < v < x$, whereas the first moment of this distribution depends only on the first-passage problem to a varying threshold $-\infty < \xi < \infty$.

In terms of the hitting probability $W_\xi(t|x)$ the above expression for the mean span is greatly simplified. Indeed, substituting $S_\xi = 1 - W_\xi$ into Eq. (31), followed by an integration by parts, yields

$$\langle R(t)\rangle_x = -\int_{-\infty}^{\infty} \xi\,\frac{\partial W_\xi(t|x)}{\partial\xi}\,d\xi = -\xi\,W_\xi(t|x)\Big|_{\xi=-\infty}^{\xi=+\infty} + \int_{-\infty}^{\infty} W_\xi(t|x)\,d\xi.$$

However, $W_\xi \to 0$ as $\xi \to \pm\infty$ (i.e., crossing becomes impossible as the threshold grows). If, in addition, we assume that this decay is faster than $1/\xi$, i.e., $\xi W_\xi \to 0$ ($\xi \to \pm\infty$), we have

$$\langle R(t)\rangle_x = \int_{-\infty}^{\infty} W_\xi(t|x)\,d\xi. \qquad (32)$$

It is worth noticing that one can arrive at this expression in a more direct way. In effect, recalling the definition of the range as the difference between the maximum and the minimum, we have

$$\langle R(t)\rangle_x = \langle M(t)\rangle_x - \langle m(t)\rangle_x, \qquad (33)$$

and substituting Eqs. (9) and (16) we get

$$\langle R(t)\rangle_x = \int_x^{\infty} W_\xi(t|x)\,d\xi + \int_{-\infty}^{x} W_\xi(t|x)\,d\xi,$$

which is Eq. (32).

There are no simple expressions, besides Eq. (32), for the higher moments of the span as there are for the other extremes. In the present case the moments have to be evaluated from their definition and the use of Eq. (29):

$$\langle R^n(t)\rangle_x = \int_0^\infty r^n f_R(r,t|x)\,dr = \int_0^\infty r^n\,dr \int_{x-r}^{x} \frac{\partial^2 S_{v,r+v}(t|x)}{\partial r^2}\,dv.$$

This is quite unfortunate because the evaluation of the span moments becomes a complicated business, even numerically. The reason for not having a more convenient expression lies in the fact that maxima and minima are generally correlated quantities, and these correlations appear in all moments higher than the first.
IV. THE WIENER PROCESS
We now illustrate the expressions obtained above by reviewing one of the simplest, albeit very relevant, cases: the Wiener process or free Brownian motion, a diffusion process with zero drift and constant diffusion coefficient. Although some results related to first passage and extremes for Brownian motion can be traced as far back as Bachelier, Lévy and Feller [18], many results are found scattered in the mathematics and physics literature [18,19]. It is, therefore, useful to have a summary of the main results about the extreme values of the Wiener process.
A. The maximum and the minimum
The first-passage probability $W_c(t|x)$ to some threshold $x_c$ is determined by the solution of the FPE (2)-(3) with zero drift and constant diffusion coefficient $D$ which, in terms of the time Laplace transform $\hat W_c(s|x)$, reads

$$\frac{d^2\hat W_c}{dx^2} = \frac{2s}{D}\,\hat W_c, \qquad \hat W_c(s|x_c) = \frac{1}{s}. \qquad (34)$$

The solution to this problem that is finite for both $x > x_c$ and $x < x_c$ is straightforward and reads

$$\hat W_c(s|x) = \frac{1}{s}\exp\left(-\sqrt{\frac{2s}{D}}\,|x - x_c|\right).$$

Laplace inversion yields [23]

$$W_c(t|x) = \mathrm{Erfc}\left(\frac{|x - x_c|}{\sqrt{2Dt}}\right), \qquad (35)$$

where $\mathrm{Erfc}(z)$ is the complementary error function. The PDF of the maximum value is then given by Eq. (6) or, equivalently, by

$$\varphi_{\max}(\xi,t|x) = -\frac{\partial W_\xi(t|x)}{\partial\xi}\,\Theta(\xi - x),$$

which results in the following truncated Gaussian density:

$$\varphi_{\max}(\xi,t|x) = \left(\frac{2}{\pi Dt}\right)^{1/2} e^{-(\xi-x)^2/2Dt}\,\Theta(\xi - x). \qquad (36)$$

The mean maximum is then given by (cf. Eqs. (7) or (9))

$$\langle M(t)\rangle_x = x + \left(\frac{2Dt}{\pi}\right)^{1/2}. \qquad (37)$$

Likewise, the PDF of the minimum value is given by (cf. Eq. (13))

$$\varphi_{\min}(\xi,t|x) = \left(\frac{2}{\pi Dt}\right)^{1/2} e^{-(x-\xi)^2/2Dt}\,\Theta(x - \xi), \qquad (38)$$

and the mean minimum reads

$$\langle m(t)\rangle_x = x - \left(\frac{2Dt}{\pi}\right)^{1/2}. \qquad (39)$$

Notice that both extreme values grow like $t^{1/2}$ as $t \to \infty$, the otherwise typical behavior of the Wiener process. These results can be generalized to include any moment of the maximum and the minimum. By combining Eqs. (10) and (36) we easily see that

$$\langle M^n(t)\rangle_x = \frac{1}{\sqrt\pi}\sum_{k=0}^{n}\binom{n}{k}\,\Gamma\!\left(\frac{k+1}{2}\right)(2Dt)^{k/2}\,x^{n-k}, \qquad (40)$$

($n = 1, 2, 3, \dots$). Following an analogous reasoning we can show that the moments of the minimum are

$$\langle m^n(t)\rangle_x = \frac{1}{\sqrt\pi}\sum_{k=0}^{n}(-1)^k\binom{n}{k}\,\Gamma\!\left(\frac{k+1}{2}\right)(2Dt)^{k/2}\,x^{n-k}, \qquad (41)$$

($n = 1, 2, 3, \dots$). With increasing $n$ these expressions become rather clumsy. We can get, however, simpler expressions if, instead of the maximum or the minimum, we consider their "distance" from the initial position, defined by $M(t) - x$ in the case of the maximum and by $x - m(t)$ for the minimum. We have

$$\big\langle [M(t) - x]^n \big\rangle_x = \big\langle [x - m(t)]^n \big\rangle_x = \frac{1}{\sqrt\pi}\,\Gamma\!\left(\frac{n+1}{2}\right)(2Dt)^{n/2}. \qquad (42)$$
Both distances are equal showing the otherwise obvious symmetry of the process.
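As a quick numerical illustration of Eqs. (37) and (39) (a sketch that is not part of the original paper; all parameter values are arbitrary), the following Python snippet simulates free Brownian paths with the convention Var[X(t)] = Dt used here and compares the sample means of the running maximum and minimum with x +/- (2Dt/pi)^{1/2}; the discrete-time extremes slightly underestimate the continuous ones, so a fine time grid is used.

import numpy as np

# Monte Carlo check of Eqs. (37) and (39) for the Wiener process.
rng = np.random.default_rng(0)
D, x0, t = 1.0, 0.0, 4.0
n_steps, n_paths = 2000, 50000
dt = t / n_steps

x = np.full(n_paths, x0)
run_max, run_min = x.copy(), x.copy()
for _ in range(n_steps):
    x = x + rng.normal(0.0, np.sqrt(D * dt), n_paths)
    np.maximum(run_max, x, out=run_max)   # running maximum M(t)
    np.minimum(run_min, x, out=run_min)   # running minimum m(t)

print("<M(t)> MC      :", run_max.mean())
print("<M(t)> Eq.(37) :", x0 + np.sqrt(2 * D * t / np.pi))
print("<m(t)> MC      :", run_min.mean())
print("<m(t)> Eq.(39) :", x0 - np.sqrt(2 * D * t / np.pi))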
B. The maximum absolute value
As shown in the previous section, in order to characterize both the maximum absolute value and the span we need to know the escape probability, $W_{a,b}(t|x)$, out of an interval $(a,b)$. For the maximum absolute value the interval is symmetric, while for the span it is asymmetric.
The Laplace transform of the exit probability obeys the same equation as that of the first-passage probability, Eq. (34), but with two boundary points:

$$\hat W_{a,b}(s|a) = \hat W_{a,b}(s|b) = \frac{1}{s}.$$

The solution to this problem is

$$\hat W_{a,b}(s|x) = \frac{\cosh\big[\sqrt{2s/D}\,\big(x - (a+b)/2\big)\big]}{s\,\cosh\big[\sqrt{2s/D}\,(a-b)/2\big]}. \qquad (43)$$

The Laplace transform can be easily inverted [23]. In the case of a symmetric interval $(-\xi,\xi)$ the inverse transform is somewhat simpler, yielding [18,23]

$$W_{-\xi,\xi}(t|x) = 1 - \frac{2}{\pi}\sum_{n=0}^{\infty}\frac{(-1)^n}{n + 1/2}\,e^{-D(n+1/2)^2\pi^2 t/(2\xi^2)}\cos\big[(n+1/2)\pi x/\xi\big]. \qquad (44)$$
The PDF of the maximum absolute value, $g_{\max}(\xi,t|x)$, is readily obtained by introducing Eq. (44) into Eq. (24) (we will not write this expression). Likewise, the mean absolute maximum can be obtained from this form of the escape probability after substituting it into Eq. (25). The resulting expression is given by complicated infinite sums of exponential functions of little practical use, since from it it is hard to figure out the asymptotic time behavior of the average. It turns out to be more efficient to proceed from the Laplace transform of the average. We thus define $\hat\mu(s|x) = \mathcal{L}\{\langle\max|X(t)|\rangle_x\}$ as the (time) Laplace transform of the mean absolute maximum. Transforming Eq. (25) yields

$$\hat\mu(s|x) = \frac{1}{s}|x| + \int_{|x|}^{\infty}\hat W_{-\xi,\xi}(s|x)\,d\xi.$$

Plugging in Eq. (43) we see that the resulting integrals can be done in closed form, and we write

$$\hat\mu(s|x) = \frac{1}{s}|x| + \frac{\sqrt{2D}}{s^{3/2}}\cosh\left(x\sqrt{2s/D}\right)\left[\frac{\pi}{2} - \arctan\left(e^{x\sqrt{2s/D}}\right)\right]. \qquad (45)$$

We now use this exact expression for the asymptotic analysis of the mean because, as Tauberian theorems prove [24], the long-time behavior of the mean is determined by the small-$s$ behavior of its Laplace transform. It is a matter of simple algebra to show that as $s \to 0$ we have

$$\hat\mu(s|x) = \frac{1}{s}|x| + \frac{\pi}{4}\frac{\sqrt{2D}}{s^{3/2}} + O\left(\frac{1}{s^{1/2}}\right),$$

which after Laplace inversion yields the asymptotic form of the mean absolute maximum:

$$\langle\max|X(t)|\rangle_x \simeq |x| + \left(\frac{\pi Dt}{2}\right)^{1/2} + O\left(\frac{1}{t^{1/2}}\right), \qquad (46)$$

showing again the $t^{1/2}$ growth.
C. The span
Let us finally describe the span of the Wiener process. As before, we better work with Laplace transforms. Thus from Eq. (29) we write

$$\hat f_R(r,s|x) = -\frac{\partial^2}{\partial r^2}\int_{x-r}^{x}\hat W_{v,r+v}(s|x)\,dv, \qquad (r > 0),$$

where the escape probability $\hat W_{v,r+v}(s|x)$ is given by Eq. (43) (note that the second derivative can be pulled out of the integral because the lower limit is linear in $r$). For the Wiener process the integral above can be done in closed form, yielding

$$\hat f_R(r,s|x) = -(2D)^{1/2}\,\frac{\partial^2}{\partial r^2}\left[\frac{1}{s^{3/2}}\tanh\!\left(\Big(\frac{s}{2D}\Big)^{1/2} r\right)\right]. \qquad (47)$$

The Laplace transform of the mean span is then given by

$$\mathcal{L}\{\langle R(t)\rangle_x\} = \int_0^\infty r\,\hat f_R(r,s|x)\,dr = -(2D)^{1/2}\int_0^\infty r\,\frac{\partial^2}{\partial r^2}\left[\frac{1}{s^{3/2}}\tanh\!\left(\Big(\frac{s}{2D}\Big)^{1/2} r\right)\right] dr.$$

Integration by parts yields

$$\mathcal{L}\{\langle R(t)\rangle_x\} = \frac{(2D)^{1/2}}{s^{3/2}},$$

and after inversion we get the exact result

$$\langle R(t)\rangle_x = 2\left(\frac{2Dt}{\pi}\right)^{1/2}, \qquad (48)$$

which is, of course, the difference between the mean maximum (37) and the mean minimum (39) (see Eq. (33)). An interesting fact to note is that the long-time ratio between the mean absolute maximum (46) and the mean span is fixed and given by

$$\lim_{t\to\infty}\frac{\langle\max|X(t)|\rangle_x}{\langle R(t)\rangle_x} = \frac{\pi}{4},$$

which means that at long times the mean maximum absolute value is always smaller than the mean span.
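The mean span (48) and the pi/4 ratio can be checked with the same kind of Monte Carlo sketch (again illustrative only and not from the paper; for x = 0 every mean scales as t^{1/2}, so the ratio is t-independent and even moderate times reproduce the limit):

import numpy as np

# Monte Carlo check of Eq. (48) and of the ratio <max|X|>/<R> -> pi/4.
rng = np.random.default_rng(1)
D, t = 1.0, 4.0
n_steps, n_paths = 2000, 50000
dt = t / n_steps

x = np.zeros(n_paths)                      # paths started at the origin
run_max, run_min, run_abs = x.copy(), x.copy(), np.abs(x)
for _ in range(n_steps):
    x = x + rng.normal(0.0, np.sqrt(D * dt), n_paths)
    np.maximum(run_max, x, out=run_max)
    np.minimum(run_min, x, out=run_min)
    np.maximum(run_abs, np.abs(x), out=run_abs)

span = run_max - run_min
print("<R(t)> MC      :", span.mean())
print("<R(t)> Eq.(48) :", 2 * np.sqrt(2 * D * t / np.pi))
print("<max|X|>/<R>   :", run_abs.mean() / span.mean())
print("pi/4           :", np.pi / 4)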
V. EXTREMES OF THE FELLER PROCESS
The Feller process is another example of a diffusion process, having linear drift and a linear diffusion coefficient vanishing at the origin [25]. The process has been applied not only to the modeling of socio-economic systems (the CIR-Heston model [26]) but also in theoretical biology, for instance in population dynamics and neuron-firing processes [27,28]. It has recently been applied to reproduce cholera epidemics as a susceptible-infected-recovered model [29]. It is also a significant model for single-neuron dynamics, where functionals of the first-passage time are employed to characterize the parameters of the model [30,31].
The process is governed by a stochastic differential equation which in non-dimensional units (see [22]) can be written as
$$dX(t) = -[X(t) - \theta]\,dt + \sqrt{2X(t)}\,dW(t), \qquad (49)$$
where $W(t)$ is the Wiener process and $\theta > 0$ is a dimensionless parameter, called the saturation or normal level, representing the value to which $X(t)$ is attracted. This parameter has a key role in the behavior of the process, for it is related to the important question of the possibility of reaching the origin (which, for instance, in population dynamics would imply extinction [32]). Indeed, if $\theta \le 1$ the probability of reaching the origin is greater than zero and $x = 0$ is an accessible boundary. On the other hand, if $\theta > 1$ such a probability is zero, which renders the origin inaccessible (see [22] for a simple proof and more details).
The linear drift $f(x) = -(x - \theta)$ drives the process towards the level $\theta$, a deterministic pull which is increased near the origin where the noise term is very small. In effect, the state-dependent diffusion coefficient $D(x) = 2x$ enhances the effect of noise for large values of $x$, while as $x$ goes to zero this effect vanishes. Therefore, when the process reaches the origin the drift drags it towards $\theta$, and since $\theta$ is positive the process always remains positive. The very fact that $X(t)$ never attains negative values makes the process a suitable candidate for modeling a number of phenomena in the natural and social sciences [22].
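A minimal simulation sketch of Eq. (49) may help visualize this behavior (illustrative only; the full-truncation Euler scheme below, which clips the discretisation's negative excursions inside the square root, is one standard way to respect the positivity of the exact process; parameter values are arbitrary):

import numpy as np

# Full-truncation Euler scheme for dX = -(X - theta) dt + sqrt(2X) dW.
rng = np.random.default_rng(2)
theta, x0, t = 2.0, 0.5, 10.0
n_steps, n_paths = 10000, 5000
dt = t / n_steps

x = np.full(n_paths, x0)
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)
    x = x - (x - theta) * dt + np.sqrt(2.0 * np.maximum(x, 0.0)) * dw
    x = np.maximum(x, 0.0)   # keep the state non-negative

print("sample mean:", x.mean())   # relaxes towards the saturation level theta
print("sample min :", x.min())    # never negative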
We now study the extreme values attained by the Feller process. We will basically obtain expressions for the maximum and minimum values because, due to the positive character of the process, extremes such as the maximum absolute value coincide with the maximum.
For $X(t)$ described by Eq. (49) the first-passage probability to some threshold $\xi$ is the solution of the Fokker-Planck equation (cf. Eqs. (2)-(3))

$$\partial_t W_\xi(t|x) = -(x - \theta)\,\partial_x W_\xi(t|x) + x\,\partial^2_{xx} W_\xi(t|x), \qquad (50)$$

with initial and boundary conditions given by

$$W_\xi(0|x) = 0, \qquad W_\xi(t|\xi) = 1. \qquad (51)$$

We have recently proved that the solution of this problem for the time Laplace transform of $W_\xi$ is given by [22]

$$\hat W_\xi(s|x) = \begin{cases} \dfrac{F(s,\theta,x)}{s\,F(s,\theta,\xi)}, & \xi \ge x, \\[2mm] \dfrac{U(s,\theta,x)}{s\,U(s,\theta,\xi)}, & \xi \le x, \end{cases} \qquad (52)$$
where F and U are confluent hypergeometric (Kummer) functions of first and second kind respectively [33].
A. The maximum
The distribution function of the maximum is related to the survival probability $S_\xi(t|x)$ by Eq. (5), which we write in terms of the hitting probability $W_\xi(t|x)$ as

$$\Phi_{\max}(\xi,t|x) = \big[1 - W_\xi(t|x)\big]\,\Theta(\xi - x).$$

In terms of $W_\xi$ the mean maximum is given by Eq. (9):

$$\langle M(t)\rangle_x = x + \int_x^{\infty} W_\xi(t|x)\,d\xi.$$

Looking at Eq. (52) we see that for the Feller process the time Laplace transforms of the distribution function and of the mean are respectively given by

$$\hat\Phi_{\max}(\xi,s|x) = \frac{1}{s}\left[1 - \frac{F(s,\theta,x)}{F(s,\theta,\xi)}\right]\Theta(\xi - x) \qquad (53)$$

and

$$\hat M(s|x) = \frac{1}{s}\left[x + F(s,\theta,x)\int_x^{\infty}\frac{d\xi}{F(s,\theta,\xi)}\right], \qquad (54)$$

where $\hat M(s|x) = \mathcal{L}\{\langle M(t)\rangle_x\}$ is the time Laplace transform of the mean maximum. The PDF of the maximum is readily obtained by taking the derivative with respect to $\xi$ of the distribution function (53). We have

$$\hat\varphi_{\max}(\xi,s|x) = \frac{F(s,\theta,x)\,F'(s,\theta,\xi)}{s\,F^2(s,\theta,\xi)}\,\Theta(\xi - x), \qquad (55)$$

where [33]

$$F'(s,\theta,\xi) = \frac{d}{d\xi}F(s,\theta,\xi) = \frac{s}{\theta}\,F(s+1,\theta+1,\xi). \qquad (56)$$
Unfortunately the analytical inversion of these expressions to get their values in real time seems to be beyond reach, even though numerical inversion is always possible. We will find, nonetheless, some approximations that may be appropriate in practical cases.
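Numerical inversion is indeed straightforward with standard tools. The sketch below (an illustration under stated assumptions, not part of the paper) evaluates Eq. (52) for xi >= x with mpmath, whose hyp1f1 accepts the complex first argument required by the Talbot inversion contour; parameter values are arbitrary and the working precision may need tuning:

import mpmath as mp

# Invert the Laplace transform W_xi(s|x) = F(s,theta,x)/(s F(s,theta,xi))
# of Eq. (52), valid for xi >= x, at a few times t.
mp.mp.dps = 25
theta, x, xi = mp.mpf(2), mp.mpf(1), mp.mpf(3)

W_hat = lambda s: mp.hyp1f1(s, theta, x) / (s * mp.hyp1f1(s, theta, xi))

for t in (0.5, 1, 2, 5, 10):
    print(t, mp.invertlaplace(W_hat, t, method='talbot'))
    # the hitting probability increases monotonically towards 1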
Let us first show that, like for Brownian motion, the mean maximum value of the Feller process diverges as $t \to \infty$. One might have thought that since, unlike Brownian motion, the Feller process possesses a force drifting it towards the value $\theta$, the mean maximum would tend to a finite value (not far from $\theta$) as time increases. Let us show that this is not the case. Indeed, recalling the following property of the Laplace transform [24],

$$\lim_{t\to\infty} f(t) = \lim_{s\to 0}\big[s\hat f(s)\big], \qquad (57)$$

and the value of the Kummer function $F(s=0,\theta,z) = 1$ [33], we see that the limit $s \to 0$ in (54) leads to

$$\lim_{s\to 0} s\hat M(s|x) = x + \int_x^{\infty} d\xi = \infty.$$

Whence

$$\langle M(t)\rangle_x \to \infty, \qquad (t\to\infty), \qquad (58)$$

and the mean maximum diverges as time increases. We next refine this asymptotic behavior. As is well known [1-3], the long-time expressions of first-passage probabilities are related to the mean first-passage time by (see also [22] for a simple derivation)

$$W_\xi(t|x) \simeq 1 - e^{-t/T_\xi(x)}, \qquad (t\to\infty), \qquad (59)$$

where $T_\xi(x)$ is the mean first-passage time to threshold $\xi$ starting from $x$. Obviously this asymptotic expression is valid as long as the mean first-passage time exists, which is not always the case. Thus, for instance, for the Wiener process $T_\xi(x) = \infty$ and the approximation given by Eq. (59) is meaningless. For the Feller process this time exists and, as we have proved in [22], reads

$$T_\xi(x) = \begin{cases} \dfrac{1}{\theta}\displaystyle\int_x^{\xi} F(1,1+\theta,z)\,dz, & \xi > x, \\[2mm] \displaystyle\int_\xi^{x} U(1,1+\theta,z)\,dz, & \xi < x. \end{cases} \qquad (60)$$
If the mean first-passage time exists, the distribution function of the maximum and its mean are, as $t\to\infty$, approximately given by

$$\Phi_{\max}(\xi,t|x) \simeq e^{-t/T_\xi(x)}\,\Theta(\xi - x) \qquad (61)$$

and

$$\langle M(t)\rangle_x \simeq x + \int_x^{\infty}\Big[1 - e^{-t/T_\xi(x)}\Big]\,d\xi, \qquad (62)$$

where here

$$T_\xi(x) = \frac{1}{\theta}\int_x^{\xi} F(1,1+\theta,z)\,dz,$$

since the maximum is always greater than the initial point ($\xi > x$). Note that $1 - e^{-t/T_\xi(x)} \to 0$ as $\xi\to\infty$, because the mean first-passage time to an infinite threshold is infinite, so that the integral in Eq. (62) converges [34]. Equation (62) is a compact expression that may be suitable for the numerical evaluation of the mean maximum for large values of time. As far as I can see it is, however, of little use for further analytical approximations.
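For concreteness, a possible numerical evaluation of Eq. (62) in Python (a sketch under stated assumptions: scipy's hyp1f1 for the Kummer function, an ad hoc truncation of the upper integration limit, and arbitrary parameter values):

import numpy as np
from scipy.special import hyp1f1
from scipy.integrate import quad

theta, x, t = 2.0, 1.0, 5.0

def T(xi):
    # mean first-passage time, Eq. (60), for xi > x
    val, _ = quad(lambda z: hyp1f1(1.0, 1.0 + theta, z), x, xi)
    return val / theta

integrand = lambda xi: 1.0 - np.exp(-t / T(xi))
tail, _ = quad(integrand, x + 1e-8, 60.0, limit=200)   # truncated upper limit
print("<M(t)>_x, long-time approximation:", x + tail)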
Let us thus obtain another asymptotic expression for the mean maximum, valid for large values of the initial position $x$. Our starting point is the time Laplace transform of the mean maximum given by Eq. (54). Assuming $x\to\infty$, we can use the approximation [33]

$$F(s,\theta,x) = \frac{\Gamma(\theta)}{\Gamma(s)}\,e^x x^{s-\theta}\Big[1 + O\big(x^{-1}\big)\Big], \qquad (63)$$

and since $\xi > x$, $\xi$ is also large and an analogous expression holds for $F(s,\theta,\xi)$. Substituting both approximations into Eq. (54) we get, as $x\to\infty$,

$$\hat M(s|x) \simeq \frac{1}{s}\left[x + e^x x^{s-\theta}\int_x^{\infty} e^{-\xi}\,\xi^{\theta-s}\,d\xi\right].$$

But the integral can be written in terms of the incomplete Gamma function $\Gamma(1+\theta-s, x)$, and within the same approximation we have [33]

$$\int_x^{\infty} e^{-\xi}\,\xi^{\theta-s}\,d\xi = \Gamma(1+\theta-s, x) \simeq e^{-x} x^{\theta-s}\Big[1 + O\big(x^{-1}\big)\Big].$$

Substituting into the previous equation yields $\hat M(s|x) \simeq (x+1)/s + O(x^{-1})$, which after Laplace inversion results in the simple asymptotic approximation

$$\langle M(t)\rangle_x \simeq x + 1 + O\big(x^{-1}\big). \qquad (64)$$

Despite its appeal, this approximation merely means that the mean maximum value grows at the same pace as the starting value does, as can otherwise be seen from Eq. (9).
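Equation (64) is easy to probe numerically; e.g., the following mpmath sketch (illustrative, with an ad hoc truncation of the integral in Eq. (54)) evaluates s times the transform of the mean maximum for growing x and shows it approaching x + 1:

import mpmath as mp

mp.mp.dps = 15
theta, s = mp.mpf(2), mp.mpf('0.5')

def s_times_M_hat(x):
    # s * M_hat(s|x) = x + F(s,theta,x) * int_x^inf dxi / F(s,theta,xi)
    F = lambda z: mp.hyp1f1(s, theta, z)
    integral = mp.quad(lambda xi: 1 / F(xi), [x, x + 50])   # 1/F decays fast
    return x + F(x) * integral

for x in (5, 10, 20):
    print(x, s_times_M_hat(mp.mpf(x)))   # tends to x + 1 as x grows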
B. The minimum
We recall from Sec. II that in terms of the hitting probability the distribution function of the minimum is (see Eq. (12))

$$\Phi_{\min}(\xi,t|x) = \Theta(\xi - x) + W_\xi(t|x)\,\Theta(x - \xi).$$

The mean minimum is given by Eq. (16) where, due to the positive character of the Feller process, we replace $-\infty$ in the lower limit of integration by $0$:

$$\langle m(t)\rangle_x = x - \int_0^x W_\xi(t|x)\,d\xi. \qquad (65)$$

Taking into account Eq. (52), the time Laplace transforms of these quantities read

$$\hat\Phi_{\min}(\xi,s|x) = \frac{1}{s}\left[\Theta(\xi - x) + \frac{U(s,\theta,x)}{U(s,\theta,\xi)}\,\Theta(x - \xi)\right] \qquad (66)$$

and

$$\hat m(s|x) = \frac{1}{s}\left[x - U(s,\theta,x)\int_0^x \frac{d\xi}{U(s,\theta,\xi)}\right], \qquad (67)$$

where $\hat m(s|x)$ is the time Laplace transform of the mean minimum and the $U$'s are Kummer functions of the second kind [33].

Taking the $\xi$-derivative of Eq. (66) we get the PDF of the minimum:

$$\hat\varphi_{\min}(\xi,s|x) = -\frac{U(s,\theta,x)\,U'(s,\theta,\xi)}{s\,U^2(s,\theta,\xi)}\,\Theta(x - \xi), \qquad (68)$$

where [33]

$$U'(s,\theta,\xi) = \frac{d}{d\xi}U(s,\theta,\xi) = -s\,U(s+1,\theta+1,\xi). \qquad (69)$$

Starting from Eq. (67) and using the property given in Eq. (57) we can obtain the limiting value of the mean minimum when $t\to\infty$. We begin with the relationship between the Kummer functions $U$ and $F$ [33]:

$$U(s,\theta,x) = \frac{\Gamma(1-\theta)}{\Gamma(1+s-\theta)}\,F(s,\theta,x) + \frac{\Gamma(\theta-1)}{\Gamma(s)}\,x^{1-\theta}\,F(1+s-\theta,2-\theta,x). \qquad (70)$$

Recalling that as $s\to 0$, $F(s,\theta,x)\to 1$ and $\Gamma(s)\to\infty$, we see that $U(s,\theta,x)\to 1$. Hence

$$\langle m(t)\rangle_x \to 0, \qquad (t\to\infty). \qquad (71)$$
The mean minimum thus converges to the origin as time increases. We next refine this crude estimate for large, but finite, values of time. When $t\to\infty$, and after using the asymptotic form of the hitting probability given in Eq. (59), we get

$$\Phi_{\min}(\xi,t|x) \simeq \Big[1 - e^{-t/T_\xi(x)}\Big]\,\Theta(x - \xi), \qquad (t\to\infty), \qquad (72)$$

where $T_\xi(x)$ is the MFPT to threshold $\xi$, which for $\xi < x$ is given by (cf. Eq. (60))

$$T_\xi(x) = \int_\xi^{x} U(1,1+\theta,z)\,dz, \qquad (\xi < x).$$

Substituting Eq. (59) into Eq. (65) we find the following long-time approximation for the mean minimum:

$$\langle m(t)\rangle_x \simeq \int_0^x e^{-t/T_\xi(x)}\,d\xi, \qquad (t\to\infty). \qquad (73)$$
As with the long-time behavior of the maximum value discussed above, these asymptotic expressions for the minimum are more appropriate for numerical evaluation than for obtaining further practical analytical approximations.

We will find, nonetheless, approximations of the mean minimum when the initial value $x$ is small and close to the origin. Our starting point is the expression for the Laplace transform of the mean minimum given in Eq. (67). We next assume that $x$ is small; then from Eq. (70) and the fact that $F(a,b,x) = 1 + O(x)$ [33] we write

$$U(s,\theta,x) = \frac{\Gamma(1-\theta)}{\Gamma(1+s-\theta)}\Big[1 + O(x)\Big] + \frac{\Gamma(\theta-1)}{\Gamma(s)}\,x^{1-\theta}\Big[1 + O(x)\Big]. \qquad (74)$$

Note that the leading term in this expansion depends on whether $\theta > 1$ or $\theta < 1$. We therefore distinguish the two cases:

(i) $\theta > 1$ (recall that in this case the origin is unattainable by the dynamical evolution of the process [22]). Now Eq. (74) yields the approximation

$$U(s,\theta,x) \simeq \frac{\Gamma(\theta-1)}{\Gamma(s)}\,x^{1-\theta}\Big[1 + O(x)\Big]. \qquad (75)$$

Since the integral in Eq. (67) runs from $\xi = 0$ to $\xi = x$, when $x$ is small $\xi$ is also small. We can thus use approximation (75) for $U(s,\theta,\xi)$ inside the integral and write

$$\int_0^x \frac{d\xi}{U(s,\theta,\xi)} \simeq \frac{\Gamma(s)}{\Gamma(\theta-1)}\int_0^x \xi^{\theta-1}\,d\xi = \frac{\Gamma(s)}{\Gamma(\theta-1)}\,\frac{x^\theta}{\theta}.$$

Plugging this approximation along with Eq. (75) into Eq. (67) we get $\hat m(s|x) \simeq x(1 - 1/\theta)/s$, which after Laplace inversion yields

$$\langle m(t)\rangle_x \simeq \left(1 - \frac{1}{\theta}\right)x, \qquad (x\to 0). \qquad (76)$$

(ii) $\theta < 1$ (the origin is attainable [22]). In this case Eq. (74) provides the following consistent expansion:

$$U(s,\theta,x) = \frac{\Gamma(1-\theta)}{\Gamma(1+s-\theta)} + \frac{\Gamma(\theta-1)}{\Gamma(s)}\,x^{1-\theta} + O(x). \qquad (77)$$

Substituting this into the integral in Eq. (67), expanding the denominator to the lowest order in $\xi$ (recall that $\xi < x$ is small when $x$ is small) and integrating, we obtain

$$\int_0^x \frac{d\xi}{U(s,\theta,\xi)} = \frac{\Gamma(1+s-\theta)}{\Gamma(1-\theta)}\,x + O\big(x^{2-\theta}\big). \qquad (78)$$

In order to proceed further it is more convenient to use an integral representation for the Kummer function $U(s,\theta,x)$ (which multiplies the integral in Eq. (67)) instead of the expansion (77). Thus, taking into account the transformation formula [33]

$$U(s,\theta,x) = x^{1-\theta}\,U(s+1-\theta,\,2-\theta,\,x),$$

and using the integral representation [33]

$$U(a,b,x) = \frac{1}{\Gamma(a)}\int_0^\infty e^{-xz}\,z^{a-1}(1+z)^{b-a-1}\,dz,$$

we get

$$U(s,\theta,x) = \frac{x^{1-\theta}}{\Gamma(1+s-\theta)}\int_0^\infty e^{-xz}\,z^{-\theta}\left(\frac{z}{1+z}\right)^s dz. \qquad (79)$$

Substituting Eqs. (78) and (79) into Eq. (67) results in the following approximate expression for the Laplace transform of the mean minimum:

$$\hat m(s|x) = \frac{1}{s}\left[x - \frac{x^{2-\theta}}{\Gamma(1-\theta)}\int_0^\infty e^{-xz}\,z^{-\theta}\left(\frac{z}{1+z}\right)^s dz\right] + O\big(x^{3-2\theta}\big). \qquad (80)$$

In Appendix C we invert this equation and obtain the power law

$$\langle m(t)\rangle_x \simeq A(t)\,x^{2-\theta}, \qquad (x\to 0), \qquad (81)$$

where

$$A(t) = \frac{1}{\Gamma(2-\theta)}\left(\frac{e^{-t}}{1 - e^{-t}}\right)^{1-\theta}. \qquad (82)$$
We finally note that $\theta < 1$ implies $2 - \theta > 1$, so the mean minimum (81) decays more sharply than the linear law (76), the latter applicable when $\theta > 1$. This is a somewhat intuitive and interesting behavior, meaning that when the process starts near the origin the average minimum tends faster to $x = 0$ if the boundary is accessible than otherwise.
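A crude Monte Carlo check of the two small-x laws, Eq. (76) and Eqs. (81)-(82), can be made with the same Euler scheme sketched earlier in this section (illustrative only: near the boundary the scheme is inaccurate, especially for theta < 1, so order-of-magnitude agreement is the most one should expect; all values are arbitrary):

import numpy as np
from math import gamma

rng = np.random.default_rng(3)

def mc_mean_min(theta, x0, t, n_steps=5000, n_paths=20000):
    dt = t / n_steps
    x = np.full(n_paths, x0)
    running_min = x.copy()
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        x = np.maximum(x - (x - theta) * dt
                       + np.sqrt(2.0 * np.maximum(x, 0.0)) * dw, 0.0)
        np.minimum(running_min, x, out=running_min)
    return running_min.mean()

t, x0 = 1.0, 0.05
for theta in (2.0, 0.5):
    mc = mc_mean_min(theta, x0, t)
    if theta > 1:
        approx = (1.0 - 1.0 / theta) * x0                              # Eq. (76)
    else:
        A = (np.exp(-t) / (1.0 - np.exp(-t))) ** (1.0 - theta) / gamma(2.0 - theta)
        approx = A * x0 ** (2.0 - theta)                               # Eqs. (81)-(82)
    print("theta =", theta, " MC:", mc, " small-x law:", approx)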
C. The span
As shown in Sec. III, the PDF of the range or span is given by Eq. (29) which, in terms of the escape probability and after taking the Laplace transform with respect to time, reads

$$\hat f_R(r,s|x) = -\int_{x-r}^{x}\frac{\partial^2 \hat W_{v,r+v}(s|x)}{\partial r^2}\,dv. \qquad (83)$$

We have proved elsewhere [22] that for the Feller process the Laplace transform of the escape probability is given by

$$\hat W_{v,v+r}(s|x) = \frac{\big[U(s,\theta,v+r) - U(s,\theta,v)\big]\,F(s,\theta,x) - \big[F(s,\theta,v+r) - F(s,\theta,v)\big]\,U(s,\theta,x)}{s\,\big[F(s,\theta,v)\,U(s,\theta,v+r) - F(s,\theta,v+r)\,U(s,\theta,v)\big]}. \qquad (84)$$

Unfortunately, the introduction of Eq. (84) into Eq. (83) does not lead to an expression amenable to further analytical simplification, being suitable only for numerical work.
The mean span is simpler because we only need to know the hitting probability instead of the escape probability. Thus, substituting Eq. (52) into the Laplace transform of Eq. (32), we get

$$\hat R(s|x) = \frac{1}{s}\left[U(s,\theta,x)\int_0^x \frac{d\xi}{U(s,\theta,\xi)} + F(s,\theta,x)\int_x^{\infty}\frac{d\xi}{F(s,\theta,\xi)}\right], \qquad (85)$$

where $\hat R(s|x)$ is the Laplace transform of the mean span,

$$\hat R(s|x) = \int_0^\infty e^{-st}\,\langle R(t)\rangle_x\,dt.$$
Note that the analytical simplifications carried out for the maximum and the minimum are of no use here, for when x is small we can obtain a simpler expression for the first integral but not for the second, while when x is large the situation is reversed. A similar difficulty arises when t → ∞. We, therefore, conclude that Eq. (85) seems to be only appropriate for numerical work.
VI. SUMMARY OF MAIN RESULTS AND CLOSING REMARKS
We have reviewed the relationship between levelcrossing problems and the distribution of extreme values for continuous-time random processes. We have compiled and rederived in a simpler way many general results which would remain otherwise scattered in the literature. We have applied them to the Wiener and Feller processes; the latter, we believe, for the first time.
Let us recall that level-crossing problems are solved when one knows the hitting probability (in first-passage problems) or the exit probability (in escape problems). We have denoted these probabilities by $W_\xi(t|x)$ and $W_{a,b}(t|x)$, respectively. In both cases $x$ is the initial value of the process, whereas $\xi$ is the threshold, or critical value, and $(a,b)$ is the exit interval. For one-dimensional diffusion processes characterized by drift $f(x)$ and diffusion coefficient $D(x)$, both probabilities satisfy the FPE

$$\partial_t W(t|x) = f(x)\,\partial_x W(t|x) + \tfrac{1}{2}D(x)\,\partial^2_{xx} W(t|x),$$

with initial condition $W(0|x) = 0$. The boundary conditions are $W_\xi(t|\xi) = 1$ (first passage) or $W_{a,b}(t|a) = W_{a,b}(t|b) = 1$ (escape).

We denote by $M(t|x)$ and $m(t|x)$ the maximum and minimum values attained by the process during the time span $(0,t)$, starting at $x$ at $t = 0$. The PDFs of these random quantities are respectively given by

$$\varphi_{\max}(\xi,t|x) = -\frac{\partial W_\xi(t|x)}{\partial\xi}\,\Theta(\xi - x) \qquad\text{and}\qquad \varphi_{\min}(\xi,t|x) = \frac{\partial W_\xi(t|x)}{\partial\xi}\,\Theta(x - \xi),$$

where $\varphi_{\max}(\xi,t|x)\,d\xi = \mathrm{Prob}\{\xi < M(t) < \xi + d\xi\,|\,x\}$ and similarly for $\varphi_{\min}(\xi,t|x)$. Moments of order $n = 1,2,3,\dots$ of the maximum and the minimum are also written in terms of the hitting probability as

$$\langle M^n(t)\rangle_x = x^n + n\int_x^{\infty}\xi^{n-1}\,W_\xi(t|x)\,d\xi \qquad\text{and}\qquad \langle m^n(t)\rangle_x = x^n - n\int_0^{x}\xi^{n-1}\,W_\xi(t|x)\,d\xi.$$

If we denote by $g_{\max}(\xi,t|x)$ the PDF of the maximum absolute value of the random process $X(t)$, i.e.,

$$g_{\max}(\xi,t|x)\,d\xi = \mathrm{Prob}\big\{\xi < \max|X(t)| < \xi + d\xi\,\big|\,x\big\},$$

then

$$g_{\max}(\xi,t|x) = -\frac{\partial W_{-\xi,\xi}(t|x)}{\partial\xi}\,\Theta(\xi - |x|),$$

where $W_{-\xi,\xi}(t|x)$ is the escape probability out of the symmetric interval $(-\xi,\xi)$. Moments of this statistic are

$$\langle \max|X(t)|^n\rangle_x = |x|^n + n\int_{|x|}^{\infty}\xi^{n-1}\,W_{-\xi,\xi}(t|x)\,d\xi.$$

The second quantity related to the escape problem is the range or span, that is, the difference between maximum and minimum, $R(t) = M(t) - m(t)$. We define the PDF of this random oscillation as $f_R(r,t|x)\,dr = \mathrm{Prob}\{r < R(t) < r + dr\,|\,x\}$ ($r > 0$), and it reads

$$f_R(r,t|x) = -\int_{x-r}^{x}\frac{\partial^2 W_{v,r+v}(t|x)}{\partial r^2}\,dv,$$

where $W_{v,r+v}(t|x)$ is the escape probability out of the variable interval $(v, v+r)$, where $v$ runs from $x - r$ to $x$. The mean range has a simple expression in terms of the hitting probability $W_\xi(t|x)$ to a variable threshold:

$$\langle R(t)\rangle_x = \int_{-\infty}^{\infty} W_\xi(t|x)\,d\xi.$$
Due to correlations between maximum and minimum, the moments of the span have no simple expression in terms of the hitting probability and we need to know the entire escape probability to evaluate moments higher than the first (see the end of Sec. III).
We have applied the above results to the Wiener process. The PDF's of the maximum and minimum are given by simple truncated Gaussian densities and the PDF's of the maximum absolute values and of the span are given by more complicated expressions written in terms of infinite series. We refer the reader to Sec. IV for the explicit expressions of these quantities and more information about mean values and moments.
We have finally dealt with the maximum and minimum values achieved by the Feller process. This is a linear diffusion process which never attains negative values. The behavior of the process near the origin is governed by a dimensionless parameter θ > 0 (cf. Eq. (49)). When θ < 1 the origin is an accessible boundary while if θ > 1 it is unattainable [22].
In a recent work we solved the level-crossing problem for the Feller process and obtained the time Laplace transforms of the hitting and escape probabilities [22]. In terms of them, the Laplace transforms of the PDFs of the maximum and the minimum read

$$\hat\varphi_{\max}(\xi,s|x) = \frac{F(s+1,\theta+1,\xi)}{\theta\,F^2(s,\theta,\xi)}\,F(s,\theta,x)\,\Theta(\xi - x) \qquad\text{and}\qquad \hat\varphi_{\min}(\xi,s|x) = \frac{U(s+1,\theta+1,\xi)}{U^2(s,\theta,\xi)}\,U(s,\theta,x)\,\Theta(x - \xi),$$

where $F(a,b,z)$ and $U(a,b,z)$ are Kummer functions [33] and $\hat\varphi(\xi,s|x) = \int_0^\infty e^{-st}\varphi(\xi,t|x)\,dt$ is the time Laplace transform of $\varphi$. These exact expressions do not seem to be invertible analytically. However, as we have shown in Sec. V, there exist asymptotic analytical approximations in real time. Thus, as $t\to\infty$ and after taking the derivative with respect to $\xi$ of Eqs. (61) and (72), we have

$$\varphi_{\max}(\xi,t|x) \simeq \frac{t\,F(1,1+\theta,\xi)}{\theta\,T_\xi^2(x)}\,e^{-t/T_\xi(x)}\,\Theta(\xi - x) \qquad\text{and}\qquad \varphi_{\min}(\xi,t|x) \simeq \frac{t\,U(1,1+\theta,\xi)}{T_\xi^2(x)}\,e^{-t/T_\xi(x)}\,\Theta(x - \xi),$$

where $T_\xi(x)$ is the mean first-passage time given in Eq. (60). The Laplace transforms of the mean maximum and minimum are given by Eqs. (54) and (67),

$$\hat M(s|x) = \frac{1}{s}\left[x + F(s,\theta,x)\int_x^{\infty}\frac{d\xi}{F(s,\theta,\xi)}\right], \qquad \hat m(s|x) = \frac{1}{s}\left[x - U(s,\theta,x)\int_0^x \frac{d\xi}{U(s,\theta,\xi)}\right].$$

As $t\to\infty$ these mean values in real time are approximated by

$$\langle M(t)\rangle_x \simeq x + \int_x^{\infty}\Big[1 - e^{-t/T_\xi(x)}\Big]\,d\xi, \qquad \langle m(t)\rangle_x \simeq \int_0^x e^{-t/T_\xi(x)}\,d\xi,$$

where $T_\xi(x)$ is given in Eq. (60). We have also proved that as $t\to\infty$ the mean maximum diverges while the mean minimum converges towards the origin:

$$\lim_{t\to\infty}\langle M(t)\rangle_x = \infty, \qquad \lim_{t\to\infty}\langle m(t)\rangle_x = 0.$$
An interesting behavior is provided by the mean minimum as $x\to 0$. Here we find a different result according to whether the natural boundary $x = 0$ is inaccessible ($\theta > 1$) or accessible ($\theta < 1$) to the dynamics of the process. In the first case the average minimum decays linearly with $x$, while in the second it decays with a steeper power law. This is summarized by ($x\to 0$)

$$\langle m(t)\rangle_x \simeq \begin{cases} (1 - 1/\theta)\,x, & \theta > 1, \\ A(t)\,x^{2-\theta}, & \theta < 1, \end{cases}$$
where A(t) is defined in Eq. (82).
In this paper we have studied the extreme-value problem in a complete fashion, where all extreme statistics are assumed to depend on the initial value $X(0) = x$ taken by the process under study. However, in many practical situations and in some theoretical settings it is not possible to know the initial value exactly, and one has to resort to averaging over all possible values of $x$. In such cases one can, for instance, define the averaged (or reduced) maximum PDF as [4]

$$\varphi_{\max}(\xi,t) = \int_{-\infty}^{\infty}\varphi_{\max}(\xi,t|x)\,p(x)\,dx,$$

where $p(x)$ is the PDF of the initial value. In those cases where the underlying process $X(t)$ is stationary, it is sensible to assume that the process has been running since the infinitely distant past, so that the initial PDF $p(x)$ is given by the stationary distribution

$$p(x) = \lim_{t_0\to-\infty} p(x, t=0\,|\,x_0, t_0),$$

where $p(x,t|x_0,t_0)$ is the propagator of the underlying process. Obviously such a procedure requires the existence of a stationary distribution, something that, for instance, the Wiener process does not possess but the Feller process does (i.e., the Gamma distribution [22]). This averaging procedure and some practical applications of the formalism are under present investigation.
Acknowledgments

Partial financial support from the Ministerio de Ciencia e Innovación under Contract No. FIS 2009-09689 is acknowledged.

Appendix A: The probability distribution of the span

Let us denote by $F_2(\xi,\eta,t|x)$ the joint distribution function of the maximum and the minimum:

$$F_2(\xi,\eta,t|x) = \mathrm{Prob}\{M(t) < \xi,\ m(t) < \eta\,|\,X(0) = x\}.$$

Note that the event $\{M(t) < \xi\}$ is the union of two disjoint events,

$$\{M(t) < \xi\} = \{M(t) < \xi,\ m(t) < \eta\} \cup \{M(t) < \xi,\ m(t) \ge \eta\},$$

where we have dropped the dependence on the initial value $x$, which is nonetheless implied in all that follows. We thus have

$$\mathrm{Prob}\{M(t) < \xi\} = F_2(\xi,\eta,t|x) + \mathrm{Prob}\{M(t) < \xi,\ m(t) \ge \eta\},$$

but (see Eqs. (4) and (5))

$$\mathrm{Prob}\{M(t) < \xi\} = S_\xi(t|x)\,\Theta(\xi - x),$$

where $S_\xi(t|x)$ is the survival probability up to the single threshold $\xi$. If, on the other hand, $S_{\eta,\xi}(t|x)$ is the survival probability in the interval $(\eta,\xi)$, one easily realizes that

$$\mathrm{Prob}\{M(t) < \xi,\ m(t) \ge \eta\} = S_{\eta,\xi}(t|x)\,\Theta(\xi - x)\,\Theta(x - \eta).$$

Collecting results we write

$$F_2(\xi,\eta,t|x) = \big[S_\xi(t|x) - S_{\eta,\xi}(t|x)\,\Theta(x - \eta)\big]\,\Theta(\xi - x). \qquad (A1)$$

The joint PDF of the maximum and the minimum, defined as the second derivative of the joint distribution function,

$$f_2(\xi,\eta,t|x) = \frac{\partial^2 F_2(\xi,\eta,t|x)}{\partial\xi\,\partial\eta},$$

is then given by differentiating Eq. (A1). Recalling that starting at any boundary point renders survival impossible, we see that the terms containing delta functions vanish and

$$f_2(\xi,\eta,t|x) = -\frac{\partial^2 S_{\eta,\xi}(t|x)}{\partial\xi\,\partial\eta}\,\Theta(\xi - x)\,\Theta(x - \eta).$$

In terms of the joint density, the PDF of the span, Eq. (28), is given by

$$f_R(r,t|x) = \int_{-\infty}^{\infty} d\xi \int_{-\infty}^{\infty} d\eta\, f_2(\xi,\eta,t|x)\,\delta\big(r - (\xi - \eta)\big), \qquad (A2)$$

which, after substituting Eq. (A1) and integrating the delta function, yields

$$f_R(r,t|x) = -\int_{x-r}^{x}\left[\frac{\partial^2 S_{\eta,\xi}(t|x)}{\partial\xi\,\partial\eta}\right]_{\xi=\eta+r} d\eta, \qquad (A3)$$

where $r > 0$ (recall that, by definition, $R(t)$ is always positive). This expression for $f_R$ is more conveniently written by making the change of variables $v = \eta$, $\xi = r + v$. Indeed, $d\eta = dv$, and the mixed derivative turns into derivatives of $S_{v,r+v}(t|x)$ with respect to $r$ and $v$. Substituting into Eq. (A3) and taking into account that $S_{x,x+r}(t|x) = S_{x-r,x}(t|x) = 0$, we finally get

$$f_R(r,t|x) = \int_{x-r}^{x}\frac{\partial^2 S_{v,r+v}(t|x)}{\partial r^2}\,dv, \qquad (r > 0),$$

which is Eq. (29).

Appendix B: The mean span

In order to avoid the divergencies appearing in the evaluation of the mean span we proceed as follows. Instead of using Eq. (29) as the expression for the span PDF, we use the following expression for $f_R$, which results from combining Eqs. (A1) and (A2):

$$f_R(r,t|x) = -\int_x^{\infty} d\xi \int_{-\infty}^{x} d\eta\,\frac{\partial^2 S_{\eta,\xi}(t|x)}{\partial\xi\,\partial\eta}\,\delta\big(r - (\xi - \eta)\big).$$

Plugging into

$$\langle R(t)\rangle_x = \int_0^\infty r\,f_R(r,t|x)\,dr$$

and performing the integration over $r$ using the delta function, we obtain

$$\langle R(t)\rangle_x = -\int_x^{\infty} d\xi \int_{-\infty}^{x} d\eta\,(\xi - \eta)\,\frac{\partial^2 S_{\eta,\xi}(t|x)}{\partial\xi\,\partial\eta}. \qquad (B2)$$

We rewrite this equation as two terms and perform the inner integration in each. However, $S_{\xi,x}(t|x) = 0$ and

$$S_{-\infty,\xi}(t|x) = S_\xi(t|x),$$

because the escape problem out of the semi-infinite interval $(-\infty,\xi)$ coincides with the first-passage problem to threshold $\xi$. Hence

$$\int_{-\infty}^{x}\frac{\partial^2 S_{\eta,\xi}(t|x)}{\partial\xi\,\partial\eta}\,d\eta = -\frac{\partial S_\xi(t|x)}{\partial\xi}. \qquad (B3)$$

Proceeding similarly we get

$$\int_x^{\infty}\frac{\partial^2 S_{\eta,\xi}(t|x)}{\partial\xi\,\partial\eta}\,d\xi = \frac{\partial S_\eta(t|x)}{\partial\eta}. \qquad (B4)$$

Plugging Eqs. (B3)-(B4) into Eq. (B2) and applying the Heaviside functions $\Theta(\xi - x)$ and $\Theta(x - \eta)$ we get

$$\langle R(t)\rangle_x = \int_x^{\infty}\xi\,\frac{\partial S_\xi(t|x)}{\partial\xi}\,d\xi + \int_{-\infty}^{x}\eta\,\frac{\partial S_\eta(t|x)}{\partial\eta}\,d\eta.$$

That is,

$$\langle R(t)\rangle_x = \int_{-\infty}^{\infty}\xi\,\frac{\partial S_\xi(t|x)}{\partial\xi}\,d\xi,$$

which is Eq. (31).

Appendix C: Derivation of Eq. (81)

We write the Laplace inversion of Eq. (80) in the form

$$\langle m(t)\rangle_x = x - \frac{x^{2-\theta}}{\Gamma(1-\theta)}\int_0^\infty e^{-xz}\,z^{-\theta}\,\mathcal{L}^{-1}\!\left[\frac{1}{s}\left(\frac{z}{1+z}\right)^s\right] dz. \qquad (C1)$$

Since

$$\mathcal{L}^{-1}\!\left[\frac{1}{s}\left(\frac{z}{1+z}\right)^s\right] = \Theta\big(t - \ln[(1+z)/z]\big),$$

where $\Theta(\cdot)$ is the Heaviside step function, we have

$$\int_0^\infty e^{-xz}\,z^{-\theta}\,\Theta\big(t - \ln[(1+z)/z]\big)\,dz = \int_{1/(e^t-1)}^{\infty} e^{-xz}\,z^{-\theta}\,dz = x^{\theta-1}\,\Gamma\big(1-\theta,\ x/(e^t-1)\big),$$

where $\Gamma(a,z)$ is the incomplete Gamma function [33]. Substituting into Eq. (C1) yields

$$\langle m(t)\rangle_x = x - \frac{x}{\Gamma(1-\theta)}\,\Gamma\big(1-\theta,\ x/(e^t-1)\big).$$

For small values of $x$ and $t > 0$ the argument of the incomplete Gamma function is small and we can use the expansion [33]

$$\Gamma(a,z) = \Gamma(a) - \frac{z^a}{a} + O\big(z^{a+1}\big),$$

which leads directly to the power law (81) with $A(t)$ given by Eq. (82).
[1] C. W. Gardiner, Handbook of Stochastic Methods (Springer, Berlin, 1985).
[2] S. Redner, A Guide to First-Passage Processes (Cambridge University Press, Cambridge, England, 2001).
[3] G. H. Weiss, First-Passage Time Problems in Chemical Physics, in Advances in Chemical Physics, Vol. 13, edited by I. Prigogine (J. Wiley, Hoboken, NJ, 2007).
[4] K. Lindenberg and B. West, J. Stat. Phys. 42, 201 (1986).
[5] J. Masoliver and J. Porrà, Phys. Rev. Lett. 75, 189 (1995).
[6] B. J. West, Chemical Physics 284, 45 (2002).
[7] J. Eichner, J. Kantelhardt, A. Bunde and S. Havlin, Phys. Rev. E 73, 016130 (2006).
[8] M. F. Shlesinger, Nature (London) 450, 40 (2007).
[9] S. Condamin, O. Bénichou, V. Tejedor, R. Voituriez and J. Klafter, Nature 450, 77 (2007).
[10] G. Salvadori, C. De Michele, N. T. Kottegoda and R. Rosso, Extremes in Nature (Springer, Berlin, 2007).
[11] J. Masoliver and J. Perelló, Phys. Rev. E 75, 046110 (2007).
[12] M. Montero and J. Masoliver, Eur. Phys. J. B 57, 181 (2007).
[13] J. Masoliver and J. Perelló, Phys. Rev. E 78, 056104 (2008).
[14] J. Masoliver and J. Perelló, Phys. Rev. E 80, 016108 (2009).
[15] J. Perelló, M. Gutiérrez-Roig and J. Masoliver, Phys. Rev. E 84, 066110 (2011).
[16] E. J. Gumbel, Statistics of Extremes (Dover, New York, 2004).
[17] M. R. Leadbetter, G. Lindgren and H. Rootzen, Extremes and Related Properties of Random Sequences and Processes (Springer, Berlin, 2011).
[18] A. J. F. Siegert, Phys. Rev. 81, 617 (1951); D. A. Darling and A. J. F. Siegert, Ann. Math. Statist. 24, 624 (1953).
[19] I. F. Blake and W. C. Lindsay, IEEE Transactions on Information Theory IT-19, 295 (1973).
[20] S. M. Berman, Sojourns and Extremes of Stochastic Processes (Wadsworth and Brooks/Cole, Belmont, CA, 1992).
[21] E. Abad, S. B. Yuste and K. Lindenberg, Phys. Rev. E 86, 061120 (2012).
[22] J. Masoliver and J. Perelló, Phys. Rev. E 86, 041116 (2012).
[23] G. E. Roberts and H. Kaufman, Table of Laplace Transforms (W. B. Saunders, Philadelphia, 1966).
[24] R. A. Handelsman and J. S. Lew, SIAM J. Math. Analysis 5, 425-451 (1974).
[25] W. Feller, Ann. Math. 54, 173-182 (1951); Ann. Math. 55, 468-519 (1952); Trans. Am. Math. Soc. 71, 1-31 (1954).
[26] J. C. Cox, J. E. Ingersoll and S. A. Ross, Econometrica 53, 385 (1985).
[27] R. M. Capocelli and L. M. Ricciardi, J. Theor. Biol. 40, 369-387 (1973).
[28] W. Gerstner and W. M. Kistler, Spiking Neuron Models (Cambridge University Press, Cambridge, 2002).
[29] S. Azaele, A. Maritan, E. Bertuzzo, I. Rodriguez-Iturbe and A. Rinaldo, Phys. Rev. E 81, 051901 (2010).
[30] S. Ditlevsen and P. Lansky, Phys. Rev. E 73, 061910 (2006).
[31] E. Bibbona, P. Lansky and R. Sirovich, Phys. Rev. E 81, 031916 (2010).
[32] R. M. Capocelli and L. M. Ricciardi, Theoretical Population Biology 5, 28-41 (1974).
[33] W. Magnus, F. Oberhettinger and R. P. Soni, Formulas and Theorems for the Special Functions of Mathematical Physics (Springer-Verlag, Berlin and New York, 1966).
| []
|
[
"Observables with τ leptons at LHC and LC structure of event records and Monte Carlo Algorithms",
"Observables with τ leptons at LHC and LC structure of event records and Monte Carlo Algorithms"
]
| [
"Z Was \nInstitute of Nuclear Physics\nKawiory 26a30-055CracowPoland\n"
]
| [
"Institute of Nuclear Physics\nKawiory 26a30-055CracowPoland"
]
| []
| In the present report, let us address the issues related to the simulation of decays for particles embodied in full production and decay chains of Monte Carlo programs set up for experiments such as at LHC or LC. Both technical issues related to the way the events may be stored in event records and issues related to physics (in particular non-factorizable correlations of the Einstein-Rosen-Podolsky type) will be reviewed on the basis of practical examples. We will limit our discussion to the case of τ lepton and W boson decays, but similar problems (and solutions) may arise also in the case of simulation for other intermediate states or particles. Examples related to the construction of physics observables will also be given. In particular, the method of measuring the CP parity properties of the h − τ τ coupling at LC will be explained. | 10.1016/j.nima.2004.07.097 | [
"https://export.arxiv.org/pdf/hep-ph/0402129v1.pdf"
]
| 16,963,109 | hep-ph/0402129 | 03ef589330e918652d55760c698bc6db47e90e3b |
Observables with τ leptons at LHC and LC structure of event records and Monte Carlo Algorithms
12 Feb 2004
Z Was
Institute of Nuclear Physics
Kawiory 26a30-055CracowPoland
Observables with τ leptons at LHC and LC structure of event records and Monte Carlo Algorithms
12 Feb 2004. Presented at the IX Workshop on ACAT in Physics Research, December 1-5, 2003, KEK, Tsukuba, Japan
In the present report, let us address the issues related to the simulation of decays for particles embodied in full production and decay chains of Monte Carlo programs set up for experiments such as at LHC or LC. Both technical issues related to the way the events may be stored in event records and issues related to physics (in particular non-factorizable correlations of the Einstein-Rosen-Podolsky type) will be reviewed on the basis of practical examples. We will limit our discussion to the case of τ lepton and W boson decays, but similar problems (and solutions) may arise also in the case of simulation for other intermediate states or particles. Examples related to the construction of physics observables will also be given. In particular, the method of measuring the CP parity properties of the h − τ τ coupling at LC will be explained.
Introduction
For many years, intensive studies have been performed to design future software architectures for experiments at proton-proton colliders, such as the Tevatron or the LHC [1], and at high-energy e+e− linear colliders such as the JLC, NLC [2] or TESLA [3].
One of the important ingredients in such designs is the data structure for storing the Monte Carlo events. It is generally accepted that data structures based on objects such as particles, clusters, strings, etc., with properties such as tracks, momenta, colour, spin and mass, and on relations explaining the origins and descendants of the objects, are the most convenient ones. This is the case at present [4], and it is also envisaged for the future; see [5]. At the same time, such a picture is in conflict with the basic principles of quantum mechanics. The Einstein-Rosen-Podolsky paradox is an example of such phenomena. A general problem is that the quantum state of a multiparticle system cannot (at least in principle) be represented as a statistical combination of states defined by products of the pure quantum states of the individual particles. It is thus of the utmost importance to examine whether the approximation enforced by the data structure is purely academic, or whether it represents a real difficulty which may affect the interpretation of the future data. In some cases alternative methods can be designed and are in fact used as well.
It would not be a serious problem if the predictions of the Standard Model used in the interpretation of the future data could be provided by a single program, a black box, without any need to analyse its parts. Then anything measured beyond the prediction of such a hypothetical Monte Carlo program would be interpreted as "new physics". Agreement, on the other hand, would constitute a confirmation of the Standard Model as it is understood at present (and of the proper functioning of the detector as well). However, even in such an extreme case it is very useful, for the purpose of experimental studies, to manipulate the terms responsible for the signature of the "new physics". In this way experimental strategies can be refined, if the new effects can be placed in well-defined and phenomenologically simple modules.
Because of the complexity of the problem, Monte Carlo predictions need to be dealt with by programs describing the action of the detector and of the analysis, on the experimental side, and various effects, such as those from hard processes, hadronization, decay of resonances, etc., on the theoretical side. Every part is inevitably calculated with some approximation, which needs to be controlled.
Event record and decay interface
In the first part I will discuss the solution we used in KORALZ [6], the program widely used at LEP for the simulation of τ-lepton pair production and decay, including spin and QED bremsstrahlung effects. Even though spin effects cannot be treated exactly in a scheme where properties are attributed to individual particles only, this is the very method used there. As described in ref. [6], the algorithm of spin generation for any individual event consisted of the following steps:
1. An event consisting of a pair of τ leptons, bremsstrahlung photons, etc., was generated.
2. Helicity states for both τ + and τ − were generated. At this point, an approximation with respect to quantum mechanics was introduced.
3. Information on these helicity states, including the definition of quantization frames, i.e. the relation between the τ's rest frame and the laboratory frame, was then transmitted to TAUOLA [7,8,9], the package for the generation of τ-lepton decays.
4. Finally TAUOLA performed decays of 100% polarized τ 's, and the event in the HEPEVT common block was completed.
The solution for the spin treatment of τ leptons at LEP was optimal: on the one hand, a convenient picture of particles with properties, origins and descendants could be used and, on the other, a complete spin solution [10,11] was available, if necessary.
Let us now turn to another example of the spin implementation algorithm.
It is taken from ref. [12]. The algorithm, essentially that of KORALZ, was adapted to work with any Monte Carlo program providing the production of τ-leptons. If the generated events are stored in the format of a HEPEVT common block, then an algorithm consisting of the following basic steps can be used (a schematic sketch follows the list):
1. Search for τ -leptons in a HEPEVT common block (filled by any MC program).
2. Check what the origin of the τ-lepton is: Z, γ, W, h, H ± or, eventually, a 2 → 2-body process such as e + e − , (uū), (dd̄) → τ + τ − .
3. For the 2 → 2-body process of τ-pair production, it is sometimes possible to calculate the τ polarization as a function of the invariant mass of the τ-lepton pair and the angle between the directions of the τ-leptons and the incoming effective beams (in the rest frame of the τ-pair).
4. If, in addition to the τ-leptons, photons or partons (gluons, quarks, etc.) are stored in the HEPEVT common block, one needs to define the "effective incoming beams".
5. From such information one can generate τ helicity states and define the relation between the τ rest frame and the laboratory frame. Optionally, complete spin effects can be implemented as well, see [18].
6. The τ decay is generated with the help of TAUOLA, and the HEPEVT common block is appended with the τ's decay products.
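Schematically, the interface loop above can be sketched as follows in Python. The simplified event record and the helper functions (polarization, sample_helicity, decay_tau) are hypothetical stand-ins for the HEPEVT common block and the TAUOLA Fortran API, not the actual interface:

# Minimal sketch of the tau-decay interface loop described above.
import random
from dataclasses import dataclass, field
from typing import List

@dataclass
class Particle:
    pdg_id: int                      # PDG code (15 = tau-)
    status: int                      # 1 = final state, 2 = decayed
    p: List[float]                   # four-momentum (px, py, pz, E)
    mother: int = -1                 # index of mother entry, -1 if none
    daughters: List[int] = field(default_factory=list)

def polarization(event, i, origin):
    # Steps 3-4: in reality a function of the pair invariant mass and the
    # angle w.r.t. the effective incoming beams; constant placeholder here.
    return 0.0

def sample_helicity(pol):
    # Step 5: draw helicity +/-1 with probabilities (1 +/- pol)/2.
    return 1 if random.random() < 0.5 * (1.0 + pol) else -1

def decay_tau(tau, helicity):
    # Step 6: stand-in for TAUOLA; returns dummy decay products.
    return [Particle(pdg_id=16, status=1, p=list(tau.p))]

def process_event(event):
    for i, part in enumerate(list(event)):
        if abs(part.pdg_id) != 15 or part.status != 1:
            continue                                   # step 1: find taus
        m = part.mother                                # step 2: origin
        origin = event[m].pdg_id if m >= 0 else None   # (Z, gamma, W, h, ...)
        hel = sample_helicity(polarization(event, i, origin))
        products = decay_tau(part, hel)
        part.status = 2
        part.daughters = list(range(len(event), len(event) + len(products)))
        event.extend(products)                         # append decay products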
Leading spin effects are nicely reproduced by the above set of programs. A more complete discussion can be found in ref. [12].
Let us stress that the solution presented above requires a certain minimal discipline in the way the event records are filled in. At present, there is a strong tendency to store in the event record not only information on the 'real particles', but also the results of the simulation at the parton-shower level, as well as of the hard process alone. As a consequence, not only are the entries for otherwise well-defined particles such as τ-leptons duplicated or even triplicated, but the relations between different entries are no longer of tree type and the links between particles are not reversible: a link upwards from A to B does not imply a downward link from B to A. (The daughter index JDAHEP points upward in the tree, whereas the mother index JMOHEP may point to a previous copy of the event as well; the status code ISTHEP is occasionally used like a pointer toward a higher level of the event-record listing. As a consequence, ambiguities and/or loops may appear for searching algorithms.) This creates a multitude of troubles for algorithms analyzing events, see fig. 1. However, at present our solution for interfacing decay packages with host programs based on the HEPEVT event record, as filled by PYTHIA and HERWIG, seems to work in all cases studied by us [13]. For the case of complete spin correlations, we found it more convenient to abandon direct use of the spin information provided by the host programs. Instead, we chose to calculate the complete density matrix anew, from the kinematical configuration provided by the host program.
Higgs boson parity measurement
Let us sketch the basic principles behind the proposed measurement in the case when scalar and pseudoscalar couplings are simultaneously allowed in the hττ vertex:

h \, \bar{\tau} \, N (\cos\phi + i \sin\phi \, \gamma_5) \, \tau.   (1)
If a non-zero CP-odd admixture to the Higgs is present, the distribution of the Higgs production angle is modified [14,15,16]. We have simulated production angular distributions as in the SM, but this assumption has no influence on the validity of the analysis. In order to study the sensitivity of h → τ + τ − observables, we assume a SM production rate independent of the size of the CP-odd admixture.
The production process e + e − → Zh → μ + μ − (qq̄)τ + τ − has been chosen as a representative example and simulated with the Monte Carlo program PYTHIA 6.1 [17]. A Higgs-boson mass of 120 GeV and a centre-of-mass energy of 350 GeV were chosen. The effects of initial-state bremsstrahlung were included. For the sake of our discussion, in all of our samples the τ decays have been generated with the TAUOLA Monte Carlo library [9,8,7]. As usual, to facilitate the interpretation of the results, bremsstrahlung effects in decays were not taken into account; with the help of additional simulation we have found this effect to be rather small. To include the full spin effects in the h → τ + τ − , τ ± → ρ ± ν̄ τ (ν τ ), ρ ± → π ± π 0 decay chain, the interface explained in Ref. [18] was used.
The Higgs-boson parity information must be extracted from the correlations between the τ + and τ − spin components, which are further reflected in correlations between the τ decay products in the plane transverse to the τ + τ − axes [19,20]. To better visualize the effect, let us write the decay probability, using the conventions of Ref. [15]:

\Gamma(h_{mix} \to \tau^+ \tau^-) \sim 1 - s_{\parallel}^{\tau^+} s_{\parallel}^{\tau^-} + s_{\perp}^{\tau^+} R(2\phi) \, s_{\perp}^{\tau^-},   (2)

where R(2φ) can be understood as an operator for the rotation by an angle 2φ around the ∥ direction. The s τ − and s τ + are the τ ± polarization vectors, which are defined in their respective rest frames. The symbols ∥/⊥ denote components parallel/transverse to the Higgs-boson momentum as seen from the respective τ ± rest frames.
The method relies on measuring the acoplanarity angle of the two planes, spanned by the ρ ± decay products and defined in the ρ + ρ − pair rest frame. The acoplanarity angle φ*, between the planes of the ρ + and ρ − decay products, is defined first through its cosine and the two vectors n ± normal to the planes, namely n_{\pm} = p_{\pi^{\pm}} \times p_{\pi^0} and \cos\varphi^* = \frac{n_+ \cdot n_-}{|n_+||n_-|}. To distinguish between the two cases φ* and 2π − φ*, it is sufficient, for example, to find the sign of p_{\pi^-} \cdot n_+. When it is negative, the angle φ* as defined above (in the range 0 < φ* < π) is used; otherwise it is replaced by 2π − φ*.
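For illustration, the acoplanarity angle defined above can be computed from the pion three-momenta as in the following NumPy sketch; the momenta are assumed to be already boosted to the ρ + ρ − pair rest frame:

import numpy as np

def acoplanarity(p_pip, p_pi0_plus, p_pim, p_pi0_minus):
    """Acoplanarity angle phi* in [0, 2pi) between the rho+ and rho- decay
    planes; all 3-momenta must be given in the rho+ rho- pair rest frame."""
    n_plus = np.cross(p_pip, p_pi0_plus)     # normal to the rho+ decay plane
    n_minus = np.cross(p_pim, p_pi0_minus)   # normal to the rho- decay plane
    cos_phi = np.dot(n_plus, n_minus) / (np.linalg.norm(n_plus)
                                         * np.linalg.norm(n_minus))
    phi = np.arccos(np.clip(cos_phi, -1.0, 1.0))   # in [0, pi]
    # Resolve the phi* vs 2pi - phi* ambiguity via the sign of p_pi- . n+
    return phi if np.dot(p_pim, n_plus) < 0 else 2.0 * np.pi - phi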
Additional selection cuts need to be applied. The events need to be divided into two classes, depending on the sign of y 1 y 2 , where

y_1 = \frac{E_{\pi^+} - E_{\pi^0}}{E_{\pi^+} + E_{\pi^0}}, \qquad y_2 = \frac{E_{\pi^-} - E_{\pi^0}}{E_{\pi^-} + E_{\pi^0}}.   (3)
The energies of the π ± , π 0 are to be taken in the respective τ ± rest frames. In Refs. [19,20] methods for the reconstruction of the replacement τ ± rest frames were proposed, with and without the help of the τ impact parameter. We use these methods here as well, without any modification.
To test the feasibility of the measurement, some assumptions about the detector effects had to be made, see refs. [19,20] for more details.
Numerical results
We have used the scalar-pseudoscalar mixing angle φ = π/4. In Fig. 2 the distribution of the acoplanarity angle φ* of the ρ + ρ − decay products, defined in the rest frame of the reconstructed ρ + ρ − pair, is shown. The two plots represent events selected by the differences of the π ± π 0 energies, defined in their respective τ ± rest frames: in the left plot it is required that y 1 y 2 > 0, whereas in the right one events with y 1 y 2 < 0 are taken. This figure quantifies the size of the parity effect. The size of the effect is substantially diminished when a detector-like set-up is included for the τ ± rest-frame reconstruction, in exactly the same proportion as in Ref. [19]; nonetheless the parity effect remains visible. The fitting procedure was repeated 400 times with acoplanarity distributions extracted from independent samples of 1 ab −1 luminosity each, with a nominal value of φ = π/4. A precision on φ from such a pseudo-experiment of approximately 6° can be anticipated.
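As an illustration of such a pseudo-experiment, the toy study below draws acoplanarity samples from a generic modulated shape 1 + A·cos(φ* − 2φ) and fits φ by maximum likelihood; the modulation amplitude, sample size and number of repetitions are arbitrary placeholders, not the values of the full simulation of Refs. [19,20]:

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
A_MOD, N_EVENTS, PHI_TRUE = 0.3, 20000, np.pi / 4   # illustrative only

def sample_phistar(n, phi):
    """Accept-reject sampling from f(x) ~ 1 + A*cos(x - 2*phi), x in [0, 2pi)."""
    out = []
    while len(out) < n:
        x = rng.uniform(0, 2 * np.pi, n)
        keep = rng.uniform(0, 1 + A_MOD, n) < 1 + A_MOD * np.cos(x - 2 * phi)
        out.extend(x[keep])
    return np.array(out[:n])

def fit_phi(data):
    nll = lambda phi: -np.sum(np.log(1 + A_MOD * np.cos(data - 2 * phi)))
    return minimize_scalar(nll, bounds=(0, np.pi), method="bounded").x

fits = [fit_phi(sample_phistar(N_EVENTS, PHI_TRUE)) for _ in range(100)]
print(f"spread of fitted phi: {np.degrees(np.std(fits)):.1f} degrees")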
Summary
The combination of generators for the production and decay of intermediate states requires careful treatment of the spin degrees of freedom. In some cases one can restrict spin states to pure helicities; then the generation of intermediate states for individual particles can be performed first, and the decays of each individual particle can be performed later. The general case, where full quantum-mechanical spin correlations are included, was also discussed. Technical constraints were presented for the solution in which kinematical information provided by the production program is used by the decay routines. In this context, conventions for how event records are filled in were discussed as well.
Finally, a discussion of an observable for the Higgs-boson parity measurement at an LC, based on such a technical solution, was presented in detail as an example. It was shown, on the basis of a careful Monte Carlo simulation of both theoretical and detector effects, that with the typical parameters of the future detector and Linear Collider set-up, the hypothesis of an admixture of pseudoscalar coupling to an otherwise Standard Model 120 GeV Higgs boson can be measured with up to a 6° error on the mixing angle.

Figure 2. The acoplanarity distribution (angle φ*) of the ρ + ρ − decay products in the rest frame of the ρ + ρ − pair. Gaussian smearing of the π and Higgs-boson momenta is included. Only events where the signs of the energy differences y 1 and y 2 are the same, calculated using the method described in Ref. [19] and with the help of the τ impact parameter, Ref. [20], are taken. The thick line corresponds to a scalar Higgs boson, the thin line to a mixed one. The left figure contains events with y 1 y 2 > 0, the right one is for y 1 y 2 < 0.
Acknowledgements
I am grateful to my co-authors G. Bower, C. Biscarat, K. Desch, P. Golonka, A. Imhof, B. Kersevan, T. Pierzchała, E. Richter-Was and M. Worek of the papers and related activities which led to the presented talk.
Figure 1. Typical relations in a present-day event record.
ATLAS Collaboration, CERN/LHCC/99-15.
F. Richard, J. R. Schneider, D. Trines, and A. Wagner (eds.), hep-ph/0106314.
Particle Data Group Collaboration, C. Caso et al., Eur. Phys. J. C3 (1998) 1-794.
E. Boos et al., hep-ph/0109068.
S. Jadach, B. F. L. Ward, and Z. Was, Comput. Phys. Commun. 79 (1994) 503.
S. Jadach, Z. Was, R. Decker, and J. Kühn, Comput. Phys. Commun. 76 (1993) 361.
M. Jeżabek, Z. Was, S. Jadach, and J. Kühn, Comput. Phys. Commun. 70 (1992) 69.
S. Jadach, J. H. Kühn, and Z. Was, Comput. Phys. Commun. 64 (1990) 275.
S. Jadach and Z. Was, Comput. Phys. Commun. 36 (1985) 191.
S. Jadach and Z. Was, Acta Phys. Polon. B15 (1984) 1151.
T. Pierzchała, E. Richter-Was, Z. Was, and M. Worek, Acta Phys. Polon. B32 (2001) 1277, hep-ph/0101311.
P. Golonka et al., hep-ph/0312240.
American Linear Collider Working Group Collaboration, T. Abe et al., hep-ex/0106056, page 123 and references therein.
M. Kramer, J. H. Kühn, M. L. Stong, and P. M. Zerwas, Z. Phys. C64 (1994) 21, hep-ph/9404280.
B. Grzadkowski and J. F. Gunion, Phys. Lett. B350 (1995) 218, hep-ph/9501339.
T. Sjostrand et al., Comput. Phys. Commun. 135 (2001) 238, hep-ph/0010017.
Z. Was and M. Worek, Acta Phys. Polon. B33 (2002) 1875, hep-ph/0202007.
G. R. Bower, T. Pierzchała, Z. Was, and M. Worek, Phys. Lett. B543 (2002) 227.
K. Desch, Z. Was, and M. Worek, Eur. Phys. J. C29 (2003) 491, hep-ph/0302046.
| []
|
[
"基于矩阵填充模型的成绩预测 Based on Graph-VAE Model to Predict Student's Score 1",
"基于矩阵填充模型的成绩预测 Based on Graph-VAE Model to Predict Student's Score 1"
]
| []
| []
| []
| The OECD pointed out that the best way to keep students on track at school is to intervene as early as possible [1]. Using educational big data and deep learning to predict students' scores provides new resources and perspectives for early intervention. Previous forecasting schemes often require manual filtering of features, a large amount of prior knowledge, and expert knowledge. Deep learning can automatically extract features without manual intervention and achieve better predictive performance. In this paper, the deep-learning-based graph neural network matrix completion model (Graph-VAE) automatically extracts features without requiring extensive prior knowledge. Experiments show that our model outperforms traditional solutions on the student-score dataset and better captures the correlations and differences between students and courses; visualizing the dimensionality-reduced encoding vectors yields clusters consistent with the real data distribution. In addition, we use gradient-based attribution methods to analyze the key factors that influence score prediction. | null | [
"https://arxiv.org/pdf/1903.03609v1.pdf"
]
| 73,729,383 | 1903.03609 | 81f7a7a3de484f674758f57faeaac48cd403d7e1 |
Grade Prediction Based on a Matrix Completion Model (Based on Graph-VAE Model to Predict Student's Score)
Grade Prediction Based on a Matrix Completion Model (Based on Graph-VAE Model to Predict Student's Score)
The OECD pointed out that the best way to keep students on track at school is to intervene as early as possible [1]. Using educational big data and deep learning to predict students' scores provides new resources and perspectives for early intervention. Previous forecasting schemes often require manual filtering of features, a large amount of prior knowledge, and expert knowledge. Deep learning can automatically extract features without manual intervention and achieve better predictive performance. In this paper, the deep-learning-based graph neural network matrix completion model (Graph-VAE) automatically extracts features without requiring extensive prior knowledge. Experiments show that our model outperforms traditional solutions on the student-score dataset and better captures the correlations and differences between students and courses; visualizing the dimensionality-reduced encoding vectors yields clusters consistent with the real data distribution. In addition, we use gradient-based attribution methods to analyze the key factors that influence score prediction.
Abstract: The OECD points out that the best way to keep students on track with their studies is to intervene as early as possible [1]. Earlier approaches to score prediction include Bayesian-network models [2,3,4], decision-tree models [5,6,7], and typical recommendation algorithms [8,9,10,11].
3.1 Problem formulation

The data are modelled as a bipartite graph G = (U, V, R), where U is the set of student nodes, V is the set of course nodes, and R is the edge set of G. Here u_i ∈ U (i = 1, …, m) denotes the i-th student node, v_j ∈ V (j = 1, …, n) denotes the j-th course node, m is the total number of students, n is the total number of courses, and the total number of nodes is N = m + n. The weight of an edge is r = ⌊score/10⌋ + 1, with r ∈ {1, …, 10}. The bipartite graph is stored as the adjacency matrix

M = \begin{pmatrix} 0 & A \\ A^T & 0 \end{pmatrix} \in \mathbb{R}^{N \times N},

as illustrated in Figure 1(b); A has size m × n, and A_{u_i, v_j} = r indicates that student u_i obtained score level r in course v_j. A^T is the transpose of A (the adjacency matrix of an undirected graph is symmetric).

Figure 1. (a) The student-course bipartite graph; (b) its adjacency-matrix representation.

Dividing the scores into 10 levels greatly reduces the amount of computation. We also assume that students who obtain different score levels differ considerably from one another, and likewise for courses that tend to assign different score levels. According to the weight r, the adjacency matrix M is decomposed into ten 0-1 matrices M_1, …, M_{10}, with M_r = \begin{pmatrix} 0 & A_r \\ A_r^T & 0 \end{pmatrix}. Each A_r is an m × n 0-1 matrix, r ∈ {1, …, 10}: if student i obtained score level r in course j, the (i, j) entry of A_r is 1, while the (i, j) entry of every A_{r'} with r' ≠ r is 0.

3.2 Overall architecture

The overall architecture of the proposed score-prediction model Graph-VAE is shown in Figure 2. The available course information, student information and score data are merged into a bipartite graph; the adjacency matrix M of the graph G and an initial node representation X are fed into the VAE encoder, and after GCN encoding the score-prediction matrix M̂ is output, completing the matrix. The Graph-VAE model consists of an encoder part and a decoder part: the input X passes through two GCN+ReLU layers, which output the mean of the code, Z_mean, and the logarithm of its standard deviation, Z_log_std; sampling with N(0,1) noise yields the output Z, which serves as the input of the decoder, and the decoder produces the score-prediction matrix.

Figure 2. Workflow of the Graph-VAE model.

3.3 Encoder

The degree matrix D of the graph G is diagonal, D ∈ R^{N×N}, with the degree of vertex u_i given by d_{u_i} = Σ_{r=1}^{10} Σ_{j=1}^{n} A_r(u_i, v_j), where A_r(u_i, v_j) indicates whether the score of course v_j taken by student u_i lies in level r. W_1, …, W_{10} ∈ R^{K×K} are initialized weight matrices, where K is the feature dimension. The feature descriptions x_i of the nodes form the feature matrix X ∈ R^{N×K}, which is also initialized. The adjacency matrix M of G and the initialized node matrix X are fed into the GCN encoder, which learns the latent regularities of the data; applying the ReLU activation yields the hidden representation, and finally the student-vertex encoding matrix E_u ∈ R^{m×K} and the course-vertex encoding matrix E_v ∈ R^{n×K} are obtained as follows:

[E_u; E_v] = \mathrm{encoder}(M_1, \ldots, M_{10}),   (1)
[H_u; H_v] = \mathrm{ReLU}\big(\mathrm{sum}(D^{-1} M_1 X W_1, \ldots, D^{-1} M_{10} X W_{10})\big),   (2)
E_u = \mathrm{ReLU}(W H_u), \qquad E_v = \mathrm{ReLU}(W H_v),   (3)

where H_u and H_v are intermediate results and W is a shared weight matrix.
rr
D M XW r
相当于
综合度矩阵、邻接矩阵包含的信息。邻接矩阵用以引入"按成绩等级分割后"的信
息。X 的引入是调节矩阵的尺寸。
r
W 用以分别学习 10 个成绩等级的模式,先验
假设是不同等级之间的模式也不同,需要分别对待。Sum(·)加和操作是对各分块
的消息进一步传递融合。
模型预测能力体现在随着损失函数的下降,模型的各项参数发生更迭,原图
和子图、学生顶点和课程顶点的信息会产生交换。
令:
1
n
n 1
[ , , ,
, , ]
T
T
T
T
N
T
u
v
E
E
Z Z
Z Z
Z
。
则第 i 名学生选修第 j 门课程可表示为
n
,
={1,2,
}
ij
Z Z
i
,
...,n 。显然,上
述的编码器是可以序贯式堆叠(前一个编码器的输出作为下一个编码器的输入)
以达到更大程度抽取数据特征的目的。 如图 3 所示, (a) 部分表示原始的二分图,
其中圆形表示学生节点,矩形表示课程节点,边上的数字表示为成绩。( b)部分
表示节点之间的消息传递,其中圆形表示学生节点,矩形表示课程节点,圆角矩
形表示神经元,节点右上角表示节点所携带的信息(分别用红、橙、黄、绿、青、
蓝、紫颜色的矩形表示节点 A、B、C、D、E、F、G 自身所携带信息) 。在卷积前,
节点只携带自身信息。第一层卷积时,节点将其携带信息传递到其相邻节点,经
(公式 1)
(公式 2)
(公式 3)
过卷积之后节点学习到其邻居节点的信息。第二层卷积时,节点及其携带的邻居
节点的信息传递到其下一跳节点,经过卷积之后节点学习到其两跳节点的信息。
要预测节点 A 到节点 F 的成绩,经过 GCN 可以学习到节点 A、B、C 成绩分布相
似,从而根据 B、C 到 F 成绩区间预测节点 A 到节点 F 的成绩。
图 3 (a) 二分图数据
图 3 (b) 消息在 GCN 中的传递
图 3 节点消息在 GCN 中的传递
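A minimal NumPy sketch of one such per-level graph-convolution encoder layer is given below; the matrix names follow Eqs. (1)-(3), while the random initialization and toy sizes are placeholder assumptions:

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def encode(M_levels, X, W_levels, W_dense):
    """One GCN encoder layer as in Eqs. (1)-(3): per-score-level graph
    convolutions D^{-1} M_r X W_r are summed, passed through ReLU, and
    projected by a dense ReLU layer."""
    degrees = np.maximum(sum(M.sum(axis=1) for M in M_levels), 1.0)
    D_inv = np.diag(1.0 / degrees)                     # D^{-1}
    H = relu(sum(D_inv @ M_r @ X @ W_r                 # Eq. (2)
                 for M_r, W_r in zip(M_levels, W_levels)))
    return relu(H @ W_dense)                           # Eq. (3): [E_u; E_v]

# Toy usage: N = m + n nodes, K-dimensional features, 10 score levels.
rng = np.random.default_rng(0)
N, K, E = 8, 4, 3
M_levels = [rng.integers(0, 2, (N, N)) for _ in range(10)]
X = rng.normal(size=(N, K))
W_levels = [rng.normal(size=(K, K)) for _ in range(10)]
W_dense = rng.normal(size=(K, E))
Z = encode(M_levels, X, W_levels, W_dense)   # rows: student then course codes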
3.3 Variational reparameterization

Let μ_i and σ_i denote the mean vector and the standard-deviation vector of Z_i, and let ε_i be an E-dimensional vector sampled from the standard normal distribution N(0, I). Then

Z_i = \mu_i + \sigma_i \odot \varepsilon_i, \qquad i \in \{1, 2, \ldots, N\},   (4)
Z = [Z_1^T, \ldots, Z_N^T]^T = [Z_u^T, Z_v^T]^T.   (5)

Figure 4 illustrates the variational reparameterization of the vector Z_i, corresponding to the second half of the encoder in Figure 2. Owing to the nature of the VAE, every dimension of P(Z) must be constrained towards the standard normal distribution N(0, I), i.e. the following quantity is minimized:

\mathrm{cost}_1 = \sum_{i=1}^{N} \mathrm{KL}\big(N(\mu_i, \sigma_i^2) \,\|\, N(0, I)\big) = \frac{1}{2} \sum_{i=1}^{N} \sum_{e=1}^{E} \big( \mu_{i,e}^2 + \sigma_{i,e}^2 - 1 - \log \sigma_{i,e}^2 \big).   (6)

Figure 4. Illustration of the variational reparameterization.
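The reparameterization of Eq. (4) and the KL term of Eq. (6) can be transcribed directly (NumPy sketch; shapes follow the notation above):

import numpy as np

rng = np.random.default_rng(0)

def reparameterize(z_mean, z_log_std):
    """Eq. (4): Z = mu + sigma * eps, with eps ~ N(0, I)."""
    eps = rng.standard_normal(z_mean.shape)
    return z_mean + np.exp(z_log_std) * eps

def kl_cost(z_mean, z_log_std):
    """Eq. (6): KL( N(mu, sigma^2) || N(0, I) ), summed over nodes and dims."""
    var = np.exp(2.0 * z_log_std)
    return 0.5 * np.sum(z_mean**2 + var - 1.0 - np.log(var))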
3.4 Decoder

The output Z of the variational reparameterization in the VAE autoencoder is fed into the decoder, which outputs M̂_r:

Z'_r = Z H_r Z^T,   (7)

where H_r denotes the weight matrix of channel r, of size E × E, and Z'_r denotes the output of channel r. A softmax "squeezes" the values of the matrix entries into the interval (0, 1):

\hat M_{r,ij} = \frac{\exp(Z'_{r,ij})}{\sum_{i=1}^{N} \sum_{j=1}^{N} \exp(Z'_{r,ij})},   (8)

where M̂_{r,ij} denotes the entry in row i and column j of M̂_r, and similarly for Z'_{r,ij}. The output M̂_r of the score-prediction model has the same shape as the model input M_r and constitutes the predicted scores for the corresponding courses.

3.5 Optimization objective

Different loss functions correspond to different probability-distribution assumptions for p(x|z); we optimize the maximum likelihood of M rather than measuring a distance between M and M̂. During training we minimize the negative log-likelihood and use the cross-entropy cost (Eq. (9)) to measure the difference between M and M̂; the smaller the cost, the closer M and M̂ are:

\mathrm{cost}_2 = -\sum_{r=1}^{10} \sum_{j=1}^{N} \sum_{i=1}^{N} \Big[ M_{r,ij} \log \hat M_{r,ij} + (1 - M_{r,ij}) \log\big(1 - \hat M_{r,ij}\big) \Big].   (9)

Combining this with the objective function of the variational reparameterization, Eq. (6), the minimization objective of the Graph-VAE model is Loss = cost_1 + cost_2:

\mathrm{Loss} = -\sum_{r=1}^{10} \sum_{j=1}^{N} \sum_{i=1}^{N} \Big[ M_{r,ij} \log \hat M_{r,ij} + (1 - M_{r,ij}) \log\big(1 - \hat M_{r,ij}\big) \Big] + \frac{1}{2} \sum_{i=1}^{N} \sum_{e=1}^{E} \big( \mu_{i,e}^2 + \sigma_{i,e}^2 - 1 - \log \sigma_{i,e}^2 \big).   (10)
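A corresponding sketch of the bilinear decoder and the combined objective (NumPy, following Eqs. (7)-(10); the small numerical-stability constant is an implementation detail added here):

import numpy as np

def decode(Z, H_levels):
    """Eqs. (7)-(8): per-level bilinear scores Z H_r Z^T, softmax-normalized."""
    M_hat = []
    for H_r in H_levels:                      # one E x E weight per score level
        scores = Z @ H_r @ Z.T                # Eq. (7)
        e = np.exp(scores - scores.max())     # stabilized softmax, Eq. (8)
        M_hat.append(e / e.sum())
    return M_hat

def total_loss(M_levels, M_hat_levels, z_mean, z_log_std):
    """Eq. (10): cross-entropy reconstruction cost plus the KL term of Eq. (6)."""
    eps = 1e-9
    ce = -sum(np.sum(M * np.log(Mh + eps) + (1 - M) * np.log(1 - Mh + eps))
              for M, Mh in zip(M_levels, M_hat_levels))      # Eq. (9)
    var = np.exp(2.0 * z_log_std)
    kl = 0.5 * np.sum(z_mean**2 + var - 1.0 - np.log(var))   # Eq. (6)
    return ce + kl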
4 Experiment Evaluation

4.1 Dataset

The score dataset was taken from the graduate-student score records of the School of Information Science and Engineering, Central South University, covering 2010 to December 2016, with N_u = 369 student vertices and N_v = 142 course vertices. Since the university's training scheme changes every four years, the full dataset was split into three parts: September 2015 - July 2016 (denoted data1), September 2012 - July 2016 (denoted data2), and January 2010 - December 2016 (denoted data3).

We use the RMSE [20] (root mean square error) to evaluate the prediction error of the score-prediction model. The RMSE, the square root of the mean squared error, is the indicator most commonly used to evaluate the difference between predicted and true data; it is computed as in Eq. (11). This metric reflects the gap between predicted and true values: the closer it is to 0, the smaller the gap:

\mathrm{MSE} = \frac{1}{10\,N} \sum_{r=1}^{10} \big\lVert M_r - \hat M_r \big\rVert^2, \qquad \mathrm{RMSE} = \sqrt{\mathrm{MSE}}.   (11)

4.2 Basic experiments

Each of the three datasets data1, data2 and data3 was split into 75%, 10% and 5% as training, test and validation sets. The learning rate of the model is 0.1, the optimizer is ADAM, and the dropout rate is 0.1. Two GCN encoder units are used: the dimension E of the first is 64 and the dimension E' of the second is 32. Figure 5 shows the evolution of the training loss on the three datasets. The experimental results show that the model converges stably when the number of epochs reaches 200, so the model parameter epochs = 200 was chosen.
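Eq. (11) above transcribes directly into code (NumPy sketch; the normalization by 10N follows the reconstruction of Eq. (11) above):

import numpy as np

def rmse(M_levels, M_hat_levels):
    """Eq. (11): RMSE over the ten score-level matrices."""
    N = M_levels[0].shape[0]
    mse = sum(np.sum((M - Mh) ** 2) for M, Mh in zip(M_levels, M_hat_levels))
    return np.sqrt(mse / (10 * N))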
Figure 5. Evolution of the training loss on the three datasets.

Figure 6 shows how the RMSE on the three datasets changes as the number of training iterations increases. The three curves follow the same trend. A small fluctuation can be observed during training on the smallest dataset (data1). This also shows that the model can learn the characteristics of the data even for training data of different time spans and different orders of magnitude.

Figure 6. Evolution of the training RMSE on the three datasets.

Figure 7 shows the test-set evaluation metrics obtained by training the model on data1, over 10 randomized runs. Since a neural-network model has many trainable parameters, the initial parameter values are usually assigned randomly, and the randomly generated initial values differ between runs; the training loss and the prediction results therefore fluctuate within a stable range, and we report the average over randomized runs as the training result.

Figure 7. Three test-set metrics over 10 training runs.
4.3 Comparative experiments

We carried out two kinds of comparison: first, we compare our model (Graph-VAE) with GCMC; second, we compare Graph-VAE with 13 representative prediction algorithms, in three groups (classification/clustering score prediction, matrix-factorization score prediction, and collaborative-filtering score prediction), from the open-source LibRec implementation. Classification/clustering score prediction comprises four methods: the k-nearest-neighbour algorithm based on student-user neighbourhoods (userknn), the k-nearest-neighbour algorithm based on course neighbourhoods (itemknn), the clustering algorithm based on student users (usercluster), and the clustering algorithm based on courses (itemcluster). Matrix-factorization score prediction comprises five methods: SVD++ (svdpp), non-negative matrix factorization [21] (nmf), biased matrix factorization (biasedmf), the local low-rank matrix-approximation algorithm (llorma), and Bayesian probabilistic matrix factorization [22] (bpmf). Collaborative-filtering score prediction [8] comprises four methods: global-average collaborative filtering (globalaverage), content-based recommendation (aspectmodelrating), course-based collaborative filtering (itemaverage), and student-based collaborative filtering (useraverage).

We compared all 13 methods in our experiments; as shown in Figure 9, the RMSE of Graph-VAE is clearly smaller than that of the other recommendation algorithms, which verifies the effectiveness of the model.

Figure 9. Comparison between Graph-VAE and the recommendation algorithms.
4.4 Visualization analysis

t-SNE (t-Distributed Stochastic Neighbor Embedding) [18] is a widely used nonlinear dimensionality-reduction algorithm, well suited to visualizing high-dimensional data.

We use t-SNE to visualize the student vectors Z_1, …, Z_{N_u} learned by the GCN. The left part of Figure 10 shows the distribution of the vectors before clustering; the right part shows their distribution after 2746 iterations. The points inside the two red circles can be regarded as similar student vectors.
RMSE values underlying the comparison charts of Figure 9 (columns: data1, data2, data3):

Comparison of Graph-VAE with the classification and clustering algorithms:
Graph-VAE    0.821  0.672  0.745
userknn      0.908  0.842  2.639
itemcluster  0.954  0.756  2.360
itemknn      1.044  0.907  2.860
usercluster  1.118  0.967  2.561

Comparison of Graph-VAE with the matrix-factorization algorithms:
Graph-VAE    0.821  0.672  0.745
nmf          0.924  0.799  2.739
bpmf         0.927  0.793  2.607
svdpp        0.955  0.831  2.296
llorma       0.963  0.945  2.474
biasedmf     1.000  0.821  2.347

Comparison of Graph-VAE with the collaborative-filtering algorithms:
Graph-VAE         0.821  0.672  0.745
itemaverage       0.918  0.751  2.367
useraverage       1.124  1.003  2.760
aspectmodelrating 1.133  0.983  2.558
globalaverage     1.133  0.961  2.558
Visualizing the node vectors with t-SNE gathers similar nodes together. Based on the clustering effect, the experiment selected two student users (the points on the two red circles in the right part of Figure 10) and examined the score distributions of the students clustered together with them. Taking each of the two selected nodes together with its nearby nodes yields two groups. Figure 11 shows the score distributions of the nodes close to the two students' feature vectors after clustering: within each group the distributions are fairly similar, and in particular the patterns of the high-score and low-score distributions are clearly visible.

Figure 10. t-SNE visualization results.
4.5 Attribution analysis

Using the gradient-based attribution method DeepExplain [23], we analyze the factors that influence score prediction; the analysis also reveals the mutual influence between courses. We carry out two kinds of analysis: first, among the courses taken by a student (the node's neighbours), we identify the nodes with the greatest influence on the score prediction; second, among all nodes related to a student node (its two-hop neighbours), we identify the most influential nodes, in an attempt to discover relations between students.

Using gradient attribution on the neighbours of the student node with ID 261, we study which of the courses taken by this student influence the score prediction most; the visualization is shown in Figure 12.

Figure 11. Score distributions of nodes close to the feature vectors of the two student nodes after clustering.

Figure 12. Factors influencing the score prediction for student node 261. The other nodes are neighbours of node 261; colours from light to dark indicate increasing influence on the prediction. The darkest nodes are 370, 375 and 381, corresponding to the three public courses Natural Dialectics, Socialism with Chinese Characteristics, and Matrix Theory.

For student node 261, the public-course scores have the greatest influence on the score prediction. Investigation shows that student 261 is a graduate student; in this university's graduate curriculum, professional-course examinations are easier than the public-course ones, scores are generally high, and differences between students are small, whereas public-course examinations are difficult and require considerable study and revision, so scores differ widely between students. Students with good public-course scores are generally those with a good learning attitude who attend lectures and revise carefully, while a high score in a professional course does not necessarily indicate diligent study.

Using the attribution method, we then analyzed all factors influencing the score prediction for node 326_479 (the student with ID 326 taking the course with ID 479, Machine Learning and Data Mining), to find which factors influence this student's score in this course most; the result is shown in Figure 13.

Figure 13. The twelve nodes with the greatest influence on the score prediction for node 326_479. Nodes 326 and 479 themselves have the largest influence, reflecting the node's own contribution to the prediction; they are followed by two kinds of nodes: (1) student nodes 145, 62 and 21; (2) course nodes 469, 379 and 466, corresponding to the courses Seminar Research, Applied Statistics, and Advanced Computer Algorithms.

For the prediction of node 326's score in course 479, analysing all nodes shows that the most influential factors comprise both student nodes and course nodes. Investigation shows that Graph-VAE predicts a score of 77 for this student, and the scores of the most influential students all lie between 70 and 80, i.e. their score distributions are similar. The most influential courses are Seminar Research, Applied Statistics and Advanced Computer Algorithms. The first two are public courses, consistent with the first experiment, while Advanced Computer Algorithms is a prerequisite for Machine Learning and Data Mining, which involves many algorithmic problems; that is, students who do well in Advanced Computer Algorithms find it relatively easier to absorb the content of Machine Learning and Data Mining. This matches the real-world influence of foundation courses on follow-up courses and can thus help students with course selection.
5 Conclusion

Score prediction is a means of modelling individual students and improving teaching in a targeted way. Problems arising in students' learning should be discovered and addressed early. The emergence of educational big data makes it possible to apply deep learning to score prediction; the automatic feature extraction of deep learning is particularly suitable for mining the hidden patterns of large samples. This paper proposed the Graph-VAE model which, compared with previous models, automatically extracts the key features of student-score data, thereby completing the score-prediction task and supporting education and teaching. Moreover, through gradient attribution analysis, the information hidden in student-score data is mined, providing help for students' course selection. This paper is an active attempt at this class of problems.
[1] OECD (2018), Education at a Glance 2018: OECD Indicators, OECD Publishing, Paris. http://dx.doi.org/10.1787/eag-2018-en
[2] Huang Jianming. Application of Bayesian networks in student score prediction [J]. Computer Science, 2012, 39(s3): 280-282.
[3] Wang Xiaoli, Yuan Junhong. A score-prediction model based on weighted naive Bayes classification [J]. Electronic Technology & Software Engineering, 2013(19): 225-226.
[4] Han Jingyang, Ren Xueli. Application of Bayesian networks in student score management [J]. Time Education, 2015(5): 167-168.
[5] Wu Tong, Wang Xiukun. Application of decision-tree algorithms in the predictive analysis of student scores [J]. Microcomputer Information, 2010, 26(3): 209-211.
[6] Liu Zhiwu. Predictive analysis of student scores based on decision-tree algorithms [J]. Computer Applications and Software, 2012(11): 312-314.
[7] Liu Junling, Li Ting, Sun Huanliang, et al. Predicting course scores using electronic check-in data [J]. Journal of Frontiers of Computer Science and Technology: 1-11 [2018-04-16].
[8] Schafer J B, Frankowski D, Herlocker J, et al. Collaborative filtering recommender systems [J]. ACM Transactions on Information Systems, 2004, 22(1): 5-53.
[9] Wang Juanjuan. Athlete performance prediction with least-squares support vector machines and prediction-error correction [J]. Modern Electronics Technique, 2018, 41(05): 163-166.
[10] Yongguang Zhang. The Research of Grade Prediction Model Based on Improved K-means Algorithm [A]. In: Proceedings of the 2016 2nd International Conference on Artificial Intelligence and Industrial Engineering (AIIE2016) [C]. Science and Engineering Research Center, 2016: 4.
[11] Cao Yi. Research on hybrid recommendation techniques combining content-based and collaborative filtering [D]. Central South University, 2007.
[12] Kipf T N, Welling M. Semi-supervised classification with graph convolutional networks [J]. arXiv preprint arXiv:1609.02907, 2016.
[13] Fan R K C. Spectral graph theory [M].
[14] Berg R v d, Kipf T N, Welling M. Graph convolutional matrix completion [J]. 2017.
[15] Kingma D P, Welling M. Auto-encoding variational Bayes [J]. 2013.
[16] Lange S, Riedmiller M. Deep auto-encoder neural networks in reinforcement learning [C]. In: Neural Networks (IJCNN), The 2010 International Joint Conference on. IEEE, 2010: 1-8.
[17] Shrikumar A, Greenside P, Kundaje A. Learning important features through propagating activation differences [J]. 2017.
[18] Maaten L, Hinton G. Visualizing data using t-SNE [J]. Journal of Machine Learning Research, 2008, 9(Nov): 2579-2605.
[19] Blei D M, Kucukelbir A, McAuliffe J D. Variational inference: A review for statisticians [J]. Journal of the American Statistical Association, 2017, 112(518): 859-877.
[20] Chai T, Draxler R R. Root mean square error (RMSE) or mean absolute error (MAE)? - Arguments against avoiding RMSE in the literature [J]. Geoscientific Model Development, 2014, 7(3): 1247-1250.
[21] Lee D D, Seung H S. Learning the parts of objects by non-negative matrix factorization [J]. Nature, 1999, 401(6755): 788.
[22] Liu J, Wu C, Liu W. Bayesian probabilistic matrix factorization with social relations and item contents for recommendation [J]. Decision Support Systems, 2013, 55(3): 838-850.
[23] Ancona M, Ceolini E, Öztireli C, et al. Towards better understanding of gradient-based attribution methods for deep neural networks [J]. 2018.
| []
|
[
"Quantum nondemolition measurements in a Paul trap",
"Quantum nondemolition measurements in a Paul trap"
]
| [
"A Camacho \nAstrophysikalisches Institut Potsdam\nAn der Sternwarte 16D-14482PotsdamGermany\n"
]
| [
"Astrophysikalisches Institut Potsdam\nAn der Sternwarte 16D-14482PotsdamGermany"
]
| []
| In this work a family of quantum nondemolition variables for the case of a particle caught in a Paul trap is obtained. Afterwards, in the context of the so called restricted path integral formalism, a continuous measuring process for this family of parameters is considered, and then the corresponding propagators are calculated. In other words, the time evolution of a particle in a Paul trap, when the corresponding quantum nondemolition parameter is being continuously monitored, is deduced. The probabilities associated with the possible measurement outputs are also obtained, and in this way new theoretical results emerge, which could allow us to confront the predictions of this restricted path integral formalism with the readouts of some future experiments. * | 10.1016/s0375-9601(00)00693-9 | [
"https://export.arxiv.org/pdf/quant-ph/0010037v1.pdf"
]
| 17,519,984 | quant-ph/0010037 | 822baac7a9480c18cfd78d88665ed511389d7dde |
Quantum nondemolition measurements in a Paul trap
9 Oct 2000
A Camacho
Astrophysikalisches Institut Potsdam
An der Sternwarte 16D-14482PotsdamGermany
In this work a family of quantum nondemolition variables for the case of a particle caught in a Paul trap is obtained. Afterwards, in the context of the so called restricted path integral formalism, a continuous measuring process for this family of parameters is considered, and then the corresponding propagators are calculated. In other words, the time evolution of a particle in a Paul trap, when the corresponding quantum nondemolition parameter is being continuously monitored, is deduced. The probabilities associated with the possible measurement outputs are also obtained, and in this way new theoretical results emerge, which could allow us to confront the predictions of this restricted path integral formalism with the readouts of some future experiments. *
Introduction
One of the longest-standing conundrums in modern physics is the so-called quantum measurement problem (QMP) [1], which from the very outset of quantum theory (QT) has provoked deep interest through its unusual and paradoxical characteristics. Nowadays the attempts in the quest for a solution of this conceptual difficulty are spawned by theoretical as well as practical reasons. On practical grounds, for instance, the necessity of detecting very small displacements in the case of gravitational-wave antennas [2], or the attempts to obtain high sensitivity in parametric transducers [3] (a topic closely related to the design of gravitational-radiation detectors), requires the analysis of the QMP.
On the theoretical side, QMP plays a fundamental role in the understanding of the foundations of QT [1], and therefore a better comprehension of this issue is closely related to the advancement of QT.
Concerning the QMP, it is noteworthy that one of the most interesting topics in this field is the so-called quantum nondemolition measuring process [4], in which a certain class of observables may be measured repeatedly with arbitrary precision. Of course, the dynamical evolution of the corresponding system limits this class of observables; clearly, this restriction is a direct consequence of the unavoidable back reaction of the measuring device upon the measured system. The fundamental idea behind a quantum nondemolition measurement (QNDM) is to monitor a variable such that the unavoidable disturbance of the conjugate one does not perturb the time evolution of the chosen parameter [3].
On experimental grounds QNDM is very promising [5]; for instance, in the dynamics of the interaction between a measuring device and a mechanical oscillator, the most important hurdles that currently impede the achievement of the corresponding quantum regime have already been identified, and the exploration of quantum behavior in the context of macroscopic mechanical oscillators could bring decisive results in the near future [6].
In order to confront the theoretical predictions of RPIF against experimental outputs, performing continuous measurements on a quantum system is imperative. In this respect we must mention that current technology allows us to carry out repetitive measurements on a single quantum system. For instance, it is already possible [7] to confine and observe an individual electron in a Penning trap, or to trap a single atom and observe its interaction with a radiation field, for example by means of laser fluorescence [8]. In the case of the so-called Paul trap, which has led to the construction of a mass spectrometer [9], an ion is trapped employing a high-frequency electric quadrupole field [10]. As is known, this idea can also be extended to the case of neutral atoms: laser-cooled and stopped atoms are confined in a magnetic quadrupole trap formed by two opposed, separated, coaxial current loops [11].
The aforementioned experiments open the possibility of confronting, in the near future, the theoretical predictions of formalisms that claim to describe the interaction between measuring apparatus and measured system against experimental results. One of these formalisms is the so-called Restricted Path-Integral Formalism (RPIF) [12]. Its main idea is the restriction, by means of a weight functional, of the integration domain of the path integral that renders the propagator of the analyzed system when one or more of its parameters are subject to a continuous measurement process.
Let us explain this point a little better, and suppose that we have a particle undergoing one-dimensional motion. The amplitude Û(q'', q') for this particle to move from the point q' to the point q'' is called the propagator, and it is given by Feynman [13] as

\hat U(q'', q') = \int d[q] \, \exp\left( \frac{i}{\hbar} S[q] \right),   (1)

where we must integrate over all possible trajectories q(t), and S[q] is the action of the system, defined as

S[q] = \int_{t'}^{t''} dt \, L(q, \dot q).   (2)
Let us now suppose that we perform a continuous measurement of the position of this particle, such that we obtain as a result of this measurement process a certain output a(t). In other words, the measurement process gives the value a(t) for the coordinate q(t) at each time t, and this output has an associated error Δa, determined by the experimental resolution of the measuring device. The amplitude Û_[a](q'', q') can now be thought of as a probability amplitude for the continuous measurement process to give the result a(t). Taking the square modulus of this amplitude allows us to find the probability density for the different measurement outputs.
Clearly, the integration in the Feynman path integral should be restricted to those trajectories that match the experimental output. RPIF states that this condition can be introduced by means of a weight functional ω̃_[a] [14]. This means that under a continuous measurement process expression (1) becomes

\hat U_{[a]} = \int d[q] \, \tilde\omega_{[a]} \, \exp(i S[q]).   (3)
The more probable the trajectory [q] is according to the output a, the bigger ω̃_[a] becomes [12]; this means that the value of ω̃_[a] is approximately one for all trajectories [q] that agree with the measurement output a, and almost 0 for those that do not match the result of the experiment.
Clearly, the weight functional contains all the information about the interaction between measuring device and measured system, and two problems appear in this context:

(i) The concrete form of the weight functional ω̃_[a] depends on the measuring device [15]; in other words, the involved experimental construction determines these weight functionals. We must determine ω̃_[a] starting from the knowledge that we have of the measuring device, a non-trivial problem.

(ii) If we wish an analytical expression for Û_[a], then the resulting weight functional must render an analytically handleable functional integral.
In this work we consider the case of a particle caught in a Paul trap, and solve the nonlinear differential equation that defines the corresponding quantum nondemolition variables. As a solution, a family of quantum nondemolition parameters is obtained. Afterwards, a continuous measuring process for one of these parameters is considered, and the associated propagators are calculated. Finally, the probabilities of the different measurement outputs are obtained.
Quantum nondemolition variables in a Paul trap
It is already known that in the case of a particle in a Paul trap we have a harmonic oscillator with a time-dependent frequency determined by Ū − V̄ cos(ωt), where Ū, V̄ and ω are constants that depend on the electric quadrupole field used to trap the particle. The equations of motion of an electrically charged particle caught in a Paul trap are [10] (here we will assume that ions are injected in the y-direction and that there is an electric field only along the x- and z-coordinates)

\ddot x(t) + \frac{e}{m r^2}\left[\bar U - \bar V \cos(\omega t)\right] x(t) = 0,   (4)
\ddot z(t) - \frac{e}{m r^2}\left[\bar U - \bar V \cos(\omega t)\right] z(t) = 0.   (5)

Here e is the electric charge of the particle, m its mass, and 2r the distance between the electrodes that form part of the experimental apparatus. The solutions to these equations are the so-called Mathieu functions [16].
Let us first consider the motion only along the x-axis. This is no restriction at all, because the complete motion separates into two independent motions. Thus, our starting point is the Lagrangian

L = \frac{1}{2} m \dot x^2(t) - \frac{1}{2} m \left[U - V \cos(\omega t)\right] x^2(t),   (6)

where we have introduced the definitions U = \frac{e}{m r^2}\bar U and V = \frac{e}{m r^2}\bar V. In the case of a harmonic oscillator with a time-dependent frequency, a quantum nondemolition (QND) variable may be found in the form [12]

A = \rho p + \sigma q,   (7)

where p and q denote momentum and position, respectively, and ρ and σ satisfy the differential equation [12]

\frac{df}{dt} = \frac{f^2}{m} + m\left[U - V \cos(\omega t)\right],   (8)

where f = ρ/σ.
Let us now consider the function

F = -\frac{m}{x(t)} \frac{dx}{dt},   (9)

where x(t) is any of the possible solutions of

\ddot x(t) + \left[U - V \cos(\omega t)\right] x(t) = 0.   (10)

It is straightforward to check that F is a solution to (8). Hence, from the solutions of the equation of motion we may easily find QND variables. In other words, we have found a family of QND parameters for the case of a particle in a Paul trap:

A(t) = \sigma(t)\left( -\frac{m}{x(t)} \frac{dx}{dt} \, q + p \right).   (11)
If we set V = 0, then we recover from expression (9) the already known situation of a harmonic oscillator with time-independent frequency [12]. Indeed, in that case x(t) = −cos(√U t), and then ρ/σ = m√U tan(√U t), where √U = ω is the corresponding oscillation frequency.
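This property is easy to check numerically: the sketch below integrates the Mathieu equation (10) with SciPy and verifies that F = −m ẋ/x satisfies the Riccati equation (8); the parameter values are arbitrary illustrations:

import numpy as np
from scipy.integrate import solve_ivp

m, U, V, w = 1.0, 2.0, 0.5, 3.0    # illustrative parameters only

def mathieu(t, y):                 # Eq. (10): x'' + [U - V cos(wt)] x = 0
    x, xdot = y
    return [xdot, -(U - V * np.cos(w * t)) * x]

sol = solve_ivp(mathieu, (0.0, 1.0), [1.0, 0.0], dense_output=True, rtol=1e-10)

def F(t):                          # Eq. (9): F = -m * xdot / x
    x, xdot = sol.sol(t)
    return -m * xdot / x

# Check Eq. (8): dF/dt = F^2/m + m [U - V cos(wt)] at a sample point.
t0, h = 0.3, 1e-6
lhs = (F(t0 + h) - F(t0 - h)) / (2 * h)
rhs = F(t0) ** 2 / m + m * (U - V * np.cos(w * t0))
print(abs(lhs - rhs))              # should be ~0 up to numerical error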
Continuous measurements, propagators, and probability densities
Let us now consider the situation in which our QND variable A is continuously monitored; here we set σ(t) = 1. According to RPIF [12], the corresponding propagator (when the measurement output reads a(t)) is given by

\hat U_{[a]} = \int_{q'}^{q''} d[q] \, d[p] \, \exp\left\{ \frac{i}{\hbar} \int_{t'}^{t''} \left[ \frac{p^2}{2m} + \frac{1}{2m}\left(U - V \cos(\omega t)\right) q^2 \right] dt \right\} \tilde\omega_{[a]}.   (12)
As was mentioned at the end of the first section, the whole interaction between measuring apparatus and measured system is contained in ω̃_[a]. At this point we must choose a particular weight functional, and it will be a Gaussian one:

\tilde\omega_{[a]} = \exp\left\{ -\frac{1}{T \Delta a^2} \int_{t'}^{t''} \left[A(t) - a(t)\right]^2 dt \right\}.   (13)
The reason for this choice lies in the fact that the results coming from a Heaviside weight functional [14] and those coming from a Gaussian one [17] coincide up to the order of magnitude. These remarks allow us to consider a Gaussian weight functional as an approximation of the correct expression. Hence, it will be supposed that the weight functional of our measuring device has precisely this Gaussian form. We may wonder whether this is an unphysical assumption; in its favor we may comment that it has recently been proved that there are measuring apparatuses which show this kind of behavior [18]. Therefore, expression (12) becomes

\hat U_{[a]} = \int_{q'}^{q''} d[q] \, d[p] \, \exp\left\{ \frac{i}{\hbar} \int_{t'}^{t''} \left[ \frac{p^2}{2m} + \frac{U - V \cos(\omega t)}{2m} q^2 + \frac{i\hbar}{T \Delta a^2} \left[A - a\right]^2 \right] dt \right\}.   (14)
This last path integral is Gaussian in p and q, and can therefore be easily calculated [19]:

\hat U_{[a]} = \exp\left\{ -\frac{1}{T \Delta a^2} \int_{t'}^{t''} a^2 \, dt \right\} \times \exp\left\{ \frac{T \Delta a^2 - 2 i m \hbar}{2 m^2 \hbar \, \gamma} \int_{t'}^{t''} \left( a \frac{\dot x}{x} \right)^2 \frac{2 m^2 \hbar \, \alpha + i m T \Delta a^2 \, \beta}{\alpha^2 + \frac{T^2 \Delta a^4}{4 m^2 \hbar^2} \beta^2} \, dt \right\},   (15)
where we have introduced three new definitions, namely α = (ẋ/x)² + U − V cos(ωt), β = U − V cos(ωt), and γ = 4m²ħ² + T²Δa⁴. The probability densities associated with the different measurement outputs read (according to P_[a] = |Û_[a]|²)

P_{[a]} = \exp\left\{ -\frac{2}{T \Delta a^2} \int_{t'}^{t''} a^2 \, dt \right\} \exp\left\{ \frac{T \Delta a^2}{\gamma} \int_{t'}^{t''} \left( a \frac{\dot x}{x} \right)^2 \frac{\left( \frac{\dot x}{x} \right)^2 + 2\beta}{\alpha^2 + \frac{T^2 \Delta a^4}{4 m^2 \hbar^2} \beta^2} \, dt \right\}.   (16)
Conclusions
In this work we have considered the case of a particle caught in a Paul trap and, after solving the corresponding nonlinear differential equation, a family of quantum nondemolition variables has been obtained, expression (11). Afterwards, a continuous quantum measurement of an element of this family was considered and, along the ideas of the so-called restricted path integral, the corresponding propagator was calculated, expression (15). Finally, the associated probability densities were derived, expression (16). The present work complements a previous paper in which a quantum demolition measurement for a particle in a Paul trap was analyzed [20]. Clearly, we may notice in our last equation that there is no standard quantum limit; in other words, we may measure A with an arbitrarily small error, and consequently all the necessary information can be extracted.
Looking at (16), we may notice that in the limit Δa → 0 we obtain P_[a] → 0. In other words, in the limit of very precise measurements all possible readouts have the same probability density. This is a quantum feature: in the non-quantum case only the solution of the classical equations of motion has a non-vanishing probability. This remark does not mean that a strong disturbance of the corresponding observable is present. Indeed, we find this kind of behavior even in a much simpler situation: for a harmonic oscillator, the limit of very precise measurements (Δa → 0) also yields P_[a] → 0 (see equation (6.32) in [12]). This characteristic is also present when a quantum nondemolition variable for a particle moving in an inhomogeneous gravitational field is monitored in the limit of very small instrumental error [21]. The opposite case, the limit of rough measurements Δa → ∞, also yields P_[a] → 0.
Of course, we could have a very small experimental error, but the case Δa → 0 is an idealization, to be understood in the sense that if the experimental resolution is much smaller than all relevant physical variables, then we may expect a probability independent of the measurement outputs. Experimentally this case could be a very difficult one; consider, for example, the current experimental resolution in the case of Paul or Penning traps [10], which lies very far from this idealization.
There are already some theoretical results [22,23] that provide a framework which could allow us to confront the predictions of RPIF against some possible future experiments. Nevertheless, the present work renders new theoretical predictions coming from RPIF, which in the future could be confronted with experimental readouts. For example, expression (16) predicts a very particular dependence (on the involved measurement readouts) of the ratio of the probability densities associated with two different experimental results. Indeed, if b(t) ≠ a(t), then it is readily seen that P_[a]/P_[b] = g(a² − b²), where g is a real function:

P_{[a]}/P_{[b]} = \exp\left\{ -\frac{2}{T \Delta a^2} \int_{t'}^{t''} \left(a^2 - b^2\right) dt \right\} \times \exp\left\{ \frac{T \Delta a^2}{\gamma} \int_{t'}^{t''} \left(a^2 - b^2\right) \left( \frac{\dot x}{x} \right)^2 \frac{\left( \frac{\dot x}{x} \right)^2 + 2\beta}{\alpha^2 + \frac{T^2 \Delta a^4}{4 m^2 \hbar^2} \beta^2} \, dt \right\}.   (17)
To finish, let us comment on an additional characteristic of expression (16). We know that in some QNDMs an absolute limit may appear. For instance, if the linear momentum of a free particle is monitored, such a limit may emerge when instantaneous measurements of the position of the test particle are performed before and after the monitoring of the linear momentum (see page 99 of reference [12]). Clearly, position and linear momentum are canonically conjugate to each other, and that is why this absolute limit appears. At this point we may wonder why (16) shows no absolute limit: in our case too, position has been measured instantaneously before and after the monitoring of A(t), i.e. q' and q'' are present from the very beginning in our mathematical expressions, see (12). The answer stems from the fact that in our case the measured quantity, A = \sigma(t)\left( -\frac{m}{x(t)} \frac{dx}{dt} \, q + p \right), is not the canonical conjugate variable of the position q.
Acknowledgments

The author would like to thank A. A. Cuevas-Sosa for his help, and D.-E. Liebscher for fruitful discussions on the subject. The hospitality of the Astrophysikalisches Institut Potsdam is also kindly acknowledged. This work was supported by CONACYT (México) Postdoctoral Grant No. 983023.
[1] R. Omnés, "The Interpretation of Quantum Mechanics", Princeton University Press, Princeton, New Jersey (1994).
[2] K. S. Thorne, Rev. Mod. Phys. 52, 299 (1980).
[3] V. B. Braginsky and F. Ya. Khalili, "Quantum Measurement", Cambridge University Press, Cambridge (1995).
[4] V. B. Braginsky, Zh. Eksp. Teor. Fiz. 53, 1434 (1967).
[5] M. F. Bocko and R. Onofrio, Rev. Mod. Phys. 68, 755 (1996).
[6] C. Cinquegrana, E. Majorana, N. Pergola, P. Puppo, P. Rapagnani, and F. Ricci, Preprint No. 1032, Dipartimento di Fisica, Università di Roma "La Sapienza".
[7] D. J. Wineland, P. Ekstrom, and H. Dehmelt, Phys. Rev. Lett. 31, 1279 (1973).
[8] W. Neuhauser, M. Hohenstatt, and P. E. Toschek, Phys. Rev. A22, 1137 (1980).
[9] W. Paul and M. Raether, Z. Phys. 140, 262 (1955).
[10] W. Paul, Rev. Mod. Phys. 62, 531 (1990).
[11] A. L. Migdall, J. V. Prodan, W. D. Phillips, T. H. Bergeman, and H. J. Metcalf, Phys. Rev. Lett. 54, 2596 (1985).
[12] M. B. Mensky, "Continuous Quantum Measurements and Path Integrals", IOP, Bristol and Philadelphia (1993).
[13] R. P. Feynman, Rev. Mod. Phys. 20, 367 (1948).
[14] M. B. Mensky, Phys. Rev. D20, 384 (1979).
[15] M. B. Mensky, "The Path Group: Measurements, Fields, Particles", Nauka, Moscow (1983).
[16] G. Blanch, in "Handbook of Mathematical Functions", M. Abramowitz and I. Stegun (eds.), Dover Publications, Inc., New York (1972).
[17] M. B. Mensky, Sov. Phys. JETP 50, 667 (1979).
[18] M. B. Mensky, Physics-Uspekhi 41, 923 (1998).
[19] W. Dittrich and M. Reuter, "Classical and Quantum Dynamics", Springer, Berlin (1996).
[20] A. Camacho and A. Camacho-Galván, Phys. Letts. A247, 373 (1998).
[21] A. Camacho, "Quantum nondemolition measurements of a particle in an inhomogeneous gravitational field", Gen. Rel. Grav., in press (LANL preprint quant-ph/9911106).
[22] J. Audretsch, M. B. Mensky, and V. Namiot, Phys. Letts. A237, 1 (1997).
[23] A. Camacho, Physics Letters A256, 339 (1999); A. Camacho, Physics Letters A262, 110 (1999); A. Camacho, "Quantum-mechanical detection of non-Newtonian gravity", Int. J. Mod. Phys. A, in press (LANL preprint gr-qc/0005112).
| []
|
[
"Multi-Modal Data Augmentation for End-to-End ASR",
"Multi-Modal Data Augmentation for End-to-End ASR"
]
| [
"Adithya Renduchintala \nCenter for Language and Speech Processing\nJohns Hopkins University\n21218BaltimoreMDUSA\n",
"Shuoyang Ding [email protected] \nCenter for Language and Speech Processing\nJohns Hopkins University\n21218BaltimoreMDUSA\n",
"Matthew Wiesner [email protected] \nCenter for Language and Speech Processing\nJohns Hopkins University\n21218BaltimoreMDUSA\n",
"Shinji Watanabe [email protected] \nCenter for Language and Speech Processing\nJohns Hopkins University\n21218BaltimoreMDUSA\n"
]
| [
"Center for Language and Speech Processing\nJohns Hopkins University\n21218BaltimoreMDUSA",
"Center for Language and Speech Processing\nJohns Hopkins University\n21218BaltimoreMDUSA",
"Center for Language and Speech Processing\nJohns Hopkins University\n21218BaltimoreMDUSA",
"Center for Language and Speech Processing\nJohns Hopkins University\n21218BaltimoreMDUSA"
]
| []
| We present a new end-to-end architecture for automatic speech recognition (ASR) that can be trained using symbolic input in addition to the traditional acoustic input. This architecture utilizes two separate encoders: one for acoustic input and another for symbolic input, both sharing the attention and decoder parameters. We call this architecture a multi-modal data augmentation network (MMDA), as it can support multi-modal (acoustic and symbolic) input and enables seamless mixing of large text datasets with significantly smaller transcribed speech corpora during training. We study different ways of transforming large text corpora into a symbolic form suitable for training our MMDA network. Our best MMDA setup obtains small improvements on character error rate (CER), and as much as 7-10% relative word error rate (WER) improvement over a baseline both with and without an external language model. | 10.21437/interspeech.2018-2456 | [
"https://arxiv.org/pdf/1803.10299v3.pdf"
]
| 4,548,683 | 1812.03919 | 35161b1758e75cf5c25523d9ca90d594cbca2a3a |
Multi-Modal Data Augmentation for End-to-End ASR
Adithya Renduchintala
Center for Language and Speech Processing
Johns Hopkins University
21218BaltimoreMDUSA
Shuoyang Ding [email protected]
Center for Language and Speech Processing
Johns Hopkins University
21218BaltimoreMDUSA
Matthew Wiesner [email protected]
Center for Language and Speech Processing
Johns Hopkins University
21218BaltimoreMDUSA
Shinji Watanabe [email protected]
Center for Language and Speech Processing
Johns Hopkins University
21218BaltimoreMDUSA
Multi-Modal Data Augmentation for End-to-End ASR
We present a new end-to-end architecture for automatic speech recognition (ASR) that can be trained using symbolic input in addition to the traditional acoustic input. This architecture utilizes two separate encoders: one for acoustic input and another for symbolic input, both sharing the attention and decoder parameters. We call this architecture a multi-modal data augmentation network (MMDA), as it can support multi-modal (acoustic and symbolic) input and enables seamless mixing of large text datasets with significantly smaller transcribed speech corpora during training. We study different ways of transforming large text corpora into a symbolic form suitable for training our MMDA network. Our best MMDA setup obtains small improvements on character error rate (CER), and as much as 7-10% relative word error rate (WER) improvement over a baseline both with and without an external language model.
Introduction
The simplicity of "end-to-end" models and their recent success in neural machine translation (NMT) have prompted considerable research into replacing conventional ASR architectures with a single "end-to-end" model, which trains the acoustic and language models jointly rather than separately. Recently, [1] achieved state-of-the-art results using an attention-based encoder-decoder model trained on over 12K hours of speech data. However, on large publicly available corpora such as "Librispeech" or "Fisher English", which are an order of magnitude smaller, performance still lags behind that of conventional systems [2,3,4]. Our goal is to leverage much larger text corpora alongside limited amounts of speech data to improve the performance of end-to-end ASR systems.
Various methods of leveraging these text corpora have improved end-to-end ASR performance. [5], for instance, composes RNN-output lattices with a lexicon and word-level language model, while [6] simply re-scores beams with an external language model. [7,8] incorporate a character level language model during beam search, possibly disallowing character sequences absent from a dictionary, while [9] includes a full word level language model in decoding by simultaneously keeping track of word histories and word prefixes. As our approach does not change any aspect of the traditional decoding process in end-to-end ASR, the methods mentioned above can still be used in conjunction with our MMDA network.
An alternative method, proposed for NMT, augments the source (input) with "synthetic" data obtained via backtranslation from monolingual target-side data [10]. We draw inspiration from this approach and attempt to augment the ASR input with text-based synthesized input generated from large text corpora.

Figure 1: (a) highlights the network engaged when acoustic features are given as input to an acoustic encoder (shaded blue); (b) alternatively, when synthetic input is supplied, the network uses an augmenting encoder (green). In both cases a shared attention mechanism and decoder are used to predict the output sequence. For simplicity we show 2 layers without down-sampling in the acoustic encoder and omit the input embedding layer in the augmenting encoder.
Approach
MMDA Architecture

While text-based augmenting input data is a natural fit for NMT, it cannot be directly used in end-to-end ASR systems, which expect acoustic input. To utilize text-based input, we use two separate encoders in our ASR architecture: one for acoustic input and another for synthetic text-based augmenting input. Figure 1 gives an overview of our proposed architecture. Figure 1a shows a sequence of acoustic frames {x0, x1, . . .} fed into an acoustic encoder shown with blue cross-hatching. The attention mechanism takes the output of the encoder and generates a context vector (gray cross-hatching) which is utilized by the decoder (red cross-hatching) to generate each token in the output sequence {y0, y1, . . .}. In Figure 1b, the network is given a sequence of "synthetic" input tokens {z0, z1, . . .}, where zi ∈ Z and the set Z is the vocabulary of the synthetic input. The size of and the items in Z depend on the type of synthetic input scheme used (see Table 1 for examples and Section 5.2 for more details). As the synthetic inputs are categorical, we use an input embedding layer which learns a vector representation of each symbol in Z. The vector representation is then fed into an augmenting encoder (shown in green cross-hatching). Following this, the same attention mechanism and decoder are used to generate an output sequence. Note that some details, such as the exact number of layers, down-sampling in the acoustic encoder, and the embedding layer in the augmenting encoder, are omitted in Figure 1 for the sake of clarity.

Table 1: Example synthetic input sequences.
Synthetic Input   | Example Sequence
Charstream        | J O H N B L A R E A N D C O M P A N Y
Phonestream       | JH AA1 N B L EH1 R AE1 N D K AH1 M P AH0 N IY0
Rep-Phonestream   | JH JH JH AA1 AA1 AA1 AA1 N B L L L EH1 R AE1 AE1 AE1 N D K K K AH1 AH1 M M P AH0 AH0 AH0 N IY0 IY0 IY0
Synthetic Inputs
A desirable synthetic input should be easy to construct from plain text corpora and should be as similar as possible to acoustic input. We propose three types of synthetic inputs that can be generated easily from text corpora, with varied similarity to acoustic inputs (see Table 1 for examples; a toy implementation follows the list below).
1. Charstream: The output character sequence is supplied as synthetic input without word boundaries.
2. Phonestream: We make use of a pronunciation lexicon to expand words into phonemes; unknown pronunciations are recovered via grapheme-to-phoneme transduction (G2P).
3. Rep-Phonestream: We explicitly model phoneme duration by repeating each phoneme such that the relative durations of phonemes mimic what is observed in real data (e.g., vowels last longer than stop consonants).
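The following toy rendering covers the first two schemes (the duration-based third scheme is sketched in Section 5.2 on generating synthetic input). The hand-written LEXICON is a stand-in assumption; a real setup would use CMUDICT plus a trained G2P model.

```python
# Toy versions of the Charstream and Phonestream schemes. Out-of-vocabulary
# words get a single <unk> phoneme.
LEXICON = {"JOHN": ["JH", "AA1", "N"], "BLARE": ["B", "L", "EH1", "R"],
           "AND": ["AE1", "N", "D"],
           "COMPANY": ["K", "AH1", "M", "P", "AH0", "N", "IY0"]}

def charstream(sentence):
    # Character sequence with word boundaries removed.
    return list(sentence.replace(" ", ""))

def phonestream(sentence):
    # Expand each word into phonemes via the lexicon.
    phones = []
    for word in sentence.split():
        phones.extend(LEXICON.get(word, ["<unk>"]))
    return phones

print(charstream("JOHN BLARE AND COMPANY"))
print(phonestream("JOHN BLARE AND COMPANY"))
```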
Multi-task Training
Let D be the ASR dataset, with acoustic input and character sequence output pairs $(X_j, y_j)$, where $j \in \{1, \ldots, |D|\}$. Using a text corpus S with sentences $s_k$, where $k \in \{1, \ldots, |S|\}$, we can generate synthetic inputs $z_k = \mathrm{syn}(s_k)$, where $\mathrm{syn}(\cdot)$ is one of the synthetic input creation schemes discussed in the previous section. Under the assumption that both $y_j$ and $s_k$ are sequences over the same character vocabulary and from the same language, our augmenting dataset A comprises training pairs $(z_k, s_k)$, $k \in \{1, \ldots, |S|\}$. Typically the corpus S is much larger than the original ASR training set D. During training, we alternate between batches from the acoustic training data D (primary task) and the synthesized augmenting data A (secondary task). In each batch we maximize either the primary objective or the secondary objective. Note that in both cases the attention and decoder parameters (denoted by $\theta_{att}$ and $\theta_{dec}$; see Equation 1) are shared, while the acoustic encoder parameters ($\theta_{enc}$) and augmenting encoder parameters ($\theta_{aug}$) are only updated in their respective training batches.
$$L(\theta) = \begin{cases} \log P(y \mid X;\, \theta_{enc}, \theta_{att}, \theta_{dec}) & \text{(primary objective)} \\ \log P(s \mid z;\, \theta_{aug}, \theta_{att}, \theta_{dec}) & \text{(secondary objective)} \end{cases} \quad (1)$$
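A schematic of this alternating update is given below, assuming a model with the interface of the earlier sketch; the loader names and batch shapes are placeholders.

```python
import torch.nn.functional as F

def train_epoch(model, optimizer, acoustic_loader, augmenting_loader):
    # Alternate 1:1 between acoustic (primary) and synthetic (secondary)
    # batches. The shared attention/decoder parameters receive gradients
    # from both; each encoder is updated only on its own batches.
    for (X, y), (z, s) in zip(acoustic_loader, augmenting_loader):
        for inputs, targets, synthetic in ((X, y, False), (z, s, True)):
            optimizer.zero_grad()
            logits = model(inputs, is_synthetic=synthetic)
            loss = F.cross_entropy(logits.transpose(1, 2), targets)
            loss.backward()
            optimizer.step()
```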
We evaluate our model on a held-out ASR dataset that contains only acoustic batches, as our ultimate goal is to obtain the best ASR system. In the remainder of the paper, we place our work in the context of other multi-modal, multi-task, and data-augmentation schemes for ASR. We propose a novel architecture to seamlessly train on both text (with synthetic inputs) and speech corpora. We analyze the merit of these approaches on WSJ, and finally report the performance of our best-performing architecture on WSJ [11], HUB4 Spanish [12], and the Voxforge Italian corpus [13].
Related Work
Augmenting the ASR source with synthetically generated data is already a widely used technique. Generally, label-preserving perturbations are applied to the ASR source to ensure that the system is robust to variations in source-side data not seen in training. Such perturbations include vocal tract length perturbation (VTLP), used in [14] to expose the ASR system to a variety of synthetic speaker variations, as well as speed, tempo, and volume perturbations [15]. Speech is also commonly corrupted with synthetic noise or reverberation [16, 17].
Importantly, these perturbations are added to help learn more robust acoustic representations, but not to expose the ASR system to new output utterances, nor do they alter the network architecture. By contrast, our proposed method for data augmentation from external text exposes the ASR system to new output utterances, rather than to new acoustic inputs.
Another line of work involves data augmentation for NMT. In [18], improvements in low-resource settings were obtained by simply copying the source-side (input) monolingual data to the target side (output). Our approach is loosely based on [10], which improves NMT performance by creating pseudo-parallel data using an auxiliary translation model run in the reverse direction on target-side text.
Previous work has also tried to incorporate other modalities during both training and testing, but has focused primarily on learning better feature representations via correlative objective functions or on fusing representations across modalities [19, 20]. The fusion methods require both modalities to be present at test time, while the multi-view methods require both views to be conditionally independent given a common source. Our method has no such requirements and only makes use of the alternate modality during training.
Lastly, we note that considerable work has applied multi-task training to "end-to-end" ASR. In [21], the CTC objective is used as an auxiliary task to force the attention mechanism to learn monotonic alignments between input and output. In [22], a multi-task framework is used to jointly perform language identification and speech-to-text in a multilingual ASR setting. In this work, our use of phoneme-based augmenting data effectively employs G2P (P2G) as an auxiliary task in end-to-end ASR, though only implicitly.
Method
Our MMDA architecture is a straightforward extension of the attention-based encoder-decoder network [23]; its components are described as follows.
Acoustic Encoder
For a single utterance, the acoustic frames form a matrix $X \in \mathbb{R}^{L_x \times D_x}$, where $L_x$ and $D_x$ are the length of the utterance in frames and the number of acoustic features per frame, respectively. The frames are encoded by a multi-layer bi-directional LSTM (biLSTM) with hidden dimension H for each direction. After each layer's encoding, the hidden vectors in $\mathbb{R}^{2H}$ are projected back to vectors in $\mathbb{R}^{H}$ using a projection layer and fed as input into the next layer. We also use a pyramidal encoder following [24] to down-sample the frame encodings and capture a coarser-grained resolution.
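A minimal sketch of the down-sampling step between encoder layers, in which adjacent frame encodings are concatenated to halve the time axis (a common realization of the pyramidal scheme; the exact reduction used may differ):

```python
import torch

def pyramid_subsample(h):
    """Concatenate adjacent frames: (B, T, D) -> (B, T // 2, 2 * D)."""
    B, T, D = h.shape
    if T % 2 == 1:                  # drop a trailing frame so T is even
        h, T = h[:, :-1, :], T - 1
    return h.reshape(B, T // 2, 2 * D)

h = torch.randn(4, 100, 320)
print(pyramid_subsample(h).shape)   # torch.Size([4, 50, 640])
```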
Augmenting Encoder
The augmenting encoder is a single-layer biLSTM, essentially a "shallow" acoustic encoder. As the synthetic input is symbolic (e.g., phonemes or characters), we use an embedding layer which learns a real-valued vector representation for each symbol, thus converting a sequence of symbols $z \in \mathcal{Z}^{L_z \times 1}$ into a matrix $Z \in \mathbb{R}^{L_z \times D_z}$, where $\mathcal{Z}$ is the set of possible augmenting input symbols, and $L_z$ and $D_z$ are the length of the augmenting input sequence and the embedding size, respectively. We set $D_x = D_z$ to ensure that the acoustic and augmenting encoders work smoothly with the shared attention mechanism.
Decoder
We used a uni-directional LSTM for the decoder [23,25].
$$s_j = \mathrm{LSTM}(y_{j-1}, s_{j-1}, c_j) \quad (2)$$
where $y_{j-1}$ is the embedding of the last output token, $s_{j-1}$ is the LSTM hidden state at the previous time step, and $c_j$ is the attention-based context vector discussed in the following section. We omit all layer index notation for simplicity. The hidden state of the final LSTM layer is passed through a linear transformation followed by a softmax layer, generating a probability distribution over the output tokens.
Attention Mechanism
We used location-aware attention [26], which extends the content-based attention mechanism [23] by using the attention weights from the previous output time step, $\alpha_{j-1}$, when computing the weights $\alpha_j$ for the current output. The previous time step's attention weights $\alpha_{j-1}$ are "smoothed" by a convolution operation and fed into the attention weight computation. Once the attention weights are computed, a weighted sum over the encoder hidden states generates the context vector $c_j$.
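A condensed sketch of the mechanism follows; the dimensions and convolution configuration are illustrative choices, not the ESPnet implementation.

```python
import torch
import torch.nn as nn

class LocationAwareAttention(nn.Module):
    def __init__(self, dec_dim, enc_dim, att_dim, conv_ch=10, kernel=101):
        super().__init__()
        self.W = nn.Linear(dec_dim, att_dim, bias=False)
        self.V = nn.Linear(enc_dim, att_dim, bias=False)
        self.U = nn.Linear(conv_ch, att_dim, bias=False)
        # 1-D convolution that "smooths" the previous attention weights.
        self.conv = nn.Conv1d(1, conv_ch, kernel, padding=kernel // 2)
        self.w = nn.Linear(att_dim, 1, bias=False)

    def forward(self, s_prev, h, alpha_prev):
        # s_prev: (B, dec_dim); h: (B, T, enc_dim); alpha_prev: (B, T)
        f = self.conv(alpha_prev.unsqueeze(1)).transpose(1, 2)  # (B, T, ch)
        scores = self.w(torch.tanh(
            self.W(s_prev).unsqueeze(1) + self.V(h) + self.U(f))).squeeze(-1)
        alpha = torch.softmax(scores, dim=-1)                   # (B, T)
        context = torch.bmm(alpha.unsqueeze(1), h).squeeze(1)   # (B, enc_dim)
        return context, alpha
```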
Experiments
Data
We compared the proposed types of synthetic data by evaluating the character and word error rates (CER, WER) of ASR systems trained on the Wall Street Journal corpus (LDC93S6B and LDC94S13B), using the standard SI-284 set containing ∼37K utterances, or 80 hours of speech. We used the "dev93" set as a development set and as the selection criterion for the best model, which was then evaluated on the "eval92" set. We also tested the performance of MMDA using the best-performing synthetic input type on the HUB4 Spanish and Voxforge Italian datasets. The HUB4 Spanish corpus consists of 30 hours of 16 kHz speech from three different broadcast news sources [12]. We used the same evaluation set as the Kaldi HUB4 Spanish recipe [27], and constructed a development set with the same number of utterances as the evaluation set by randomly selecting from the remaining training data.
For the Voxforge Italian corpus, which consists of 16 hours of broadband speech [13], we created training, development, and evaluation sets by randomly selecting 80%, 10%, and 10% of the data, respectively, ensuring that no sentence was repeated across sets. In all experiments, we represented each frame of audio by a vector of 83 dimensions (80 Mel-filterbank coefficients and 3 pitch features).
Generating Synthetic Input
The augmenting data used for the WSJ experiments was generated from sections (13-32.1, 87, 88, 89) of WSJ, which are typically used for training the language models applied during decoding. We generated three different synthetic inputs for this portion of WSJ. For the Charstream synthetic input, the target-side character sequence was copied to the input side with word boundaries omitted. For the Phonestream synthetic input, we constructed phone sequences using CMUDICT, to which 46k words from the WSJ corpus were added [28], as the lexicon described in Section 5.2. We trained the G2P model on CMUDICT using the Phonetisaurus toolkit. For certain words consisting only of rare graphemes, we were unable to infer pronunciations and simply assigned these words a single unk phoneme. Finally, we filtered out sentences with more than 1 unk phoneme symbol, as well as those above 250 characters in length. The resulting augmenting dataset contained ∼1.5M sentences.
In the Rep-Phonestream scheme, we modified the augmenting input phonemes to further emulate the ASR input by modeling the variable durations of phonemes. We assumed that a phoneme's duration in frames is normally distributed, $\mathcal{N}(\mu_p, \sigma_p^2)$, and we estimated these distributions for each phoneme from frame-level phoneme transcripts in the TIMIT dataset. For example, given a phoneme sequence such as JH AA N (for the word "John"), we would sample a frame duration $f_p \sim \mathcal{N}(\mu_p, \sigma_p^2)$ for each $p \in \{\mathrm{JH}, \mathrm{AA}, \mathrm{N}\}$ and repeat each phoneme $r$ times, where $r = \max(1, \mathrm{Round}(f_p)/4)$. Dividing by 4 accounts for the down-sampling performed by the pyramidal scheme in the acoustic encoder.
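A toy version of this duration sampling is shown below, with made-up Gaussian statistics standing in for the TIMIT estimates.

```python
import random

# Hypothetical per-phoneme (mean, std) frame durations; the paper estimates
# these from frame-level TIMIT transcripts.
DUR_STATS = {"JH": (8.0, 2.0), "AA1": (14.0, 3.0), "N": (10.0, 2.5)}

def repeat_phones(phones, downsample=4):
    out = []
    for p in phones:
        mu, sigma = DUR_STATS.get(p, (4.0, 1.0))
        f = random.gauss(mu, sigma)            # sampled duration in frames
        r = max(1, round(f) // downsample)     # divide by 4 to mirror the
        out.extend([p] * r)                    # pyramidal down-sampling
    return out

print(repeat_phones(["JH", "AA1", "N"]))
```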
The augmenting data for both Spanish and Italian was generated using Wikipedia data dumps¹ and then scraping Wiktionary with wikt2pron² for pronunciations of all words seen in the text. We used the resulting seed pronunciation lexicon for G2P training as before, and again filtered out long sentences and those containing unk words after phonemic expansion of all words in the augmenting data. To generate the Rep-Phonestream data, we manually mapped TIMIT phonemes to similar Italian and Spanish phonemes and applied the corresponding durations learned on TIMIT.
Training
We implemented our MMDA model on top of ESPnet using the PyTorch backend [21, 29]. A 4-layer biLSTM with a "pyramidal" structure was used for the acoustic encoder [6]. The biLSTMs in the encoder used 320 hidden units (in each direction) followed by a projection layer. For the augmenting encoder, we used a single-layer biLSTM with the same number of units and the same projection scheme as the acoustic encoder. No down-sampling was performed on the augmenting input. Location-aware attention was used in all experiments [26]. For the WSJ experiments, the decoder was a 2-layer LSTM with 300 hidden units, while a single layer was used for both Spanish and Italian. We used Adadelta to optimize all models for 15 epochs [30]. The model with the best validation accuracy (at the end of each epoch) was used for evaluation. For decoding, a beam size of 10 was used for WSJ and 20 for Spanish and Italian. In both cases we restricted the output using minimum-length and maximum-length thresholds, set to 0.3F and 0.8F, where F denotes the length of the down-sampled input. For RNNLM integration, we trained a 2-layer LSTM language model with 650 hidden units. The RNNLM for each experiment was trained on the same sentences used for augmentation.
Results
Table 2 (part 1) shows the ASR results on WSJ. Rep-Phonestream augmentation improved the baseline WER by a margin of 2% absolute, while none of the other augmentations helped. This corroborates our intuition that data augmentation works better when synthetic inputs are similar to the real training data. Furthermore, we continued to observe gains in WER when an RNNLM was incorporated in the decoding process [8]. This suggests that while MMDA and an external LM have a similar effect, they can still be used in conjunction to extract further improvement.
The best-performing synthetic input scheme was then applied to Spanish and Italian, where a similar trend was observed: MMDA consistently achieved better WER and obtained small improvements in CER (see Table 2, parts 2 and 3). The relative gains on English (WSJ) were higher than on Spanish and Italian; we suspect the ad-hoc phone duration mapping we employed for these languages and a mismatch in the augmenting text data might have contributed to the lower relative gains.
We found that the Rep-Phonestream MMDA system tended to replace entire words when incorrect, while the baseline system incorrectly changed a few characters in a word, even if the resulting word did not exist in English (for WSJ). This behavior tended to improve WER while harming CER; for example, in the WSJ experiments, the baseline substitutes QUOTA with COLOTA, while the Rep-Phonestream MMDA predicts COLORS. We verified this hypothesis by computing the ratio of substitutions and insertions resulting in nonsense words to the total number of such errors on the WSJ development and evaluation data for the baseline system and MMDA, both with and without RNNLM re-scoring. We see that RNNLM re-scoring actually behaves like MMDA in this regard (see Table 3).
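This analysis can be reproduced with a few lines, given the substituted/inserted hypothesis words from an edit-distance alignment and a reference vocabulary; the error words below are the examples quoted in the text, and the tiny vocabulary is purely illustrative.

```python
# Classify substituted/inserted hypothesis words as "nonsense" (not a legal
# English word) or "legal" against a reference vocabulary.
def error_type_ratio(errors, vocabulary):
    nonsense = sum(1 for w in errors if w not in vocabulary)
    return nonsense / len(errors), (len(errors) - nonsense) / len(errors)

vocab = {"QUOTA", "COLORS", "BOEING"}
errs = ["COLOTA", "COLORS", "ACCINO", "BOLDING"]
print(error_type_ratio(errs, vocab))   # (0.75, 0.25) for this toy input
```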
Future Work
Enhancing MMDA
We identify three possible future research directions:
(i) Augmenting encoders: More elaborate designs for the augmenting encoder could be used to generate more "speech-like" encodings from symbolic synthetic inputs.
(ii) Synthetic inputs: Other synthetic inputs should be explored, as our choice was motivated in large part by simplicity and speed of generating synthetic inputs. Using something approaching a text-to-speech system to generate augmenting data may be greatly beneficial.
(iii) Training schedules: Is the 1:1 ratio for augmenting and acoustic training ideal? Using more augmenting data initially may be beneficial, and a systematic study of various training schedules would reveal more insights. Furthermore, automatically adjusting the amount of augmenting data used also seems worthy of inquiry.
Applications
Our framework is easily extended to other end-to-end sequence transduction applications, examples of which include domain adaptation and speech translation. To adapt ASR to a new domain (or even a new dialect or language), we can train on additional augmenting data derived from the new domain (or dialect/language). We also believe the MMDA framework may be well suited to speech translation due to its similarity to backtranslation in NMT.
Conclusion
We proposed the MMDA framework, which exposes an end-to-end ASR system to a much wider range of training data. To the best of our knowledge, this is the first attempt at truly end-to-end multi-modal data augmentation for ASR. Experiments show promising results for our MMDA architecture, and we highlight possible extensions and future research in this area.
Figure 1: Overview of our Multi-modal Data Augmentation (MMDA) model. (a) When acoustic features are given as input, the acoustic encoder (blue) is engaged. (b) When synthetic input is supplied, the augmenting encoder (green) is used. In both cases a shared attention mechanism and decoder predict the output sequence. For simplicity, 2 layers are shown without down-sampling in the acoustic encoder, and the input embedding layer in the augmenting encoder is omitted.
Table 1: Examples of sequences under different synthetic input generation schemes. The original text for these examples is the phrase JOHN BLARE AND COMPANY.

Charstream:      J O H N B L A R E A N D C O M P A N Y
Phonestream:     JH AA1 N B L EH1 R AE1 N D K AH1 M P AH0 N IY0
Rep-Phonestream: JH JH JH AA1 AA1 AA1 AA1 N B L L L EH1 R AE1 AE1 AE1 N D K K K AH1 AH1 M M P AH0 AH0 AH0 N IY0 IY0 IY0
Table 2: Experiments on the WSJ corpus using different augmentation input types (part 1). The best-performing augmentation was then applied to the Italian (Voxforge) and Spanish (HUB4) datasets (parts 2 and 3 of the table).

Corpus              Augmentation            CER (eval, dev)   WER (eval, dev)
English (WSJ)       No-Augmentation         7.0, 9.9          19.5, 24.8
                    Charstream              7.5, 10.5         20.3, 25.7
                    Phonestream             7.4, 10.1         20.4, 25.3
                    Rep-Phonestream         7.1, 9.8          17.5, 22.7
                    No-Augmentation + LM    7.0, 9.8          17.2, 22.2
                    Rep-Phonestream + LM    6.7, 9.4          16.0, 20.8
Italian (Voxforge)  No-Augmentation + LM    16.4, 15.9        47.2, 46.1
                    Rep-Phonestream + LM    14.8, 14.5        44.3, 44.0
Spanish (HUB4)      No-Augmentation + LM    12.6, 12.8        31.5, 33.5
                    Rep-Phonestream + LM    12.1, 13.1        29.5, 32.6
Table 3: Error type differences between the Rep-Phonestream MMDA trained system and the baseline system on WSJ (dev and test combined). "Nonsense errors" are substitutions or insertions that result in non-legal English words, e.g., CASINO substituted with ACCINO. "Legal errors" are errors that result in legal English words, e.g., BOEING substituted with BOLDING.

Augmentation            Nonsense errors %   Legal errors %
No-Augmentation         32.93               67.07
No-Augmentation + LM    24.34               75.66
Rep-Phonestream         24.99               75.11
Rep-Phonestream + LM    20.25               79.75
1 https://dumps.wikimedia.org/backup-index.html
2 https://github.com/abuccts/wikt2pron
[1] C.-C. Chiu, T. N. Sainath, Y. Wu, R. Prabhavalkar, P. Nguyen, Z. Chen, A. Kannan, R. J. Weiss, K. Rao, K. Gonina et al., "State-of-the-art speech recognition with sequence-to-sequence models," arXiv preprint arXiv:1712.01769, 2017.
[2] V. Panayotov, G. Chen, D. Povey, and S. Khudanpur, "Librispeech: An ASR corpus based on public domain audio books," in Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), 2015.
[3] C. Cieri, D. Graff, O. Kimball, D. Miller, and K. Walker, "Fisher English training speech part 1 transcripts," Philadelphia: Linguistic Data Consortium, 2004.
[4] C. Cieri, D. Graff, O. Kimball, D. Miller, and K. Walker, "Fisher English training part 2," Linguistic Data Consortium, Philadelphia, 2005.
[5] Y. Miao, M. Gowayyed, and F. Metze, "EESEN: End-to-end speech recognition using deep RNN models and WFST-based decoding," in IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), 2015, pp. 167-174.
[6] W. Chan, N. Jaitly, Q. V. Le, and O. Vinyals, "Listen, attend and spell: A neural network for large vocabulary conversational speech recognition," in Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), 2015.
[7] A. Maas, Z. Xie, D. Jurafsky, and A. Ng, "Lexicon-free conversational speech recognition with neural networks," in Proc. 2015 Conf. North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2015, pp. 345-354.
[8] T. Hori, S. Watanabe, Y. Zhang, and W. Chan, "Advances in joint CTC-attention based end-to-end speech recognition with a deep CNN encoder and RNN-LM," in Interspeech, 2017, pp. 949-953.
[9] A. Graves and N. Jaitly, "Towards end-to-end speech recognition with recurrent neural networks," in Proc. Int. Conf. Machine Learning (ICML), 2014, pp. 1764-1772.
[10] R. Sennrich, B. Haddow, and A. Birch, "Improving neural machine translation models with monolingual data," in Proc. 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Berlin, Germany, 2016, pp. 86-96.
[11] D. B. Paul and J. M. Baker, "The design for the Wall Street Journal-based CSR corpus," in Proc. Workshop on Speech and Natural Language. Association for Computational Linguistics, 1992, pp. 357-362.
[12] Linguistic Data Consortium, "1997 Spanish broadcast news speech (HUB4-NE), LDC98S74," Philadelphia, 1998.
[13] Voxforge.org, "Free speech recognition," http://www.voxforge.org/, accessed 06/25/2014.
[14] A. Ragni, K. M. Knill, S. P. Rath, and M. J. Gales, "Data augmentation for low resource languages," in Fifteenth Annual Conference of the International Speech Communication Association, 2014.
[15] T. Ko, V. Peddinti, D. Povey, and S. Khudanpur, "Audio augmentation for speech recognition," in Sixteenth Annual Conference of the International Speech Communication Association, 2015.
[16] A. Hannun, C. Case, J. Casper, B. Catanzaro, G. Diamos, E. Elsen, R. Prenger, S. Satheesh, S. Sengupta, A. Coates et al., "Deep Speech: Scaling up end-to-end speech recognition," arXiv preprint arXiv:1412.5567, 2014.
[17] E. Vincent, S. Watanabe, A. A. Nugraha, J. Barker, and R. Marxer, "An analysis of environment, microphone and data simulation mismatches in robust speech recognition," Computer Speech & Language, vol. 46, pp. 535-557, 2017.
[18] A. Currey, A. V. M. Barone, and K. Heafield, "Copied monolingual data improves low-resource neural machine translation," in Proc. Second Conference on Machine Translation, 2017, pp. 148-156.
[19] R. Arora and K. Livescu, "Multi-view CCA-based acoustic features for phonetic recognition across speakers and domains," in Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), 2013, pp. 7135-7139.
[20] Y. Mroueh, E. Marcheret, and V. Goel, "Deep multimodal learning for audio-visual speech recognition," in Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), 2015, pp. 2130-2134.
[21] S. Watanabe, T. Hori, S. Kim, J. R. Hershey, and T. Hayashi, "Hybrid CTC/attention architecture for end-to-end speech recognition," IEEE Journal of Selected Topics in Signal Processing, vol. 11, no. 8, pp. 1240-1253, 2017.
[22] S. Toshniwal, T. N. Sainath, R. J. Weiss, B. Li, P. Moreno, E. Weinstein, and K. Rao, "Multilingual speech recognition with a single end-to-end model," arXiv preprint arXiv:1711.01694, 2017.
[23] D. Bahdanau, K. Cho, and Y. Bengio, "Neural machine translation by jointly learning to align and translate," arXiv preprint arXiv:1409.0473, 2014.
[24] W. Chan, N. Jaitly, Q. Le, and O. Vinyals, "Listen, attend and spell: A neural network for large vocabulary conversational speech recognition," in Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), 2016, pp. 4960-4964.
[25] J. Chorowski, D. Bahdanau, K. Cho, and Y. Bengio, "End-to-end continuous speech recognition using attention-based recurrent NN: First results," arXiv preprint arXiv:1412.1602, 2014.
[26] J. K. Chorowski, D. Bahdanau, D. Serdyuk, K. Cho, and Y. Bengio, "Attention-based models for speech recognition," in Advances in Neural Information Processing Systems (NIPS), 2015, pp. 577-585.
[27] D. Povey, A. Ghoshal, G. Boulianne, L. Burget, O. Glembek, N. Goel, M. Hannemann, P. Motlicek, Y. Qian, P. Schwarz et al., "The Kaldi speech recognition toolkit," in IEEE 2011 Workshop on Automatic Speech Recognition and Understanding. IEEE Signal Processing Society, 2011.
[28] J. Kominek and A. W. Black, "The CMU Arctic speech databases," in Fifth ISCA Workshop on Speech Synthesis, 2004.
[29] S. Kim, T. Hori, and S. Watanabe, "Joint CTC-attention based end-to-end speech recognition using multi-task learning," in Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), 2017, pp. 4835-4839.
[30] M. D. Zeiler, "ADADELTA: An adaptive learning rate method," arXiv preprint arXiv:1212.5701, 2012.
| [
"https://github.com/abuccts/wikt2pron"
]
|
[
"Minimax Lower Bounds for Kronecker-Structured Dictionary Learning",
"Minimax Lower Bounds for Kronecker-Structured Dictionary Learning",
"Minimax Lower Bounds for Kronecker-Structured Dictionary Learning",
"Minimax Lower Bounds for Kronecker-Structured Dictionary Learning"
]
| [
"Zahra Shakeri [email protected] \nDept. of Electrical and Computer Engineering\nRutgers University\n08854PiscatawayNew Jersey\n",
"Waheed U Bajwa [email protected] \nDept. of Electrical and Computer Engineering\nRutgers University\n08854PiscatawayNew Jersey\n",
"Anand D Sarwate [email protected] \nDept. of Electrical and Computer Engineering\nRutgers University\n08854PiscatawayNew Jersey\n",
"Zahra Shakeri [email protected] \nDept. of Electrical and Computer Engineering\nRutgers University\n08854PiscatawayNew Jersey\n",
"Waheed U Bajwa [email protected] \nDept. of Electrical and Computer Engineering\nRutgers University\n08854PiscatawayNew Jersey\n",
"Anand D Sarwate [email protected] \nDept. of Electrical and Computer Engineering\nRutgers University\n08854PiscatawayNew Jersey\n"
]
| [
"Dept. of Electrical and Computer Engineering\nRutgers University\n08854PiscatawayNew Jersey",
"Dept. of Electrical and Computer Engineering\nRutgers University\n08854PiscatawayNew Jersey",
"Dept. of Electrical and Computer Engineering\nRutgers University\n08854PiscatawayNew Jersey",
"Dept. of Electrical and Computer Engineering\nRutgers University\n08854PiscatawayNew Jersey",
"Dept. of Electrical and Computer Engineering\nRutgers University\n08854PiscatawayNew Jersey",
"Dept. of Electrical and Computer Engineering\nRutgers University\n08854PiscatawayNew Jersey"
]
| []
| Dictionary learning is the problem of estimating the collection of atomic elements that provide a sparse representation of measured/collected signals or data. This paper finds fundamental limits on the sample complexity of estimating dictionaries for tensor data by proving a lower bound on the minimax risk. This lower bound depends on the dimensions of the tensor and parameters of the generative model. The focus of this paper is on second-order tensor data, with the underlying dictionaries constructed by taking the Kronecker product of two smaller dictionaries and the observed data generated by sparse linear combinations of dictionary atoms observed through white Gaussian noise. In this regard, the paper provides a general lower bound on the minimax risk and also adapts the proof techniques for equivalent results using sparse and Gaussian coefficient models. The reported results suggest that the sample complexity of dictionary learning for tensor data can be significantly lower than that for unstructured data. | 10.1109/isit.2016.7541479 | [
"https://arxiv.org/pdf/1605.05284v1.pdf"
]
| 3,684,665 | 1605.05284 | f4ad4ccde49fa8ebeb83ee4e02c799d4591c02b7 |
Minimax Lower Bounds for Kronecker-Structured Dictionary Learning

Zahra Shakeri ([email protected]), Waheed U. Bajwa ([email protected]), and Anand D. Sarwate ([email protected])
Dept. of Electrical and Computer Engineering, Rutgers University, Piscataway, New Jersey 08854
Dictionary learning is the problem of estimating the collection of atomic elements that provide a sparse representation of measured/collected signals or data. This paper finds fundamental limits on the sample complexity of estimating dictionaries for tensor data by proving a lower bound on the minimax risk. This lower bound depends on the dimensions of the tensor and parameters of the generative model. The focus of this paper is on second-order tensor data, with the underlying dictionaries constructed by taking the Kronecker product of two smaller dictionaries and the observed data generated by sparse linear combinations of dictionary atoms observed through white Gaussian noise. In this regard, the paper provides a general lower bound on the minimax risk and also adapts the proof techniques for equivalent results using sparse and Gaussian coefficient models. The reported results suggest that the sample complexity of dictionary learning for tensor data can be significantly lower than that for unstructured data.
I. INTRODUCTION
Dictionary learning has recently received significant attention due to the increased importance of finding sparse representations of signals and data. In dictionary learning, the goal is to construct an overcomplete basis from input signals such that each signal can be described by a small number of atoms (columns) [1]. Although the existing literature has focused on one-dimensional data, many signals in practice are multidimensional and have a tensor structure; examples include 2-dimensional images and 3-dimensional signals produced by magnetic resonance imaging or computed tomography systems. In traditional dictionary learning techniques, multidimensional data are processed after vectorization of the signals, which can result in poor sparse representations because the structure of the data is neglected [2].
In this paper, we provide fundamental limits on learning dictionaries for multi-dimensional data with tensor structure; we call such dictionaries Kronecker-structured (KS). Several algorithms have been proposed to learn KS dictionaries [2]-[7], but there has been little work on the theoretical guarantees of such algorithms. The lower bounds we provide on the minimax risk of learning a KS dictionary give a benchmark against which to evaluate the performance of existing algorithms.
In terms of relation to prior work, theoretical insights into classical dictionary learning techniques [8]-[16] have either focused on the achievability of existing algorithms [8]-[14] or on lower bounds on the minimax risk for one-dimensional data [15], [16]. The former works provide sample complexity results for reliable dictionary estimation based on appropriate minimization criteria [8]-[14]. Specifically, given a probabilistic model for sparse signals and a finite number of samples, a dictionary is recoverable within some distance of the true dictionary as a local minimum of some minimization criterion [12]-[14]. In contrast, works like Jung et al. [15], [16] provide minimax lower bounds for dictionary learning under several coefficient vector distributions and discuss a regime in which the bounds are tight for some signal-to-noise ratio (SNR) values. In particular, for a dictionary $D \in \mathbb{R}^{m \times p}$ and neighborhood radius r, they show that $N = O(r^2 mp)$ samples suffice for reliable recovery of the dictionary within its local neighborhood.

(The work of the authors was supported in part by the National Science Foundation under awards CCF-1525276 and CCF-1453073, and by the Army Research Office under award W911NF-14-1-0295.)
While our work is related to that of Jung et al. [15], [16], our main contribution is providing lower bounds for the minimax risk of dictionaries consisting of two coordinate dictionaries that sparsely represent 2-dimensional tensor data. The full version of this work generalizes the results to higher-order tensors [17]. The main approach taken in this regard is the well-understood technique of lower bounding the minimax risk in nonparametric estimation by the maximum probability of error in a carefully constructed multiple hypothesis testing problem [18], [19]. As such, our general approach is similar to the vector case [16]. Nonetheless, the major challenge in such minimax risk analyses is the construction of appropriate multiple hypotheses, which are fundamentally different in our problem setup due to the Kronecker structure of the true dictionary. In particular, for a dictionary D consisting of the Kronecker product of two coordinate dictionaries $A \in \mathbb{R}^{m_1 \times p_1}$ and $B \in \mathbb{R}^{m_2 \times p_2}$, where $m = m_1 m_2$ and $p = p_1 p_2$, our analysis reduces the sample complexity from $O(r^2 mp)$ for vectorized data [16] to $O(r^2(m_1 p_1 + m_2 p_2))$. Our results hold even when one of the coordinate dictionaries is not overcomplete (note that both A and B cannot be undercomplete, since otherwise D would not be overcomplete). Like previous work [16], our analysis is local, and our lower bounds depend on the distribution of the multidimensional data. Finally, some of our analysis relies on the availability of side information about the signal samples, which suggests that the lower bounds can be improved by deriving them in the absence of such side information.
Notational Convention: Underlined bold upper-case, bold upper-case, bold lower-case, and lower-case letters are used to denote tensors, matrices, vectors, and scalars, respectively. We write [K] for {1, . . . , K}. The k-th column of a matrix X is denoted by $x_k$, $X_{\mathcal{I}}$ denotes the matrix consisting of the columns of X with indices $\mathcal{I}$, $\langle X \rangle$ denotes the sum of all elements of X, and $I_d$ denotes the d × d identity matrix. Also, $\|v\|_0$ and $\|v\|_2$ denote the $\ell_0$ and $\ell_2$ norms of the vector v, respectively, while $\|X\|_2$ and $\|X\|_F$ denote the spectral and Frobenius norms of X, respectively.

We write $X_1 \otimes X_2$ for the Kronecker product of two matrices $X_1 \in \mathbb{R}^{m \times n}$ and $X_2 \in \mathbb{R}^{p \times q}$; the result is an mp × nq matrix. Given $X_1 \in \mathbb{R}^{m \times n}$ and $X_2 \in \mathbb{R}^{p \times n}$, we write $X_1 * X_2$ for their mp × n Khatri-Rao product [20], which is essentially the column-wise Kronecker product of the two matrices. Given two matrices of the same dimension, $X_1, X_2 \in \mathbb{R}^{m \times n}$, their m × n Hadamard product, i.e., their element-wise product, is denoted by $X_1 \odot X_2$. For matrices $X_1$ and $X_2$, we define their distance to be $\|X_1 - X_2\|_F$. We use $f(\varepsilon) = O(g(\varepsilon))$ if $\lim_{\varepsilon \to 0} f(\varepsilon)/g(\varepsilon) = c < \infty$ for some constant c.
II. BACKGROUND AND PROBLEM FORMULATION
In the conventional dictionary learning setup, it is assumed that an observation $y \in \mathbb{R}^m$ is generated via a fixed dictionary,
$$y = Dx + n, \quad (1)$$
in which the dictionary $D \in \mathbb{R}^{m \times p}$ is an overcomplete basis ($m < p$) with unit-norm columns, $x \in \mathbb{R}^p$ is the coefficient vector, and $n \in \mathbb{R}^m$ is the underlying noise vector. In contrast to this conventional setup, our focus in this paper is on second-order tensor data. Consider the 2-dimensional observation $Y \in \mathbb{R}^{m_1 \times m_2}$. Using any separable transform, Y can be written as
$$Y = (T_1^{-1})^T X T_2^{-1}, \quad (2)$$
where $X \in \mathbb{R}^{p_1 \times p_2}$ is the matrix of coefficients and $T_1 \in \mathbb{R}^{p_1 \times m_1}$ and $T_2 \in \mathbb{R}^{p_2 \times m_2}$ are non-singular matrices transforming the columns and rows of Y, respectively. Defining $A \triangleq (T_2^{-1})^T$ and $B \triangleq (T_1^{-1})^T$, we can use a property of Kronecker products [21], $\mathrm{vec}(BXA^T) = (A \otimes B)\,\mathrm{vec}(X)$, to get the following expression for $y \triangleq \mathrm{vec}(Y)$:
$$y = (A \otimes B)x + n \quad (3)$$
for the coefficient vector $x \triangleq \mathrm{vec}(X) \in \mathbb{R}^p$ and noise vector $n \in \mathbb{R}^m$, where $p \triangleq p_1 p_2$ and $m \triangleq m_1 m_2$. In this work, we assume N independent and identically distributed (i.i.d.) noisy observations $y_k$ generated according to the model in (3). Concatenating these observations into $Y \in \mathbb{R}^{m \times N}$, we have
$$Y = DX + N, \quad (4)$$
where $D \triangleq A \otimes B$ is the unknown KS dictionary, $X \in \mathbb{R}^{p \times N}$ is the coefficient matrix, which we initially assume to consist of zero-mean random coefficient vectors with known distribution and covariance $\Sigma_x$, and $N \in \mathbb{R}^{m \times N}$ is additive white Gaussian noise (AWGN) with zero mean and variance $\sigma^2$.
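The Kronecker identity used to arrive at (3) is easy to verify numerically; the following NumPy snippet, with arbitrary dimensions, checks that $\mathrm{vec}(BXA^T) = (A \otimes B)\,\mathrm{vec}(X)$ under column-major vectorization.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 4))   # acts on the columns of X
X = rng.standard_normal((4, 5))
A = rng.standard_normal((2, 5))   # acts on the rows of X

# Column-major (Fortran-order) vectorization matches the math convention.
vec = lambda M: M.flatten(order="F")
lhs = vec(B @ X @ A.T)
rhs = np.kron(A, B) @ vec(X)
print(np.allclose(lhs, rhs))      # True
```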
Our main goal in this paper is to derive conditions under which the dictionary D can possibly be learned from the noisy observations given in (4). In this regard, we assume the true KS dictionary D consists of unit-norm columns, and we carry out a local analysis. That is, the true KS dictionary D is assumed to belong to a neighborhood around a fixed (normalized) reference KS dictionary $D_0 = A_0 \otimes B_0$, i.e., $\|a_{0,j}\|_2 = 1 \; \forall j \in [p_1]$, $\|b_{0,j}\|_2 = 1 \; \forall j \in [p_2]$, and $D_0 \in \mathcal{D}$:
$$\mathcal{D} \triangleq \left\{ D \in \mathbb{R}^{m \times p} : \|d_j\|_2 = 1 \; \forall j \in [p],\; D = A \otimes B,\; A \in \mathbb{R}^{m_1 \times p_1},\; B \in \mathbb{R}^{m_2 \times p_2} \right\}, \quad (5)$$
and
$$D \in \mathcal{X}(D_0, r) \triangleq \left\{ D' \in \mathcal{D} : \|D' - D_0\|_F^2 < r \right\}, \quad (6)$$
where the radius r is known. It is worth noting here that, similar to the analysis for vector data [16], our analysis is applicable to the global KS dictionary learning problem. Finally, some of our analysis also relies on the notion of the restricted isometry property (RIP). Specifically, D satisfies the RIP of order s with constant $\delta_s$ if, for all s-sparse x,
$$(1 - \delta_s)\|x\|_2^2 \leq \|Dx\|_2^2 \leq (1 + \delta_s)\|x\|_2^2. \quad (7)$$
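As an aside, (7) can be probed empirically. The sketch below samples random s-sparse vectors and checks the inequality for a random Gaussian dictionary; all parameter values are arbitrary, and note that certifying RIP exactly is computationally hard in general, so this is only a sanity check on sampled vectors.

```python
import numpy as np

rng = np.random.default_rng(1)
m, p, s, delta = 128, 256, 4, 0.5
D = rng.standard_normal((m, p)) / np.sqrt(m)   # typical RIP-friendly scaling

ratios = []
for _ in range(1000):
    x = np.zeros(p)
    support = rng.choice(p, size=s, replace=False)
    x[support] = rng.standard_normal(s)
    ratios.append(np.linalg.norm(D @ x) ** 2 / np.linalg.norm(x) ** 2)

frac = np.mean([(1 - delta) <= r <= (1 + delta) for r in ratios])
print(f"min={min(ratios):.2f}, max={max(ratios):.2f}, "
      f"fraction inside 1 +/- delta: {frac:.3f}")
```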
A. Minimax risk analysis
We are interested in lower bounding the minimax risk for estimating D based on the observations Y, which is defined as the worst-case mean squared error (MSE) achievable by the best KS dictionary estimator $\widehat{D}(Y)$. That is,
$$\varepsilon^* = \inf_{\widehat{D}} \sup_{D \in \mathcal{X}(D_0, r)} \mathbb{E}_Y\left[\|\widehat{D}(Y) - D\|_F^2\right]. \quad (8)$$
In order to lower bound this minimax risk $\varepsilon^*$, we resort to the multiple hypothesis testing approach taken in the literature on nonparametric estimation [18], [19]. This approach is equivalent to generating a KS dictionary $D_l$ uniformly at random from a carefully constructed class $\mathcal{D}_L = \{D_1, \ldots, D_L\} \subseteq \mathcal{X}(D_0, r)$, $L \geq 2$, for a given $(D_0, r)$. The observations $Y = D_l X + N$ in this setting can be interpreted as channel outputs that are fed into an estimator that must decode $D_l$. A lower bound on the minimax risk in this setting depends not only on problem parameters such as the number of observations N, the noise variance $\sigma^2$, the dimensions of the true KS dictionary, the neighborhood radius r, and the coefficient distribution, but also on various aspects of the constructed class $\mathcal{D}_L$ [18].
To ensure a tight lower bound, we must construct $\mathcal{D}_L$ such that the distance between any two dictionaries in $\mathcal{D}_L$ is sufficiently large while the hypothesis testing problem remains sufficiently hard, i.e., distinct dictionaries result in similar observations. Specifically, for $l, l' \in [L]$, we desire a construction such that
$$\forall l \neq l', \quad \|D_l - D_{l'}\|_F \geq 2\sqrt{2\varepsilon} \quad \text{and} \quad D_{KL}\left(f_{D_l}(Y) \,\|\, f_{D_{l'}}(Y)\right) \leq \alpha_L, \quad (9)$$
where $D_{KL}\left(f_{D_l}(Y) \,\|\, f_{D_{l'}}(Y)\right)$ denotes the Kullback-Leibler (KL) divergence between the distributions of observations based on $D_l \in \mathcal{D}_L$ and $D_{l'} \in \mathcal{D}_L$, while $\varepsilon$ and $\alpha_L$ are non-negative parameters. Roughly, the minimax risk analysis proceeds as follows. Taking $\widehat{D}(Y)$ to be an estimator that achieves $\varepsilon^*$, and assuming $\varepsilon^* < \varepsilon$ with $D_l$ generated uniformly at random from $\mathcal{D}_L$, the minimum-distance detector $\hat{l}(Y)$ satisfies $\mathbb{P}(\hat{l}(Y) \neq l) = 0$ as long as $\|\widehat{D}(Y) - D_l\|_F < \sqrt{2\varepsilon}$. The goal then is to relate $\varepsilon^*$ to $\mathbb{P}(\|\widehat{D}(Y) - D_l\|_F \geq \sqrt{2\varepsilon})$ and $\mathbb{P}(\hat{l}(Y) \neq l)$ using Fano's inequality [19]:
$$\left(1 - \mathbb{P}(\hat{l}(Y) \neq l)\right)\log_2 L - 1 \leq I(Y; l), \quad (10)$$
where I(Y; l) denotes the mutual information (MI) between the observations Y and the dictionary $D_l$. Notice that the smaller $\alpha_L$ is in (9), the smaller I(Y; l) will be in (10). Unfortunately, explicitly evaluating I(Y; l) is a challenging task in our setup because of the underlying distributions. Similar to [16], we instead upper bound I(Y; l) by assuming access to some side information T(X) that makes the observations Y conditionally multivariate Gaussian (recall that $I(Y; l) \leq I(Y; l \mid T(X))$). Our final results then follow from the fact that any lower bound for $\varepsilon^*$ given the side information T(X) is also a lower bound for the general case [16].
B. Coefficient distribution
The minimax lower bounds in this paper are derived for various coefficient distributions. First, similar to [16], we consider arbitrary coefficient distributions for which the covariance matrix $\Sigma_x$ exists. We then specialize our results to sparse coefficient vectors and, under additional assumptions on the reference dictionary $D_0$, obtain a tighter lower bound in some signal-to-noise ratio (SNR) regimes, where $\mathrm{SNR} = \mathbb{E}_x(\|Dx\|_2^2)/\mathbb{E}_n(\|n\|_2^2)$.

1) General coefficients: The coefficient vector x in this case is assumed to be a zero-mean random vector with covariance $\Sigma_x$. We also assume access to the side information T(X) = X to obtain a lower bound on $\varepsilon^*$ in this setup.
2) Sparse coefficients: In this case, we assume x to be an s-sparse vector such that the support of x, denoted by supp(x), is uniformly distributed over $\mathcal{E} = \{S \subseteq [p] : |S| = s\}$:
$$\mathbb{P}(\mathrm{supp}(x) = S) = \binom{p}{s}^{-1} \quad \text{for any } S \in \mathcal{E}. \quad (11)$$
Further, we model the nonzero entries of x, i.e., $x_S$, as drawn in an i.i.d. fashion from a distribution with variance $\sigma_a^2$:
$$\mathbb{E}_x\{x_S x_S^T \mid S\} = \sigma_a^2 I_s. \quad (12)$$
Notice that an x satisfying the assumptions of (11) and (12) has
$$\Sigma_x = \frac{s}{p}\,\sigma_a^2 I_p. \quad (13)$$
Further, it is easy to see in this case that $\mathrm{SNR} = \frac{s\sigma_a^2}{m\sigma^2}$. Finally, the side information assumed in this sparse-coefficients setup will be either T(X) = X or T(X) = supp(X).
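The covariance expression (13) can also be checked by simulation. The snippet below, with arbitrary parameter values, draws samples from the model of (11)-(12) with Gaussian nonzeros and inspects the empirical covariance.

```python
import numpy as np

rng = np.random.default_rng(2)
p, s, sigma_a, n = 20, 3, 1.5, 100000
X = np.zeros((n, p))
for i in range(n):
    supp = rng.choice(p, size=s, replace=False)     # uniform s-sparse support
    X[i, supp] = sigma_a * rng.standard_normal(s)   # i.i.d. nonzeros

emp_cov = X.T @ X / n
print(np.diag(emp_cov).mean())   # close to (s / p) * sigma_a**2 = 0.3375
print(np.abs(emp_cov - np.diag(np.diag(emp_cov))).max())   # off-diagonals ~ 0
```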
III. LOWER BOUND FOR GENERAL COEFFICIENTS
We now provide our main result on the lower bound for the minimax risk of the KS dictionary learning problem in the case of general (i.i.d.) coefficient vectors.

Theorem 1. Consider a KS dictionary learning problem with N i.i.d. observations generated according to model (3) and the true dictionary satisfying (6) for some r and $D_0$. Suppose $\Sigma_x$ exists for the zero-mean random coefficient vectors. If there exists an estimator with worst-case MSE $\varepsilon^* \leq \frac{2p(1-t)}{8}\min\{1, \frac{r^2}{4p}\}$, then the minimax risk is lower bounded by
$$\varepsilon^* \geq C_1 \frac{r^2\sigma^2}{Np\|\Sigma_x\|_2}\left(c_1(p_1(m_1-1) + p_2(m_2-1)) - 3\right) \quad (14)$$
for any $0 < c_1 < \frac{t}{8\log 2}$ and $0 < t < 1$, where $C_1 = \frac{(1-t)p}{32r^2}$.

Outline of Proof: The idea of the proof, as discussed in Section II-A, is to construct a set of L distinct KS dictionaries that satisfy:
• $\mathcal{D}_L = \{D_1, \ldots, D_L\} \subset \mathcal{X}(D_0, r)$;
• any two distinct dictionaries in $\mathcal{D}_L$ are separated by a minimum distance within the neighborhood, i.e., for any $l, l' \in [L]$ and some positive $\varepsilon \leq \frac{2p(1-t)}{8}\min\{1, \frac{r^2}{4p}\}$:
$$\|D_l - D_{l'}\|_F \geq 2\sqrt{2\varepsilon} \quad \text{for } l \neq l'. \quad (15)$$
Notice that if the true dictionary $D_l \in \mathcal{D}_L$ is selected uniformly at random from $\mathcal{D}_L$, then, given side information T(X) = X, the observations Y follow a multivariate Gaussian distribution, and an upper bound on the conditional MI $I(Y; l \mid T(X))$ can be obtained using an upper bound on the KL divergence between multivariate Gaussian distributions. This bound depends on the parameters $\varepsilon, N, m_1, m_2, p_1, p_2, \Sigma_x, s, r$, and $\sigma^2$. Next, assuming (15) holds for $\mathcal{D}_L$, if there exists an estimator $\widehat{D}(Y)$ achieving the minimax risk $\varepsilon^* \leq \varepsilon$ and the recovered dictionary satisfies $\|\widehat{D}(Y) - D_l\|_F < \sqrt{2\varepsilon}$, then the minimum-distance detector $\hat{l}(Y)$ can recover $D_l$. Consequently, the probability of error $\mathbb{P}(\widehat{D}(Y) \neq D_l) \leq \mathbb{P}(\|\widehat{D}(Y) - D_l\|_F \geq \sqrt{2\varepsilon})$ can be used to lower bound the conditional MI using Fano's inequality. The lower bound obtained in our case is a function of L only.
Finally, combining the obtained upper and lower bounds on the conditional MI,
$$\eta_2 \leq I(Y; l \mid T(X)) \leq \eta_1, \quad (16)$$
a lower bound for the minimax risk $\varepsilon^*$ is attained. A formal proof of Theorem 1 relies on the following lemmas, whose proofs appear in the full version of this work [17]. Note that since our construction of $\mathcal{D}_L$ is more complex than in the vector case [16, Theorem 1], it requires a different sequence of lemmas, with the exception of Lemma 3, which follows from the vector case.
a lower bound for the minimax risk ε * is attained. A formal proof of Theorem 1 relies on the following lemmas whose proofs appear in the full version of this work [17]. Note that since our construction of D L is more complex than the vector case [16, Theorem 1], it requires a different sequence of lemmas, with the exception of Lemma 3, which follows from the vector case. Lemma 1. There exists a set of L = 2 c1(mp)− 1 2 matrices A l ∈ R m×p , where elements of A l take values ±α for some α > 0, such that for l, l ∈ [L], l = l , any t > 0 and
c 1 < 1 2 log 2 t 2α 2 mp 2
, the following relation is satisfied:
(A l A l ) ≤ t.(17)
Lemma 2. Considering the generative model in (3), given some r > 0 and reference dictionary $D_0$, there exists a set $\mathcal{D}_L \subseteq \mathcal{X}(D_0, r)$ of cardinality $L = 2^{c_1((m_1-1)p_1 + (m_2-1)p_2) - 1}$ such that for any $0 < c_1 < \frac{t^2}{8\log 2}$, any $0 < t < 1$, any $\varepsilon > 0$ satisfying
$$\varepsilon \leq \min\left\{r^2, \frac{r^4}{4p}\right\}, \quad (18)$$
and any $l, l' \in [L]$ with $l \neq l'$, we have
$$\frac{2p}{r^2}(1-t)\varepsilon \leq \|D_l - D_{l'}\|_F^2 \leq \frac{8p}{r^2}\varepsilon. \quad (19)$$
Furthermore, considering the general coefficient model for X and assuming side information T(X) = X, we have
$$I(Y; l \mid T(X)) \leq \frac{4Np\|\Sigma_x\|_2}{r^2\sigma^2}\varepsilon \quad \text{for all } l. \quad (20)$$
Lemma 3. Consider the generative model in (3) with minimax risk $\varepsilon^* \leq \varepsilon$ for some $\varepsilon > 0$. Assume there exists a finite set $\mathcal{D}_L \subseteq \mathcal{D}$ of L dictionaries satisfying
$$\|D_l - D_{l'}\|_F^2 \geq 8\varepsilon \quad (21)$$
for $l \neq l'$. Then for any side information T(X), we have
$$I(Y; l \mid T(X)) \geq \frac{1}{2}\log_2(L) - 1. \quad (22)$$
Proof of Theorem 1. According to Lemma 2, for any $\varepsilon'$ satisfying (18), there exists a set $\mathcal{D}_L \subseteq \mathcal{X}(D_0, r)$ of cardinality $L = 2^{c_1((m_1-1)p_1+(m_2-1)p_2)-1}$ that satisfies (20) for any $c_1 < \frac{t}{8\log 2}$ and any $0 < t < 1$. According to Lemma 3, if we set $\frac{2p}{r^2}(1-t)\varepsilon' = 8\varepsilon$, where $\varepsilon'$ denotes the $\varepsilon$ appearing in Lemma 2, then (21) is satisfied for $\mathcal{D}_L$, and, provided there exists an estimator with worst-case MSE satisfying $\varepsilon^* \leq \frac{2p(1-t)}{8}\min\{1, \frac{r^2}{4p}\}$, (22) holds. Combining (20) and (22), we get
$$\frac{1}{2}\log_2(L) - 1 \leq I(Y; l \mid T(X)) \leq \frac{32Np\|\Sigma_x\|_2}{c_2 r^2\sigma^2}\varepsilon, \quad (23)$$
where $c_2 = \frac{2p}{r^2}(1-t)$. Defining $C_1 = \frac{(1-t)p}{32r^2}$, (23) translates into
$$\varepsilon \geq C_1 \frac{r^2\sigma^2}{Np\|\Sigma_x\|_2}\left(c_1(p_1(m_1-1) + p_2(m_2-1)) - 3\right). \quad (24)$$
IV. LOWER BOUND FOR SPARSE COEFFICIENTS

We now turn our attention to the case of sparse coefficients and obtain lower bounds for the corresponding minimax risk. We first state a corollary of Theorem 1 for T(X) = X.

Corollary 1. Consider a KS dictionary learning problem with N i.i.d. observations generated according to model (3). Assume the true dictionary satisfies (6) for some r and the reference dictionary $D_0$ satisfies RIP$(s, \frac{1}{2})$. If the random coefficient vector x is selected according to (11) and there exists an estimator with worst-case MSE $\varepsilon^* \leq \frac{2p(1-t)}{8}\min\{1, \frac{r^2}{4p}\}$, the minimax risk is lower bounded by
$$\varepsilon^* \geq C_1 \frac{r^2\sigma^2}{Ns\sigma_a^2}\left(c_1(p_1(m_1-1) + p_2(m_2-1)) - 3\right) \quad (25)$$
for any $0 < c_1 < \frac{t}{8\log 2}$ and $0 < t < 1$, where $C_1 = \frac{(1-t)p}{32r^2}$. This result is a direct consequence of Theorem 1, obtained by substituting the covariance matrix of X given in (13) into (14).
A. Sparse Gaussian coefficients
In this section, we make an additional assumption on the coefficient vector generated according to (11) and assume the non-zero elements of x follow a Gaussian distribution. By additionally assuming the non-zero entries of x are i.i.d., we can write $x_S$ as
$$x_S \sim \mathcal{N}(0, \sigma_a^2 I_s). \quad (26)$$
Therefore, given side information T(x) = supp(x), the observations y follow a multivariate Gaussian distribution. We now provide a theorem for the lower bound attained under this coefficient distribution.
Theorem 2. Consider a KS dictionary learning problem with N i.i.d. observations generated according to model (3). Assume the true dictionary satisfies (6) for some r and the reference coordinate dictionaries $A_0$ and $B_0$ satisfy RIP$(s, \frac{1}{2})$. If the random coefficient vector x is selected according to (11) and (26), and there exists an estimator with worst-case MSE $\varepsilon^* \leq \frac{2p(1-t)}{8}\min\{\frac{1}{s}, \frac{r^2}{4p}\}$, then the minimax risk is lower bounded by
$$\varepsilon^* \geq C_2 \frac{r^2\sigma^4}{Ns^2\sigma_a^4}\left(c_1(p_1(m_1-1) + p_2(m_2-1)) - 3\right) \quad (27)$$
for any $0 < c_1 < \frac{t}{8\log 2}$ and $0 < t < 1$, where $C_2 = 1.58 \times 10^{-5} \cdot \frac{p(1-t)}{r^2}$.

Outline of Proof: The constructed dictionary class $\mathcal{D}_L$ in Theorem 2 is similar to that in Theorem 1, but the upper bound for the conditional MI, $I(Y; l \mid \mathrm{supp}(X))$, differs from that in Theorem 1 because the side information is different.
Given the true dictionary $D_l$ and the support $S_k$ of the k-th coefficient vector $x_k$, let $D_{l,S_k}$ denote the columns of $D_l$ corresponding to the nonzero elements of $x_k$. In this case, we have
$$y_k = D_{l,S_k} x_{S_k} + n_k, \quad k \in [N]. \quad (28)$$
We can write the subdictionary $D_{l,S_k}$ in terms of the Khatri-Rao product of two smaller matrices:
$$D_{l,S_k} = A_{l_a, S_{k_a}} * B_{l_b, S_{k_b}}, \quad (29)$$
where $S_{k_a} = \{i'_k\}_{k=1}^s$, $i'_k \in [p_1]$, and $S_{k_b} = \{i''_k\}_{k=1}^s$, $i''_k \in [p_2]$, are multisets related to $S_k = \{i_k\}_{k=1}^s$, $i_k \in [p]$, via $i_k = (i'_k - 1)p_2 + i''_k$ for $k \in [s]$. Note that $A_{l_a, S_{k_a}}$ and $B_{l_b, S_{k_b}}$ are not submatrices of $A_{l_a}$ and $B_{l_b}$, since $S_{k_a}$ and $S_{k_b}$ are multisets. Figure 1 provides a visual illustration of (29). Therefore, the observations follow a multivariate Gaussian distribution with zero mean and covariance matrix
$$\Sigma_{k,l} = \sigma_a^2\left(A_{l_a, S_{k_a}} * B_{l_b, S_{k_b}}\right)\left(A_{l_a, S_{k_a}} * B_{l_b, S_{k_b}}\right)^T + \sigma^2 I_m, \quad (30)$$
and we need to obtain an upper bound for the conditional MI using (30). We state a variation of Lemma 2 that is necessary for the proof of Theorem 2; the proof of this lemma is again provided in [17].
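Before stating the lemma, a small numeric check of the index decomposition behind (29) may be helpful; the values are taken from the Fig. 1 example, and the snippet is purely illustrative.

```python
# Decompose the KS support S_k into coordinate multisets S_ka and S_kb by
# inverting i = (i' - 1) * p2 + i'' with i' in [p1], i'' in [p2] (1-indexed).
p1, p2 = 3, 6
S_k = [3, 7, 10, 17]

S_ka = [(i - 1) // p2 + 1 for i in S_k]
S_kb = [(i - 1) % p2 + 1 for i in S_k]
print(S_ka, S_kb)   # [1, 2, 2, 3] [3, 1, 4, 5]
```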
Lemma 4. Considering the generative model in (3), given some r > 0 and reference dictionary $D_0$, there exists a set $\mathcal{D}_L \subseteq \mathcal{X}(D_0, r)$ of cardinality $L = 2^{c_1((m_1-1)p_1 + (m_2-1)p_2) - 1}$ such that for any $0 < c_1 < \frac{t^2}{8\log 2}$, any $0 < t < 1$, and any $\varepsilon > 0$ satisfying
$$0 < \varepsilon \leq \min\left\{\frac{r^2}{s}, \frac{r^4}{4p}\right\}, \quad (31)$$
and any $l, l' \in [L]$ with $l \neq l'$, we have
$$\frac{2p}{r^2}(1-t)\varepsilon \leq \|D_l - D_{l'}\|_F^2 \leq \frac{8p}{r^2}\varepsilon. \quad (32)$$
Furthermore, assuming the reference coordinate dictionaries $A_0$ and $B_0$ satisfy RIP$(s, \frac{1}{2})$ and the coefficient matrix X is selected according to (11) and (26), then considering side information T(X) = supp(X), we have
$$I(Y; l \mid T(X)) \leq 7921\left(\frac{\sigma_a}{\sigma}\right)^4 \frac{Ns^2}{r^2}\varepsilon. \quad (33)$$
Proof of Theorem 2. According to Lemma 4, for any $\varepsilon'$ satisfying (31), there exists a set $\mathcal{D}_L \subseteq \mathcal{X}(D_0, r)$ of cardinality $L = 2^{c_1((m_1-1)p_1+(m_2-1)p_2)-1}$ that satisfies (33) for any $c_1 < \frac{t}{8\log 2}$ and any $0 < t < 1$. Setting $\frac{2p}{r^2}(1-t)\varepsilon' = 8\varepsilon$, where $\varepsilon'$ denotes the $\varepsilon$ appearing in Lemma 4, (21) is satisfied for $\mathcal{D}_L$, and, provided there exists an estimator with worst-case MSE satisfying $\varepsilon^* \leq \frac{2p(1-t)}{8}\min\{\frac{1}{s}, \frac{r^2}{4p}\}$, (22) holds. Combining (33) and (22), we get
$$\frac{1}{2}\log_2(L) - 1 \leq I(Y; l \mid T(X)) \leq \frac{63368}{c_2}\left(\frac{\sigma_a}{\sigma}\right)^4 \frac{Ns^2}{r^2}\varepsilon, \quad (34)$$
where $c_2 = \frac{2p}{r^2}(1-t)$. Defining $C_2 = 1.58 \times 10^{-5} \cdot \frac{p(1-t)}{r^2}$, (34) can be written as
$$\varepsilon \geq C_2 \left(\frac{\sigma}{\sigma_a}\right)^4 \frac{r^2\left(c_1(p_1(m_1-1) + p_2(m_2-1)) - 3\right)}{Ns^2}. \quad (35)$$
V. DISCUSSION AND CONCLUSION

In this paper, we followed an information-theoretic approach to provide lower bounds on the worst-case MSE of KS dictionaries that generate 2-dimensional tensor data. Table I lists the dependence of the known lower bounds on the minimax rates on various parameters of the dictionary learning problem and on $\mathrm{SNR} = \frac{s\sigma_a^2}{m\sigma^2}$. Compared to the results in [16] for the unstructured dictionary learning problem, which are not stated in this form but can be reduced to it, we are able to decrease the lower bound in all cases by reducing the scaling O(pm) to $O(p_1 m_1 + p_2 m_2)$ for KS dictionaries. This is intuitively pleasing, since the minimax lower bound has a linear relationship with the number of degrees of freedom of the KS dictionary, which is $p_1 m_1 + p_2 m_2$, and with the square of the neighborhood radius, $r^2$. The results also show that the minimax risk decreases with a larger number of samples N and with increased SNR. Notice also that in high-SNR regimes the lower bound in (25) is tighter, while (27) yields a tighter lower bound in low-SNR regimes. Our bounds depend on the signal distribution and imply a necessary sample complexity scaling of $N = O(r^2(m_1 p_1 + m_2 p_2))$. Future work includes extending the lower bounds to higher-order tensors and specifying a learning scheme that achieves these lower bounds.
Fig. 1. An illustration of $D_{l,S_k}$ with $p_1 = 3$, $p_2 = 6$, and sparsity s = 4. Here, $S_{k_a} = \{1, 2, 2, 3\}$, $S_{k_b} = \{3, 1, 4, 5\}$, and $S_k = \{3, 7, 10, 17\}$.
TABLE I
ORDER-WISE LOWER BOUNDS ON THE MINIMAX RISK FOR VARIOUS COEFFICIENT DISTRIBUTIONS

Distribution      | Unstructured [16]     | Kronecker (this paper)
Sparse            | r^2 p / (N SNR)       | r^2 (m_1 p_1 + m_2 p_2) / (N m SNR)
Gaussian Sparse   | r^2 p / (N m SNR^2)   | r^2 (m_1 p_1 + m_2 p_2) / (N m^2 SNR^2)
[1] K. Kreutz-Delgado, J. F. Murray, B. D. Rao, K. Engan, T. Lee, and T. J. Sejnowski, "Dictionary learning algorithms for sparse representation," Neural computation, vol. 15, no. 2, pp. 349-396, 2003.
[2] Z. Zhang and S. Aeron, "Denoising and completion of 3D data via multidimensional dictionary learning," arXiv preprint arXiv:1512.09227, 2015.
[3] G. Duan, H. Wang, Z. Liu, J. Deng, and Y.-W. Chen, "K-CPD: Learning of overcomplete dictionaries for tensor sparse coding," in Proc. IEEE 21st Int. Conf. Pattern Recognition (ICPR), 2012, pp. 493-496.
[4] S. Hawe, M. Seibert, and M. Kleinsteuber, "Separable dictionary learning," in Proc. IEEE Conf. Comput. Vision and Pattern Recognition (CVPR), 2013, pp. 438-445.
[5] S. Zubair and W. Wang, "Tensor dictionary learning with sparse Tucker decomposition," in Proc. IEEE 18th Int. Conf. Digital Signal Process. (DSP), 2013, pp. 1-6.
[6] Y. Peng, D. Meng, Z. Xu, C. Gao, Y. Yang, and B. Zhang, "Decomposable nonlocal tensor dictionary learning for multispectral image denoising," in Proc. IEEE Conf. Comput. Vision and Pattern Recognition (CVPR), 2014, pp. 2949-2956.
[7] S. Soltani, M. E. Kilmer, and P. C. Hansen, "A tensor-based dictionary learning approach to tomographic image reconstruction," arXiv preprint arXiv:1506.04954, 2015.
[8] M. Aharon, M. Elad, and A. M. Bruckstein, "On the uniqueness of overcomplete dictionaries, and a practical way to retrieve them," Linear algebra and its applications, vol. 416, no. 1, pp. 48-67, 2006.
[9] A. Agarwal, A. Anandkumar, P. Jain, and P. Netrapalli, "Learning sparsely used overcomplete dictionaries via alternating minimization," arXiv preprint arXiv:1310.7991, 2013.
[10] A. Agarwal, A. Anandkumar, and P. Netrapalli, "Exact recovery of sparsely used overcomplete dictionaries," arXiv preprint arXiv:1309.1952, 2013.
[11] S. Arora, R. Ge, and A. Moitra, "New algorithms for learning incoherent and overcomplete dictionaries," in Proc. 27th Conf. Learning Theory, 2014, pp. 779-806.
[12] K. Schnass, "On the identifiability of overcomplete dictionaries via the minimisation principle underlying K-SVD," Applied and Computational Harmonic Analysis, vol. 37, no. 3, pp. 464-491, 2014.
[13] K. Schnass, "Local identification of overcomplete dictionaries," Journal of Machine Learning Research, vol. 16, pp. 1211-1242, 2015.
[14] R. Gribonval, R. Jenatton, and F. Bach, "Sparse and spurious: dictionary learning with noise and outliers," arXiv preprint arXiv:1407.5155, 2014.
[15] A. Jung, Y. C. Eldar, and N. Gortz, "Performance limits of dictionary learning for sparse coding," in Proc. IEEE 22nd European Signal Process. Conf. (EUSIPCO), 2014, pp. 765-769.
[16] A. Jung, Y. C. Eldar, and N. Görtz, "On the minimax risk of dictionary learning," arXiv preprint arXiv:1507.05498, 2015.
[17] Z. Shakeri, W. U. Bajwa, and A. D. Sarwate, "Minimax lower bounds on dictionary learning for tensor data," 2016, preprint.
[18] A. B. Tsybakov, Introduction to nonparametric estimation. Springer Series in Statistics. Springer, New York, 2009.
[19] B. Yu, "Assouad, Fano, and Le Cam," in Festschrift for Lucien Le Cam. Springer, 1997, pp. 423-435.
[20] A. Smilde, R. Bro, and P. Geladi, Multi-way analysis: Applications in the chemical sciences. John Wiley & Sons, 2005.
[21] C. F. Van Loan, "The ubiquitous Kronecker product," Journal of computational and applied mathematics, vol. 123, no. 1, pp. 85-100, 2000.
| []
|
[
"Prepared for submission to JHEP Flavour anti-k T algorithm applied to W bb production at the LHC",
"Prepared for submission to JHEP Flavour anti-k T algorithm applied to W bb production at the LHC"
]
| [
"Heribertus Bayu Hartanto [email protected] \nCavendish Laboratory\nUniversity of Cambridge\nCB3 0HECambridgeUnited Kingdom\n",
"Rene Poncelet [email protected] \nCavendish Laboratory\nUniversity of Cambridge\nCB3 0HECambridgeUnited Kingdom\n",
"Andrei Popescu [email protected] \nCavendish Laboratory\nUniversity of Cambridge\nCB3 0HECambridgeUnited Kingdom\n",
"Simone Zoia [email protected] \nDipartimento di Fisica and Arnold-Regge Center\nUniversità di Torino\nINFN\nSezione di Torino\nVia P. Giuria 1I-10125TorinoItaly\n"
]
| [
"Cavendish Laboratory\nUniversity of Cambridge\nCB3 0HECambridgeUnited Kingdom",
"Cavendish Laboratory\nUniversity of Cambridge\nCB3 0HECambridgeUnited Kingdom",
"Cavendish Laboratory\nUniversity of Cambridge\nCB3 0HECambridgeUnited Kingdom",
"Dipartimento di Fisica and Arnold-Regge Center\nUniversità di Torino\nINFN\nSezione di Torino\nVia P. Giuria 1I-10125TorinoItaly"
]
| []
| We apply the recently proposed flavoured anti-k T jet algorithm to Wbb production at the Large Hadron Collider at √s = 8 TeV. We present results for the total cross section and differential distributions at the next-to-next-to-leading order (NNLO) in QCD. We discuss the effects of the remaining parametric freedom in the flavoured anti-k T prescription, and compare it against the standard flavour-k T algorithm. We compare the total cross section results against the CMS data, finding good agreement. The NNLO QCD corrections are significant, and their inclusion substantially improves the agreement with the data.

Introduction

The Large Hadron Collider (LHC) has recently started its Run 3 phase, following the successful Run 1 and 2 campaigns. The LHC dataset collected from the previous runs, combined with the current Run 3 and the upcoming high-luminosity LHC (HL-LHC) phase, will provide opportunities to stress test the Standard Model of Particle Physics (SM) and constrain new physics scenarios. To keep pace with the increasingly accurate experimental measurements, theory predictions must be provided with the highest possible precision. One of the current precision frontiers at the LHC is the next-to-next-to-leading order (NNLO) in QCD for 2 → 3 scattering processes. The last few years have seen a major breakthrough in the massless 2 → 3 computations, where NNLO QCD predictions for pp → γγγ [1, 2], pp → γγ+jet [3, 4] and, most notably, pp → 3-jet production [5, 6] have become available. The first NNLO QCD calculation for a 2 → 3 process involving one external mass has recently been completed for the production of a W-boson in association with a bb pair, where the W-boson decays leptonically and the bottom quark is treated as a massless particle [7]. We call this process 'Wbb production' hereafter.

Wbb production has been studied quite extensively, particularly at next-to-leading order (NLO) in QCD [8-15]. The studies carried out at NLO QCD demonstrated that, for inclusive production (W in association with at least two b-jets in the final state), the higher-order corrections are significant and the theoretical uncertainty at NLO is larger than that of the LO prediction. This pathological behaviour is attributed to the contribution of qg-initiated subprocesses, which start contributing at NLO. The inclusion of the NNLO QCD corrections improves the perturbative convergence: the corrections become moderate, compared to NLO QCD, and the theoretical uncertainties appear to be under control. Experimental measurements for Wbb production have been undertaken at both the Tevatron [16] and the LHC [17, 18].
"https://export.arxiv.org/pdf/2209.03280v1.pdf"
]
| 252,110,730 | 2209.03280 | 80fcf48971b2d0bdfe00ed73471cc8f95b8a1012 |
Prepared for submission to JHEP Flavour anti-k T algorithm applied to W bb production at the LHC
Heribertus Bayu Hartanto [email protected]
Cavendish Laboratory
University of Cambridge
CB3 0HECambridgeUnited Kingdom
Rene Poncelet [email protected]
Cavendish Laboratory
University of Cambridge
CB3 0HECambridgeUnited Kingdom
Andrei Popescu [email protected]
Cavendish Laboratory
University of Cambridge
CB3 0HECambridgeUnited Kingdom
Simone Zoia [email protected]
Dipartimento di Fisica and Arnold-Regge Center
Università di Torino
INFN
Sezione di Torino
Via P. Giuria 1I-10125TorinoItaly
Prepared for submission to JHEP Flavour anti-k T algorithm applied to W bb production at the LHC
We apply the recently proposed flavoured anti-k T jet algorithm to Wbb production at the Large Hadron Collider at √s = 8 TeV. We present results for the total cross section and differential distributions at the next-to-next-to-leading order (NNLO) in QCD. We discuss the effects of the remaining parametric freedom in the flavoured anti-k T prescription, and compare it against the standard flavour-k T algorithm. We compare the total cross section results against the CMS data, finding good agreement. The NNLO QCD corrections are significant, and their inclusion substantially improves the agreement with the data.

Introduction

The Large Hadron Collider (LHC) has recently started its Run 3 phase, following the successful Run 1 and 2 campaigns. The LHC dataset collected from the previous runs, combined with the current Run 3 and the upcoming high-luminosity LHC (HL-LHC) phase, will provide opportunities to stress test the Standard Model of Particle Physics (SM) and constrain new physics scenarios. To keep pace with the increasingly accurate experimental measurements, theory predictions must be provided with the highest possible precision. One of the current precision frontiers at the LHC is the next-to-next-to-leading order (NNLO) in QCD for 2 → 3 scattering processes. The last few years have seen a major breakthrough in the massless 2 → 3 computations, where NNLO QCD predictions for pp → γγγ [1, 2], pp → γγ+jet [3, 4] and, most notably, pp → 3-jet production [5, 6] have become available. The first NNLO QCD calculation for a 2 → 3 process involving one external mass has recently been completed for the production of a W-boson in association with a bb pair, where the W-boson decays leptonically and the bottom quark is treated as a massless particle [7]. We call this process 'Wbb production' hereafter.

Wbb production has been studied quite extensively, particularly at next-to-leading order (NLO) in QCD [8-15]. The studies carried out at NLO QCD demonstrated that, for inclusive production (W in association with at least two b-jets in the final state), the higher-order corrections are significant and the theoretical uncertainty at NLO is larger than that of the LO prediction. This pathological behaviour is attributed to the contribution of qg-initiated subprocesses, which start contributing at NLO. The inclusion of the NNLO QCD corrections improves the perturbative convergence: the corrections become moderate, compared to NLO QCD, and the theoretical uncertainties appear to be under control. Experimental measurements for Wbb production have been undertaken at both the Tevatron [16] and the LHC [17, 18].
This interest in Wbb production stems from multiple reasons. First of all, the W+2b-jets signature is also produced by other SM scattering processes, such as the W Higgsstrahlung process, where the Higgs boson decays to a bottom-quark pair (i.e. pp → W(H → bb)), and single top production (i.e. pp → (t → Wb)b). The analysis of these important processes therefore requires good control of the irreducible background, represented by Wbb production. We note that the theoretical predictions for both the W(H → bb) and single top productions are known through NNLO in QCD [19-26]. Processes involving new physics may lead to the very same experimental signature as well. Wbb production is also a perfect testing ground to study the modelling of flavoured jets from both theoretical and experimental points of view. An exclusive final state, where additional jets are vetoed, is often required in the experimental analyses to suppress the contributions from certain background processes. The exclusive setup requires a more careful treatment of the theoretical uncertainties, as we already showed in Ref. [7], where we compared the standard and the more conservative prescription of Ref. [27].
The presence of b-quark-flavoured jets in the final state requires a flavour tagging procedure. Experimentally, b-jets are tagged by identifying jets that contain a B-hadron by exploiting its long lifetime, looking for secondary vertices and tracks with a large impact parameter. In theoretical simulations, the procedure depends on the type of calculation employed. If a parton shower is involved, one can basically apply the experimental prescription, i.e. sequentially hadronise the showered event and put 'b-jet' labels on the clustered jets which contain a B-hadron. In a fixed-order calculation, the final state partons are assigned to jets by means of a clustering algorithm.
If the bottom quark is treated as a massless particle in a fixed-order calculation, care must be taken to ensure infrared (IR) safety. A problem might in fact arise from the assignment of the flavoured particles which make up a soft pair to different clusters, thus spoiling the IR safety of the jet algorithm for soft gluon emission. This issue indeed sets in for the standard (flavour-blind) clustering algorithms, starting at NNLO. We note that, if the bottom quark mass is instead taken into account, these IR divergences in the fixed-order calculation are regulated.
This problem can be remedied by employing a flavour-sensitive jet algorithm. The flavour-k T jet algorithm of Ref. [28] was proposed for this reason, and has been applied to several scattering processes at the LHC which involve final state bottom- or charm-quark jet(s) [7, 21-23, 29-31]. The major drawback in using the flavour-k T algorithm is that the experimental analyses at the LHC widely employ the standard anti-k T prescription [32]. A comparison of the flavour-k T -based theoretical predictions to the experiments requires the experimental data to be unfolded from the standard anti-k T to the flavour-k T clustering, through the use of an NLO calculation matched to a parton shower (NLO+PS) [30]. The unfolding procedure represents an additional source of uncertainty, which becomes sizeable when the clustering behaviours are significantly different. In order to minimise the effect of unfolding, a flavour-sensitive anti-k T jet algorithm [33] has been recently introduced. It matches more closely the standard anti-k T prescription, and thus enables a more direct comparison with the experimental measurements at the LHC. Recently, a new approach to dress anti-k T jets with flavour has been proposed in Ref. [34].
In this work we apply the newly proposed flavoured anti-k T jet algorithm to Wbb production at NNLO QCD, compare with the results obtained using the flavour-k T algorithm, and compare the resulting cross sections to the measurements by the CMS collaboration [18]. This paper is structured as follows. In Section 2 we briefly review the ingredients of the calculation. Phenomenological results employing both variants of flavour sensitive jet algorithm are discussed in Section 3. The genuine impact of NNLO QCD corrections is further studied in Section 4 by comparing the NNLO calculation against an improved NLO prediction. We finally present our conclusions in Section 5.
Calculation and setup
We present the calculation of the pp → W(→ ℓν)bb process at NNLO in QCD. The computation has been performed within the Stripper framework, a C++ implementation of the four-dimensional formulation of the sector-improved residue subtraction scheme [35-37]. The tree-level matrix elements are supplied by the AvH library [38], while the one-loop matrix elements are provided by the OpenLoops package [39, 40]. We computed the two-loop virtual amplitudes analytically in the leading colour approximation as described in Ref. [7], following the strategy of Ref. [41]. The resulting analytic expressions were implemented in C++ for a fast numerical evaluation with the PentagonFunctions++ library [42] as a backend for evaluating the special functions.
We work in the five-flavour scheme (5FS), where the bottom quark is treated as massless and is present in the initial state. We present numerical results for the LHC center-of-mass energy √s = 8 TeV. We take the Cabibbo-Kobayashi-Maskawa (CKM) matrix to be diagonal and use the following input parameters:
M_W = … ,   G_F = 1.16638 · 10^−5 GeV^−2 .
The electromagnetic coupling α is determined within the G µ scheme using the following relation:
α = (√2/π) · G_F · M_W² · ( 1 − M_W² / M_Z² ) .   (2.1)
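As a numerical cross-check of Eq. (2.1), here is a short sketch. G_F is quoted in the text, while the M_W and M_Z values below are standard inputs assumed by us, since their values are not legible in the extracted text:

```python
import math

G_F = 1.16638e-5            # GeV^-2, quoted in the text
M_W, M_Z = 80.385, 91.1876  # GeV, assumed standard values

alpha = math.sqrt(2.0) / math.pi * G_F * M_W**2 * (1.0 - M_W**2 / M_Z**2)
print(f"1/alpha = {1.0 / alpha:.2f}")  # ~132, the familiar G_mu-scheme value
```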
We use the NNPDF3.1 PDF sets [43] via the LHAPDF interface [44]. Specifically, we employ NNPDF31_lo_as_0118, NNPDF31_nlo_as_0118, NNPDF31_nnlo_as_0118 for the leading order (LO), NLO and NNLO QCD calculations, respectively. As we mentioned in Section 1, we employ both the flavour-sensitive k T and anti-k T jet algorithms, with the separation parameter R = 0.5. To define the fiducial region, we adopt the following selection cuts for the jets and charged leptons [7,18]:
p_T,ℓ > 30 GeV, |η_ℓ| < 2.1,
p_T,j > 25 GeV, |η_j| < 2.4,   (2.2)
p_T,b > 25 GeV, |η_b| < 2.4.
In addition, we consider two different final state signatures:
• inclusive: at least two b-jets are required in the final state;
• exclusive: exactly two b-jets and no other jets are required in the final state.
The jet veto parameter required to define the exclusive final state follows from Eq. (2.2).
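For concreteness, here is a minimal sketch of the fiducial selection described by Eq. (2.2) and the two signatures above. The event record layout (dictionaries with pt/eta/btag keys) is our own illustrative choice, not the analysis code of the paper:

```python
def passes_fiducial(leptons, jets, exclusive=False):
    """Apply the cuts of Eq. (2.2); 'exclusive' additionally vetoes extra jets."""
    leps = [l for l in leptons if l["pt"] > 30.0 and abs(l["eta"]) < 2.1]
    sel = [j for j in jets if j["pt"] > 25.0 and abs(j["eta"]) < 2.4]
    bjets = [j for j in sel if j["btag"]]
    light = [j for j in sel if not j["btag"]]
    if not leps or len(bjets) < 2:                 # inclusive: at least two b-jets
        return False
    if exclusive and (len(bjets) != 2 or light):   # exclusive: exactly two b-jets, no other jets
        return False
    return True
```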
To obtain our predictions, we use a kinematic-dependent quantity for the renormalisation (µ R ) and factorisation (µ F ) scales. In particular, we utilise
H_T = E_T(ℓν) + p_T(b_1) + p_T(b_2) ,   (2.3)

where E_T(ℓν) = √( M²(ℓν) + p_T²(ℓν) ) is the transverse energy of the lepton-neutrino system (i.e. the off-shell W boson), M(ℓν) and p_T(ℓν) are the corresponding invariant mass and transverse momentum, and p_T(b_1) (p_T(b_2)) is the transverse momentum of the hardest (second hardest) bottom-quark-flavoured jet. We set µ_R = µ_F = H_T for the central scale.
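A direct transcription of the scale choice of Eq. (2.3); the function and argument names are ours:

```python
import math

def ht_scale(m_lnu, pt_lnu, pt_b1, pt_b2):
    """Central scale mu_R = mu_F = H_T of Eq. (2.3), all inputs in GeV."""
    et_lnu = math.sqrt(m_lnu**2 + pt_lnu**2)  # E_T of the off-shell W (lepton-neutrino system)
    return et_lnu + pt_b1 + pt_b2
```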
Before presenting our phenomenological results, let us briefly review the flavoured anti-k T jet algorithm proposed in Ref. [33]. The standard anti-k T distance is multiplied by the following damping function S_ij if both pseudo-jets i and j have the same non-zero flavour of opposite sign:

S_ij = 1 − θ(1 − x) cos( (π/2) x ) ≤ 1 ,   (2.4)

where θ is the Heaviside step function, and

x ≡ (1/a) · ( k_T,i² + k_T,j² ) / ( 2 k_T,max² ) ,   (2.5)

k_T,max being the transverse momentum of the hardest pseudo-jet at each clustering step. The parameter a in Eq. (2.5) is arbitrary and regulates the turn-on of the damping function. Small non-zero values of a are favourable, as the resulting clustering is closer to that of the standard anti-k T algorithm. Large logarithms of a may however spoil the perturbative convergence if a is chosen too small. The value of a should therefore be tuned, possibly process-wise, to minimise unfolding effects and perturbative corrections at the same time.
Unfolding effects can be estimated by studying partonic NLO+PS predictions. This however is beyond the scope of this study. As suggested in Ref. [33], we investigate the values of a = 0.05, 0.1, 0.2. Furthermore, there is some additional degree of freedom in the definition of k T,max . For the Wbb process, we decided to include the transverse momentum of the final state leptons, i.e. k T,max is the lepton transverse momentum if the latter is the largest transverse momentum at a given clustering step.
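A minimal sketch of the damping factor of Eqs. (2.4)-(2.5); the variable names are ours, and the default a = 0.1 follows the values investigated in the text:

```python
import math

def damping_factor(kt_i, kt_j, kt_max, a=0.1):
    """S_ij of Eq. (2.4), multiplying the anti-k_T distance when pseudo-jets
    i and j carry the same non-zero flavour of opposite sign."""
    x = (kt_i**2 + kt_j**2) / (2.0 * a * kt_max**2)  # Eq. (2.5)
    if x >= 1.0:            # theta(1 - x) = 0: no modification
        return 1.0
    return 1.0 - math.cos(0.5 * math.pi * x)

# A soft flavoured pair (small k_T relative to the hardest pseudo-jet) gets a
# strongly reduced distance and is therefore recombined early:
print(damping_factor(2.0, 3.0, kt_max=100.0))  # ~5e-5, i.e. << 1
```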
It is instructive to apply the standard k T and anti-k T jet algorithms and their flavour-sensitive versions at NLO, where they are all IR-safe and can be compared to each other. In particular, let us consider the distribution of the distance between the b-flavoured jets, as displayed in Figure 1, for W + bb production at NLO QCD. One can see that the distribution, obtained with the flavoured k T algorithm, is significantly suppressed with respect to all the others in the region of small distances ∆R bb < 1.5. We also note that the flavoured k T algorithm deviates substantially even from the standard k T prescription, which is instead aligned with all the other anti-k T -based algorithms. The suppression for small ∆R bb can be attributed to the modification of the beam distance in the case of the flavour k T algorithm [28]. The reason is that, in the flavour k T algorithm, the flavoured pseudo-jets have a larger beam distance than in the standard prescription, which favours the recombination of the b and anti-b quarks when they are close in angular separation. We will encounter this sharp difference in the low ∆R bb region again in the next section, where we will extend the analysis to NNLO, using exclusively the flavour-sensitive jet algorithms.
Phenomenology
In this section we discuss the results obtained with both the flavoured k T and flavoured anti-k T jet algorithms for W + (→ ℓν)bb production, focussing on the W + signature. The previous study of the flavoured version of the anti-k T algorithm [33] investigated examples of Z+b and tt processes. By comparing NLO+PS and NNLO QCD corrections, it was concluded that an optimal value for the jet-algorithm parameter a is around a = 0.1. This choice minimises the estimated unfolding corrections and, at the same time, leads to perturbatively stable predictions. We extend the previous studies of this algorithm to a high multiplicity process: Wbb production. We structure our presentation to highlight two important aspects in particular:
• the difference between the flavoured k T and flavoured anti-k T algorithms;
• the impact of the jet-algorithm parameter a.

Table 1: Fiducial cross sections for inclusive pp → W + (→ ℓ + ν)bb production at the LHC with √s = 8 TeV at LO, NLO and NNLO QCD using the flavour-k T and flavour anti-k T algorithms. The corresponding K-factors are defined in Eq. (3.1). The statistical errors are shown in parentheses and correspond to the central predictions, while sub- and superscripts denote the theoretical uncertainty calculated using the standard 7-point scale variation.

Inclusive W + (→ ℓ + ν)bb cross sections:
Jet algorithm | σ_LO [fb] | σ_NLO [fb] | K_NLO | σ_NNLO [fb] | K_NNLO
We quantify the latter as the difference between the results for different values of a, which we choose as a = 0.05, 0.1, 0.2 following Ref. [33]. We therefore present differential distributions for which the difference between k T and anti-k T is particularly strong, to highlight its origins. Similarly, we present differential distributions which are particularly dependent on the value of a, and identify the corresponding sensitive regions.
In addition, the usage of the flavoured anti-k T algorithm puts us in a good position to compare our theoretical predictions against the experimental analyses, which are typically done using the anti-k T algorithm. To this end, we compare our theoretical predictions for the total cross section to the measurement of Ref. [18], showing good agreement and corroborating the need for NNLO QCD corrections in this process.
We present results for the total cross sections in Section 3.1, and for a number of differential distributions in Section 3.2. Unless explicitly indicated by 'standard', we will be referring to the flavoured versions of the jet algorithms when mentioning the k T and anti-k T algorithms.
Total cross sections and comparison with the CMS data
In this section we present our results for the total cross section in both the inclusive and exclusive setups. We compare the latter against the available CMS data [18]. The NLO and NNLO K-factors are defined by
K_NLO = σ_NLO / σ_LO ,   K_NNLO = σ_NNLO / σ_NLO .   (3.1)
The integrated cross-sections for the W + signature in the inclusive setup are presented in Table 1. We also report the theoretical uncertainty of the cross section due to missing higher orders, which we estimate as customary by varying the renormalisation and factorisation scales. The scale dependence of the inclusive cross section is estimated using the standard 7-point scale variation, where the renormalisation µ_R and factorisation µ_F scales are varied according to µ_R = c_R µ_0, µ_F = c_F µ_0 with c_R, c_F ∈ {1/2, 1, 2} and 1/2 ≤ c_R/c_F ≤ 2, where µ_0 = H_T is the chosen central scale, defined in Eq. (2.3).

Table 2: Fiducial cross sections for exclusive pp → W(→ ℓν)bb (combined W ± signature) production at the LHC with √s = 8 TeV at LO, NLO and NNLO QCD using the flavour-k T and flavour anti-k T algorithms. The structure is the same as in Table 1, except we additionally provide the uncorrelated scale variation (shown in parentheses). The results for the flavour-k T algorithm were already presented in Ref. [7], and are provided here for comparison.

The relative difference between the flavoured algorithms is significant, reaching ∼ 50% at NNLO, and gets modified with perturbative corrections due to different K-factors. As pointed out in Section 2, this difference comes from the particular behaviour of the flavour modification of the k T algorithm in the small ∆R bb region. The scale uncertainty of the anti-k T setups is larger than the uncertainty of the flavour-k T algorithm and depends only slightly on the jet-algorithm parameter a. The K-factors are also larger by the same amount, and increase for decreasing a-parameter, indicating an expected breakdown of the perturbative convergence due to logarithms of small a. In the next section we will see that the larger values and scale uncertainties of the anti-k T setup originate from the region where the distance between the b-flavoured jets ∆R bb is small.
Exclusive W ± (→ ℓ ± ν)bb cross sections:
Jet algorithm | σ_LO [fb] | σ_NLO [fb] | K_NLO | σ_NNLO [fb] | K_NNLO
Next, we turn our attention to the exclusive setup. The measurement of Ref. [18] presented a cross section in the exclusive setup for the combined W ± signature. The corresponding theoretical total cross sections, obtained using both the flavoured k T and anti-k T jet algorithms, are presented in Table 2. As in the inclusive case, the theoretical uncertainty is estimated using the standard 7-point scale variation. In addition, in order to mitigate the well-known underestimation of theoretical uncertainty in the exclusive setup, we also include the more conservative uncorrelated scale variation, proposed in Ref. [27]. To obtain the uncorrelated theoretical uncertainty, we write the exclusive Wbb cross section at NLO (NNLO) as

σ_(N)NLO,Wbb,exc = σ_(N)NLO,Wbb,inc − σ_(N)LO,Wbbj,inc .
The first term on the right-hand side is the original inclusive Wbb production, while the second term is the inclusive Wbb production in association with a hard jet. The theoretical uncertainty of the exclusive Wbb cross section is then taken to be

∆σ_(N)NLO,Wbb,exc = [ (∆σ_(N)NLO,Wbb,inc)² + (∆σ_(N)LO,Wbbj,inc)² ]^(1/2) ,

where ∆σ_(N)NLO,Wbb,inc and ∆σ_(N)LO,Wbbj,inc are the theoretical uncertainties of the Wbb and Wbbj inclusive cross sections respectively, as obtained from direct 7-point scale variation. We observe that the corrections, as well as the standard and uncorrelated scale-variation uncertainties, behave similarly to the inclusive case. In Figure 2 we show the comparison of the CMS measurement against our theoretical predictions, obtained with the flavoured anti-k T algorithm. We note that our theoretical predictions include both the multiplicative hadronisation correction factor (0.81 ± 0.07 pb) and the additive double parton interaction (DPI) correction (0.06 ± 0.06 pb), as specified in Ref. [18]. Since uncertainties on these factors do not originate from fixed order predictions, we separate them from the estimate of missing higher orders. After being added in quadrature to the uncorrelated theoretical uncertainties, they contribute to the full theoretical uncertainty, shown by dotted error bars in Figure 2. We checked that our NLO results are compatible with the NLO predictions shown in Figure 3 of Ref. [18]. The NNLO QCD corrections appear to be significant, and their inclusion in the theoretical prediction improves substantially the agreement with the experimental data. Moreover, we find consistent agreement for all the considered values of a.
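The scale-variation machinery just described can be summarised in a few lines. The sketch below is our own: `sigma` is a hypothetical cross-section callable, the envelope enumerates the 7-point grid, and the exclusive uncertainty combines the inclusive Wbb and Wbbj variations in quadrature as in the uncorrelated prescription:

```python
import itertools
import math

def seven_point_envelope(sigma, mu0):
    """Evaluate sigma(muR, muF) on the 7-point grid:
    muR, muF in {mu0/2, mu0, 2*mu0} with 1/2 <= muR/muF <= 2."""
    grid = [(r, f) for r, f in itertools.product((0.5, 1.0, 2.0), repeat=2)
            if 0.5 <= r / f <= 2.0]                 # 7 of the 9 points survive
    vals = [sigma(r * mu0, f * mu0) for r, f in grid]
    central = sigma(mu0, mu0)
    return central, max(vals) - central, central - min(vals)

def uncorrelated_exclusive(sig_wbb_inc, dsig_wbb_inc, sig_wbbj_inc, dsig_wbbj_inc):
    """Exclusive cross section, with the inclusive Wbb and Wbbj scale
    uncertainties combined in quadrature."""
    sig_exc = sig_wbb_inc - sig_wbbj_inc
    dsig_exc = math.sqrt(dsig_wbb_inc**2 + dsig_wbbj_inc**2)
    return sig_exc, dsig_exc
```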
Differential distributions
In this section we present results for a number of differential distributions, focusing on the inclusive W + (→ + ν)bb production. We selected distributions which showcase the difference between the flavour k T and anti-k T algorithms or exhibit a particularly strong dependence on the a-parameter. We start by showing the distributions for the distance between the b-flavoured jets ∆R bb and their azimuthal angular separation ∆φ bb in Figure 3. These distributions are of particular interest, as they explicitly enter the definition of the jet algorithms. One can clearly see that the region of small angular separation is the origin of the increased cross section and scale uncertainty of the anti-k T setups in comparison to the k T setup. The reason for this enhancement, which can be observed already at NLO, is the sensitivity to gluon splittings into b-quark pairs. These splittings are divergent, and are regulated by the jet definitions and phase space cuts. The flavoured k T suppresses this region and therefore is less sensitive to the gluon splittings. In both distributions the differences between the k T and anti-k T algorithms become slightly stronger in this region when higher order corrections are included. For larger angular separations, we observe that the scale uncertainty and the higher order corrections are similar for the flavour k T and anti-k T algorithms. The behaviour due to this enhancement for the anti-k T algorithms propagates to the total cross sections, discussed in Section 3.1.
As for the a-parameter, the results are sensitive to it at 0.5 < ∆R bb < 1. Lower values of ∆R bb are not allowed due to the selected jet radius R = 0.5. The differences between the selected anti-k T setups reach 25%. Similar observations hold for the ∆φ bb distribution in the ∆φ bb < 0.4 region.
On the left plot in Figure 4, we present the distribution of the invariant mass of the two b-jets. We observe a difference of order 50% between the flavoured anti-k T and k T algorithms at low energies, starting at 50 GeV, gradually vanishing by 200 GeV. Once again, we see that the scale band and results are slightly larger for the flavoured anti-k T setups. This feature can also be attributed to the sensitivity to gluon splittings. With respect to the distributions in Figure 3, the sensitivity to the a-parameter at low energies is in this case smaller. The setup at a = 0.2 is further away from the other two, reflecting the increasing modification of the clustering sequence due to the flavour modifications. The differences in the K-factors at NLO are completely determined by the standard jet algorithm differences, whereas at NNLO the K-factors are very similar.
The right plot of Figure 4 shows the p T distribution of the b-jet pair. The difference between k T and anti-k T , in this case, shows up at higher energies, starting at 40 GeV and reaching a factor of 2. A high p T of the pair means that the jets are moving in the same direction. We observe that the relative distance between the curves with different values of the a parameter is roughly constant above 60 GeV. This might indicate that the proximity of the b-jets depends only weakly on the p T of the pair. The peak of the absolute value distribution is shifted from 35 to 60 GeV for the anti-k T algorithm due to the contribution from the region of small distance between the quarks in the bb-pair. This feature is observed already at NLO when using the standard anti-k T algorithm.
In Figure 5 we show two observables which are particularly sensitive to the a-parameter. On the left, starting at 100 GeV, the W-boson transverse mass shows a noticeable split between the anti-k T setups with different values of a. The differences between the selected setups reach 25% and extend beyond the scale band. For larger values of a, we also observed that the distribution tails of the anti-k T setups may go even below the k T setup. This sensitivity is already present at LO, and it is thus not an artefact of the QCD corrections. The region of large transverse mass is populated by events with a large recoil against the bb system, and thus events where the bb pair are kinematically close to each other.
As shown on the right of Figure 5, the charged lepton p T distribution also shows significant sensitivity to the a-parameter, starting at 60 GeV and reaching 25% by 300 GeV. We note, however, that this sensitivity is not observed at LO, where the setups match each other almost completely, and only starts occurring at NLO, as is reflected by the corresponding K-factor. The NNLO corrections amplify these differences.

Figure 5: Distribution of the W + -boson transverse mass (left) and of the positively charged lepton transverse momentum (right) for inclusive pp → W + (→ ℓ + ν)bb production. The individual plot structure is the same as in Figure 3.

Table 3: NLO, NNLO and NLO+ fiducial cross sections for inclusive pp → W + (→ ℓ + ν)bb production at the LHC with √s = 8 TeV using the flavour-k T and flavour anti-k T algorithms. The corresponding K-factors are defined in Eq. (4.2). The statistical errors are shown in parentheses and correspond to the central predictions, while sub- and superscripts denote the theoretical uncertainty calculated using the standard 7-point scale variation.
NNLO versus NLO-merged calculations
The large corrections and strong dependence on the renormalisation and factorisation scales in the NLO QCD calculation of the inclusive Wbb production originate from the tree-level qg(qg) → Wbbq(q) subprocess, which opens up only at this order [8-11]. Efforts have been made to evaluate such subprocesses at higher accuracy by computing the pp → Wbbj process at NLO QCD [14, 45]. pp → Wbbj production at NLO QCD constitutes one of the ingredients of the NNLO QCD calculation of Wbb production. The NLO QCD prediction for Wbb production can then be improved by taking into account the contribution from Wbbj production computed also at NLO QCD accuracy, achieved through a particular merging scheme. Such an NLO-merged prediction contains some of the NNLO QCD corrections. The aim of this section is to compare the NNLO prediction for inclusive Wbb production against the NLO-merged calculation and assess the genuine impact of the full NNLO QCD corrections.
We employ only fixed order calculations to carry out the comparisons and focus on the W + bb signature. To this end, we make use of the exclusive sums method to merge fixed order NLO QCD calculations with different jet multiplicities [15, 46]. The resulting NLO-merged prediction will further be denoted as 'NLO+'. The inclusive NLO+ prediction for Wbb production is derived by summing up the exclusive NLO QCD Wbb and inclusive NLO QCD Wbbj contributions

σ_NLO+,Wbb,inc = σ_NLO,Wbb,exc + σ_NLO,Wbbj,inc .   (4.1)
Note that, if the last term of Eq. (4.1) were evaluated at leading order, we would recover the NLO inclusive cross section for Wbb production. We first present in Table 3 the comparison between NNLO and NLO+ predictions at the level of fiducial cross section. The NLO, NNLO and NLO+ cross sections are presented together with the corresponding K-factors defined by

K_NNLO = σ_NNLO / σ_NLO ,   K_NLO+ = σ_NLO+ / σ_NLO .   (4.2)
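In code, the exclusive-sums merging of Eq. (4.1) and the K-factors of Eq. (4.2) amount to simple arithmetic; the numbers in the usage line below are the flavour-k T entries of Table 3:

```python
def nlo_plus_inclusive(sig_nlo_wbb_exc, sig_nlo_wbbj_inc):
    """Exclusive-sums merging, Eq. (4.1)."""
    return sig_nlo_wbb_exc + sig_nlo_wbbj_inc

def k_factor(sig_higher, sig_nlo):
    """K-factors of Eq. (4.2), e.g. K_NNLO = sigma_NNLO / sigma_NLO."""
    return sig_higher / sig_nlo

# Flavour-k_T column of Table 3: sigma_NLO = 362.0 fb, sigma_NLO+ = 426 fb
print(round(k_factor(426.0, 362.0), 2))  # 1.18, matching K_NLO+ in Table 3
```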
We note that the NLO+ predictions already capture a significant amount of the NNLO corrections, although they still fall short of the NNLO cross sections: about 4% lower for the flavour-k T prediction, and up to 8% lower for the flavour anti-k T predictions. It is important to note that the NLO+ predictions already exhibit a much improved scale dependence over the NLO results, essentially resembling the NNLO level.
Comparisons between the differential distributions for NNLO and NLO-merged calculations are shown in Figures 6 and 7. In particular, we show the invariant mass of the bb system (M bb), and the separation in the rapidity and azimuthal angle between the two leading b-jets (∆R bb). We display the LO, NLO, NNLO and NLO+ predictions obtained using the flavour-k T jet algorithm (left plots) and the flavour anti-k T jet algorithm with a = 0.1 (right plots). In general, we also observe the same characteristics already highlighted in the comparison of the fiducial cross sections, namely that the behaviour and scale dependence of the NLO+ predictions is markedly similar to the NNLO QCD corrections. The NLO+ results generally exhibit a lower normalisation, as already observed in the fiducial cross section comparison. For the differential distributions, however, in some phase-space regions the NLO+ calculations display higher predictions than the NNLO ones, which can be particularly seen in the tails of the M bb distribution (cf. Figure 6).

Figure 7: ∆R bb differential distributions for inclusive pp → W + (→ ℓ + ν)bb production obtained using the flavour-k T (left) and flavour anti-k T (right) jet algorithms. The individual plot structure is the same as in Figure 6.
Conclusions
We considered the production and leptonic decay of a W-boson in association with a bottom-quark pair at the LHC, at NNLO QCD accuracy. Such a final state requires the use of a flavour-sensitive jet algorithm to define an IR-safe fixed order prediction beyond NLO QCD.
We computed cross sections and differential distributions using the flavoured modification of the anti-k T jet algorithm proposed in Ref. [33]. We compared its output for three different values of the tuneable parameter a against predictions by the flavour-k T jet algorithm [28], where we observed a 50% difference for the integrated cross section, coming from the small bb-distance region. The scale band is increased accordingly, which can be understood as a consequence of sensitivity to gluon to b-quark pair splittings. We compared our theoretical predictions for the exclusive cross section, where we require exactly two b-jets and no other jets in the final state, obtained with the flavoured anti-k T jet algorithm, against the measurement by the CMS collaboration [18]. We found good agreement for all the considered values of the a-parameter. The NNLO QCD corrections are significant, and their inclusion substantially improves the agreement with the data.
We showed differential observables that are particularly sensitive to the choice of the jet algorithm. The dominant differences between the flavoured k T and anti-k T algorithm can be attributed to the region of small angular separation between the bb-pair. This region shows also an enhanced sensitivity to the a parameter of the flavoured anti-k T algorithm. The determination of an optimal value for a for this process is left for future work.
Finally, we studied the genuine impact of the full NNLO QCD corrections by comparing the NNLO calculation against an improved NLO prediction, which is obtained by merging the pp → Wbb and pp → Wbbj NLO QCD calculations using the exclusive sums technique. We found that the NLO-merged prediction captures a significant amount of the NNLO QCD corrections, and at the same time shows improvements in the scale dependence, both at the level of integrated cross sections and differential distributions. Still, the NLO-merged predictions do not capture the NNLO effects entirely and, therefore, the complete NNLO QCD corrections are indispensable.
Figure 1: Distribution of ∆R bb at NLO for W + bb, calculated with different jet algorithms: flavour-k T , standard k T and anti-k T , and flavoured anti-k T with a selected set of values for the tuneable parameter a. The coloured bands show the scale uncertainty using the standard 7-point variation scheme for the calculations based on the flavour-k T and standard anti-k T algorithms. All calculations were performed simultaneously using the same Monte-Carlo seed.
Figure 2: Comparison between CMS data from Ref. [18] and the theoretical predictions using the flavoured anti-k T jet algorithm with different a-parameters in the exclusive setup for the W + and W − combined signature. The theoretical uncertainty is estimated by the standard 7-point scale variation (thick band). At NLO and NNLO we also show the theoretical uncertainty calculated using the uncorrelated prescription as described in the text (thin band). The multiplicative hadronisation and additive DPI correction factors are taken into account in all theoretical predictions. Additionally, we include (in quadrature) uncertainties on the DPI factor and hadronisation corrections [18] to the uncorrelated theoretical uncertainties via a dotted extension to the bands.
Figure 3: Distribution of ∆R bb and ∆φ bb for inclusive pp → W + (→ ℓ + ν)bb production. The second panel shows the ratio of all setups to the flavoured-k T algorithm. The coloured bands define the scale uncertainty for two calculations: flavour-k T , and flavoured anti-k T with a = 0.1. The last two panels show the K-factors at NNLO and NLO, correspondingly. The vertical bars define the statistical uncertainty. All calculations were performed simultaneously using the same Monte-Carlo seed.
Figure 4: Distribution of the invariant mass (left) and transverse momentum (right) of the bb-pair for inclusive pp → W + (→ ℓ + ν)bb production. The individual plot structure is the same as in Figure 3.
Figure 6: M bb differential distributions for inclusive pp → W + (→ ℓ + ν)bb production obtained using the flavour-k T (left) and flavour anti-k T (right) jet algorithms. LO, NLO, NNLO and NLO+ predictions are presented. The lower panel shows the ratio to the NLO QCD calculation. The vertical bars define the statistical uncertainty.
Inclusive W + (→ ℓ + ν)bb cross sections (Table 3):

Jet algorithm               | σ_NLO [fb]             | σ_NNLO [fb]          | K_NNLO | σ_NLO+ [fb]          | K_NLO+
flavour-k T                 | 362.0(6) +13.7% −11.4% | 445(5) +6.7% −7.0%   | 1.23   | 426(5) +7.6% −8.9%   | 1.18
flavour anti-k T (a = 0.05) | 500.9(8) +16.1% −12.8% | 690(7) +10.9% −9.7%  | 1.38   | 635(6) +11.2% −11.1% | 1.27
flavour anti-k T (a = 0.1)  | 497.8(8) +16.0% −12.7% | 677(7) +10.4% −9.4%  | 1.36   | 626(6) +10.8% −10.9% | 1.26
flavour anti-k T (a = 0.2)  | 486.3(8) +15.5% −12.5% | 647(7) +9.5% −8.9%   | 1.33   | 602(6) +10.2% −10.5% | 1.24
Acknowledgments

The authors would like to thank Michał Czakon for making the Stripper library available to us, and Simon Badger and Alexander Mitov for many useful discussions. This project received funding from the European Union's Horizon 2020 research and innovation programmes New level of theoretical precision for LHC Run
[1] S. Kallweit, V. Sotnikov and M. Wiesemann, "Triphoton production at hadron colliders in NNLO QCD," 2010.04681.
[2] H. A. Chawdhry, M. Czakon, A. Mitov and R. Poncelet, "NNLO QCD corrections to three-photon production at the LHC," JHEP 02 (2020) 057, [1911.00479].
[3] H. A. Chawdhry, M. Czakon, A. Mitov and R. Poncelet, "NNLO QCD corrections to diphoton production with an additional jet at the LHC," 2105.06940.
[4] S. Badger, T. Gehrmann, M. Marcoli and R. Moodie, "Next-to-leading order QCD corrections to diphoton-plus-jet production through gluon fusion at the LHC," 2109.12003.
[5] M. Czakon, A. Mitov and R. Poncelet, "Next-to-Next-to-Leading Order Study of Three-Jet Production at the LHC," Phys. Rev. Lett. 127 (2021) 152001, [2106.05331].
[6] X. Chen, T. Gehrmann, N. Glover, A. Huss and M. Marcoli, "Automation of antenna subtraction in colour space: gluonic processes," 2203.13531.
[7] H. B. Hartanto, R. Poncelet, A. Popescu and S. Zoia, "NNLO QCD corrections to Wbb production at the LHC," 2205.01687.
[8] R. K. Ellis and S. Veseli, "Strong radiative corrections to W b anti-b production in p anti-p collisions," Phys. Rev. D 60 (1999) 011501, [hep-ph/9810489].
[9] F. Febres Cordero, L. Reina and D. Wackeroth, "NLO QCD corrections to W boson production with a massive b-quark jet pair at the Tevatron p anti-p collider," Phys. Rev. D 74 (2006) 034007, [hep-ph/0606102].
[10] F. Febres Cordero, L. Reina and D. Wackeroth, "W- and Z-boson production with a massive bottom-quark pair at the Large Hadron Collider," Phys. Rev. D 80 (2009) 034015, [0906.1923].
[11] S. Badger, J. M. Campbell and R. K. Ellis, "QCD Corrections to the Hadronic Production of a Heavy Quark Pair and a W-Boson Including Decay Correlations," JHEP 03 (2011) 027, [1011.6647].
[12] C. Oleari and L. Reina, "W± bb production in POWHEG," JHEP 08 (2011) 061, [1105.4488].
[13] R. Frederix, S. Frixione, V. Hirschi, F. Maltoni, R. Pittau and P. Torrielli, "W and Z/γ* boson production in association with a bottom-antibottom pair," JHEP 09 (2011) 061, [1106.6019].
[14] G. Luisoni, C. Oleari and F. Tramontano, "Wbbj production at NLO with POWHEG+MiNLO," JHEP 04 (2015) 161, [1502.01213].
[15] F. R. Anger, F. Febres Cordero, H. Ita and V. Sotnikov, "NLO QCD predictions for Wbb production in association with up to three light jets at the LHC," Phys. Rev. D 97 (2018) 036018, [1712.05721].
[16] D0 collaboration, V. M. Abazov et al., "A Search for Wbb and WH Production in pp Collisions at √s = 1.96 TeV," Phys. Rev. Lett. 94 (2005) 091802, [hep-ex/0410062].
[17] CMS collaboration, S. Chatrchyan et al., "Measurement of the Production Cross Section for a W Boson and Two b Jets in pp Collisions at √s = 7 TeV," Phys. Lett. B 735 (2014) 204-225, [1312.6608].
[18] CMS collaboration, V. Khachatryan et al., "Measurement of the production cross section of a W boson in association with two b jets in pp collisions at √s = 8 TeV," Eur. Phys. J. C 77 (2017) 92, [1608.07561].
[19] G. Ferrera, M. Grazzini and F. Tramontano, "Higher-order QCD effects for associated WH production and decay at the LHC," JHEP 04 (2014) 039, [1312.1669].
[20] J. M. Campbell, R. K. Ellis and C. Williams, "Associated production of a Higgs boson at NNLO," JHEP 06 (2016) 179, [1601.00658].
[21] G. Ferrera, G. Somogyi and F. Tramontano, "Associated production of a Higgs boson decaying into bottom quarks at the LHC in full NNLO QCD," Phys. Lett. B 780 (2018) 346-351, [1705.10304].
[22] F. Caola, G. Luisoni, K. Melnikov and R. Röntsch, "NNLO QCD corrections to associated WH production and H → bb decay," Phys. Rev. D 97 (2018) 074022, [1712.06954].
[23] R. Gauld, A. Gehrmann-De Ridder, E. W. N. Glover, A. Huss and I. Majer, "Associated production of a Higgs boson decaying into bottom quarks and a weak vector boson decaying leptonically at NNLO in QCD," JHEP 10 (2019) 002, [1907.05836].
[24] A. Behring, W. Bizoń, F. Caola, K. Melnikov and R. Röntsch, "Bottom quark mass effects in associated WH production with the H → bb decay through NNLO QCD," Phys. Rev. D 101 (2020) 114012, [2003.08321].
[25] W. Astill, W. Bizon, E. Re and G. Zanderighi, "NNLOPS accurate associated HW production," JHEP 06 (2016) 154, [1603.01620].
[26] S. Zanoli, M. Chiesa, E. Re, M. Wiesemann and G. Zanderighi, "Next-to-next-to-leading order event generation for VH production with H → bb decay," 2112.04168.
[27] I. W. Stewart and F. J. Tackmann, "Theory Uncertainties for Higgs and Other Searches Using Jet Bins," Phys. Rev. D 85 (2012) 034011, [1107.2117].
[28] A. Banfi, G. P. Salam and G. Zanderighi, "Infrared safe definition of jet flavor," Eur. Phys. J. C 47 (2006) 113-124, [hep-ph/0601139].
[29] R. Gauld, A. Gehrmann-De Ridder, E. W. N. Glover, A. Huss and I. Majer, "Precise predictions for WH+jet production at the LHC," Phys. Lett. B 817 (2021) 136335, [2009.14209].
[30] R. Gauld, A. Gehrmann-De Ridder, E. W. N. Glover, A. Huss and I. Majer, "Predictions for Z-Boson Production in Association with a b-Jet at O(α_s³)," Phys. Rev. Lett. 125 (2020) 222002, [2005.03016].
[31] M. Czakon, A. Mitov, M. Pellen and R. Poncelet, "NNLO QCD predictions for W+c-jet production at the LHC," JHEP 06 (2021) 100, [2011.01011].
[32] M. Cacciari, G. P. Salam and G. Soyez, "The anti-k_t jet clustering algorithm," JHEP 04 (2008) 063, [0802.1189].
[33] M. Czakon, A. Mitov and R. Poncelet, "Infrared-safe flavoured anti-k_T jets," 2205.11879.
[34] R. Gauld, A. Huss and G. Stagnitto, "A dress of flavour to suit any jet," 2208.11138.
[35] M. Czakon, "A novel subtraction scheme for double-real radiation at NNLO," Phys. Lett. B 693 (2010) 259-268, [1005.0274].
[36] M. Czakon and D. Heymes, "Four-dimensional formulation of the sector-improved residue subtraction scheme," Nucl. Phys. B 890 (2014) 152-227, [1408.2500].
[37] M. Czakon, A. van Hameren, A. Mitov and R. Poncelet, "Single-jet inclusive rates with exact color at O(α_s⁴)," JHEP 10 (2019) 262, [1907.12911].
[38] M. Bury and A. van Hameren, "Numerical evaluation of multi-gluon amplitudes for High Energy Factorization," Comput. Phys. Commun. 196 (2015) 592-598, [1503.08612].
[39] F. Buccioni, S. Pozzorini and M. Zoller, "On-the-fly reduction of open loops," Eur. Phys. J. C 78 (2018) 70, [1710.11452].
[40] F. Buccioni, J.-N. Lang, J. M. Lindert, P. Maierhöfer, S. Pozzorini, H. Zhang et al., "OpenLoops 2," Eur. Phys. J. C 79 (2019) 866, [1907.13071].
[41] S. Badger, H. B. Hartanto and S. Zoia, "Two-Loop QCD Corrections to Wbb Production at Hadron Colliders," Phys. Rev. Lett. 127 (2021) 012001, [2102.02516].
[42] D. Chicherin, V. Sotnikov and S. Zoia, "Pentagon Functions for One-Mass Planar Scattering Amplitudes," 2110.10111.
[43] NNPDF collaboration, R. D. Ball et al., "Parton distributions from high-precision collider data," Eur. Phys. J. C 77 (2017) 663, [1706.00428].
[44] A. Buckley, J. Ferrando, S. Lloyd, K. Nordström, B. Page, M. Rüfenacht et al., "LHAPDF6: parton density access in the LHC precision era," Eur. Phys. J. C 75 (2015) 132, [1412.7420].
[45] L. Reina and T. Schutzmeier, "Towards Wbb + j at NLO with an Automatized Approach to One-Loop Computations," JHEP 09 (2012) 119, [1110.4438].
[46] SM and NLO MULTILEG Working Group, SM MC Working Group collaboration, J. Alcaraz Maestre et al., "The SM and NLO Multileg and SM MC Working Groups:
| []
|
[
"arXiv:quant-ph/9804009v1 2 Apr 1998 Metrical Quantization *",
"arXiv:quant-ph/9804009v1 2 Apr 1998 Metrical Quantization *"
]
| [
"John R Klauder \nDepartments of Physics and Mathematics\nUniversity of Florida Gainesville\n32611Fl\n"
]
| [
"Departments of Physics and Mathematics\nUniversity of Florida Gainesville\n32611Fl"
]
| [
"* Presented at the workshop on Quantum Future"
]
| Canonical quantization may be approached from several different starting points. The usual approaches involve promotion of c-numbers to q-numbers, or path integral constructs, each of which generally succeeds only in Cartesian coordinates. All quantization schemes that lead to Hilbert space vectors and Weyl operators-even those that eschew Cartesian coordinates-implicitly contain a metric on a flat phase space. This feature is demonstrated by studying the classical and quantum "aggregations", namely, the set of all facts and properties resident in all classical and quantum theories, respectively. Metrical quantization is an approach that elevates the flat phase space metric inherent in any canonical quantization to the level of a postulate. Far from being an unwanted structure, the flat phase space metric carries essential physical information. It is shown how the metric, when employed within a continuous-time regularization scheme, gives rise to an unambiguous quantization procedure that automatically leads to a canonical coherent state representation. Although attention in this paper is confined to canonical quantization we note that alternative, nonflat metrics may also be used, and they generally give rise to qualitatively different, noncanonical quantization schemes. | 10.1007/bfb0105343 | [
"https://export.arxiv.org/pdf/quant-ph/9804009v1.pdf"
]
| 18,116,353 | quant-ph/9804009 | 87cc46b4fb7c8185fb0a518211b37586fd5dd2d4 |
Metrical Quantization *
September, 1997
John R Klauder
Departments of Physics and Mathematics
University of Florida Gainesville
32611Fl
* Presented at the workshop on Quantum Future
Przesieka, Poland, September 1997
Canonical quantization may be approached from several different starting points. The usual approaches involve promotion of c-numbers to q-numbers, or path integral constructs, each of which generally succeeds only in Cartesian coordinates. All quantization schemes that lead to Hilbert space vectors and Weyl operators-even those that eschew Cartesian coordinates-implicitly contain a metric on a flat phase space. This feature is demonstrated by studying the classical and quantum "aggregations", namely, the set of all facts and properties resident in all classical and quantum theories, respectively. Metrical quantization is an approach that elevates the flat phase space metric inherent in any canonical quantization to the level of a postulate. Far from being an unwanted structure, the flat phase space metric carries essential physical information. It is shown how the metric, when employed within a continuous-time regularization scheme, gives rise to an unambiguous quantization procedure that automatically leads to a canonical coherent state representation. Although attention in this paper is confined to canonical quantization we note that alternative, nonflat metrics may also be used, and they generally give rise to qualitatively different, noncanonical quantization schemes.
Introduction
Quantization, like any other procedure, lends itself to an axiomatization. As discussed shortly, there are such procedures that characterize the usual quantization proposals of Heisenberg, Schrödinger, and Feynman. Hidden in these procedures is an often unstated assumption, namely, that the coordinates in which the very quantization rules are laid down must be chosen to be Cartesian whenever a canonical quantization is sought. This procedural step is so ingrained and automatic that it is often overlooked or ignored for what it really is, namely, an essential assumption in the given procedure. In this paper we briefly review postulates of the usual quantization procedures and introduce yet another procedure we refer to as metrical quantization.
Let us start with a brief review of classical mechanics.
Classical mechanics
Consider a phase space M for a single degree of freedom which is two dimensional. As a symplectic manifold the space M is endowed with a symplectic two form ω, which is nondegenerate and closed, dω = 0. Darboux's Theorem assures us that local coordinates p and q exist such that ω = dp ∧ dq in the given coordinates. Such coordinates are referred to as canonical coordinates, and any coordinate transformation with a unit Jacobian leads from one set of canonical coordinates to another set of canonical coordinates. Indeed, if r and s denote another pair of canonical coordinates, then it follows that rds = pdq + dF (s, q) for some generator F . The new coordinates are canonical since the exterior derivative of both sides of this relation yields dr ∧ ds = dp ∧ dq = ω.
Besides the kinematical aspects of the classical theory of mechanics, dynamics arises with the introduction of a distinguished scalar, the Hamiltonian H, or as expressed in the original canonical coordinates, the function H(p, q). By a scalar we mean that H(r, s) ≡ H(p(r, s), q(r, s)) = H(p, q), an equation which indicates how H transforms under (canonical) coordinate transformations. Finally, classical dynamics may be introduced as the stationary paths of a distinguished action functional given in coordinate form by
I = \int \left[ p\,\dot{q} + \dot{G}(p,q) - H(p,q) \right] dt , \qquad (1)
subject to variations that hold both p(t) and q(t) fixed at the initial time t = 0 and the final time t = T. The resultant equations are independent of the gauge function G, and are given by

\dot{q} = \partial H(p,q)/\partial p , \qquad \dot{p} = -\partial H(p,q)/\partial q . \qquad (2)
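For concreteness, a minimal numerical sketch of equations (2) follows (Python; an illustration, not part of the original text, with an arbitrarily chosen anharmonic Hamiltonian). It integrates the flow with a fourth-order Runge-Kutta step and checks that H(p,q) is conserved along the trajectory, anticipating (6) below.

import numpy as np

# Example Hamiltonian (an arbitrary illustrative choice): H = p^2/2 + q^2/2 + q^4
H = lambda p, q: 0.5*p**2 + 0.5*q**2 + q**4
dHdp = lambda p, q: p
dHdq = lambda p, q: q + 4*q**3

def rhs(state):
    # Hamilton's equations (2): qdot = dH/dp, pdot = -dH/dq; state = (p, q)
    p, q = state
    return np.array([-dHdq(p, q), dHdp(p, q)])

def rk4_step(state, dt):
    k1 = rhs(state)
    k2 = rhs(state + 0.5*dt*k1)
    k3 = rhs(state + 0.5*dt*k2)
    k4 = rhs(state + dt*k3)
    return state + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)

state = np.array([0.0, 1.0])        # (p, q) at t = 0
E0 = H(*state)
for _ in range(10000):              # evolve to t = 10
    state = rk4_step(state, 1e-3)
print(abs(H(*state) - E0))          # tiny: the energy E is conserved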
Note that the exterior derivative of the one form pdq + dG(p, q) that appears in the action functional leads to d[pdq + dG(p, q)] = dp ∧ dq = ω. In this way the symplectic structure enters the dynamics. Lastly, we observe that the dynamical equations of motion may also be given a Poisson bracket structure. In particular, if
\{A, B\} \equiv \frac{\partial A}{\partial q}\frac{\partial B}{\partial p} - \frac{\partial A}{\partial p}\frac{\partial B}{\partial q} , \qquad (3)
then it follows that

\dot{q} = \{q, H(p,q)\} , \qquad \dot{p} = \{p, H(p,q)\} , \qquad (4)
and for a general function W(p,q) it follows that

\dot{W}(p,q) = \{W(p,q), H(p,q)\} . \qquad (5)
Thus, since \{B,A\} = -\{A,B\}, we observe that

\dot{H}(p,q) = \{H(p,q), H(p,q)\} = 0 , \qquad (6)
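The bracket relations (3)-(6) can be verified symbolically; the following sketch (an illustration using the same example Hamiltonian as above, not from the original text) implements (3) with sympy and confirms that {H, H} = 0.

import sympy as sp

p, q = sp.symbols('p q')

def poisson(A, B):
    # Eq. (3): {A, B} = (dA/dq)(dB/dp) - (dA/dp)(dB/dq)
    return sp.diff(A, q)*sp.diff(B, p) - sp.diff(A, p)*sp.diff(B, q)

H = p**2/2 + q**2/2 + q**4
print(poisson(q, H))                # p            (qdot, eq. 4)
print(poisson(p, H))                # -q - 4*q**3  (pdot, eq. 4)
print(sp.simplify(poisson(H, H)))   # 0            (eq. 6)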
and therefore H(p, q) = E, which is a constant of the motion usually identified with the energy.
The Classical Mechanics Aggregation
Let us collect all the concepts and formulas appropriate to classical mechanics, a few of which have been indicated above, in one place, and let us refer to that as the classical mechanics aggregation. For example, the classical mechanics aggregation would include the set of all canonical coordinates, the set of all Hamiltonians each of which is expressed in all possible canonical coordinates, the rules for dynamical evolution, and indeed the set of all solutions of the dynamical equations of motion for each Hamiltonian expressed in all possible canonical coordinates. Also in the classical mechanics aggregation would be the formulation of classical mechanics expressed in differential geometric form, i.e., as coordinate-free expressions and operations that effect the Poisson brackets, etc. Evidently, the classical mechanics aggregation contains all that is known and, implicitly, all that is knowable about classical mechanics! Let us develop an analogous aggregation appropriate to quantum mechanics.
The Quantum Mechanics Aggregation
There are a number of standard ideas and equations that enter into the formulation of quantum mechanics irrespective of the particular details of the system being quantized, and, for purposes of illustration, let us focus on systems with just one degree of freedom. We have in mind, for example, a Hilbert space composed of complex, square integrable functions over the real line, namely the space L 2 (IR), or a Hilbert space composed of square summable sequences, namely the space l 2 , etc. Operators arise in the form of functions of position and derivatives with respect to position, or functions of momentum and derivatives with respect to momentum, or semi-infinite square matrices, etc. Probability amplitudes occur in the form of inner products of two Hilbert space vectors, or more generally, matrix elements of an operator in the form of an inner product involving two vectors with an operator standing between them. Many of these concepts can be formulated in a coordinate-free language in terms of an abstract Hilbert space formulation and an abstract operator language as well. These elements form the arena in which quantum mechanics takes place. Quantum mechanics is also distinguished by equivalent sets of rules for the introduction of dynamics. For example, there is the abstract Schrödinger equation giving the time derivative of the state vector as the action of the Hamiltonian operator on the state vector, apart from suitable constants (ih). Alternatively, there is the Heisenberg equation of motion which equates the time derivative of an operator in the Heisenberg picture to the commutator of the operator with the Hamiltonian, again up to the same constants. Additionally, we mention the Feynman representation of the propagator as a path integral, a representation which in fact is a direct consequence of the abstract vector and operator language, or alternatively, a consequence of the Schrödinger equation and its solution for a suitable boundary condition.
We may also mention distinguished operator sets such as the Heisenberg canonical operators P and Q which, either abstractly or in a concrete realization, satisfy the fundamental commutation relation [Q,P] = i\hbar\mathbb{1}. If these operators are self adjoint then we may also consider the Weyl operators U[p,q] \equiv \exp[i(pQ - qP)/\hbar] for all real p and q. Armed with such operators and an arbitrary normalized vector |\eta\rangle in the Hilbert space, we may consider the canonical coherent states

|p,q\rangle \equiv |p,q;\eta\rangle \equiv e^{i(pQ-qP)/\hbar}|\eta\rangle . \qquad (7)
It is but a simple exercise to show that

\int |p,q\rangle\langle p,q| \; dp\,dq/2\pi\hbar = \mathbb{1} \qquad (8)
for any choice of the fiducial vector |η . Thus, coherent states, the representations of Hilbert space they induce, etc., are all implicitly contained within the quantum mechanical aggregation. Unitary transformations that map one form of Hilbert space vectors and one form of operators into another form are all part of the quantum mechanical aggregation. In short, everything kinematical and dynamical that one could think of belonging to Hilbert space, operator theory, quantum mechanics, etc., everything known and, implicitly, everything knowable about quantum mechanics is contained in the quantum mechanical aggregation. Now let us try to build a bridge between the classical mechanical aggregation and the quantum mechanical aggregation.
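Equation (8) lends itself to a direct numerical check. The sketch below is an illustration only (the truncation size, the grid, and the choice |η⟩ = |0⟩ with ℏ = 1 are assumptions of the demonstration): it accumulates ∫|p,q⟩⟨p,q| dp dq / 2πℏ over a finite grid in a truncated Fock basis, and the diagonal approaches unity on the low-lying states.

import numpy as np
from scipy.linalg import expm

hbar, N = 1.0, 40
a = np.diag(np.sqrt(np.arange(1, N)), 1)          # truncated lowering operator
Q = np.sqrt(hbar/2.0)*(a + a.conj().T)
P = 1j*np.sqrt(hbar/2.0)*(a.conj().T - a)

eta = np.zeros(N, dtype=complex); eta[0] = 1.0    # fiducial vector |0>

grid = np.linspace(-6.0, 6.0, 61)
w = (grid[1] - grid[0])**2                        # dp dq for the rectangle rule
I = np.zeros((N, N), dtype=complex)
for p in grid:
    for q in grid:
        ket = expm(1j*(p*Q - q*P)/hbar) @ eta     # coherent state |p,q>
        I += np.outer(ket, ket.conj())*w/(2*np.pi*hbar)

print(np.round(np.abs(np.diag(I))[:5], 3))        # ~[1. 1. 1. 1. 1.]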
Conventional Quantization
The act of quantization is designed to connect the principal entities in the classical mechanical aggregation with the appropriate entities in the quantum mechanical aggregation, in some cases in a one-to-one fashion, but in other cases in a many-one fashion. It is the genius of Heisenberg and Schrödinger that they were able to guess several basic concepts and quantities lying in the quantum mechanical aggregation and use these few ideas as stepping stones in order to construct a bridge between the classical and the quantum worlds. Feynman used a different set of concepts and quantities to select his stepping stones between these two worlds. In modern parlance, we could call these stepping stones "postulates" (or at the very least "assumptions").
Heisenberg quantization
In the case of Heisenberg quantization, we may cast the postulates in the form (for postulate 1. see below):
2. Introduce matrices Q = \{Q_{mn}\} and P = \{P_{mn}\}, where m, n \in \{1, 2, 3, \ldots\}, that satisfy [Q,P]_{mn} \equiv \Sigma_p (Q_{mp} P_{pn} - P_{mp} Q_{pn}) = i\hbar\,\delta_{mn}.
3. Build a Hamiltonian matrix H = \{H_{mn}\} as a function (e.g., polynomial) of the matrices, H_{mn} = H(P,Q)_{mn}, that is the same function as the classical Hamiltonian H(p,q). (In so doing there may be operator-ordering ambiguities which this prescription cannot resolve; choose an ordering that leads to a Hermitian operator.) 4. Introduce the equation of motion i\hbar\,\dot{X}_{mn} = [X,H]_{mn} for the elements of a general matrix X = \{X_{mn}\}. ✷ Along with these postulates comes the implicit task of solving the called-for equations of motion subject to suitable operator-valued boundary conditions. Once the several steps are accomplished, a general path has opened up as to how a given system is to be taken from its classical version to its quantum version.
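A minimal matrix realization of postulates 2-4 (a sketch using truncated harmonic-oscillator ladder matrices with ℏ = 1; the truncation size is an assumption) shows the commutation relation holding below the truncation edge and the expected spectrum for the choice H = (P² + Q²)/2.

import numpy as np

hbar, N = 1.0, 30
a = np.diag(np.sqrt(np.arange(1, N)), 1)     # truncated lowering operator
Q = np.sqrt(hbar/2.0)*(a + a.conj().T)       # position matrix {Q_mn}
P = 1j*np.sqrt(hbar/2.0)*(a.conj().T - a)    # momentum matrix {P_mn}

# Postulate 2: [Q, P]_mn = i*hbar*delta_mn on states below the truncation edge
C = Q @ P - P @ Q
assert np.allclose(np.diag(C)[:-1], 1j*hbar)

# Postulates 3-4: for H = (P^2 + Q^2)/2 the spectrum is hbar*(n + 1/2)
Hm = (P @ P + Q @ Q)/2.0
print(np.sort(np.linalg.eigvalsh(Hm))[:5])   # ~[0.5 1.5 2.5 3.5 4.5]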
Accepting these postulates, it becomes clear how the general classical system is to be connected with the general quantum system apart from one postulate that we have neglected and which was not immediately obvious to the founding fathers. The question arises as to exactly which choice of canonical coordinates are to be used when promoting the classical canonical variables to quantum canonical variables. After the principal paper on quantization [1], it subsequently became clear to Heisenberg that it is necessary to make this promotion from c-number to q-number variables only in Cartesian coordinates. Thus there is implicitly another postulate [2]:
1. Express the classical kinematical variables p and q in Cartesian coordinates prior to promoting them to matrices {P mn } and {Q mn }, respectively.
We will present a rationale for this postulate below.
Schrödinger quantization
The postulates for Schrödinger's formulation of quantization may be given in the following form [3]:
1. Express the classical kinematical variables p and q in Cartesian coordinates.
2. Promote the classical momentum p to the differential operator -i\hbar(\partial/\partial x) and the classical coordinate q to the multiplication operator x, a choice that evidently satisfies the commutation relation [x, -i\hbar(\partial/\partial x)] = i\hbar.
3. Define the Hamiltonian operator H as the classical Hamiltonian with the momentum variable p replaced by the operator -i\hbar(\partial/\partial x) and the coordinate variable q replaced by the operator x. (In so doing there may be operator-ordering ambiguities which this prescription cannot resolve; choose an ordering that leads to a Hermitian operator.) 4. For \psi(x) a complex, square-integrable function of x, introduce the dynamical equation i\hbar\dot{\psi} = H\psi. ✷ Implicit with these postulates is the instruction to solve the Schrödinger equation for a dense set of initial conditions and a large class of Hamiltonian operators, and in that way help to build up the essentials of the quantum mechanical aggregation.
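A standard finite-difference sketch of these postulates (an illustration; the grid, box size, and harmonic potential are assumptions) replaces -iℏ(∂/∂x) by a three-point stencil in p² = -ℏ²∂²/∂x² and recovers the low-lying oscillator spectrum.

import numpy as np

hbar = 1.0
x = np.linspace(-8.0, 8.0, 801)
dx = x[1] - x[0]
n = x.size

# Kinetic operator -(hbar^2/2) d^2/dx^2 via the three-point stencil
T = -(hbar**2/2.0)*(np.diag(np.ones(n-1), 1) + np.diag(np.ones(n-1), -1)
                    - 2.0*np.eye(n))/dx**2
V = np.diag(0.5*x**2)                 # harmonic potential in the Cartesian x
print(np.linalg.eigvalsh(T + V)[:4])  # ~[0.5 1.5 2.5 3.5]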
It is interesting to note that Schrödinger himself soon became aware of the fact that his procedure generally works only in Cartesian coordinates.
Feynman quantization
Feynman's formulation of quantization focuses on the solution to the Schrödinger equation and postulates that the propagator, an integral kernel that maps the wave function (generally in the Schrödinger representation) at one time to the wave function at a later time, may be given by means of a path integral expression [4]. On the surface, the (phase space) path integral, built from concepts of classical mechanics alone, would seem to get around the need for Cartesian coordinates; as we shall see, that is not the case. As postulates for a path integral quantization scheme we have:
1. Express the classical kinematical variables p and q in Cartesian coordinates.
2. Given that |q,t\rangle, where Q(t)|q,t\rangle = q|q,t\rangle, denote sharp position eigenstates, write the transition matrix element in the form of a path integral as

\langle q'', T | q', 0 \rangle = \mathcal{M} \int \exp\{(i/\hbar)\!\int [p\dot{q} - H(p,q)]\, dt\}\; \mathcal{D}p\, \mathcal{D}q . \qquad (9)
3. Recognize that the formal path integral of Step 2 is effectively undefined and replace it by a regularized form of path integral, namely,

\langle q'', T | q', 0 \rangle = \lim_{N\to\infty} M_N \int \exp\{(i/\hbar)\,\Sigma_{l=0}^{N} [p_{l+1/2}(q_{l+1}-q_l) - \epsilon H(p_{l+1/2}, (q_{l+1}+q_l)/2)]\}\; \Pi_{l=0}^{N}\, dp_{l+1/2}\; \Pi_{l=1}^{N}\, dq_l , \qquad (10)

where q_{N+1} = q'', q_0 = q', M_N = (2\pi\hbar)^{-(N+1)}, and \epsilon = T/(N+1). ✷ Implicit in the latter expression is a Weyl-ordering choice to resolve any operator-ordering ambiguities. Observe that the naive lattice formulation of the classical action leads to correct quantum mechanical results, generally speaking, only in Cartesian coordinates. Although the formal phase space path integral of postulate 2 appears superficially to be covariant under canonical coordinate transformations, it would be incorrect to conclude that was the case inasmuch as it would imply that the spectrum of diverse physical systems would be identical. In contrast, the naive lattice prescription applies only to Cartesian coordinates, the same family of coordinates singled out in the first postulate of each of the previous quantization schemes.
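To connect (10) with the more familiar configuration-space lattice, one may integrate out each p_{l+1/2} explicitly. For the special case H(p,q) = p²/2m + V(q) (an assumption made here for illustration only, not required by the postulates) each momentum integral is Gaussian, and one worked step reads

\int \frac{dp_{l+1/2}}{2\pi\hbar}\,
\exp\!\left\{\frac{i}{\hbar}\left[p_{l+1/2}(q_{l+1}-q_l)-\epsilon\,\frac{p_{l+1/2}^{2}}{2m}\right]\right\}
=\sqrt{\frac{m}{2\pi i\hbar\epsilon}}\;
\exp\!\left\{\frac{i}{\hbar}\,\frac{m(q_{l+1}-q_l)^{2}}{2\epsilon}\right\} ,

so that (10) collapses to the Feynman lattice form with the classical Lagrangian in the exponent; the Cartesian-coordinate caveat above applies equally to that form.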
Elements of the quantum mechanical aggregation
Traditional quantization-be it Heisenberg, Schrödinger, or Feynman-leads invariably to a Hilbert space (or a particular representation thereof), and to canonical operators (or particular representations thereof). For physical reasons we restrict attention to that subclass of systems wherein the canonical operators are self adjoint and obey not only the Heisenberg commutation relations but also the more stringent Weyl form of the commutation relations. In particular, we assert that a byproduct of any conventional quantization scheme-and even some nonconventional quantization schemes such as geometric quantization or deformation quantization-is to lead to (normalized) Hilbert space vectors, say |\eta\rangle, and a family of unitary Weyl operators U[p,q] \equiv \exp[i(pQ - qP)/\hbar], (p,q) \in {\rm I\!R}^2, that obey the standard Weyl commutation relation. These expressions lead directly to a set of coherent states each of the form |p,q\rangle \equiv U[p,q]|\eta\rangle. Given such conventional quantities lying in the quantum mechanical aggregation, and minimal domain assumptions, we first build the one form

\theta(p,q) \equiv i\hbar\, \langle p,q|\, d\, |p,q\rangle = \tfrac{1}{2}(p\, dq - q\, dp) + \langle P\rangle\, dq - \langle Q\rangle\, dp , \qquad (11)

where \langle(\,\cdot\,)\rangle \equiv \langle\eta|(\,\cdot\,)|\eta\rangle, and which is recognized as a natural candidate for the classical symplectic potential for a general |\eta\rangle. Indeed, d\theta = dp \wedge dq = \omega holds for a general |\eta\rangle. As a second quantity of interest, we build the Fubini-Study metric

d\sigma^2(p,q) \equiv 2\hbar^2 \left[\, \| d|p,q\rangle \|^2 - |\langle p,q|\, d\, |p,q\rangle|^2 \,\right] = 2\langle(\Delta Q)^2\rangle\, dp^2 + 2\langle(\Delta P)(\Delta Q) + (\Delta Q)(\Delta P)\rangle\, dp\, dq + 2\langle(\Delta P)^2\rangle\, dq^2 . \qquad (12)

Here \Delta Q \equiv Q - \langle Q\rangle, etc. The latter expression given above holds for a general vector |\eta\rangle. Observe well, for a general |\eta\rangle, that this phase space metric is always flat because all the metric coefficients are constants. Stated otherwise, for a general |\eta\rangle, the Fubini-Study metric invariably describes a flat phase space, here expressed in (almost) Cartesian coordinates thanks to the use of canonical group coordinates for the Weyl group. For "physical" vectors |\eta\rangle, defined such that \langle(\Delta Q)^2\rangle + \langle(\Delta P)^2\rangle = o(\hbar^0), it follows that the phase space metric is a quantum property and it vanishes in the limit \hbar \to 0; indeed, if |\eta\rangle is chosen as the ground state of a harmonic oscillator with unit angular frequency, then d\sigma^2 = \hbar(dp^2 + dq^2). One may of course change the coordinates, e.g., introduce r = r(p,q) and s = s(p,q); this may change the form of the metric coefficients for d\sigma^2, but it will not alter the fact that the underlying phase space is still flat.
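The flatness statement above is easy to probe numerically: the three metric coefficients in (12) are expectation values in |η⟩ alone and must not depend on (p,q). The sketch below (a check in a truncated Fock basis with ℏ = 1 and a randomly chosen |η⟩; the truncation and base points are assumptions of the demonstration) evaluates them in displaced states at several base points.

import numpy as np
from scipy.linalg import expm

hbar, N = 1.0, 60
a = np.diag(np.sqrt(np.arange(1, N)), 1)
Q = np.sqrt(hbar/2.0)*(a + a.conj().T)
P = 1j*np.sqrt(hbar/2.0)*(a.conj().T - a)

rng = np.random.default_rng(0)                   # random |eta> on low Fock states
eta = rng.normal(size=8) + 1j*rng.normal(size=8)
eta = np.concatenate([eta, np.zeros(N - 8)])
eta /= np.linalg.norm(eta)

def metric_coeffs(p, q):
    ket = expm(1j*(p*Q - q*P)/hbar) @ eta        # |p,q> = U[p,q]|eta>
    ev = lambda A: (ket.conj() @ (A @ ket)).real
    dQ = Q - ev(Q)*np.eye(N)
    dP = P - ev(P)*np.eye(N)
    # coefficients of dp^2, dp dq, dq^2 in eq. (12), up to the overall factor 2
    return ev(dQ @ dQ), ev(dP @ dQ + dQ @ dP), ev(dP @ dP)

for p, q in [(0.0, 0.0), (1.0, -2.0), (-2.5, 1.5)]:
    print(np.round(metric_coeffs(p, q), 6))      # the same triple at every (p, q)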
We conclude these remarks by emphasizing that inherent in any canonical quantization scheme is the implicit assumption of a flat phase space which can carry globally defined Cartesian coordinates. These properties automatically lie within the quantum mechanical aggregation for any quantization scheme that leads to Hilbert space vectors and canonical operators!
Metrical Quantization
We define metrical quantization by the following set of postulates:
1. Assign to classical phase space a flat space metric d\sigma^2, and choose Cartesian coordinates in such a way that

d\sigma^2(p,q) = \hbar\, (dp^2 + dq^2) . \qquad (13)
2. Introduce the regularized phase-space path integral, which explicitly uses the phase space metric, and is formally given by

K(p'',q'',T;\, p',q',0) = \lim_{\nu\to\infty} N_\nu \int \exp\{(i/\hbar)\!\int [(p\dot{q} - q\dot{p})/2 - h(p,q)]\, dt\} \times \exp\{-(1/2\nu)\!\int [\dot{p}^2 + \dot{q}^2]\, dt\}\; \mathcal{D}p\, \mathcal{D}q \qquad (14)

and more precisely given by

K(p'',q'',T;\, p',q',0) = \lim_{\nu\to\infty} 2\pi\hbar\, e^{\nu T/2} \int \exp\{(i/\hbar)\!\int [(p\, dq - q\, dp)/2 - h(p,q)\, dt]\}\; d\mu_W^\nu(p,q) , \qquad (15)

where \mu_W^\nu denotes a Wiener measure for two-dimensional Brownian motion on the plane expressed in Cartesian coordinates, and where \nu denotes the diffusion constant. Finally, we observe that, as a positive-definite function, it follows from the GNS (Gel'fand, Naimark, Segal) Theorem that

K(p'',q'',T;\, p',q',0) \equiv \langle p'',q''|\, e^{-iHT/\hbar}\, |p',q'\rangle , \qquad (16)

|p,q\rangle \equiv e^{i(pQ-qP)/\hbar}|0\rangle , \qquad [Q,P] = i\hbar\mathbb{1} , \qquad (17)

(Q + iP)|0\rangle = 0 , \qquad \langle 0|0\rangle = 1 , \qquad (18)

H \equiv \int h(p,q)\, |p,q\rangle\langle p,q|\; dp\, dq/2\pi\hbar . \qquad (19)
All these things follow from positive-definiteness, and the implication is that the Wiener measure regularized phase-space path integral automatically gives rise to the propagator expressed in a coherent-state representation. ✷
The canonical quantization formulation given above has raised the metric on a flat phase space to the level of a postulate. The assumption that the given coordinates are indeed Cartesian is by no means an arbitrary one. There is, in fact, a great deal of physics in the statement that certain coordinates are Cartesian. In the present case, we can read that physics straight out of (19), which relates the classical Hamiltonian h(p,q) to the quantum Hamiltonian operator H. The given integral representation is in fact equivalent to antinormal ordering, i.e., the monomial (q+ip)^k (q-ip)^l is quantized as the operator (Q+iP)^k (Q-iP)^l for all nonnegative integers k and l. Thus, for example, in these coordinates, the c-number expression p^2 + q^2 + q^4 is quantized as P^2 + Q^2 + Q^4 + O(\hbar). Of course, the latter term can be made explicit; here we are only interested in the fact that the leading terms [O(\hbar^0)] of the quantum Hamiltonian operator are exactly those as given by the classical Hamiltonian. This connection may seem evident but that is far from the case.
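As a one-line worked instance of the antinormal-ordering rule (with [Q,P] = iℏ):

p^{2}+q^{2}=(q+ip)(q-ip)\;\longrightarrow\;(Q+iP)(Q-iP)=Q^{2}+P^{2}-i[Q,P]=Q^{2}+P^{2}+\hbar ,

so the O(ℏ⁰) part reproduces the classical expression, while the promised O(ℏ) correction appears as the additive constant ℏ.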
Observe that, expressed in terms of the Brownian motion regularization, and when we define the stochastic integral \int p\, dq via a (midpoint) Stratonovich prescription (as we are free to do in Cartesian coordinates), the procedure of metrical quantization is actually covariant under canonical coordinate transformations. As noted earlier, such a transformation is determined by the expression r\, ds = p\, dq + dF(s,q) in the classical theory, and, thanks to the Stratonovich prescription, also in the quantum theory where the paths p and q are Brownian motion paths. The function h transforms as a scalar, and therefore h(r,s) \equiv h(p(r,s), q(r,s)) = h(p,q). Lastly, we transform the Wiener measure, which still describes Brownian motion on a flat two-dimensional plane, but now, generally speaking, in curvilinear coordinates. After the change of coordinates the propagator reads

K(r'',s'',T;\, r',s',0) = \lim_{\nu\to\infty} 2\pi\hbar\, e^{\nu T/2} \int \exp\{(i/\hbar)\!\int [r\, ds + dG(r,s) - h(r,s)\, dt]\}\; d\mu_W^\nu(r,s) . \qquad (20)

Here, dG denotes a total differential, which amounts to nothing more than a phase change of the coherent states, and \mu_W^\nu denotes Brownian measure on the flat two-dimensional plane expressed now in curvilinear coordinates rather than Cartesian coordinates. In this case the connection of the classical and quantum Hamiltonians is given by

H = \int h(r,s)\, |p(r,s),q(r,s)\rangle\langle p(r,s),q(r,s)|\; dr\, ds/2\pi\hbar .
Observe in this coordinate change that the coherent states have remained unchanged (only their names have changed) and, as a consequence, the Hamiltonian operator H is absolutely unchanged even though its c-number counterpart (symbol) is now expressed by h(r,s). In other words, the leading [O(\hbar^0)] dependence of h and H are no longer identical. As we have stressed elsewhere [5], the physical significance of the mathematical expression for a given classical quantity is encoded into the specific coordinate form of the auxiliary metric d\sigma^2; for example, if the metric is expressed in Cartesian coordinates, then the physical meaning of the classical Hamiltonian is that directly given by its coordinate form, as has been illustrated above by the anharmonic oscillator. Since quantization deals, for example, with the highly physical energy spectral values, it is mandatory that the mathematical expression for the Hamiltonian somehow "know" to which physical system it belongs. It is the role of the metric, and the very form of the metric coefficients themselves, to keep track of just what physical quantity is represented by any given mathematical expression. And that very metric is built right into the Wiener measure regularized phase-space path integral, which, along with the metric itself, is the centerpiece of metrical quantization. Although we do not develop the subject further here, it is noteworthy that choosing a different geometry to support the Brownian motion generally leads to a qualitatively different quantization. For example, if the two-dimensional phase space has the geometry of a sphere of an appropriate radius, then metrical quantization leads not to canonical operators but rather to spin (or angular momentum) kinematical operators that obey the Lie algebra commutation relations of SU(2). On the other hand, for a phase space with the geometry of a space of constant negative curvature, metrical quantization leads to kinematical operators that are the generators of the Lie algebra for SU(1,1). Stated otherwise, the geometry of the chosen metric in postulate 1 of metrical quantization-which then explicitly appears in the expression defining the Wiener measure regularization in postulate 2-actually determines the very nature of the kinematical operators in the metrical quantization procedure [6].
[1] M. Born, W. Heisenberg, and P. Jordan, "Zur Quantenmechanik II", Z. Phys. 35, 557-615 (1926).
[2] P. A. M. Dirac, The Principles of Quantum Mechanics (Clarendon Press, Oxford, 4th Edition, 1958), p. 114.
[3] E. Schrödinger, "Quantisierung als Eigenwertproblem I", Ann. Phys. 79, 361-376; II, ibid. 79, 489-527 (1926).
[4] R. P. Feynman, "Space-time Approach to Non-relativistic Quantum Mechanics", Rev. Mod. Phys. 20, 367-387 (1948).
[5] J. R. Klauder, "Understanding Quantization", Foundations of Physics 27, 1467-1483 (1997).
[6] J. R. Klauder, "Quantization Is Geometry, After All", Annals of Physics 188, 120-141 (1988).
| []
|
[
"A TULLY-FISHER RELATION FOR S0 GALAXIES",
"A TULLY-FISHER RELATION FOR S0 GALAXIES"
]
| [
"Eyal Neistein \nSchool of Physics & Astronomy and Wise Observatory\nTel-Aviv University\n69978Tel-AvivIsrael\n",
"Dan Maoz \nSchool of Physics & Astronomy and Wise Observatory\nTel-Aviv University\n69978Tel-AvivIsrael\n",
"Hans-Walter Rix \nSteward Observatory\nUniversity of Arizona\n85726TucsonAZ\n",
"John L Tonry \nInstitute for Astronomy\nUniversity of Hawaii\n2680 Woodlawn Dr96822HonoluluHI\n",
"Alfred P Sloan Fellow "
]
| [
"School of Physics & Astronomy and Wise Observatory\nTel-Aviv University\n69978Tel-AvivIsrael",
"School of Physics & Astronomy and Wise Observatory\nTel-Aviv University\n69978Tel-AvivIsrael",
"Steward Observatory\nUniversity of Arizona\n85726TucsonAZ",
"Institute for Astronomy\nUniversity of Hawaii\n2680 Woodlawn Dr96822HonoluluHI"
]
| []
| We present an I-band Tully-Fisher relation (TFR) for 18 nearby S0 galaxies using kinematics derived from long slit spectroscopy of stellar absorption lines. Our estimates of the circular velocity, V c , at 2-3 exponential disk scale lengths account for line-of-sight projection and for the stellar random motions through an asymmetric drift correction. Uniform and accurate distance calibration for all objects is available from surface brightness fluctuation measurements ofTonry et al. (1998). Despite the care taken in estimating both V c and M I , the TFR shows an intrinsic scatter, ∼ 0.7 mag in M I , or 0.15 in log 10 V c . This result is surprising, as S0 galaxies appear to have both the simple kinematics of disk galaxies, and the simple stellar populations of early-type galaxies. Remarkably, in this sample of overall rotation-dominated galaxies, the central stellar velocity dispersion is a better predictor of the total I-band luminosity (through the Fundamental Plane relations) than the circular speed at several exponential scale lengths. Furthermore, the TFR zeropoint, or the mean stellar I-band luminosity at a given V c , differs by only ∼ 0.5 mag between our sample of S0s and Mathewson et al.'s (1992) sample of late-type spirals, once both data sets are brought onto a consistent distance scale. This offset is less than expected if S0s are former spiral galaxies with prematurely truncated star-formation ( > ∼ 4 Gyrs ago). | 10.1086/300869 | [
"https://export.arxiv.org/pdf/astro-ph/9903007v1.pdf"
]
| 18,191,471 | astro-ph/9903007 | fe08685e10c68fb049fad6004395b11f7b3e3637 |
A TULLY-FISHER RELATION FOR S0 GALAXIES
28 Feb 1999
Eyal Neistein
School of Physics & Astronomy and Wise Observatory
Tel-Aviv University
69978Tel-AvivIsrael
Dan Maoz
School of Physics & Astronomy and Wise Observatory
Tel-Aviv University
69978Tel-AvivIsrael
Hans-Walter Rix
Steward Observatory
University of Arizona
85726TucsonAZ
John L Tonry
Institute for Astronomy
University of Hawaii
2680 Woodlawn Dr96822HonoluluHI
Alfred P Sloan Fellow
Subject headings: galaxies: elliptical and lenticular - galaxies: kinematics and dynamics - galaxies: photometry - galaxies: formation
We present an I-band Tully-Fisher relation (TFR) for 18 nearby S0 galaxies using kinematics derived from long slit spectroscopy of stellar absorption lines. Our estimates of the circular velocity, V c , at 2-3 exponential disk scale lengths account for line-of-sight projection and for the stellar random motions through an asymmetric drift correction. Uniform and accurate distance calibration for all objects is available from surface brightness fluctuation measurements ofTonry et al. (1998). Despite the care taken in estimating both V c and M I , the TFR shows an intrinsic scatter, ∼ 0.7 mag in M I , or 0.15 in log 10 V c . This result is surprising, as S0 galaxies appear to have both the simple kinematics of disk galaxies, and the simple stellar populations of early-type galaxies. Remarkably, in this sample of overall rotation-dominated galaxies, the central stellar velocity dispersion is a better predictor of the total I-band luminosity (through the Fundamental Plane relations) than the circular speed at several exponential scale lengths. Furthermore, the TFR zeropoint, or the mean stellar I-band luminosity at a given V c , differs by only ∼ 0.5 mag between our sample of S0s and Mathewson et al.'s (1992) sample of late-type spirals, once both data sets are brought onto a consistent distance scale. This offset is less than expected if S0s are former spiral galaxies with prematurely truncated star-formation ( > ∼ 4 Gyrs ago).
Introduction
The Tully-Fisher relation (TFR) is a correlation between some measure of the maximal, or asymptotic, circular velocity of the disk and the integrated stellar luminosity of a galaxy. Since its discovery (Tully & Fisher 1977) much effort has been invested in studying its manifestation at various wavelengths, its dependence on different kinematic tracers, and its differences among galaxy types.
Measures of the circular velocity have been derived from either the 21cm H I line width or from optical rotation curves (e.g., Mathewson, Ford, & Buchhorn 1992; Raychaudhury et al. 1997; Giovanelli et al. 1997a,b). The optical rotation curves in all these cases were derived from H II emission lines. Courteau (1997) has recently compared the TFRs based on H I widths and optical rotation curves and finds basic agreement among them. Integrated galaxy magnitudes were initially measured in the B-band and later in the I and H bands (see Aaronson et al. 1979, 1986). The slope (Aaronson & Mould 1983), the zero point, and the scatter of the TFR depend on the band (see Jacoby et al. 1992, for a summary; see also Tully et al. 1998). The lowest scatter has been found in the I band (~0.1 mag, Bernstein et al. 1994). Presumably, this is because the B magnitude is more influenced by dust extinction and short-lived stellar populations, while the infrared magnitude is a more robust measure of the total stellar mass of the galaxy.
While the TFR serves as a fundamental tool for measuring extragalactic distances, the physical mechanism behind its existence is also of great interest. Possible explanations for a well defined TFR are emerging (Aaronson et al. 1979; Schechter 1980; Eisenstein & Loeb 1996; Dalcanton, Spergel, & Summers 1997; Mo, Mao, & White 1998; Elizondo et al. 1998; Heavens & Jimenez 1999). Self-regulated star-formation, cosmologically-determined initial angular momentum distributions, and adiabatic baryon infall all seem to play important roles. Alternatively, Milgrom (1983; 1989) has advocated that his Modified Newtonian Dynamics (MOND), designed to explain the rotation curves of galaxies without resorting to dark matter, also naturally predicts a TFR. In the MOND picture, the intrinsic scatter in the TFR for a given galaxy population simply reflects the spread of mass-to-light ratios (M/L) in the population.
Most observational efforts have focussed on the TFR for late-type spiral galaxies, one extreme of the Hubble sequence. Rubin et al. (1985) studied the TFR for Sa, Sb and Sc galaxies, claiming a zero-point offset between Hubble types Sa and Sc that corresponds to 1.5 magnitudes in I. However, Giovanelli et al. (1997b) found an offset of only ∼ 0.3 mag between these Hubble types, and Aaronson & Mould (1983), Pierce & Tully (1988), and Bernstein et al. (1994) did not find a type dependence in their TFRs. None of these authors derived a TFR for the next Hubble type, S0s, because it is difficult to measure their rotation curves using H I or H II emission lines. Although 27% of the S0s in Roberts et al. (1991) have H I gas detected, in many of them the gas shows unusual characteristics, such as large velocity dispersions and counter-rotating components, and single-dish measurements often cannot reveal that the gas is concentrated in the inner regions or in an outer ring (e.g. Van Driel & Van Woerden 1991).
In this paper we explore an analogous relation for these earliest-type disk galaxies. S0 galaxies were classified by Hubble as a transition class between spirals and ellipticals (see van den Bergh 1997, for a recent review), and in the RSA catalog (Sandage & Tammann 1981) they comprise 11% of bright galaxies. The formation histories of S0s are not well understood and are likely to be heterogeneous. Their overabundance in cluster environments (Dressler 1980;see, e.g., Hashimoto & Oemler 1998, for an update) has led to suggestions that they are the products of disk-galaxy collisions and mergers (Schweizer 1986), or of gas stripping in later types (Gunn & Gott 1972). Numerical simulations of gas and stellar dynamics indeed suggest that the merger of two gas-rich disk galaxies of unequal mass can produce an object resembling an S0 (Hernquist & Mihos 1995;Bekki 1998a,b). In this picture, the merger induces a flow of gas to the central parts of the product galaxy, where the gas is almost completely transformed into stars during an induced central starburst. The simulated merger products resemble actual S0 galaxies in that they are much less gas-rich than their progenitors, contain a thickened disk, and exhibit little, if any, spiral structure. Observationally, this scenario is not free of problems, e.g., the absence of two distinct populations of globular clusters (old and young) in early-type galaxies (Kissler-Patig, Forbes, & Minniti 1998). As an alternative, Van den Bosch (1998) and Mao & Mo (1998) have proposed that S0s form a continuum with later types. In the context of hierarchical galaxy formation models, the bulge-to-disk ratio is a tracer of the formation redshift and/or the initial angular momentum of the dark halo in which the galaxy formed.
In their gross structural properties, S0s are similar to ellipticals and share the same Fundamental Plane relations (e.g., Jorgensen, Franx, & Kjaergard 1996). Since the central velocity dispersions of S0s and ellipticals are considerably higher than the rotation velocities of spirals of a given luminosity (which usually rise quickly to their asymptotic values), it appears that the mass-to-light ratio in the inner regions of galaxies increases when going to earlier types. The existence and parameters of a TFR for S0s could help us place them relative to ellipticals and spirals, and give a better understanding of the physical mechanism behind the TFR. From a practical viewpoint, a tight TFR for S0s could improve the distance estimate to many clusters, where S0s are the dominant population.
To our knowledge, there has been only one published effort to measure a TFR in S0 galaxies, by Dressler & Sandage (1983). They found no evidence for any actual correlation between stellar luminosity and the observed mean stellar rotation speed. However, their rotation curves had very limited radial extent and were not corrected for projection effects nor for the stellar velocity dispersions. Furthermore, approximate (Hubble flow) distances were used, and the integrated blue magnitudes were based on photographic plates. An intrinsic TFR for S0s may have therefore been lost in the observational noise.
In this paper we attempt to measure an I-band TFR in a sample of S0 galaxies. Rotation curves are obtained from major-axis long-slit optical absorption-line spectra. In §2 we describe our sample and observations. In §3 we describe the spectroscopic and photometric reduction and analysis, present rotation curves, and derive the asymptotic circular velocities of the galaxies. In §4 we derive the TFR relation and discuss its implications. Our conclusions are summarized in §5.
Sample and Observations
Sample Selection
Because of the paucity of gas in S0s, the circular velocities must be estimated from stellar absorption-line kinematics, which requires fairly high signal-to-noise (S/N) ratios. We therefore first chose the brightest-possible sample of galaxies. Second, to explore or to establish any TFR we need accurate and independent distance estimates to our sample galaxies. Since the brightest S0s are nearby, Hubble distances, even when corrected for peculiar velocities using a large-scale flow model, are unreliable. We therefore chose only S0s whose distances have been measured by Tonry et al. (1998) using the surface brightness fluctuation (SBF) method (Tonry & Schneider 1988; Tonry et al. 1997; Blakeslee et al. 1998).

Specifically, our sample criteria were as follows: a) The galaxy is in the Tonry et al. (1998) SBF sample. These criteria lead to an initial sample of nearly forty galaxies, of which we observed a sub-sample of 20, devoid of morphological peculiarities, and with inclinations i ~ 35°-60°. In this inclination range both the corrections for sin i and for the line-of-sight integration through the disk (see §4) are small. Two of the galaxies, NGC 4406 and NGC 4472, were subsequently excluded from the sample because they showed little or no rotation, with their kinematics dominated by random motions. These galaxies are perhaps more suitably labeled as elliptical galaxies, which do not have a major disk component.
For three of the 18 galaxies in our final sample (NGC 936, NGC 3115, and NGC 7332) the stellar kinematics have been studied by other authors, and only photometric data were needed. For NGC 3115 we used kinematic data from Capaccioli et al. (1993) and Illingworth & Schechter (1982). A deep study of NGC 7332 by Fisher, Illingworth, & Franx (1994) provided the kinematics for this galaxy. Rotation curves for NGC 936 were taken from Kent (1987) and Kormendy (1983; 1984). Table 1 lists the objects, their parameters, and our sources of data.
Observations
Cousins I band photometry of the sample galaxies was obtained at the Wise Observatory 1m telescope using a Tektronix 1024 × 1024-pixel back-illuminated CCD with a scale of 0.696 ± 0.002 arcsec pixel^-1. For each galaxy we took 1-3 exposures of 300 s each. Most of the images were obtained on 1995 December 29, and a few were taken on 1996 December 15 and on 1997 February 17. All nights had photometric conditions; photometric standard stars from Landolt (1992) were observed throughout each night and used to translate counts to I-band magnitudes.
The spectroscopic observations were also obtained at the Wise Observatory 1m telescope. We used the Faint Object Spectrograph Camera (Kaspi et al. 1996) coupled to the above CCD. A 2″-wide slit and a 600 line mm^-1 grism gave a dispersion of 3.68 Å pixel^-1 in the 4000-7263 Å range, corresponding to a resolution of ~300 km s^-1. The angular sampling was 2.08 arcsec pixel^-1. The observations were made on 1995, October 25-26, November 29, and December 15-16, and 1996, March 14-16 and April 14-15. On each night we also obtained spectra of bright stars, mostly K-giants, to serve as templates for modeling the galaxy spectrum. Observations typically consisted of two consecutive major-axis exposures for each galaxy. Total integration times varied from 1 hr for the brightest galaxies to 4 hrs for the faintest. He-Ar lamp exposures, for wavelength calibration, and quartz lamp exposures, for flat fielding, were taken between consecutive galaxy exposures. One spectrum of NGC 5866 was obtained at Kitt Peak National Observatory (KPNO) using the 4m telescope on 1994 March 7, with the RC spectrograph, a 1200 line mm^-1 grating, and an exposure time of 30 min.
Data Reduction and Analysis
Photometry
The I-band images were reduced using standard IRAF routines. Images were bias subtracted, and flat-field corrected using twilight sky exposures. Foreground stars were found and removed by examining each image and replacing the affected area with an interpolated two-dimensional surface, using the Imedit task.
In order to measure the ellipticity of each galaxy and the scale length of its disk we used the Ellipse task. The semi-major axis lengths of the fitted elliptical isophotes was increased in increments of 5% until the change in the intensity between two successive ellipses was negligible (except for two cases where a bright star near the galaxy prevented extracting additional isophotes). The task outputs for each ellipse the semi-major axis length, the mean isophotal intensity, the ellipticity, and the position angle. The Elapert task was then used to approximate each ellipse with a polygon and the counts within each polygon were measured with the Polyphot task. The projected disk ellipticity was taken to be the ellipticity of the last well-fitted ellipse. The disk scale length was found by χ 2 minimization, allowing the central surface-brightness, disk scale length, and sky level to vary. The parameters of the exponential disk fit and their uncertainties were used to extrapolate the counts from the last measured radius to infinity, resulting in a "total" I-band magnitude and its error. Tonry et al.'s (1998) distances to the galaxies were used to derive the absolute magnitudes, M I . These parameters are listed in Table 1.
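The profile fit and extrapolation can be summarized in a few lines of Python. The sketch below is a schematic reconstruction of the procedure, not the authors' code; the profile values, initial guesses, and noise level are placeholders.

import numpy as np
from scipy.optimize import curve_fit

def disk_model(R, I0, R_exp, sky):
    # exponential disk surface brightness plus a constant sky level
    return I0*np.exp(-R/R_exp) + sky

R = np.linspace(2.0, 60.0, 30)                 # arcsec; placeholder radii
counts = disk_model(R, 5000.0, 12.0, 40.0)     # placeholder isophotal profile
counts += np.random.default_rng(1).normal(0.0, 5.0, R.size)

(I0, R_exp, sky), cov = curve_fit(disk_model, R, counts, p0=(1000.0, 10.0, 0.0))

# Disk flux beyond the last measured radius Rmax:
#   integral of I0*exp(-R/R_exp) * 2*pi*R dR from Rmax to infinity
Rmax = R[-1]
flux_tail = 2.0*np.pi*I0*R_exp*np.exp(-Rmax/R_exp)*(Rmax + R_exp)
print(R_exp, flux_tail)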
Spectroscopy
The long slit spectra were also reduced using standard IRAF routines. Each two-dimensional spectrum was bias subtracted. Variations in slit illumination were removed by dividing each image by an illumination image derived from a spectrum of the twilight sky. Pixel-to-pixel sensitivity variations were removed by division by a quartz lamp spectrum taken after every galaxy exposure. The quartz spectrum was first normalized by a 6th-order polynomial fit to its low-frequency structure in the dispersion direction. Cosmic ray events were removed with the IRAF tasks Ccrej, Cosmicrays and Imedit. He-Ar arc-lamp spectra with about 40 lines were used to rectify all science frames to uniform sampling in slit position and log λ, where λ is the wavelength, in the two cardinal directions. The resulting accuracy of the wavelength calibration is ~15 km s^-1. The sky background was removed by interpolating along the two ends of the slit, where the sky dominates. Template star spectra were reduced in the same fashion, and subsequently extracted from the frames to yield one-dimensional spectra.
The line-of-sight velocities V_obs(R) and velocity dispersions σ(R) as functions of the projected radius R were extracted from the galaxy spectra, following Rix & White (1992) and Rix et al. (1995). The two-dimensional spectrum was first rebinned into a sequence of one-dimensional spectra of approximately constant S/N, and each of these spectra was then matched by a shifted and broadened linear combination of templates, minimizing χ². This resulted in a kinematic profile that, at each radius, is derived from an "optimal" template.
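Schematically, the extraction at each radius is a small least-squares problem; the sketch below is a toy version of the idea, not the Rix & White implementation (the spectra, sampling, and line shape are invented). It broadens a log-wavelength-binned template with a Gaussian LOSVD of mean velocity V and dispersion σ and minimizes χ² over (V, σ).

import numpy as np
from scipy.ndimage import convolve1d
from scipy.optimize import minimize

c = 2.998e5                           # km/s
dv = c*1.0e-4                         # velocity per pixel for the log-lambda binning

def kernel(V, sigma, nker=151):
    sigma = max(abs(sigma), 1.0)      # guard while the optimizer explores
    u = (np.arange(nker) - nker//2)*dv
    k = np.exp(-0.5*((u - V)/sigma)**2)
    return k/k.sum()

def broadened(template, V, sigma):
    return convolve1d(template, kernel(V, sigma), mode='nearest')

pix = np.arange(2000)
template = 1.0 - 0.3*np.exp(-0.5*((pix - 1000)/5.0)**2)    # toy absorption line
rng = np.random.default_rng(2)
galaxy = broadened(template, 150.0, 120.0) + rng.normal(0.0, 0.002, pix.size)

chi2 = lambda th: np.sum((galaxy - broadened(template, *th))**2)
best = minimize(chi2, x0=(0.0, 80.0), method='Nelder-Mead')
print(best.x)                         # ~[150, 120] km/s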
Figure 1 (top and middle panels) shows rotation and velocity dispersion curves, V_obs(R) and σ(R), for the 15 galaxies we observed spectroscopically. One of the galaxies, NGC 5866, was measured both at Wise Observatory and at KPNO (see Fig. 1). Although the degradation in S/N when going to a small telescope is obvious, the agreement is good and shows that, for the present purpose, the Wise Observatory spectra are of sufficient quality.
Deriving Circular Velocities from V (R) and σ(R)
Determining the true circular velocity of a galaxy, defined as V_c(R) \equiv \sqrt{R\, \partial\Phi_{\rm grav}/\partial R}, from stellar kinematics is somewhat model-dependent, even if rotation dominates (see, e.g., discussion by Illingworth & Schechter 1982; Binney & Tremaine 1987; Raychaudhury et al. 1997). We derive the circular velocity in several steps. When several rotation curves were available for a single galaxy (see §3.2) we computed the asymptotic velocity and velocity dispersion in each curve separately and subsequently used the means.
To obtain the mean stellar rotation velocity, V_φ, in the plane of the disk, we deproject the observed velocity, using the observed disk ellipticity and assuming an edge-on disk axis ratio q_0 = 0.22 (de Vaucouleurs et al. 1991):

V_\phi(R) = \frac{V_{\rm obs}(R)}{\sin i} = V_{\rm obs} \times \sqrt{\frac{1 - q_0^2}{2e - e^2}} ,

where i is the inclination, e is the ellipticity, V_obs is the observed radial velocity, and V_φ is the azimuthal speed. Galaxies with ellipticities greater than 0.57 (i > 67°) were deemed to be edge-on, and no attempt at the above inclination correction was made.
However, in highly-inclined galaxies the line-of-sight integration through the disk will reduce the observed mean velocity relative to the actual velocity V_φ(R) at the tangent point. We constructed a simple model of an exponential disk with a vertical scale height of 0.2 R_exp to calculate V_obs/V_φ(R). For the edge-on case, an approximate analytic expression for f ≡ V_obs/V_φ(R) can be found, which is shown in Figure 2. The same effect will lead to an overestimate of the azimuthal velocity dispersion. The two corrections for edge-on disks are:

V_\phi(R) = \frac{V_{\rm obs}(R)}{f(R/R_{\rm exp})} , \qquad \sigma_\phi^2 = \sigma_{\rm obs}^2 - \tfrac{1}{2}(V_\phi - V_{\rm obs})^2 , \qquad {\rm with} \qquad f(x) = \frac{\exp(-x)}{-0.5772 - \ln(x) + x - \frac{x^2}{2\cdot 2!} + \frac{x^3}{3\cdot 3!} - \cdots} - x ,
where σ φ is the corrected velocity dispersion, and σ obs is the observed velocity dispersion. Note that our uncertainties in how close to edge-on these galaxies actually are, lead to an error of only ∆log 10 (V φ ) ≈ 0.025, assuming random inclinations between i = 90 • and i = 70 • . For inclinations less than 70 • , the correction is < 4%, and we neglect it.
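The projection corrections above condense into two small helpers. The sketch below is an illustration only; in particular, the series form of f(x) follows the expression as reconstructed above (truncated at the order shown) and should be treated as such rather than as the authors' exact implementation.

import math
import numpy as np

q0 = 0.22                             # assumed intrinsic edge-on axis ratio

def v_phi_inclined(v_obs, e):
    # V_phi = V_obs/sin(i), with sin(i) inferred from the apparent ellipticity e
    return v_obs*np.sqrt((1.0 - q0**2)/(2.0*e - e**2))

def f_edge_on(x, kmax=25):
    # f = V_obs/V_phi for an edge-on exponential disk (series as in the text)
    s = -0.5772156649 - math.log(x) + sum((-1)**(k + 1)*x**k/(k*math.factorial(k))
                                          for k in range(1, kmax + 1))
    return math.exp(-x)/s - x

def v_phi_edge_on(v_obs, R, R_exp):
    return v_obs/f_edge_on(R/R_exp)

print(v_phi_inclined(180.0, 0.40))    # ~220 km/s for e = 0.40
print(v_phi_edge_on(180.0, 2.0, 1.0)) # ~234 km/s at R = 2 R_exp, edge-on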
Most importantly, however, V_c will differ from the directly observable quantities by the "asymmetric drift" correction, which accounts for the non-circular orbits of the stars, or, equivalently, their velocity dispersion. The circular velocity V_c is related to the gravitational potential Φ(R), in the galaxy plane, by

V_c^2(R) = R\, \frac{\partial \Phi(R)}{\partial R} .
To obtain the circular velocity (i.e., the velocity of "cold" gas in the disk) we follow Binney & Tremaine (1987), eqn. 4-33:

V_c^2 = V_\phi^2 + \sigma_\phi^2 - \sigma_R^2 - \frac{R}{\rho}\, \frac{\partial(\rho\sigma_R^2)}{\partial R} - R\, \frac{\partial(\overline{V_R V_z})}{\partial z} ,
where \rho(R) = \rho_0 \exp(-R/R_{\rm exp}) is the mass density, and the term \overline{V_R V_z} is usually negligible (Binney & Tremaine 1987). For a flat rotation curve, \sigma_\phi^2(r)/\sigma_r^2(r) = 0.5, which leads to

V_c^2 = V_\phi^2 + \sigma_\phi^2 \left( \frac{2R}{R_{\rm exp}} - \frac{\partial \ln \sigma_R^2}{\partial \ln R} - 1 \right) .
For many of the sample galaxies \partial \ln \sigma_\phi^2 / \partial \ln R, and hence \partial \ln \sigma_R^2 / \partial \ln R, is small, and can be neglected, yielding:

V_c^2 = V_\phi^2 + \sigma_\phi^2 \left( \frac{2R}{R_{\rm exp}} - 1 \right) .
To obtain the corrected rotation curves, we first fit an exponential function to the observed dispersion profile σ_φ(R). We then use the fit value of σ_φ(R) to apply the asymmetric drift correction to every measurement of V_φ for which V_φ/σ_φ > 2.5 (see below). The final, corrected, curves are shown in the bottom panels of Figure 1. Finally, to estimate the deprojected, asymptotic rotation speed (usually at R ~ 3 R_exp), we average the last three points on either side of the corrected rotation curves in Figure 1 (bottom panels). Points with errors ≥ 100 km s^-1 were discarded. The radius of the measured asymptotic velocity, R, was taken as the average radius of the points in the rotation curve that we used, and the uncertainty in that radius is half the distance between the inner point and the outer point that we used to obtain the final velocity.
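The final correction is a single line of algebra; for clarity (illustrative numbers only, not entries from Table 1):

import numpy as np

def circular_velocity(v_phi, sigma_phi, R, R_exp):
    # Simplified asymmetric drift correction for an exponential disk with a
    # flat rotation curve and a radially constant dispersion profile
    return np.sqrt(v_phi**2 + sigma_phi**2*(2.0*R/R_exp - 1.0))

print(circular_velocity(200.0, 80.0, 2.5, 1.0))   # ~256 km/s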
We list all the measured and corrected velocities in Table 1. Three of the galaxies, NGC 2768, NGC 4382, and NGC 4649, have relatively large velocity dispersions even in their outer parts, such that σ_φ ≳ V_φ/2.5. Under such circumstances, the approximations and systematics involved in the asymmetric drift correction may lead to an unacceptably large error in the inferred V_c and we mark the measurements of these galaxies as uncertain in the subsequent discussion.
Results
With the information assembled in Table 1 we can explore the two questions posed initially: a) To what extent do S0s follow a TFR, i.e., how well are M I and V c correlated? b) What is the mean stellar luminosity for S0s at a given circular velocity, and how does it compare to the luminosity of later-type disk galaxies? Figure 3 shows M I vs. V c for the sample galaxies. The errorbars in M I include photometric errors and distance uncertainties, and the errors in V c include propagation of all the uncertainties involved in the calculation of the final circular velocity. The data points with dotted errorbars represent the three galaxies for which the asymmetric drift corrections were uncertain due to their relatively large velocity dispersions (see above). The dashed line shows the I-band TFR for late type spiral galaxies, as derived from the Mathewson et al. (1992) data by Courteau and Rix (1998) and adjusted to the same distance scale (H 0 = 80 km s −1 Mpc −1 ) as that implied by the SBF method for these galaxies (Tonry et al. 1998).
To estimate the best fit and the intrinsic scatter in the TFR, we proceeded as follows (see also Rix et al. 1997). We assumed a relation of the form

M_I(\log V_c) = M_I(2.3) - \alpha\, (\log V_c - 2.3) ,
where the fit's pivoting point is 200 km s^-1, i.e., \log V_c = 2.3. Further, we assumed that the relation has an intrinsic Gaussian scatter in M_I (at a given \log V_c) of \sigma magnitudes. For each parameter set (M_I(2.3), \alpha, \sigma) this defines a model probability distribution, P_{\rm model}, in the (M_I, \log V_c) plane. Each data point i, with its uncertainties in V_c and M_I, also constitutes a probability distribution in the same parameter plane, P_i(M_I, \log V_c). The overall probability of a parameter set (M_I(2.3), \alpha, \sigma), given the data, can be calculated as:

P(M_I(2.3), \alpha, \sigma) = \prod_i \int P_{\rm model} \times P_i \; dM_I \, d\log V_c ,
which is a measure of the overlap between the data and the model probability distributions for a given model. It is apparent from the data (Fig. 3) that the slope is poorly determined. Therefore, we fit a relation assuming the spiral TFR slope from Mathewson et al. (1992), α = 7.5. The best fit has a zeropoint of M_I(2.3) = -21.36 ± 0.15 mag and an intrinsic scatter of σ = 0.68 ± 0.15 mag. The thick line in Fig. 3 is this best fit relation, and the thin lines show the scatter. From the plot it is clear that the data indicate a steeper slope. Formally, α > 10.5 (at 95% confidence), with no well-defined upper bound. Note that if we have underestimated the (dominant) velocity errors by 30%, the estimated intrinsic scatter in the relation will only decrease to ≈ 0.58 magnitudes.
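The likelihood above is straightforward to implement. The sketch below is an illustration on synthetic data (not the paper's measurements); each point is modeled as a Gaussian in M_I with the intrinsic and observational scatters added in quadrature, the log V_c error being mapped through the slope.

import numpy as np

alpha = 7.5                                        # adopted spiral TFR slope

def log_like(M0, sig_int, logV, M, eM, elogV):
    mu = M0 - alpha*(logV - 2.3)                   # ridge line of the relation
    var = sig_int**2 + eM**2 + (alpha*elogV)**2    # intrinsic + measurement scatter
    return np.sum(-0.5*(M - mu)**2/var - 0.5*np.log(2.0*np.pi*var))

rng = np.random.default_rng(3)                     # synthetic 18-galaxy sample
logV = rng.uniform(2.0, 2.5, 18)
M = -21.4 - alpha*(logV - 2.3) + rng.normal(0.0, 0.7, 18)
eM, elogV = 0.15*np.ones(18), 0.03*np.ones(18)

sig_grid = np.linspace(0.05, 1.5, 30)
prof = [max(log_like(M0, s, logV, M, eM, elogV)
            for M0 in np.linspace(-22.5, -20.5, 81)) for s in sig_grid]
print(sig_grid[int(np.argmax(prof))])              # recovers sigma near 0.7 mag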
Based on Figure 3, we can now answer the two questions posed above:
• Despite the care taken in deriving V_c and M_I, there is a great deal of intrinsic scatter in the TFR: 0.68 ± 0.15 mag.
• At a given V_c, there is only a small (0.5 ± 0.15 mag) systematic offset in M_I between the S0s and the Sc galaxies from Mathewson et al. This offset is much smaller than the 1.5 magnitudes (in I) between Sa's and Sc's claimed by Rubin et al. (1985), and adds to the other evidence (e.g., Pierce & Tully 1988; Bernstein et al. 1994) that the zero point of the I-band TFR is only weakly dependent on galaxy type.

The large scatter in Fig. 3 is particularly remarkable in light of the well-behaved Fundamental Plane (FP) relation (e.g., Jorgensen et al. 1996, and references therein) for S0s in general, as well as for this particular set of objects. Figure 4 shows the FP for our sample, based on values for the effective radii, R_e, as compiled in Bender, Burstein, & Faber (1992) and Fisher (1997), and central velocity dispersions, σ_0, estimated both from our data and the literature, and listed in Table 1. For comparison with the existing FP literature, we reconstructed I_eff from M_I and R_e, assuming a de Vaucouleurs law. The median scatter among the points is well below 0.1 in either axis. The important difference between Figures 3 and 4 is that the FP in Figure 4 uses the central stellar dispersion as the kinematic parameter, while the TFR in Figure 3 involves V_c at 2-3 R_exp, characterizing the total mass within this radius. It is clear from this comparison that, at least for this sample, the central stellar dispersion is a much better predictor of the total stellar luminosity than the circular velocity at several disk exponential radii.
We have searched for possible sources, either observational or intrinsic, for the large scatter we have found in the S0 TFR. Fisher (1997) obtained stellar rotation curves and velocity dispersion profiles for 18 S0 galaxies, 7 of which are in our sample. Although he presents his measurements only out to about one disk scale length, R exp , while our rotation curves typically extend to R/R exp = 2 − 4, a meaningful comparison can be made, since, as seen in Fig. 1, the rotation curves usually flatten out already at small radii (10 to 25 arcsec). Our measured asymptotic line-of-sight velocities agree with Fisher's at the ∼ 10% level. A similar level of agreement exists between his measurements in the B-band and our measurements in I-band of the disk scale lengths and ellipticities. Velocity dispersions in his data are also generally consistent with ours, except for two cases, NGC 4382 and NGC 5866, in which he measures twice the values we obtained. NGC 4382, however, was already excluded from our analysis above because of its relatively low level of rotation, while for NGC 5866 we have both Wise Observatory data and high-quality data from KPNO, which are consistent with each other. Simien & Prugniel (1997), Bettoni & Galletta (1997), Fried & Illingworth (1994) and Seifert & Scorza (1996) have each derived rotation curves for some of the galaxies in our sample, and their results are in good agreement with ours. A mild exception is NGC 2549, for which Simien & Prugniel (1997) and Seifert & Scorza (1996) obtain a maximum velocity of 150 ± 30 km s −1 compared to our 113 ± 13 km s −1 .
While our sample has the advantage of uniform SBF distance estimation, distance errors could contribute to the TFR scatter as well. The SBF method has an r.m.s. scatter of less than 0.1 mag, but there are a number of distance discrepancies which could affect a small sample like ours. Among the galaxies in our sample, Blakeslee et al. (1998) and Ciardullo et al. (1993) find differences of order 0.3 mag between SBF-based distance moduli and distances based on planetary nebula luminosity functions for NGC 3115, NGC 4382, and NGC 1023. However, it is difficult to see how this could be a dominant source of scatter in the TFR without introducing comparable scatter in the Fundamental Plane relation for our sample.
A second potential source of errors is in the corrections for inclination and asymmetric drift we have applied to our data. These corrections are sometimes at a level of 100% (most are above 35%) and are based on noisy velocity dispersion measurements. H I observations for some of our galaxies exist, and can partially confirm the velocity corrections. Comparisons of H I velocities and corrected stellar velocities are not straightforward, since the gas component in S0s may sometimes be concentrated only in the inner parts or in an outer ring, as a relic from a past accretion event. Furthermore, there are different measures of 21 cm linewidth (e.g., at 50% or 20% of the peak). Nevertheless, from Roberts et al. (1991), Huchtmeier et al. (1995), and Wardle & Knapp (1986) we obtained H I velocities for five galaxies in our sample, and find excellent agreement with our corrected stellar velocities in four cases, the exception being NGC 1052, where there is a ∼2σ discrepancy between the inclination-corrected H I width of Roberts et al. (1991), 288 km s⁻¹, and our final circular velocity of 190 ± 39 km s⁻¹. As an alternative method of calculating the asymmetric drift correction, we attempted, instead of the procedure described above, to apply the correction directly to the outermost measurements of the velocities and dispersions, after averaging the outer three points. However, this had the effect of increasing the scatter in the TFR. This is a consequence of the large R/R_exp values making the dispersion term in the asymmetric drift correction, σ_φ²(2R/R_exp − 1), dominant compared to V_φ². Modifications in the choice of R, or correcting σ_φ for inclination, had little effect on the TFR scatter.
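For concreteness, the asymmetric drift correction used above, V_c² = V_φ² + σ_φ²(2R/R_exp − 1), is a one-line computation; the sketch below evaluates it for hypothetical inputs.

```python
# Asymmetric drift correction for an exponential disk, as used in the text:
# V_c^2 = V_phi^2 + sigma_phi^2 * (2R/R_exp - 1).  Inputs are hypothetical.
import math

def circular_velocity(v_phi, sigma_phi, R_over_Rexp):
    return math.sqrt(v_phi**2 + sigma_phi**2 * (2.0 * R_over_Rexp - 1.0))

# e.g. V_phi = 150 km/s, sigma_phi = 80 km/s, measured at R = 3 R_exp
print(f"V_c = {circular_velocity(150.0, 80.0, 3.0):.0f} km/s")
```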
Next, we searched for intrinsic sources of TFR scatter, arising from a possible dependence on additional parameters. We have checked for correlations among the residuals in the best-fitting TFR relation and a variety of parameters. We found no dependence of TFR residuals on disk ellipticity, as was found, e.g., in the late-type-galaxy TFR of Bernstein et al. (1994) and interpreted as the effect of extinction by dust. The ratio V_φ/σ_0 can serve as a kinematic indicator of rotational vs. dispersive support in a given galaxy, and a correlation of the TFR residuals with it could indicate, e.g., that those galaxies with the least rotation (and the largest asymmetric drift corrections) are those contributing most to the scatter. However, we found no significant correlation between the TFR residuals and this ratio. Similarly, the residuals are not correlated with R_exp/R_e, a photometric measure of disk vs. bulge dominance. We found that the parameter x = V_φ/σ_0 − R_exp/R_e is marginally correlated with the TFR residuals, at a significance level of 93%. Although the physical significance of x is unclear, applying this correction reduces the intrinsic TFR scatter by ∼0.2 mag.
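A residual-correlation test of this kind can be sketched as follows; the residuals and x values below are hypothetical placeholders, not the sample data.

```python
# Pearson test of TFR residuals against x = V_phi/sigma_0 - R_exp/R_e.
# Residuals and x below are hypothetical placeholders, not the sample data.
import numpy as np
from scipy.stats import pearsonr

resid = np.array([0.5, -0.3, 0.8, -0.6, 0.2, -0.4])   # TFR residuals [mag]
x = np.array([1.2, 0.4, 1.5, 0.1, 0.9, 0.3])          # V_phi/sigma_0 - R_exp/R_e

r, p = pearsonr(resid, x)
print(f"Pearson r = {r:.2f}, significance = {100 * (1 - p):.0f}%")
```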
In view of these tests, we conclude that the large intrinsic TFR scatter of 0.7 mag that we find for S0s is most likely not the result of errors in observation and analysis.
Likewise, we have not found additional parameters that significantly lower the scatter. For comparison, the TFR in late-type spirals usually has an intrinsic r.m.s. scatter of σ_in ∼ 0.25 mag (e.g., Giovanelli et al. 1997b), although a smaller scatter can occur in homogeneous, well-defined samples; Bernstein et al. (1994) found an r.m.s. scatter of 0.23 mag, which, after a correction for extinction based on ellipticities, reduced to 0.1 mag.
From the physical viewpoint, our result is in conflict with the idea that most S0s were disk galaxies, on their way to becoming present-day spirals, whose star-forming career was cut short by some mechanism, e.g., tidal stripping in a dense environment (Gunn & Gott 1972). In that case, we would expect the S0s to have faded significantly at constant V_c, exhibiting a larger TFR zeropoint offset. Specifically, if S0s had had similar star formation histories to Sc's (e.g., Kennicutt et al. 1994) until a truncation, say, ≳4 Gyr ago, we would expect an offset of ≳0.9 magnitudes in I due to the fading of the stellar population, based on Charlot & Bruzual (1991) models.
Similarly, the absence of a tight S0 TFR argues against a physical continuity of S0s with later-type spirals, as suggested in the context of hierarchical structure formation models (Van den Bosch 1998; Mao & Mo 1998). Alternatively, S0s may be more closely related to ellipticals. Both may be the relics of non-cataclysmic mergers (Schweizer 1986). For individual sample members (e.g., NGC 4649, NGC 4406, NGC 4472) this may be apparent from their individual structure, but the present evidence is pointing towards this being true for a good fraction of the morphological class. Qualitatively, the spread among S0s in time elapsed since the merger and its ensuing gas-depleting starburst would produce the TFR scatter, while, on average, the larger concentration of stars may compensate for the fading of the stellar population, and give a mean luminosity comparable to that of late-type galaxies, for a given halo mass. A quantitative examination of the TFR resulting in this scenario is, however, needed.
Conclusions
We have constructed a TFR for nearby S0 galaxies, deriving corrected circular velocities from stellar velocities, and using high-quality distance estimates (Tonry et al. 1998) based on surface brightness fluctuations. Despite the care taken, the relation between M_I and V_c exhibits ∼0.7 magnitudes of scatter. As an illustration, NGC 2787 and NGC 4753 both have similar circular velocities of 230 km s⁻¹, but their luminosities differ by over 3 mag. The reason for this large scatter is not clear. Perhaps it indicates that the S0 morphological class truly represents a "mixed bag", with a wide range of galaxy formation channels feeding into it. The central stellar velocity dispersion is a much better predictor of the total stellar luminosity than V_c at several exponential radii.

Similarly, the fact that on average S0's and Sc's of the same V_c have such similar luminosities is a puzzle. S0s have older, and hence dimmer, stellar populations, which should lead to a TFR zero-point offset. The absence of such an offset could be explained if S0s have a considerably higher fraction of their total mass in stars than Sc's. This perhaps would be expected in the merger-formation scenario if, in fact, such events are very efficient at converting the available gas into stars.
Observationally, it is desirable to reconfirm our result on a larger sample with higher S/N measurements at larger radii, where presumably the kinematic corrections will be smaller. An independent test, which is insensitive to errors in the distance estimate, is to measure the TFR for S0s in a galaxy cluster. Analysis of such a measurement for the Coma cluster is underway (Hinz, Rix, & Bernstein 1999).
Sample selection criteria: a) member of the Tonry et al. (1998) sample; b) heliocentric radial velocity < 2000 km s⁻¹; c) declination > −20°; d) RSA classification S0/E, S0, SB0, S0/Sa, SB0/SBa or S0pec; e) RC3 (de Vaucouleurs et al. 1991) B magnitude < 12.6.
Fig. 1. Rotation and velocity dispersion curves, V(R) and σ(R), for the 15 galaxies observed spectroscopically. Top panels show the observed velocity dispersion profiles, σ_φ(R). Middle panels show the observed rotation curves, V_obs(R). Bottom panels show the observed rotation curves (empty symbols) and the circular velocity (filled symbols), V_c(R), after correction for inclination or integration along the line of sight and asymmetric drift correction.
Fig. 3. M_I vs. V_c for our sample galaxies. The data with dotted errorbars indicate the objects which have relatively large velocity dispersions even in their outer parts, leading to asymmetric drift corrections ≥ 25%. The dashed line shows the I-band TFR for late-type spiral galaxies, as derived from the Mathewson et al. (1992) data by Courteau and Rix (1998) and adjusted to the same distance scale (H_0 = 80 km s⁻¹ Mpc⁻¹) as that implied by the SBF method for these galaxies (Tonry et al. 1998). The thick solid line shows the best fit relation, when constrained to have the same slope as the late-types, and the thin lines mark the intrinsic scatter.
Fig. 4. Fundamental Plane relation for our sample, based on values for the effective radii, R_e, and the central velocity dispersions, σ_0, taken both from our data and from the literature. The median scatter among the points is well below 0.1 in either axis.
Table 1. Galaxy Parameters

NGC   V_h   Class.   B    S    ellip.   i    R_exp   I    R    V_obs   V    V_c   σ    D    M_I   σ_0   R_e
(1)   (2)   (3)      (4)  (5)  (6)      (7)  (8)     (9)  (10) (11)    (12) (13)  (14) (15) (16)  (17)  (18)
IRAF (Image Reduction and Analysis Facility) is distributed by the National Optical Astronomy Observatories, which are operated by AURA, Inc., under cooperative agreement with the National Science Foundation.
We thank Rachel Somerville for useful discussions, and the referee, Brent Tully, for helpful comments. This work was supported by the US-Israel Binational Science Foundation Grant 94-00300, and by the Alfred P. Sloan Foundation (HWR).
Aaronson, M., Huchra, J., & Mould, J. 1979, ApJ, 229, 1
Aaronson, M., & Mould, J. 1983, ApJ, 265, 1
Aaronson, M., Bothun, G., Mould, J., Huchra, J., Schommer, R.A., & Cornell, M.E. 1986, ApJ, 302, 536
Bekki, K. 1998a, ApJ, in press, astro-ph/9804220
Bekki, K. 1998b, ApJL, in press, astro-ph/9806106
Bender, R., Burstein, D., & Faber, S.M. 1992, ApJ, 399, 462
Bernstein, G.M., Guhathakurta, P., Raychaudhury, S., Giovanelli, R., Haynes, M.P., Herter, T., & Vogt, N.P. 1994, AJ, 107, 1962
Bertola, F., Buson, L.M., & Zeilinger, W.W. 1992, ApJ, 401, L79
Bettoni, D., & Galletta, G. 1997, A&AS, 124, 61
Blakeslee, J.P., Ajhar, E.A., & Tonry, J.L. 1998, in Post-Hipparcos Cosmic Candles, ed. A. Heck & F. Caputo (Dordrecht: Kluwer), astro-ph/9807124
Capaccioli, M., Cappellaro, E., Held, E.V., & Vietri, M. 1993, A&A, 274, 69
Capaccioli, M., & Longo, G. 1994, A&ARv, 5, 293
Charlot, S., & Bruzual, G. 1991, ApJ, 367, 126
Ciardullo, R., Jacoby, G.H., & Tonry, J.L. 1993, ApJ, 419, 479
Courteau, S., & Rix, H.-W. 1998, ApJ, submitted
Dalcanton, J.J., Spergel, D.N., & Summers, F.J. 1997, ApJ, 482, 659
de Vaucouleurs, G., de Vaucouleurs, A., Corwin, H.G., Buta, R.J., Paturel, G., & Fouqué, P. 1991, Third Reference Catalog of Bright Galaxies (New York: Springer) (RC3)
Dressler, A. 1980, ApJ, 236, 351
Dressler, A. 1987, ApJ, 317, 1
Dressler, A., & Sandage, A. 1983, ApJ, 265, 664
Eisenstein, D.J., & Loeb, A. 1996, ApJ, 475, 421
Elizondo, D., Yepes, G., Kates, R., Müller, V., & Klypin, A. 1998, ApJ, submitted, astro-ph/9808287
Elson, R.A.W. 1997, MNRAS, 286, 771
Faber, S.M., & Jackson, R.E. 1976, ApJ, 204, 668
Fisher, D., Illingworth, G., & Franx, M. 1994, AJ, 107, 160
Fisher, D., Illingworth, G., & Franx, M. 1995, ApJ, 438, 539
Fisher, D. 1997, AJ, 113, 950
Fried, J.W., & Illingworth, G.D. 1994, AJ, 107, 992
Giovanelli, R., et al. 1997a, AJ, 113, 22
Giovanelli, R., et al. 1997b, AJ, 113, 53
Gunn, J.E., & Gott, J.R. 1972, ApJ, 176, 1
Hashimoto, Y., & Oemler, A. 1998, ApJ, in press, astro-ph/9807275
Heavens, A.F., & Jimenez, R. 1999, MNRAS, in press
Hernquist, L., & Mihos, J.C. 1995, ApJ, 448, 41
Hinz, P., Rix, H.-W., & Bernstein, G.M. 1999, in preparation
Huchtmeier, W.K., Sage, L.J., & Henkel, C. 1995, A&A, 300, 675
Illingworth, G.D., & Schechter, P.L. 1982, ApJ, 256, 481
Jacoby, G.H., Ciardullo, R., & Ford, H.C. 1990, ApJ, 356, 332
Jacoby, G.H., et al. 1992, PASP, 104, 599
Jedrzejewski, R.I. 1987, MNRAS, 226, 747
Jorgensen, I., Franx, M., & Kjaergaard, P. 1996, MNRAS, 280, 167
Kaspi, S., Ibbetson, P.A., Mashal, E., & Brosch, N. 1996, Wise Obs. Tech. Rep., No. 6
Kent, S.M. 1987, AJ, 93, 1062
Kennicutt, R., Tamblyn, P., & Congdon, C. 1994, ApJ, 435, 22
Kissler-Patig, M., Forbes, D.A., & Minniti, D. 1998, MNRAS, in press, astro-ph/9804261
Kormendy, J. 1983, ApJ, 275, 529
Kormendy, J. 1984, ApJ, 286, 132
Kuijken, K., Fisher, D., & Merrifield, M.R. 1996, MNRAS, 283, 543
Landolt, A.U. 1992, AJ, 104, 340
van der Marel, R.P., & Franx, M. 1993, ApJ, 407, 525
Mao, S., & Mo, H.J. 1998, MNRAS, submitted
Mathewson, D.S., Ford, V.L., & Buchhorn, M. 1992, ApJS, 81, 413
Milgrom, M. 1983, ApJ, 270, 365
Milgrom, M. 1989, ApJ, 338, 121
Persic, M., & Salucci, P. 1995, in Astroph. Lett. & Comm.: Proceedings of the 3rd Italian Cosmology Meeting (astro-ph/9503051)
Pierce, R., & Tully, R.B. 1988, ApJ, 330, 579
Raychaudhury, S., Von Braun, K., Bernstein, G.M., & Guhathakurta, P. 1997, AJ, 113, 2046
Rix, H.-W., & White, S.D.M. 1992, MNRAS, 254, 389
Rix, H.-W., Kennicutt, R.C.J., Braun, R., & Walterbros, R.A.M. 1995, ApJ, 438, 155
Rix, H.-W., Guhathakurta, P., Colless, M., & Ing, K. 1997, MNRAS, 285, 779
Roberts, M.S., Hogg, D.E., Bregman, J.N., Forman, W.R., & Jones, C. 1991, ApJS, 75, 751
Rubin, V.C., Burstein, D., Ford, W.K., & Thonnard, N. 1985, ApJ, 289, 81
Sandage, A., & Tammann, G. 1981, A Revised Shapley-Ames Catalog of Bright Galaxies (Washington, D.C.: Carnegie Institution Pub. 638) (RSA)
Schechter, P. 1980, AJ, 85, 801
Schweizer, F. 1986, Science, 231, 193
Seifert, W., & Scorza, C. 1996, A&A, 310, 75
Simien, F., & Prugniel, P. 1997, A&AS, 126, 519
Steiman-Cameron, T.Y., Kormendy, J., & Durisen, R.H. 1992, ApJ, 104, 1339
Tonry, J.L., & Schneider, D.P. 1988
Tonry, J.L., Blakeslee, J.P., Ajhar, E.A., & Dressler, A. 1997, ApJ, 475, 339
Tonry, J.L., Ajhar, E.A., Blakeslee, J.P., & Dressler, A. 1998
Tully, R.B., Pierce, M.J., Huang, J.-S., Saunders, W., Verheijen, M.A.W., & Witchalls, P.L. 1998, AJ, 115, 2264
Van den Bergh, S. 1997, AJ, 113, 2054
Van den Bosch, F.C. 1998, ApJ, submitted, astro-ph/9805113
Van Driel, W., & Van Woerden, H. 1991, A&A, 243, 71
Wardle, M., & Knapp, G.R. 1986, AJ, 91, 23
All velocities in km s⁻¹; all lengths in arcseconds. Error columns denote 1σ uncertainties. Column header explanations: (1) NGC number; (2) Heliocentric velocity, from de Vaucouleurs et al. (1991); (3) Classification, from Sandage & Tammann (1981); (4) B magnitude, from de Vaucouleurs et al. (1991); (5) Source of spectroscopic data: 1=Wise Observatory, 2=KPNO, 3=Kent (1987), 4=Capaccioli et al. (1993), 5=Fisher et al. (1994); (6) Disk ellipticity in the I band; (7) Inclination, in degrees; (8) Exponential disk scale length; (9) Integrated I magnitude; (10) Radius of outermost velocity measurement; (11) Observed velocity at R; (12) Rotation velocity at R, after correction for inclination or line-of-sight integration; (13) Circular velocity, after correction for asymmetric drift; (14) Observed velocity dispersion at R; (15) Distance modulus; (16) Absolute I magnitude; (17) Central velocity dispersion; (18) Effective radius, from literature.
| []
|
[
"Strange quark matter in a chiral SU(3) quark mean field model",
"Strange quark matter in a chiral SU(3) quark mean field model"
]
| [
"P Wang \nInstitut für Theoretische Physik\nUniversität Tübingen\nAuf der Morgenstelle 14D-72076TübingenGermany\n",
"V E Lyubovitskij \nInstitut für Theoretische Physik\nUniversität Tübingen\nAuf der Morgenstelle 14D-72076TübingenGermany\n",
"Th Gutsche \nInstitut für Theoretische Physik\nUniversität Tübingen\nAuf der Morgenstelle 14D-72076TübingenGermany\n",
"Amand Faessler \nInstitut für Theoretische Physik\nUniversität Tübingen\nAuf der Morgenstelle 14D-72076TübingenGermany\n"
]
| [
"Institut für Theoretische Physik\nUniversität Tübingen\nAuf der Morgenstelle 14D-72076TübingenGermany",
"Institut für Theoretische Physik\nUniversität Tübingen\nAuf der Morgenstelle 14D-72076TübingenGermany",
"Institut für Theoretische Physik\nUniversität Tübingen\nAuf der Morgenstelle 14D-72076TübingenGermany",
"Institut für Theoretische Physik\nUniversität Tübingen\nAuf der Morgenstelle 14D-72076TübingenGermany"
]
| []
| We apply the chiral SU(3) quark mean field model to investigate strange quark matter. The stability of strange quark matter with different strangeness fraction is studied. The interaction between quarks and vector mesons destabilizes the strange quark matter. If the strength of the vector coupling is the same as in hadronic matter, strangelets can not be formed. For the case of β equilibrium, there is no strange quark matter which can be stable against hadron emission even without vector meson interactions. | 10.1103/physrevc.67.015210 | [
"https://export.arxiv.org/pdf/hep-ph/0205251v1.pdf"
]
| 17,647,646 | hep-ph/0205251 | c1754a764ab1bbf821e49a9e310233614fc05cfd |
Strange quark matter in a chiral SU(3) quark mean field model
22 May 2002
P Wang
Institut für Theoretische Physik
Universität Tübingen
Auf der Morgenstelle 14D-72076TübingenGermany
V E Lyubovitskij
Institut für Theoretische Physik
Universität Tübingen
Auf der Morgenstelle 14D-72076TübingenGermany
Th Gutsche
Institut für Theoretische Physik
Universität Tübingen
Auf der Morgenstelle 14D-72076TübingenGermany
Amand Faessler
Institut für Theoretische Physik
Universität Tübingen
Auf der Morgenstelle 14D-72076TübingenGermany
Strange quark matter in a chiral SU(3) quark mean field model
22 May 2002

PACS: 11.30.Rd, 12.39.Ki, 14.65.Bt. Keywords: strange quark matter, chiral symmetry, relativistic mean field
We apply the chiral SU(3) quark mean field model to investigate strange quark matter. The stability of strange quark matter with different strangeness fraction is studied. The interaction between quarks and vector mesons destabilizes the strange quark matter. If the strength of the vector coupling is the same as in hadronic matter, strangelets can not be formed. For the case of β equilibrium, there is no strange quark matter which can be stable against hadron emission even without vector meson interactions.
I. INTRODUCTION
Strange quark matter has attracted a lot of interest since Witten suggested that it could be absolutely stable even at zero temperature and pressure [1]. The investigation of such a possibility is relevant not only for high energy physics, but also for astrophysics. For example, the core of a neutron star may be composed of quark matter. The possible existence of strange stars, which are made entirely of deconfined u, d and s quarks, is one of the most intriguing aspects of modern astrophysics. There have been some reports of events with A ≃ 350-500 and Z ≃ 10-20 in cosmic ray experiments [2]-[4], the so-called exotic cosmic ray events. Also, recent studies have shown that X-ray burst sources are likely strange star candidates [5]-[8]. It is also interesting to produce strange quark matter (strangelets) in the laboratory, because they could serve as a signature of the formation of the quark-gluon plasma, which would be a direct demonstration of QCD [9]-[11]. Many ultrarelativistic heavy-ion collision experiments at Brookhaven and CERN [12] are proposed to search for (meta)stable lumps of such strangelets. Recently, Ardouin et al. [13] presented a novel method which can be applied to characterize a possible strange quark matter distillation process in heavy-ion collisions. Up to now, there is no experiment which confirms the existence of strangelets. For example, the E864 collaboration found no evidence for strangelet production in 11.5 GeV/c per nucleon Au+Pb collisions [14].
Besides the experimental efforts, there are also many theoretical investigations of the stability of strange quark matter. The earliest discussions are based on the MIT bag model [15], which assumes that quarks are confined by a phenomenological bag; within the bag, quarks are asymptotically free. Calculations [16] within this model indicate that there is a range of parameters in which strange quark matter is absolutely stable, i.e., the energy per baryon is less than 930 MeV. The stability of strange quark matter with finite volume (strangelets) was also discussed in the MIT bag model. Berger and Jaffe [17] discussed the surface correction for strangelets, where they found that the surface tension destabilizes strangelets. The curvature contribution, which is dominant for strangelets with small baryon numbers, was considered by Madsen [18]. Though the bag model is simple, it is an incomplete description of confinement. Results from lattice calculations [19] show that quark matter does not become asymptotically free and some hadronic degrees of freedom remain within the quark matter immediately after the phase transition. Fowler, Raha and Weiner [20] suggested another description of the confinement mechanism via the introduction of a density-dependent quark mass. This quark mass-density-dependent (QMDD) model was first employed to study the properties of ordinary quark matter [20] and then applied to the investigation of strange quark matter [21]-[25]. As was pointed out in our recent paper [26], their thermodynamic treatment was not correct. We reconsidered strange quark matter in the self-consistent quark mass-density-dependent model and found a region of parameters in which strange quark matter is absolutely stable.
In the QMDD model, the concept of a density-dependent quark mass has no dynamical origin. In recent years, some approaches for strange quark matter based on dynamical models were developed. Alberico et al. [27] utilized the color dielectric model to calculate the energy per baryon of strange quark matter. They found that while the double minimum version of the color dielectric model allowed the existence of strangelets, the single minimum version of this model excluded the possibility. The stability of strange quark matter was also investigated using effective 4-quark interactions [28] and the SU(3) Nambu-Jona-Lasinio (NJL) model with and without 4-quark vector-type interactions [29,30]. In studying hadronic matter, we proposed a chiral SU(3) quark mean field model. This chiral quark model was applied to investigate the properties of strange hadronic matter and multi-strange hadronic systems [31,32]. In this paper, we use this model to discuss the stability of strange quark matter. The difference between quark and hadronic matter is that in quark matter, the u, d, s quarks are deconfined and not combined into baryons by the confining potential.

The paper is organized as follows. In section II, we introduce the basic model features. We apply the model to investigate strange quark matter in section III. The numerical calculations are discussed in section IV. Finally, the main conclusions are drawn in section V.
II. THE MODEL
Our considerations are based on the chiral SU(3) quark mean field model (for details see Refs. [31,32]). For completeness, we introduce the main concepts of the model in this section. In the chiral limit, the quark field q can be split into left- and right-handed parts, q_L and q_R: q = q_L + q_R. Under SU(3)_L × SU(3)_R they transform as

$$q_L' = L\, q_L, \qquad q_R' = R\, q_R. \tag{1}$$
The spin-0 mesons are written in the compact form
$$M(M^{+}) = \Sigma \pm i\Pi = \frac{1}{\sqrt{2}} \sum_{a=0}^{8} \left(\sigma_a \pm i\pi_a\right) \lambda_a, \tag{2}$$
where σ_a and π_a are the nonets of scalar and pseudoscalar mesons, respectively, λ_a (a = 1, ..., 8) are the Gell-Mann matrices, and $\lambda_0 = \sqrt{\tfrac{2}{3}}\, I$. The plus and minus signs correspond to M and M^+. Under chiral SU(3) transformations, M and M^+ transform as $M \to M' = L M R^{+}$ and $M^{+} \to M^{+\prime} = R M^{+} L^{+}$. As for the spin-0 mesons, the spin-1 mesons are set up in a similar way as

$$l_\mu (r_\mu) = \frac{1}{2}\left(V_\mu \pm A_\mu\right) = \frac{1}{2\sqrt{2}} \sum_{a=0}^{8} \left(v_\mu^a \pm a_\mu^a\right) \lambda_a. \tag{3}$$
They transform as $l_\mu \to l_\mu' = L\, l_\mu L^{+}$, $r_\mu \to r_\mu' = R\, r_\mu R^{+}$. These matrices can be written in a form where the physical states are explicit. For the scalar and vector nonets, the expressions are
$$\Sigma = \frac{1}{\sqrt{2}} \sum_{a=0}^{8} \sigma_a \lambda_a = \begin{pmatrix} \frac{1}{\sqrt{2}}\left(\sigma + a_0^0\right) & a_0^+ & K^{*+} \\ a_0^- & \frac{1}{\sqrt{2}}\left(\sigma - a_0^0\right) & K^{*0} \\ K^{*-} & \bar{K}^{*0} & \zeta \end{pmatrix}, \tag{4}$$

$$V_\mu = \frac{1}{\sqrt{2}} \sum_{a=0}^{8} v_\mu^a \lambda_a = \begin{pmatrix} \frac{1}{\sqrt{2}}\left(\omega_\mu + \rho_\mu^0\right) & \rho_\mu^+ & K_\mu^{*+} \\ \rho_\mu^- & \frac{1}{\sqrt{2}}\left(\omega_\mu - \rho_\mu^0\right) & K_\mu^{*0} \\ K_\mu^{*-} & \bar{K}_\mu^{*0} & \phi_\mu \end{pmatrix}. \tag{5}$$
Pseudoscalar and pseudovector nonet mesons can be written in the same way. The total effective Lagrangian for the description of strange quark matter is given by:
$$\mathcal{L}_{\rm eff} = \mathcal{L}_{q0} + \mathcal{L}_{qM} + \mathcal{L}_{\Sigma\Sigma} + \mathcal{L}_{VV} + \mathcal{L}_{\chi SB} + \mathcal{L}_{\Delta m_s} + \mathcal{L}_h. \tag{6}$$
It contains the free part for massless quarks, $\mathcal{L}_{q0} = \bar{q}\, i\gamma^\mu \partial_\mu q$, the quark-meson field interaction term

$$\mathcal{L}_{qM} = g_s\left(\bar{q}_L M q_R + \bar{q}_R M^{+} q_L\right) - g_v\left(\bar{q}_L \gamma^\mu l_\mu q_L + \bar{q}_R \gamma^\mu r_\mu q_R\right), \tag{7}$$
the chiral-invariant scalar meson ($\mathcal{L}_{\Sigma\Sigma}$) and vector meson ($\mathcal{L}_{VV}$) self-interaction terms in the mean field approximation [31,33],

$$\mathcal{L}_{\Sigma\Sigma} = -\frac{1}{2} k_0 \chi^2 \left(\sigma^2 + \zeta^2\right) + k_1 \left(\sigma^2 + \zeta^2\right)^2 + k_2 \left(\frac{\sigma^4}{2} + \zeta^4\right) + k_3 \chi \sigma^2 \zeta - k_4 \chi^4 - \frac{1}{4} \chi^4 \ln\frac{\chi^4}{\chi_0^4} + \frac{\delta}{3} \chi^4 \ln\frac{\sigma^2 \zeta}{\sigma_0^2 \zeta_0}, \tag{8}$$

$$\mathcal{L}_{VV} = \frac{1}{2} \frac{\chi^2}{\chi_0^2} \left(m_\omega^2 \omega^2 + m_\rho^2 \rho^2 + m_\phi^2 \phi^2\right) + g_4 \left(\omega^4 + 6\omega^2\rho^2 + \rho^4 + 2\phi^4\right), \tag{9}$$
where δ = 6/33, and σ_0, ζ_0 and χ_0 are the vacuum values of the mean fields σ, ζ and χ; the three terms $\mathcal{L}_{\chi SB}$, $\mathcal{L}_{\Delta m_s}$ and $\mathcal{L}_h$ explicitly break the chiral symmetry. Chiral symmetry requires the following basic relations for the quark-meson coupling constants:
$$\frac{g_s}{\sqrt{2}} = g_{a_0}^u = -g_{a_0}^d = g_\sigma^u = g_\sigma^d = \ldots = \frac{1}{\sqrt{2}}\, g_\zeta^s, \qquad g_{a_0}^s = g_\sigma^s = g_\zeta^u = g_\zeta^d = 0, \tag{10}$$

$$\frac{g_v}{2\sqrt{2}} = g_{\rho^0}^u = -g_{\rho^0}^d = g_\omega^u = g_\omega^d = \ldots = \frac{1}{\sqrt{2}}\, g_\phi^s, \qquad g_\omega^s = g_{\rho^0}^s = g_\phi^u = g_\phi^d = 0. \tag{11}$$
Note that the values of σ_0, ζ_0 and χ_0 are determined later from Eqs. (20)-(22). In particular, the parameters σ_0 and ζ_0 are expressed through the pion (F_π = 93 MeV) and kaon (F_K = 115 MeV) leptonic decay constants as
$$\sigma_0 = -F_\pi, \qquad \zeta_0 = \frac{1}{\sqrt{2}}\left(F_\pi - 2F_K\right). \tag{12}$$
The Lagrangian $\mathcal{L}_{\chi SB}$ generates the nonvanishing masses of the pseudoscalar mesons,

$$\mathcal{L}_{\chi SB} = \frac{\chi^2}{\chi_0^2} \left[ m_\pi^2 f_\pi \sigma + \left(\sqrt{2}\, m_K^2 f_K - \frac{1}{\sqrt{2}}\, m_\pi^2 f_\pi\right) \zeta \right], \tag{13}$$
leading to a nonvanishing divergence of the axial currents which satisfy the PCAC relations for the π and K mesons. Scalar mesons obtain their masses by spontaneous breaking of the chiral symmetry in the Lagrangian (8). The masses of the u, d and s quarks are generated by the vacuum expectation values of the two scalar mesons σ and ζ. To obtain the correct constituent mass of the strange quark, an additional mass term has to be added:
$$\mathcal{L}_{\Delta m_s} = -\Delta m_s\, \bar{q} S q, \tag{14}$$

where $S = \frac{1}{3}\left(I - \sqrt{3}\,\lambda_8\right) = \mathrm{diag}(0, 0, 1)$ is the strange quark matrix. Finally, the quark masses are given by
$$m_u = m_d = -\frac{g_s}{\sqrt{2}}\, \sigma_0, \qquad m_s = -g_s \zeta_0 + \Delta m_s. \tag{15}$$
The parameters g_s = 4.76 and Δm_s = 29 MeV are determined from m_q = 313 MeV and m_s = 490 MeV.
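As a quick numerical check of Eqs. (12) and (15) with the quoted parameters (a minimal sketch; the printed values should reproduce the constituent masses m_q ≈ 313 MeV and m_s ≈ 490 MeV):

```python
# Numerical check of Eq. (15) using the vacuum values of Eq. (12).
import math

F_pi, F_K = 93.0, 115.0                       # MeV, leptonic decay constants
sigma_0 = -F_pi                               # Eq. (12)
zeta_0 = (F_pi - 2.0 * F_K) / math.sqrt(2.0)  # Eq. (12)
g_s, dm_s = 4.76, 29.0                        # model parameters

m_q = -g_s / math.sqrt(2.0) * sigma_0         # u, d quark mass, Eq. (15)
m_s = -g_s * zeta_0 + dm_s                    # s quark mass, Eq. (15)
print(f"m_u = m_d = {m_q:.0f} MeV, m_s = {m_s:.0f} MeV")  # ~313 and ~490 MeV
```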
In order to obtain reasonable hyperon potentials in hadronic matter, we include an additional coupling between strange quarks and the scalar mesons σ and ζ [31]. This term is expressed as

$$\mathcal{L}_h = \left(h_1 \sigma + h_2 \zeta\right) \bar{s} s. \tag{16}$$
III. APPLICATION TO STRANGE QUARK MATTER
Now we apply the model to investigate strange quark matter. We begin with the thermodynamical potential because all other quantities such as energy per volume and pressure can be obtained from it. The thermodynamical potential is defined as
$$\Omega = \sum_{\tau=q,e} \frac{-2 k_B T\, \gamma_\tau}{(2\pi)^3} \int_0^\infty d^3k \left\{ \ln\left[1 + e^{-\left(E_\tau^*(k) - \nu_\tau\right)/k_B T}\right] + \ln\left[1 + e^{-\left(E_\tau^*(k) + \nu_\tau\right)/k_B T}\right] \right\} - \mathcal{L}_M, \tag{17}$$
where $E_\tau^*(k) = \sqrt{m_\tau^{*2} + k^2}$, γ_τ is 3 for quarks and 1 for electrons, and $\mathcal{L}_M$ is the meson interaction, including the scalar meson self-interaction $\mathcal{L}_{\Sigma\Sigma}$, the vector meson self-interaction $\mathcal{L}_{VV}$ and the explicit chiral symmetry breaking term $\mathcal{L}_{\chi SB}$. In the MIT bag and QMDD models, $\mathcal{L}_M$ is replaced by the effective bag constant. At zero temperature, Ω can be expressed as
$$\Omega = -\sum_{i=u,d,s} \frac{1}{8\pi^2} \left[ \nu_i \left(\nu_i^2 - m_i^{*2}\right)^{1/2} \left(2\nu_i^2 - 5 m_i^{*2}\right) + 3 m_i^{*4} \ln\frac{\nu_i + \left(\nu_i^2 - m_i^{*2}\right)^{1/2}}{m_i^*} \right] - \frac{\mu_e^4}{12\pi^2} - \mathcal{L}_M, \tag{18}$$
where μ_e is the chemical potential of the electron, and the quantity ν_i (i = u, d, s) is related to the usual chemical potential μ_i by $\nu_i = \mu_i - g_\omega^i \omega - g_\phi^i \phi$. The effective quark mass is given by $m_i^* = -g_\sigma^i \sigma - g_\zeta^i \zeta + m_{i0}$.
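For illustration, the free-Fermi-gas bracket of Eq. (18) can be evaluated directly per flavor; the helper below is a minimal sketch with hypothetical inputs, and the mean-field term $-\mathcal{L}_M$ is deliberately omitted.

```python
# One-flavor, zero-temperature contribution to Omega in Eq. (18)
# (free Fermi gas with degeneracy 6); the -L_M meson term is omitted.
import numpy as np

def omega_flavor(nu, m_star):
    """Bracket of Eq. (18) for one quark flavor; nu, m_star in MeV.
    Returns Omega in MeV^4 (divide by (hbar c)^3 for MeV/fm^3)."""
    if nu <= m_star:
        return 0.0
    kF = np.sqrt(nu**2 - m_star**2)
    bracket = nu * kF * (2.0 * nu**2 - 5.0 * m_star**2) \
            + 3.0 * m_star**4 * np.log((nu + kF) / m_star)
    return -bracket / (8.0 * np.pi**2)

print(omega_flavor(450.0, 313.0))   # hypothetical nu and m* values
```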
The total baryon density is defined as
$$\rho_B = \frac{1}{3}\left(\rho_u + \rho_d + \rho_s\right). \tag{19}$$
With the thermodynamical potential, the energy per volume ε and the pressure p of the system can be derived as $\varepsilon = \Omega + \sum_{i=u,d,s,e} \mu_i \rho_i$ and $p = -\Omega$.
The mean field equations for the mesons φ_i are obtained from $\partial\Omega/\partial\phi_i = 0$. For the scalar mesons σ, ζ and χ, the equations are expressed as
$$k_0 \chi^2 \sigma - 4 k_1 \left(\sigma^2 + \zeta^2\right)\sigma - 2 k_2 \sigma^3 - 2 k_3 \chi \sigma \zeta - \frac{2\delta}{3\sigma}\, \chi^4 + \frac{\chi^2}{\chi_0^2}\, m_\pi^2 f_\pi - \left(\frac{\chi}{\chi_0}\right)^2 m_\omega \omega^2 \frac{\partial m_\omega}{\partial \sigma} = \sum_{i=u,d} g_\sigma^i \langle\bar{\psi}_i \psi_i\rangle, \tag{20}$$

$$k_0 \chi^2 \zeta - 4 k_1 \left(\sigma^2 + \zeta^2\right)\zeta - 4 k_2 \zeta^3 - k_3 \chi \sigma^2 - \frac{\delta}{3\zeta}\, \chi^4 + \frac{\chi^2}{\chi_0^2} \left(\sqrt{2}\, m_K^2 f_K - \frac{1}{\sqrt{2}}\, m_\pi^2 f_\pi\right) = \sum_{i=s} g_\zeta^i \langle\bar{\psi}_i \psi_i\rangle, \tag{21}$$

$$k_0 \chi \left(\sigma^2 + \zeta^2\right) - k_3 \sigma^2 \zeta + \left(4 k_4 + 1 + 4\ln\frac{\chi}{\chi_0} - \frac{4\delta}{3} \ln\frac{\sigma^2 \zeta}{\sigma_0^2 \zeta_0}\right) \chi^3 + \frac{2\chi}{\chi_0^2} \left[m_\pi^2 f_\pi \sigma + \left(\sqrt{2}\, m_K^2 f_K - \frac{1}{\sqrt{2}}\, m_\pi^2 f_\pi\right) \zeta\right] - \frac{\chi}{\chi_0^2}\, m_\omega^2 \omega^2 = 0. \tag{22}$$
The equations for the vector mesons can be obtained in the same way as
$$\frac{\chi^2}{\chi_0^2}\, m_\omega^2 \omega + 4 g_4 \omega^3 + 12 g_4 \omega \rho^2 = \sum_{i=u,d} g_\omega^i \langle\bar{\psi}_i \gamma^0 \psi_i\rangle, \tag{23}$$

$$\frac{\chi^2}{\chi_0^2}\, m_\rho^2 \rho + 4 g_4 \rho^3 + 12 g_4 \omega^2 \rho = \sum_{i=u,d} g_\rho^i \langle\bar{\psi}_i \gamma^0 \psi_i\rangle, \tag{24}$$

$$\frac{\chi^2}{\chi_0^2}\, m_\phi^2 \phi + 8 g_4 \phi^3 = \sum_{i=s} g_\phi^i \langle\bar{\psi}_i \gamma^0 \psi_i\rangle. \tag{25}$$
The scalar and vector densities can be written as
$$\langle\bar{\psi}_i \psi_i\rangle = \frac{6}{(2\pi)^3} \int_0^{k_F^i} d^3k\, \frac{m_i^*}{\sqrt{k^2 + m_i^{*2}}}, \tag{26}$$

$$\langle\bar{\psi}_i \gamma^0 \psi_i\rangle = \frac{6}{(2\pi)^3} \int_0^{k_F^i} d^3k, \tag{27}$$

with $k_F^i = \sqrt{\nu_i^2 - m_i^{*2}}$.
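At zero temperature, the densities of Eqs. (26)-(27) reduce to one-dimensional integrals; a minimal numerical sketch (with hypothetical k_F and m* values) is:

```python
# Zero-temperature scalar and vector densities of Eqs. (26)-(27),
# evaluated by quadrature; k_F and m_star values are hypothetical.
import numpy as np
from scipy.integrate import quad

def scalar_density(k_F, m_star):
    # <psi_bar psi> = (3/pi^2) * Int_0^kF k^2 m* / sqrt(k^2 + m*^2) dk
    integrand = lambda k: k**2 * m_star / np.sqrt(k**2 + m_star**2)
    return 3.0 / np.pi**2 * quad(integrand, 0.0, k_F)[0]

def vector_density(k_F):
    # <psi^+ psi> = k_F^3 / pi^2 for degeneracy 6
    return k_F**3 / np.pi**2

k_F, m_star = 1.5, 1.0   # fm^-1, hypothetical
print(scalar_density(k_F, m_star), vector_density(k_F))
```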
IV. NUMERICAL RESULTS
Now we investigate the implications of the previous formalism for strange quark matter. Compared to earlier applications to hadronic matter, here we need not introduce the confining potential which combines quarks into baryons. In quark matter, the u, d and s quarks are deconfined; they only interact through scalar and vector mesons. The self-interactions between mesons are the same as in hadronic matter. All the parameters of this model were determined in our previous papers; they are listed in Table I. We assume that these parameters do not change when the model is applied to quark matter. In fact, the parameters which describe the interactions between fields should be universal: the medium effects are included by the treatment itself (relativistic mean field theory) and are not part of the original Lagrangian. In our discussion, we first do not include the additional coupling $\mathcal{L}_h$, and discuss the effect of this term on strange quark matter later.
If strange quark matter is stable and can survive for a long time, equilibrium with respect to the weak processes:
$$s \to u + e^- + \bar{\nu}_e, \qquad d \to u + e^- + \bar{\nu}_e, \tag{28}$$
may be achieved. If we neglect the chemical potential of the neutrino, the chemical potentials of quarks and electron have the following relation:
$$\mu_d = \mu_s = \mu_u + \mu_e. \tag{29}$$
The values of μ_u and μ_d are determined by the baryon density and the total charge Q. As usual, we assume the charge Q to be zero. The equations for the mesons, (20)-(25), can be solved simultaneously. The energy per baryon for different strengths of the vector coupling and for different parameter sets is shown in Fig. 1. When vector interactions are not considered, there is a local minimum of the energy per baryon for parameter set A; the corresponding density is about 0.13 fm⁻³. As shown later, at this density no strange quarks appear. When the vector interactions are included, even at half the strength used in hadronic matter, the local minimum of the energy per baryon disappears: the energy per baryon E/A increases monotonically with increasing baryon density. In Fig. 2, we show the fractions $r_i = \rho_i/(3\rho_B)$ of u, d, and s quarks versus baryon density with and without vector interactions. The fraction of u quarks hardly changes with density. There exists a relationship ρ_d + ρ_s ≃ 2ρ_u. Therefore, although we included the electron in our calculations, the fraction of electrons is very small. The chemical potential of the electron, μ_e, is not zero, which results in a larger fraction of d than u quarks. At low density no strange quarks are present, and the fraction of d quarks is about twice that of u quarks. Without vector interactions, strange quarks appear when the density is larger than about 0.41 fm⁻³. Compared to Fig. 1, this density is larger than the density where the local minimum appears. This result is close to that of Ref. [30], where the SU(3) NJL model is used. Therefore, no stable strange quark matter can exist at zero pressure, in contradiction to the original suggestion by Witten [1]. The so-called strange star cannot be composed entirely of deconfined u, d and s quarks. Stable strange quark matter can only exist in the core of such objects, where the pressure is high enough to force the transition from hadronic matter to quark matter to occur.
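To illustrate how the β-equilibrium and charge-neutrality conditions fix the composition, the sketch below solves them for free quarks at fixed baryon density, deliberately ignoring the meson mean fields of the full model; the masses and starting guess are assumptions.

```python
# Beta-equilibrium composition of free u, d, s quarks plus electrons at
# fixed baryon density and zero net charge; meson mean fields are ignored
# here, so this is a simplified illustration of Eqs. (19), (28)-(29).
import numpy as np
from scipy.optimize import fsolve

M = {"u": 313.0, "d": 313.0, "s": 490.0}   # vacuum quark masses [MeV]
HBARC = 197.327                             # MeV fm

def n_quark(mu, mass):                      # one flavor, degeneracy 6
    kF2 = mu**2 - mass**2
    return kF2**1.5 / np.pi**2 / HBARC**3 if kF2 > 0.0 else 0.0

def n_e(mu_e):                              # massless electrons, degeneracy 2
    return mu_e**3 / (3.0 * np.pi**2 * HBARC**3) if mu_e > 0.0 else 0.0

def equations(p, rho_B):
    mu_u, mu_e = p
    mu_d = mu_s = mu_u + mu_e                          # Eq. (29)
    nu, nd, ns = (n_quark(mu_u, M["u"]), n_quark(mu_d, M["d"]),
                  n_quark(mu_s, M["s"]))
    baryon = (nu + nd + ns) / 3.0 - rho_B              # Eq. (19)
    charge = (2.0 * nu - nd - ns) / 3.0 - n_e(mu_e)    # Q = 0
    return [baryon, charge]

mu_u, mu_e = fsolve(equations, x0=[460.0, 60.0], args=(0.5,))  # rho_B [fm^-3]
print(f"mu_u = {mu_u:.0f} MeV, mu_e = {mu_e:.0f} MeV")
```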
If strange quark matter is metastable, it may be produced in heavy ion collisions. In this case, β-equilibrium may not be achieved. In our calculations, we assume that μ_u = μ_d = μ_q. The values of μ_q and μ_s are determined by the total baryon density and the strangeness fraction f_s (f_s = 3r_s). In Fig. 3, we plot the effective masses of nonstrange and strange quarks versus baryon density for different strangeness fractions with parameter set A. Both the u (d) and s quark masses decrease with increasing density. In nonstrange quark matter, the mass of the u (d) quark decreases more quickly than that of the s quark. When f_s increases, the mass of the s quark at fixed density decreases. At some high density, the strange quark mass is even lower than the mass of the nonstrange quarks. Compared to the QMDD model, here the effective quark masses are obtained dynamically: they are not only density dependent but also strangeness-fraction dependent, a feature absent from the QMDD model.
In Figs. 4-6 we plot the energy per baryon versus density for different values of f_s, corresponding to different vector coupling constants. The solid and dashed lines are for parameter sets A and B, respectively. First, we do not include the interactions between quarks and vector mesons; this is close to the QMDD model or to the SU(3) NJL model with only scalar-type 4-quark interactions. In Fig. 4, for parameter set A, there exists a local minimum of the energy per baryon for any f_s. The baryon density at the minimum of the energy per baryon first increases and then decreases as f_s increases. For nonstrange quark matter, though there is a local minimum, the system has no positive binding energy compared to the vacuum mass of nonstrange quarks. For strange quark matter, the energy per baryon is lower than the masses of hyperons with the same strangeness number. When f_s = 1, the binding energy is about 45 MeV compared to the vacuum constituent quark mass. The maximum binding energy is about 60 MeV, with a corresponding f_s of about 2.0. Metastable strange quark matter is therefore favored to have a large strangeness fraction (high negative charge). For parameter set B, the system has a local minimum of the energy per baryon only for a limited range of strangeness fraction, 1 < f_s < 3. The maximum binding energy is about 5 MeV, when f_s is around 2.0.
When the interactions between quarks and vector mesons are included, the system is destabilized. In Fig. 5, we plot the energy per baryon versus baryon density with the vector coupling constant at half of its hadronic-matter value. Compared to Fig. 4, the energy per baryon becomes higher. For parameter set A, there exists a local minimum for 0 < f_s < 2. The maximum binding energy is only about 10 MeV, with f_s ≃ 1.0, and the corresponding density is also much lower compared to Fig. 4. For parameter set B, there is no local minimum for any strangeness fraction. If the vector coupling is the same as in hadronic matter, as shown in Fig. 6, the energy per baryon increases monotonically with baryon density for both parameter sets A and B. The vector meson couplings are also discussed in Ref. [29]; though the two models are quite different, the results are comparable.
If vector interactions are included, even at only half the strength used in hadronic matter, the binding energy of metastable strange quark matter becomes small or negative. At the minimum of the energy per baryon, the pressure p of the system is zero; objects of this kind with finite volume are called strangelets. If the vector interactions are fully considered, metastable strangelets cannot be formed at zero pressure.
Up to now, we did not consider the additional coupling between the strange quark and the scalar mesons, $\mathcal{L}_h$, which is important for obtaining reasonable hyperon potentials in hadronic matter [31]. When this additional term is included, the effective quark masses versus baryon density for parameter set A are given in Fig. 7. The result for the strange quark mass changes evidently, especially for large strangeness fraction, when compared to Fig. 3. In nonstrange quark matter, the effective mass of the s quark stays almost constant. When the strangeness fraction is high, the strange quark has a lower mass compared to Fig. 3, where $\mathcal{L}_h$ is not included.
The effective quark masses affect the energy of the system. In Fig. 8 we plot the energy per baryon versus density with the vector coupling constant at half of its hadronic-matter value. The solid and dashed lines correspond to the cases without and with $\mathcal{L}_h$, respectively. For small strangeness fraction f_s, the results of the two cases are close. For large strangeness fraction, the additional term $\mathcal{L}_h$ produces a larger binding energy. If the vector coupling strength is as large as in hadronic matter, there is no local minimum in either case, with or without $\mathcal{L}_h$. Therefore, the inclusion of the additional term $\mathcal{L}_h$ does not affect the main results of Fig. 1. This is because, when β-equilibrium is achieved, the strangeness fraction of the system is smaller than 1, and the additional term $\mathcal{L}_h$ only gives sizable contributions for systems with a large strangeness fraction.
V. CONCLUSIONS
We investigate strange quark matter in a chiral SU(3) quark mean field model. The effective quark masses are obtained dynamically from the quark-meson interactions, and they are both density and strangeness-fraction dependent. The stability of strange quark matter is studied for different values of the vector coupling constant and for different parameter sets. The effect of the additional term $\mathcal{L}_h$ is also discussed.
If strange quark matter is stable and can survive a long time, β-equilibrium can be achieved, and the strangeness fraction f_s is smaller than 1. In the chiral SU(3) quark model, at the density where the system has a local minimum of the energy per baryon, no strange quarks appear. Even without the vector meson coupling, this nonstrange quark matter has a negative binding energy and cannot be bound. Therefore, opposed to the suggestion by Witten, stable quark matter at zero pressure cannot exist even without vector meson interactions.
If quark matter can be produced in heavy ion collisions, β-equilibrium may not be achieved. Such metastable strange quark matter can have a high strangeness fraction (high negative charge). When we do not take the vector meson interaction into account, as in the QMDD model or the SU(3) NJL model with only scalar-type 4-quark interactions, the maximum binding energy is about 5-60 MeV and the corresponding strangeness fraction is about 1-2. If we assume that the strength of the vector meson coupling is half of that in hadronic matter (in which case the strength of the vector coupling is comparable to that of Ref. [29]), the maximum binding energy decreases. Inclusion of the additional term $\mathcal{L}_h$ drives the system towards a larger strangeness fraction and binding energy. When the vector coupling constant is the same as for hadronic matter, no metastable strange quark matter (strangelets) can be formed in heavy ion collisions.
However, strange quark matter can still exist in the core of a neutron star. The phase transition from hadronic matter to quark matter can occur at the high densities and pressures caused by gravity. Because both hadronic and quark matter can be described in the SU(3) quark mean field model, it is of interest to investigate this phase transition in this model. This will be studied in the future.
TABLES
Figure captions
Fig. 1: Energy per baryon versus baryon density for different vector coupling constants. The solid and dashed lines are for parameter sets A and B. Results are shown for the case of β-equilibrium.

Fig. 2: Fractions of u, d and s quarks in strange quark matter versus baryon density, with and without vector interactions, respectively. The solid and dashed lines are for parameter sets A and B. Results are shown for the case of β-equilibrium.

Fig. 3: The effective nonstrange and strange quark masses versus baryon density for different strangeness fractions. Parameters are taken from set A.

Fig. 4: The energy per baryon versus baryon density for different strangeness fractions, for parameter sets A and B. The interaction between quarks and vector mesons is not considered.

Fig. 5: Same as in Fig. 4, but here the quark-vector meson interaction is included. The value of the vector coupling constant is half of that used in hadronic matter.

Fig. 6: Same as Fig. 5, but the vector coupling constant is of the same strength as in hadronic matter.

Fig. 7: Same as Fig. 3, but the additional term $\mathcal{L}_h$ is included in the calculation.

Fig. 8: The energy per baryon versus baryon density calculated with parameter set A. The vector coupling constant is half of that in hadronic matter. Solid and dashed lines correspond to the cases without and with the additional term $\mathcal{L}_h$, respectively.
TABLE I. Parameters of sets A and B of the model.

set   k_0    k_1    k_2      k_3     k_4     g_s    g_v     g_4    h_1     h_2
A     4.94   2.12   -10.16   -5.38   -0.06   4.76   10.92   37.5   -2.20   3.24
B     3.83   2.64   -10.16   -3.40   -0.18   4.76   10.13   0.0    -2.03   2.55
Acknowledgements. P. Wang would like to thank the Institute for Theoretical Physics, University of Tübingen for their hospitality. This work was supported by the Alexander von Humboldt Foundation and by the Deutsche Forschungsgemeinschaft (DFG) under contracts FA67/25-1, GRK683.
[1] E. Witten, Phys. Rev. D 30, 272 (1984).
[2] M. Kasuya et al., Phys. Rev. D 47, 2153 (1993).
[3] M. Ichimura et al., Nuovo Cim. A 106, 843 (1993).
[4] J. N. Capdevielle et al., Nuovo Cim. C 19, 623 (1996).
[5] I. Bombaci, Phys. Rev. C 55, 1587 (1997).
[6] K. S. Cheng, Z. G. Dai, D. M. Wai, and T. Lu, Science 280, 407 (1998).
[7] M. Dey, I. Bombaci, J. Dey, S. Ray, and B. C. Samanta, Phys. Lett. B 438, 123 (1998); Erratum: Phys. Lett. B 467, 303 (1999).
[8] X. D. Li, I. Bombaci, M. Dey, J. Dey, and E. P. J. van den Heuvel, Phys. Rev. Lett. 83, 3776 (1999).
[9] C. Greiner, P. Koch, and H. Stöcker, Phys. Rev. Lett. 58, 1825 (1987).
[10] C. Greiner, D. H. Rischke, H. Stöcker, and P. Koch, Phys. Rev. D 38, 2797 (1988).
[11] C. Greiner, J. Schaffner, and H. Stöcker, Nucl. Phys. Proc. Suppl. B 24, 239 (1991).
[12] J. Barrette et al., Phys. Lett. B 252, 550 (1990); M. Aoki et al., Phys. Rev. Lett. 69, 2345 (1992); K. Borer et al., ibid. 72, 1415 (1994); D. Beavis et al., ibid. 75, 3078 (1995).
[13] D. Ardouin et al., Phys. Lett. B 446, 191 (1999).
[14] T. A. Armstrong et al., Phys. Rev. C 59, 1829 (1999); Phys. Rev. C 63, 054903 (2001).
[15] A. Chodos et al., Phys. Rev. D 9, 3472 (1974).
[16] E. Farhi and R. L. Jaffe, Phys. Rev. D 30, 2379 (1984).
[17] M. S. Berger and R. L. Jaffe, Phys. Rev. C 35, 213 (1987).
[18] J. Madsen, Phys. Rev. Lett. 70, 391 (1993); Phys. Rev. D 47, 5156 (1993).
[19] A. Ukawa, Nucl. Phys. A 498, 227c (1989).
[20] G. N. Fowler, S. Raha, and R. M. Weiner, Z. Phys. C 9, 271 (1981).
[21] S. Chakrabarty, S. Raha, and B. Sinha, Phys. Lett. B 229, 112 (1989).
[22] S. Chakrabarty, Phys. Rev. D 43, 627 (1991).
[23] S. Chakrabarty, Phys. Rev. D 48, 1409 (1993).
[24] O. G. Benvenuto and G. Lugones, Phys. Rev. D 51, 1989 (1995).
[25] G. X. Peng, H. C. Chiang, P. Z. Ning, and B. S. Zou, Phys. Rev. C 59, 3452 (1999).
[26] P. Wang, Phys. Rev. C 62, 015204 (2000).
[27] W. M. Alberico, A. Drago, and C. Ratti, hep-ph/0110091.
[28] M. Jaminon and B. Van den Bossche, Nucl. Phys. A 686, 341 (2001).
[29] I. N. Mishustin, L. M. Satarov, H. Stöcker, and W. Greiner, Phys. Atom. Nucl. 64, 802 (2001).
[30] M. Buballa and M. Oertel, Phys. Lett. B 457, 261 (1999).
[31] P. Wang, Z. Y. Zhang, Y. W. Yu, R. K. Su, and H. Q. Song, Nucl. Phys. A 688, 791 (2001).
[32] P. Wang, H. Guo, Z. Y. Zhang, Y. W. Yu, R. K. Su, and H. Q. Song, Nucl. Phys. A 705, 455 (2002).
[33] P. Papazoglou, D. Zschiesche, S. Schramm, J. Schaffner-Bielich, H. Stöcker, and W. Greiner, Phys. Rev. C 59, 411 (1999).
| []
|
[
"Near-and middle-ultraviolet reconfigurable Raman source using a record-low UV/visible transmission loss inhibited-coupling hollow-core fiber",
"Near-and middle-ultraviolet reconfigurable Raman source using a record-low UV/visible transmission loss inhibited-coupling hollow-core fiber"
]
| [
"M Chafer \nGLOphotonics\n123 Avenue Albert ThomasLimogesFrance\n",
"J H Osório \nGPPMM Group\nUMR 7252\nXLIM Institute\nCNRS\nUniversity of Limoges\nFrance\n",
"A Dhaybi \nGPPMM Group\nUMR 7252\nXLIM Institute\nCNRS\nUniversity of Limoges\nFrance\n",
"F Ravetta \nLATMOS/IPSL\nSorbonne University\nUVSQ\nCNRS\nParisFrance\n",
"F Amrani \nGLOphotonics\n123 Avenue Albert ThomasLimogesFrance\n\nGPPMM Group\nUMR 7252\nXLIM Institute\nCNRS\nUniversity of Limoges\nFrance\n",
"F Delahaye \nGLOphotonics\n123 Avenue Albert ThomasLimogesFrance\n\nGPPMM Group\nUMR 7252\nXLIM Institute\nCNRS\nUniversity of Limoges\nFrance\n",
"B Debord \nGLOphotonics\n123 Avenue Albert ThomasLimogesFrance\n\nGPPMM Group\nUMR 7252\nXLIM Institute\nCNRS\nUniversity of Limoges\nFrance\n",
"C Cailteau-Fischbach \nLATMOS/IPSL\nSorbonne University\nUVSQ\nCNRS\nParisFrance\n",
"G Ancellet \nLATMOS/IPSL\nSorbonne University\nUVSQ\nCNRS\nParisFrance\n",
"F Gérôme \nGLOphotonics\n123 Avenue Albert ThomasLimogesFrance\n\nGPPMM Group\nUMR 7252\nXLIM Institute\nCNRS\nUniversity of Limoges\nFrance\n",
"F Benabid [email protected] \nGLOphotonics\n123 Avenue Albert ThomasLimogesFrance\n\nGPPMM Group\nUMR 7252\nXLIM Institute\nCNRS\nUniversity of Limoges\nFrance\n"
]
| [
"GLOphotonics\n123 Avenue Albert ThomasLimogesFrance",
"GPPMM Group\nUMR 7252\nXLIM Institute\nCNRS\nUniversity of Limoges\nFrance",
"GPPMM Group\nUMR 7252\nXLIM Institute\nCNRS\nUniversity of Limoges\nFrance",
"LATMOS/IPSL\nSorbonne University\nUVSQ\nCNRS\nParisFrance",
"GLOphotonics\n123 Avenue Albert ThomasLimogesFrance",
"GPPMM Group\nUMR 7252\nXLIM Institute\nCNRS\nUniversity of Limoges\nFrance",
"GLOphotonics\n123 Avenue Albert ThomasLimogesFrance",
"GPPMM Group\nUMR 7252\nXLIM Institute\nCNRS\nUniversity of Limoges\nFrance",
"GLOphotonics\n123 Avenue Albert ThomasLimogesFrance",
"GPPMM Group\nUMR 7252\nXLIM Institute\nCNRS\nUniversity of Limoges\nFrance",
"LATMOS/IPSL\nSorbonne University\nUVSQ\nCNRS\nParisFrance",
"LATMOS/IPSL\nSorbonne University\nUVSQ\nCNRS\nParisFrance",
"GLOphotonics\n123 Avenue Albert ThomasLimogesFrance",
"GPPMM Group\nUMR 7252\nXLIM Institute\nCNRS\nUniversity of Limoges\nFrance",
"GLOphotonics\n123 Avenue Albert ThomasLimogesFrance",
"GPPMM Group\nUMR 7252\nXLIM Institute\nCNRS\nUniversity of Limoges\nFrance"
]
| []
| ABSTRACT: We report on two types of Raman laser sources emitting in the near- and middle-ultraviolet spectral ranges through the use of a solarization-resilient gas-filled inhibited-coupling (IC) hollow-core photonic-crystal fiber (HCPCF) with record-low UV/visible transmission loss (minimum of 5 dB/km at 480 nm). The first source type emits a Raman comb generated in a hydrogen-filled HCPCF pumped by a 355 nm wavelength microchip nanosecond pulsed laser. The generated comb lines span from 270 nm to the near-infrared region, with no fewer than 20 lines in the 270-400 nm wavelength range. The second type stands for the first dual-wavelength Raman source tuned to the ozone absorption band in the ultraviolet. This dual-wavelength source emits at either 266 nm and 289 nm, or 266 nm and 299 nm. The relative power of the pair components is set to optimize the sensitivity of ozone detection in differential absorption lidar (DIAL). The source's physical package represents a more than 10-fold size reduction relative to current DIAL lasers, thus opening new opportunities in on-field ozone monitoring and mapping. Both Raman sources exhibit a very small footprint and are solarization-free. | 10.1016/j.optlastec.2021.107678 | [
"https://arxiv.org/pdf/2108.11327v1.pdf"
]
| 237,290,032 | 2108.11327 | bd7731d9df56f79a284db7cc9b8fcfc7a5a99c5d |
Near-and middle-ultraviolet reconfigurable Raman source using a record-low UV/visible transmission loss inhibited-coupling hollow-core fiber
M Chafer
GLOphotonics
123 Avenue Albert ThomasLimogesFrance
J H Osório
GPPMM Group
UMR 7252
XLIM Institute
CNRS
University of Limoges
France
A Dhaybi
GPPMM Group
UMR 7252
XLIM Institute
CNRS
University of Limoges
France
F Ravetta
LATMOS/IPSL
Sorbonne University
UVSQ
CNRS
ParisFrance
F Amrani
GLOphotonics
123 Avenue Albert ThomasLimogesFrance
GPPMM Group
UMR 7252
XLIM Institute
CNRS
University of Limoges
France
F Delahaye
GLOphotonics
123 Avenue Albert ThomasLimogesFrance
GPPMM Group
UMR 7252
XLIM Institute
CNRS
University of Limoges
France
B Debord
GLOphotonics
123 Avenue Albert ThomasLimogesFrance
GPPMM Group
UMR 7252
XLIM Institute
CNRS
University of Limoges
France
C Cailteau-Fischbach
LATMOS/IPSL
Sorbonne University
UVSQ
CNRS
ParisFrance
G Ancellet
LATMOS/IPSL
Sorbonne University
UVSQ
CNRS
ParisFrance
F Gérôme
GLOphotonics
123 Avenue Albert ThomasLimogesFrance
GPPMM Group
UMR 7252
XLIM Institute
CNRS
University of Limoges
France
F Benabid [email protected]
GLOphotonics
123 Avenue Albert ThomasLimogesFrance
GPPMM Group
UMR 7252
XLIM Institute
CNRS
University of Limoges
France
Near-and middle-ultraviolet reconfigurable Raman source using a record-low UV/visible transmission loss inhibited-coupling hollow-core fiber
* Corresponding author.
ABSTRACT: We report on two types of Raman laser sources emitting in the near- and middle-ultraviolet spectral ranges through the use of a solarization-resilient gas-filled inhibited-coupling (IC) hollow-core photonic-crystal fiber (HCPCF) with record-low UV/visible transmission loss (minimum of 5 dB/km at 480 nm). The first source type emits a Raman comb generated in a hydrogen-filled HCPCF pumped by a 355 nm wavelength microchip nanosecond pulsed laser. The generated comb lines span from 270 nm to the near-infrared region, with no fewer than 20 lines in the 270-400 nm wavelength range. The second type stands for the first dual-wavelength Raman source tuned to the ozone absorption band in the ultraviolet. This dual-wavelength source emits at either 266 nm and 289 nm, or 266 nm and 299 nm. The relative power of the pair components is set to optimize the sensitivity of ozone detection in differential absorption lidar (DIAL). The source's physical package represents a more than 10-fold size reduction relative to current DIAL lasers, thus opening new opportunities in on-field ozone monitoring and mapping. Both Raman sources exhibit a very small footprint and are solarization-free.
Introduction
The ultraviolet (UV) spectral range extends from 300-400 nm in the near-UV, from 200-300 nm in the middle-UV, and down to 120-200 nm in the far-UV. It is established that UV laser sources are of great interest in many applications, such as spectroscopy, biomedicine, gas detection, and decontamination of water and food, to cite a few [1]. In this application landscape, the need for small-footprint UV sources whose emission could cover the three UV bands is as pressing as it is pervasive, particularly in the health and environment sectors. For example, in DNA sequencing, a single laser source that emits multiple and specific UV spectral lines would represent an extension of current cytometry spectral coverage to hitherto unexplored regimes in cellular analysis, and a gain in size reduction, which would make laser integration in cytometry machines much more impactful. A second example of a timely and critical application is real-time and spatially-mapped ozone detection in the troposphere, for its capability in assessing, via the ozone physico-chemical and kinetic dynamics, climate change and air pollution [2][3][4].
So far, the DIAL technique has proved to be an efficient means to measure ozone concentration. However, current laser systems emitting within the ozone absorption line wavelength range between 215 nm and 300 nm are too cumbersome and costly for widespread, on-field deployment to assess the ozone dynamics over the whole globe. In fact, this shortage of compact and spectrally adjustable UV light sources is ubiquitous. Indeed, to emit directly in the UV range, excimer lasers are still widely used despite their cumbersomeness, high cost, and need for maintenance [5]. To overcome these limitations, frequency conversion of high-power near-infrared lasers to UV wavelengths via frequency tripling or quadrupling in borate-based crystals (e.g., LBO, BBO) has been introduced. However, this scheme offers emission at only a limited number of wavelengths. Additionally, the conversion efficiency of each stage does not exceed 50%, meaning that, after third harmonic generation, only 25% of the pump power is converted. Finally, the crystals have a limited lifetime due to UV radiation, and the footprint remains set by a large NIR laser.
Another route that enables UV generation is the use of stimulated Raman scattering (SRS) in inhibited-coupling guiding hollow-core photonic crystal fibers (IC-HCPCF). The ability of IC-HCPCFs to confine gases in a micrometer-scale core over long and diffraction-free lengths allows enhancing the SRS conversion efficiency by a factor larger than ~10^6 compared to a simple capillary configuration [6]. This in turn enabled the generation of Raman combs with microchip laser pumps [7]. Another benefit of these fibers arises from the very small optical overlap between the core-guided mode and the fiber microstructured cladding. In addition to enabling ultralow transmission loss levels, a suppressed optical overlap between the core mode and the silica cladding represents a promising means of mitigating solarization effects in UV light handling [8]. However, so far, the reported Raman spectra were limited to ~300 nm in the UV and were generated using NIR or visible pump lasers [9].
In this paper, we report on the development of two types of Raman sources using an 8-tube single-ring (SR) tubular-lattice (TL) IC-HCPCF optimized for UV guidance. The measured fiber loss was found to be as low as 10 dB/km in the UV-visible spectral region, which represents a 7.7-fold loss reduction compared to previously reported values in this UV range [10]-[12]. The first source is based on a hydrogen-filled IC-HCPCF. By pumping it with a 355 nm microchip laser, one can generate a comb-like spectrum spanning from 270 nm to 400 nm. In the second source, a 266 nm diode-pumped solid-state (DPSS) laser pumps an IC-HCPCF filled either with hydrogen, to generate a dual-wavelength emission at (266 nm, 299 nm), or with deuterium, to generate the (266 nm, 289 nm) wavelength pair, with a proper power ratio between the two spectral lines to maximize the DIAL sensitivity for ozone detection.
IC-HCPCF fabrication and characterization
Fig. 1 summarizes the optical transmission properties of the IC-HCPCF we report in the present work. The fiber fabrication process was optimized for record low-loss optical guidance in the UV-visible spectral range. Fig. 1a shows the transmission loss spectrum in the 250-900 nm range. The loss spectrum has been obtained by a cutback measurement using fibers with lengths of 104.5 m and 8.5 m and a supercontinuum source (blue solid curve). To cover the full wavelength range, two optical spectrum analyzers were used. Furthermore, cutback measurements were also carried out using lasers emitting at 355 nm and 266 nm (red symbols). The inset in Fig. 1a shows a micrograph of the IC-HCPCF. The fiber was fabricated by the stack-and-draw technique. The fiber cladding is composed of 8 non-touching tubes with a diameter of 11 μm and a thickness of 600 nm. The tubes are arranged to form the surround of a hollow core with a diameter of 27 µm. Numerical simulations show that the optical overlap fraction of the fundamental core mode with the silica core contour ranges between 10^-5 and 10^-6 in the 300-700 nm wavelength interval [13], which is a promising feature for avoiding solarization effects.
The loss measurement results show values as low as 15.7 dB/km at 725 nm and 5 dB/km at 480 nm. The latter loss figure is the lowest ever reported for any optical fiber guiding in the blue spectral region. It is noteworthy that the measured loss values in the IC-HCPCF transmission bands between 250 nm and 600 nm are below the fundamental Rayleigh scattering limit in bulk silica (light blue-filled and dashed-contour curve). Furthermore, the loss spectrum shows two other low-loss bands in the UV range, corresponding respectively to the fiber's third and fourth higher-order transmission bands. They are centered at 375 nm, with a minimum loss of 10 dB/km, and at 245 nm, with a minimum loss of 50 dB/km. The loss spectrum is consistent with the loss figures measured using the 355 nm and 266 nm lasers (red symbols). Fig. 1b summarizes the fiber's UV-light handling and resilience against solarization. The blue curve shows the transmission ratio of an 8 m long IC-HCPCF when a beam from a 355 nm microchip laser (1 kHz repetition rate, 5 mW average power) is coupled at full energy (5 µJ) into the fiber. The transmission was monitored over 6 months. The transmission coefficient shows a stable behavior, as illustrated by a normal distribution with an average of 80% and a standard deviation of 1% (inset in the bottom right of Fig. 1b). The fiber-output beam profile was also monitored regularly over the 6-month transmission monitoring period. The results show a Gaussian-like profile consistent with the fundamental fiber core mode. To put this result into context, we also monitored the transmission of a 1 m long, 8 µm core-diameter SMF (red curve). Here, only 0.5 µJ of the 355 nm laser was coupled into the fiber. The results show that the transmission irreversibly dropped to 15% after 24 h.
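For reference, the cutback figures above reduce to simple arithmetic on the transmitted powers. A minimal sketch (Python; the power readings below are illustrative placeholders chosen to reproduce the 5 dB/km figure, not measured data):

```python
import math

def cutback_loss_db_per_km(p_short: float, p_long: float,
                           len_long_m: float, len_short_m: float) -> float:
    """Fiber attenuation from the powers transmitted before/after the cutback.

    p_short: power after the short (cut-back) fiber; p_long: power after the
    full-length fiber, in the same (arbitrary) units.
    """
    excess_loss_db = 10.0 * math.log10(p_short / p_long)  # extra loss of the long span
    return excess_loss_db / ((len_long_m - len_short_m) / 1000.0)  # normalize to dB/km

# Placeholder readings that reproduce ~5 dB/km over the 104.5 m / 8.5 m spans:
print(cutback_loss_db_per_km(p_short=1.0, p_long=0.894,
                             len_long_m=104.5, len_short_m=8.5))  # ~5.07 dB/km
```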
Demonstration of HCPCF-based UV sources
To experimentally demonstrate the UV sources referenced above, the tips of two sections of the fabricated and characterized IC-HCPCF were mounted into gas cells and loaded with a Raman gas. The formed photonic microcells were then implemented in optical setups for Raman source generation. For the Raman comb demonstration, a 355 nm microchip laser (emitting 0.8 ns pulses with a repetition rate of 1 kHz) was used. For the ozone DIAL application, we used a laser emitting 1 ns pulses at 266 nm (with a maximum energy of 40 µJ and a 1 kHz repetition rate). The setup comprises standard optics for beam steering, collimation, and coupling, and a set of polarizers and waveplates for laser-beam polarization control, necessary to regulate the Raman spectral structure [14]. The coupling efficiency is typically higher than 90%.
UV comb pumped by a 355 nm microchip laser
For the UV comb generation, the fiber was filled with hydrogen, and the laser energy was set to its maximum level (15 µJ). To optimize the UV components of the spectrum, a systematic study was performed by changing the hydrogen pressure and the fiber length. The spectra were measured via free-space coupling to a photo-spectrometer with a wavelength coverage of 190-1100 nm and a resolution finer than 2 nm. Fig. 2a shows the generated stimulated Raman scattering spectra for pressure levels of 5, 7, 11, and 15 bar, respectively. Here, the fiber length was fixed at 180 cm. The laser polarization was initially set to a circular state for rotational Raman enhancement. Then, it was finely tuned to maximize the power of the spectral components in the short wavelength range, so as to compensate for the small polarization change during propagation in the fiber. All the recorded spectra show discrete components corresponding to the different orders of Stokes and anti-Stokes rotational and vibrational Raman transitions. A Raman line is labeled by the integer couple (±n, ±m), where n and m correspond to the Stokes or anti-Stokes order of the vibrational and rotational Raman transition, respectively. The negative and positive signs relate to Stokes and anti-Stokes, respectively. The shadowed regions in Fig. 2 identify the fiber transmission bands.
The results show that, when the pressure is increased, the conversion is shifted towards longer wavelengths. This is partly explained by the fact that the higher-order vibrational Stokes lines are generated in the fiber low-loss region (e.g., λ(−1,0) = 504 nm, λ(−2,0) = 637 nm, and λ(−3,0) = 867 nm), while the anti-Stokes ones lie in higher-loss regions (e.g., λ(1,0) = 310 nm and λ(2,0) = 274 nm). Hence, to limit the gain of the Stokes transitions and, thus, to increase the number of Raman lines in the UV, we need to operate at pressure levels below 7 bar. Furthermore, in this pressure range, the magnitude of the Raman coefficient of the rotational transition is much closer to that of the vibrational resonance when compared to higher gas pressure.
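The line positions follow from simple energy conservation in wavenumber space. A hedged sketch (Python) using the standard literature values of the H2 Raman shifts (~4155 cm^-1 vibrational, ~587 cm^-1 rotational, assumed rather than taken from this work) reproduces the anti-Stokes wavelengths quoted above:

```python
VIB_SHIFT_CM1 = 4155.0  # H2 vibrational Raman shift (assumed standard value)
ROT_SHIFT_CM1 = 587.0   # H2 rotational Raman shift (assumed standard value)

def raman_line_nm(pump_nm: float, n_vib: int, m_rot: int) -> float:
    """Wavelength of the (n, m) Raman line; positive orders are anti-Stokes,
    negative orders are Stokes."""
    pump_cm1 = 1e7 / pump_nm  # pump wavenumber in cm^-1
    line_cm1 = pump_cm1 + n_vib * VIB_SHIFT_CM1 + m_rot * ROT_SHIFT_CM1
    return 1e7 / line_cm1

print(raman_line_nm(355.0, 1, 0))   # ~309 nm, matching the (1,0) anti-Stokes line
print(raman_line_nm(355.0, 2, 0))   # ~274 nm, matching the (2,0) anti-Stokes line
print(raman_line_nm(355.0, 0, -1))  # ~363 nm, first rotational Stokes line
```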
A further pathway to optimize the Raman conversion to the UV is to conveniently adjust the fiber length. Fig. 2b shows the generated spectra for four different fiber lengths, namely 60, 120, 180, and 240 cm. Here, the pressure of hydrogen was fixed at 5 bar. The results show spectra with more than 40 Raman lines between 270 nm and 910 nm. Of particular interest is the generation of the first two vibrational anti-Stokes lines at the wavelengths λ(1,0) = 310 nm and λ(2,0) = 274 nm, which was enabled by the low-loss guidance of the IC-HCPCF. Furthermore, the results show that, to obtain the highest generation in the UV, the optimum fiber length is 120 cm. For longer fiber lengths, the loss in the UV becomes a limiting factor. Conversely, if the fiber length is shorter than 120 cm, the gain is not high enough to obtain a maximum of Raman transitions in the UV. Fig. 3a shows the optimal comb spectrum together with a picture of the projected beam obtained by using a diffraction grating (top of Fig. 3a). This procedure allowed measuring the power of each line and the corresponding spectral bandwidth to calculate the spectral densities. Fig. 3b presents the spectral density of a close-up of the spectral comb in the 265-375 nm wavelength range. The results show that, in the 345-375 nm wavelength range, 4 Raman components exhibit power spectral densities higher than 200 µW/nm. In the 305-345 nm range, 5 lines display spectral densities above 50 µW/nm. Furthermore, the 7 lines spreading from 269 to 305 nm show spectral densities above 4 µW/nm. The resulting Raman comb therefore covers 7 lines in the middle-UV and 14 lines in the near-UV. The lines situated below 300 nm show a lower spectral density, but one significant enough for several applications.
Dual-wavelength laser source for DIAL LIDAR of O3
This section reports on the second Raman source. The experimental set-up is similar to the one used for the UV-comb generator, except for the pump laser, which, for this application, emits at 266 nm. Here, we explored two configurations. The first one uses a hydrogen-filled fiber and is configured to emit at 266 nm and 299 nm. In the second configuration, the fiber is filled with deuterium to emit at the wavelength pair of 266 nm and 289 nm. The choice of the pump laser wavelength and of the filling gases (D2 and H2) is such that the emission wavelengths of the Raman source lie within the ozone Hartley absorption band. Fig. 4a shows the normalized ozone absorption spectrum in the spectral region around 255 nm (green dashed curve). The blue and red curves show the emitted spectrum from the H2- and D2-based Raman sources, respectively. Both spectra show a power ratio between the two components of ~1:0.3. This power ratio is set to enhance the differential absorption signal in ozone detection DIAL and was achieved by adjusting the fiber length and the gas pressure according to a systematic study that consisted of measuring the evolution of the Raman source emission spectrum for different gas pressures and fiber lengths. The results of this study show that, to attain the optimal power ratio when using D2 as the filling gas, one should use a 25 cm long fiber and a gas pressure of 3.2 bar. When using H2, the optimum conditions were found to be a 7 cm fiber length and a gas pressure of 5 bar. Finally, the measured energy of the Raman sources when the fiber is pumped by 40 μJ pulses was found to be 30 μJ for the H2 configuration and 32 μJ for the D2 configuration. In both cases, the whole system sits on a breadboard 50 cm long and 30 cm wide (Fig. 4b displays a diagram of the HCPCF-based DIAL LIDAR source reported in this manuscript). The system is, therefore, highly compact and represents a substantial size reduction relative to current DIAL lasers, which are typically deployed in trucks due to their large sizes [15].
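For context, the sensitivity argument behind the 1:0.3 power ratio comes from the standard two-wavelength DIAL retrieval, in which the ozone number density in a range cell follows from the ratio of 'on' (more absorbed, 266 nm) and 'off' (289/299 nm) backscatter signals. A minimal sketch of that textbook retrieval (Python; the cross-section and signal values are placeholders, not data from this work):

```python
import math

def dial_o3_density(p_on_near, p_off_near, p_on_far, p_off_far,
                    delta_sigma_cm2, delta_r_cm):
    """Ozone number density (cm^-3) in the range cell [r, r + delta_r].

    p_*_near / p_*_far: backscatter powers at ranges r and r + delta_r for the
    'on' and 'off' wavelengths; delta_sigma_cm2 = sigma_on - sigma_off.
    """
    ratio_near = p_on_near / p_off_near
    ratio_far = p_on_far / p_off_far
    return math.log(ratio_near / ratio_far) / (2.0 * delta_sigma_cm2 * delta_r_cm)

# Placeholder example: 1% extra differential extinction over a 100 m cell,
# with an assumed differential cross section of 8e-18 cm^2:
n_o3 = dial_o3_density(0.99, 1.0, 0.98, 1.0, 8e-18, 1e4)
print(f"{n_o3:.2e} cm^-3")
```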
Conclusions
We reported on the fabrication of an IC-HCPCF exhibiting record low transmission loss in the UV-visible range. The measured loss showed figures that are well below the silica Rayleigh scattering limit for all the transmission bands lying between 600 nm and 250 nm, with a minimum value of 5 dB/km at 480 nm. The fiber was found to be very resilient to solarization and, thus, we used it to develop two Raman sources emitting in the UV range.
The robustness of the fiber to UV transmission, combined with a simple set-up, has led to the creation of a UV comb generator, coined the UV-Comblas, with no less than 20 lines in the UV range with high spectral density, which has the potential to address the needs of biomedical applications. Remarkably, the use of the record low-loss IC-HCPCF reported herein allowed the Raman comb to be extended to wavelengths as low as 270 nm while using a modest DPSS pump.
By changing the pump laser to one emitting at 266 nm, we demonstrated the realization of a dual-wavelength laser source emitting at 266 nm and 289 nm, or at 266 nm and 299 nm, with a relative power ratio of 1:0.3 between the 266 nm and 289 nm (or 299 nm) emissions, depending on the Raman-active gas that fills the fiber. This source, which is the first dual-wavelength Raman source tuned to the O3 absorption band in the UV, is highly compact and significantly reduces the footprint of current systems, hence increasing its applicability in practical ozone concentration measurements and photochemical studies.
Fig. 1. (a) Measured transmission loss spectrum. The blue solid curve represents the data recorded using a supercontinuum source and an optical spectrum analyzer. The symbols represent the data obtained using laser sources and a power meter. (b) Long-term power stability over a 6-month period. Insets show a typical near-field output profile and a histogram of the fiber transmission data.
Fig. 2. (a) Comb spectra obtained using H2 at 5, 7, 11, and 15 bar for a 180 cm fiber length. (b) Comb spectra for fibers with lengths of 60, 120, 180, and 240 cm filled with H2 at 5 bar.
Fig. 3. (a) Comb spectrum for a 120 cm long fiber filled with H2 at 5 bar. The diffracted output comb pattern is displayed in the top part of the figure. (b) Zoom into the wavelength range between 265 nm and 375 nm.
Acknowledgments
This research was funded by the PIA program (grant 4F).
References
[1] M. R. E. Lamont, Y. Okawachi, A. L. Gaeta, "Route to stabilized ultrabroadband microresonator-based frequency combs," Opt. Lett. 38, 3478 (2013).
[2] P. A. Monks, A. T. Archibald, A. Colette, O. Cooper, M. Coyle, R. Derwent, D. Fowler, C. Granier, K. S. Law, G. E. Mills, D. S. Stevenson, O. Tarasova, V. Thouret, E. von Schneidemesser, R. Sommariva, O. Wild, M. L. Williams, "Tropospheric ozone and its precursors from the urban to the global scale from air quality to short-lived climate forcer," Atmos. Chem. Phys. 15, 8889-8973 (2015).
[3] F. Ravetta, G. Ancellet, A. Colette, H. Schlager, "Long-range transport and tropospheric ozone variability in the western Mediterranean region during the Intercontinental Transport of Ozone and Precursors (ITOP-2004) campaign," J. Geophys. Res. Atmos. 112(D10), D10S46 (2007).
[4] A. Papayannis, G. Ancellet, J. Pelon, G. Mégie, "Multiwavelength lidar for ozone measurements in the troposphere and the lower stratosphere," Appl. Opt. 29, 467-476 (1990).
[5] D. Basting, K. Pippert, U. Stamm, "History and future prospects of excimer laser technology," Riken Review No. 43 (2002).
[6] F. Benabid, J. C. Knight, G. Antonopoulos, P. S. J. Russell, "Stimulated Raman scattering in hydrogen-filled hollow-core photonic crystal fiber," Science 298, 399-402 (2002).
[7] W. W. Duley, UV Lasers: Effects and Applications in Materials Science, Cambridge University Press (2005).
[8] F. Yu, M. Cann, A. Brunton, W. Wadsworth, J. Knight, "Single-mode solarization-free hollow-core fiber for ultraviolet pulse delivery," Opt. Express 26, 10879-10887 (2018).
[9] A. Benoît, B. Beaudou, M. Alharbi, B. Debord, F. Gérôme, F. Salin, F. Benabid, "Over five octaves wide Raman combs in high power picosecond-laser pumped H2 filled in inhibited coupling Kagome fiber," Opt. Express 23, 14002 (2015).
[10] M. Chafer, J. H. Osório, F. Amrani, F. Delahaye, M. Maurel, B. Debord, F. Gérôme, F. Benabid, "1-km hollow-core fiber with loss at the silica Rayleigh limit in the green spectral region," IEEE Photon. Technol. Lett. 31, 685-689 (2019).
[11] S. Gao, Y. Wang, W. Ding, P. Wang, "Hollow-core negative-curvature fiber for UV guidance," Opt. Lett. 43, 1347-1350 (2018).
[12] F. Yu, M. Cann, A. Brunton, W. Wadsworth, J. Knight, "Single-mode solarization-free hollow-core fiber for ultraviolet pulse delivery," Opt. Express 26, 10879-10887 (2018).
[13] B. Debord, A. Amsanpally, M. Chafer, A. Baz, M. Maurel, J. M. Blondy, E. Hugonnot, F. Scol, L. Vincetti, F. Gérôme, F. Benabid, "Ultralow transmission loss in inhibited-coupling guiding hollow fibers," Optica 4, 209-217 (2017).
[14] R. W. Minck, E. E. Hagenlocker, W. G. Rado, "Stimulated pure rotational Raman scattering in deuterium," Phys. Rev. Lett. 17, 229-231 (1966).
[15] A. O. Langford, R. J. Alvarez II, G. Kirgis, C. J. Senff, D. Caputi, S. A. Conley, I. C. Faloona, L. T. Iraci, J. E. Marrero, M. E. McNamara, J. Ryoo, E. L. Yates, "Intercomparison of lidar, aircraft, and surface ozone measurements in the San Joaquin Valley during the California Baseline Ozone Transport Study (CABOTS)," Atmos. Meas. Tech. 12, 1889-1904 (2019).
| []
|
[
"High speed outflows driven by the 30 Doradus starburst",
"High speed outflows driven by the 30 Doradus starburst"
]
| [
"M P Redman \nDepartment of Physics and Astronomy\nUniversity College London\nGower StreetWC1E 6BTLondonUK\n",
"Z A Al-Mostafa \nJodrell Bank Observatory\nUniversity of Manchester\nSK11 9DLMacclesfieldUK\n\nKing Abdulaziz City for Science and Technology, Astronomy and Geophysics Research Institute\nP.O.Box 608611442RiyadhSaudi Arabia\n",
"J Meaburn \nJodrell Bank Observatory\nUniversity of Manchester\nSK11 9DLMacclesfieldUK\n",
"M Bryce \nJodrell Bank Observatory\nUniversity of Manchester\nSK11 9DLMacclesfieldUK\n"
]
| [
"Department of Physics and Astronomy\nUniversity College London\nGower StreetWC1E 6BTLondonUK",
"Jodrell Bank Observatory\nUniversity of Manchester\nSK11 9DLMacclesfieldUK",
"King Abdulaziz City for Science and Technology, Astronomy and Geophysics Research Institute\nP.O.Box 608611442RiyadhSaudi Arabia",
"Jodrell Bank Observatory\nUniversity of Manchester\nSK11 9DLMacclesfieldUK",
"Jodrell Bank Observatory\nUniversity of Manchester\nSK11 9DLMacclesfieldUK"
]
| [
"Mon. Not. R. Astron. Soc"
]
| Echelle spectroscopy has been carried out towards a sample region of the halo of the giant H ii region 30 Doradus in the Large Magellanic Cloud. This new kinematical data is amongst the most sensitive yet obtained for this nebula and reveals a wealth of faint, complex high speed features. These are interpreted in terms of localised shells due to individual stellar winds and supernova explosions, and collections of discrete knots of emission that still retain the velocity pattern of the giant shells from which they fragmented. The high speed velocity features may trace the base of the superwind that emanates from the 30 Doradus starburst, distributed around the super star cluster R136. | 10.1046/j.1365-8711.2003.06865.x | [
"https://export.arxiv.org/pdf/astro-ph/0308213v1.pdf"
]
| 18,287,086 | astro-ph/0308213 | e82754fab806256c4e95b0809f4a53eb5eeeb28d |
High speed outflows driven by the 30 Doradus starburst
2002
M P Redman
Department of Physics and Astronomy
University College London
Gower StreetWC1E 6BTLondonUK
Z A Al-Mostafa
Jodrell Bank Observatory
University of Manchester
SK11 9DLMacclesfieldUK
King Abdulaziz City for Science and Technology, Astronomy and Geophysics Research Institute
P.O.Box 608611442RiyadhSaudi Arabia
J Meaburn
Jodrell Bank Observatory
University of Manchester
SK11 9DLMacclesfieldUK
M Bryce
Jodrell Bank Observatory
University of Manchester
SK11 9DLMacclesfieldUK
High speed outflows driven by the 30 Doradus starburst
Mon. Not. R. Astron. Soc
000, 2002. arXiv:astro-ph/0308213v1 (12 Aug 2003). Printed 28 February 2022 (MN LaTeX style file v2.2). Keywords: galaxies: starburst - galaxies: Magellanic Clouds - H ii regions - ISM: kinematics and dynamics - ISM: supernova remnants - ISM: individual (30 Doradus)
Echelle spectroscopy has been carried out towards a sample region of the halo of the giant H ii region 30 Doradus in the Large Magellanic Cloud. This new kinematical data is amongst the most sensitive yet obtained for this nebula and reveals a wealth of faint, complex high speed features. These are interpreted in terms of localised shells due to individual stellar winds and supernova explosions, and collections of discrete knots of emission that still retain the velocity pattern of the giant shells from which they fragmented. The high speed velocity features may trace the base of the superwind that emanates from the 30 Doradus starburst, distributed around the super star cluster R136.
INTRODUCTION
The 30 Doradus nebula in the Large Magellanic Cloud (LMC) is the closest example of a giant extragalactic H ii region and the largest in the local group of galaxies. It is regarded as undergoing intense enough star formation to be referred to as a 'mini-starburst' by Leitherer (1998) and as such is an important nearby laboratory of both massive star formation and starburst phenomena. The highly dynamic nebulosity (e.g. Meaburn 1981; Meaburn 1987) is powered by a super star cluster of ∼ 100 massive stars. Remarkable HST imagery of the environment of the central cluster of massive stars has recently been presented by Walborn et al. (2002). The combined winds, UV radiation and supernova explosions from so many massive stars at a similar evolutionary epoch enable the generation of the nested giant (20 - 300 pc diameter) shells that comprise the giant H ii region (Meaburn 1980; Meaburn 1990; Leitherer 1998). On the largest scales, surrounding 30 Dor are supergiant (600 - 1400 pc diameter) interstellar shells such as LMC3.
The term 'shell' will be used in this paper rather than the commonly used term 'bubble', since it is preferable to use a term that is dynamically neutral and constrained to no specific geometry (e.g. spherical). The term 'bubble' often erroneously presupposes a roughly spherical, pressure-driven, energy-conserving shell. This is certainly not the case for the supergiant shells, which are unlikely to be either spherical or energy-conserving. The division between 'giant' and 'supergiant' when applied to the LMC shells will be for the diameter ranges above, as recently confirmed by the H i observations of Kim et al. (1999). Different, though related, mechanisms must be involved in the formation of LMC shells in these distinctly separate diameter ranges. The most important difference is that supergiant shells have diameters in excess of the neutral gas scale-height of the LMC.
The overlapping giant shells comprising the halo of 30 Doradus have been shown to be expanding at around 50 km s −1 (e.g. Meaburn 1984; Chu & Kennicutt 1994), whereas a multitude of 15 pc diameter regions exhibit outflows of 200 km s −1. The latter were interpreted as young supernova remnants in the perimeters of giant shells (Meaburn 1988). Fig. 1 is a cartoon that illustrates the hierarchy of scale sizes present in a giant H ii region like 30 Doradus.
The brightest, dominant velocity components of 30 Doradus are complex but seem to be comprised of three distinct velocity regimes corresponding to H i sheets along the sightline. These are at 250 km s −1, 270 km s −1 and 300 km s −1 (McGee et al. 1978; Chu & Kennicutt 1994; Kim et al. 1999). In this paper, the systemic velocity, Vsys, is taken to be the average heliocentric velocity (VHEL) of these components, 270 km s −1, in agreement with previous observations (see for example Peck et al. 1997; Meaburn 1991; Garay et al. 1993; Clayton 1987).
In this work, the aim is to investigate the faint highest speed phenomena in the halo of 30 Doradus in order to complete the kinematical characterisation of this important giant H ii region. New echelle observations of the line profiles of the highest speed phenomena in the halo of 30 Doradus have been made with unprecedented sensitivity. These are described in Section 2. The region of study was in the vicinity of the Wolf-Rayet star R130 and was selected as a representative portion of the halo of 30 Doradus. This area is located in 'shell 3', one of the several giant shells that comprise the 30 Doradus region (nomenclature from Meaburn 1980; Chu & Kennicutt 1994; see also Wang & Helfand 1991), and was one of the regions investigated by Chu & Kennicutt (1994). Section 3 is a discussion of the high-speed motions and morphologies in terms of the structures and dynamics of the giant shells that comprise 30 Doradus. How the high speed features relate to the outflow of hot gas from the 30 Doradus nebula is also discussed. Conclusions are drawn in Section 4.
OBSERVATIONS AND RESULTS
A reproduction of the ESO PR Photo 14a/02 is shown in Fig. 2. This is an optical image of the 30 Doradus nebula obtained with the Wide Field Imager camera on the 2.2-m MPG/ESO telescope at the La Silla Observatory. A white box marks the region which is investigated spectroscopically in the present work.
Spatially resolved, long-slit echelle spectra of [O iii] 5007Å emission lines were obtained using the Manchester Echelle Spectrometer (MES) at the 3.9-m Anglo-Australian Telescope (AAT) on the night of 1994 October 17. The data were obtained using three parallel east-west slits simultaneously, separated by 45 ′′, which were then stepped southwards in 2 ′′ increments. The slit length was 165 ′′ and the slit width was 300 µm (≡ 2 ′′ and 20 km s −1). The broad slit compromises spatial and spectral resolution but permits detection of the faintest, highest speed components of the line profiles. The exposure time for each multi-slit position was 1800 s. The locations of all the slit positions are shown in Fig. 3, and the boundary of this figure corresponds to the white box in Fig. 2. In the discussion that follows, a slit is identified by which block of six it appears in. From north to south the blocks are labelled A, B, and C, while within a block, also from north to south, the slits are numbered from 1 to 6. The data were processed in the standard manner.
Greyscale representations of the position-velocity (pv) arrays of line profiles obtained from all the slits are displayed in Figs 4-6. These data are also presented and discussed in Al-Mostafa (1999). The scale is logarithmic in these three figures to enable all features to be discerned. In Figs 7 and 8, deep representations of the data from slits C4 and C6 are displayed in order to highlight the faintest, high velocity material.
These data are amongst the most sensitive obtained for the halo of 30 Doradus and reveal a wealth of faint, complex high speed features. Chu & Kennicutt (1994) carried out echelle spectroscopy across the halo of 30 Doradus, but many of the faint, highest speed phenomena revealed here were not detected. In the following discussion, general trends amongst the wealth of complex kinematical features are highlighted. Discussion of individual features may be found in Al-Mostafa (1999). Adopting a distance to the LMC of 55 kpc gives 1 ′′ ≈ 0.27 pc.
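The quoted plate scale is the usual small-angle conversion; a one-line check (Python, standard constants only):

```python
ARCSEC_IN_RAD = 1.0 / 206265.0  # radians per arcsecond
print(55_000 * ARCSEC_IN_RAD)   # pc per arcsec at 55 kpc -> ~0.267
```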
Bright systemic features
The pv arrays are dominated by very bright continuous velocity features. These are due to the ionized overlapping H i sheets close to the systemic velocity being disturbed by the slowly expanding (50 km s −1) giant shells around 30 Doradus (Meaburn 1980; Chu & Kennicutt 1994). These systemic kinematical structures have been investigated in detail in earlier work on the halo of 30 Doradus.
Discrete high speed knots
At the smallest scale (a few arcseconds or approximately one parsec), the pv arrays contain numerous localised high speed (±200 km s −1 with respect to the Vsys) velocity knots which do not appear to vary widely in spatial scale. They are visible in all the pv arrays and represent the finest-scale high speed substructure detected here.
Velocity loops and arcs
At a slightly larger scale (around ten arcseconds or a few parsecs), individual loops and arcs are discerned. They are not continuous in the pv arrays but are coherent velocity features made up of the discrete velocity knots. The clearest example is that seen at slit position C6 ( Fig. 6 and the deep representation, Fig. 8).
Large scale coherent velocity features
At the largest scales (tens of arcseconds or approximately ten parsecs), high speed knots are found to trace out coherent velocity features that slowly vary between being red and blue shifted. The clearest example is perhaps that in slit position B3 where at offsets of approximately 0 to 50 ′′ the feature is redshifted and between around 50 and 100 ′′ it is blueshifted with respect to the bright continuous feature.
DISCUSSION
Giant shells in 30 Doradus
The current explanation for the origin of the 30 Doradus nebula is that, as the massive stars within the nebula evolve, their winds (especially during the Wolf-Rayet phase) and subsequent supernova explosions generate swept-up shells of ionized gas (Meaburn 1988, 1991). The shells are observed to have a hierarchy of sizes and velocities as one moves further into the halo of 30 Doradus. In the dense centre, the shell sizes and velocities are ∼ 1 pc and ∼ 10 − 50 km s −1 respectively, while in the halo the sizes reach ∼ 100 pc with velocities of up to 100 km s −1 (see Fig. 1). The shells are prone to instabilities and can break up and fragment, venting the interior pressure. For example, a dense shell that is accelerating (due to either a rapid drop in the external density or to a new supernova explosion within the shell) may break up via the Rayleigh-Taylor instability. Alternatively, dynamical overstabilities can also lead to a shell breaking up into fragments (Mac Low & Norman 1993). In both cases, the fragments produced will have sizes of the order of the shell thickness. In general, the halo of 30 Doradus, into which the shells are expanding, is inhomogeneous. This inhomogeneity, and the disruptive effects of nearby supernovae and winds from stars not within the original shell, will mean that a shell will not remain coherent for long.
It is important to note that the LMC is thought to be flattened and viewed close to face-on. The scale height of the H i in the LMC disk was calculated by Kim et al. (1999) to be ∼ 180 pc, so it is likely that the structure of 30 Doradus may also be somewhat flattened. The scale-height imposes a limit on the sizes of the shells that can be formed, irrespective of how intense and coeval the massive star activity is. As the shells grow, they become elongated in the direction of the density scale height (Koo & McKee 1992), leading to a break up of the shell in this direction and a 'blow-out' of the hot interior gas into the galactic halo. The remaining structure is known as a galactic chimney (Norman & Ikeuchi 1989), and such chimneys have been observed in the Galaxy (Normandeau et al. 1996) and in the starburst galaxy M82 (Wills et al. 1999).
The giant shells of 30 Doradus are likely to be the maximum sized spherical momentum-conserving shell structures, since the scale height of the LMC is comparable to their diameters. The supergiant shells far exceed this scale height and may be collections of fossil chimneys viewed face-on, and also the result of propagating star formation (see e.g. McCray & Kafatos 1987) that is constrained to proceed in the plane of the galaxy, resulting in a ring-shaped supergiant shell. The loss of driving pressure means they are expanding in a momentum conserving phase and surround a low density cavity. Such cavities are clearly seen in the H i data of Kim et al. (1999) and Staveley-Smith et al. (2002). The hot gas that has escaped from the interior of the giant shells will enter the LMC halo. There is strong evidence for such a halo in the LMC. Wakker et al. (1998) used GHRS/HST observations to detect C iv absorption towards LMC stars that do not reside within a shell. This means the hot gas implied by these observations is not local to the star and is likely to reside in the halo (see also Savage et al. 1997).
Origin of the velocity features
It would seem unlikely that the high speed knots represent random gas clouds within 30 Dor, since that would require an explanation for both their hypersonic velocities and the systematic way they are distributed about the pv arrays. In terms of the scenario discussed above (Section 3.1), a straightforward interpretation of the kinematical features is as follows. The largest scale features represent old giant shells that have broken up via Rayleigh-Taylor (RT) instabilities. An instability is generated as the shell is accelerated by its interior pressure through a decreasing ambient density. Those portions of the shell that are expanding in the plane of the LMC are less prone to disruption. The fragments that used to be part of the shell have continued to coast at the pre-break-up velocity, and together these remain as a coherent velocity feature. For the RT instability, the characteristic knot size at the break-up of the shell will be of the order of the thickness of the shell. The sizes implied by this picture seem reasonable: the old shell will have a dimension of up to a hundred parsecs towards the outer regions of the nebulosity, while the shell wall will have had a thickness of a few parsecs. These estimates are in accord with measurements of intact shells observed within the halo of 30 Doradus and elsewhere in the LMC (e.g. Oey 1996).
The smaller chains of high-speed knots could be due to more localised disruptions of the giant shells caused by, for example, a neighbouring supernova explosion. This latter scenario was proposed by Redman et al. (1999) to explain the unique Honeycomb nebula, which lies in the halo of 30 Doradus. They argued that its cellular structure is due to a shell that has begun to fragment by an RT instability being impacted by a blast wave from a nearby SN explosion. There have been ∼ 40 supernova explosions within the halo of 30 Doradus in the last 10^4 yr alone (Meaburn 1991) (in comparison, in the starburst galaxy M82 there have been ≃ 50 SNe in the last ≃ 200 yr; Muxlow et al. 1994).
In the H i pv array data of Staveley-Smith et al. (2002), the LMC and the Galaxy are well separated in velocity. In their data, the Galaxy does not exhibit kinematical features with VHEL ≳ 100 km s −1, while the LMC does not exhibit kinematical features with VHEL ≲ 100 km s −1. In our data, faint velocity features are seen from the VHEL of 30 Doradus down to VHEL ≈ 100 km s −1. However, it is unlikely that these velocity features are associated with the Galaxy rather than the LMC for several reasons. Firstly, such features are not seen at slit positions offset from 30 Doradus; secondly, many of the features can be traced back to the 30 Doradus systemic velocity; thirdly, there is no known Galactic H ii region or ionizing source along the line of sight that could excite the [O iii] 5007Å emitting gas, and there is also no extensive background [O iii] 5007Å emission in the Galaxy (compare Figs 7 and 8 here with figure 9 of Staveley-Smith et al. 2002).
Superwind from 30 Doradus
Starburst activity can give rise to a 'superwind' due to the intense radiation fields, winds and supernova explosions caused by the star formation. The extensive high velocity features revealed here may be marking the very base of a superwind localised around the 30 Doradus complex. The escape velocity of gas from the LMC in the neighbourhood of 30 Doradus is around 150 km s −1, so the high speed ionized gas from disrupted shells and giant shells is escaping the gravitational pull of 30 Doradus and is being ejected perpendicularly to the plane of the LMC, along the line of sight. The ionization boundary due to the R130 cluster (perpendicular to the plane of the LMC) will depend on the distribution of the gas, but an upper limit of a few hundred parsecs can be estimated by calculating the Strömgren radius due to an ionizing flux of ∼ 10^51 s −1 from the ∼ 100 O stars and a mean gas density of ∼ 1 cm −3. Assuming an ejection speed of 200 km s −1, it will take of the order of 2 × 10^6 yr to reach a distance of ∼ 500 pc from the point of origin and thus escape the 30 Doradus region. The gas will rapidly recombine once it has passed the ionization boundary and will then not be visible on the [O iii] 5007Å pv arrays. The dynamical timescale of the remaining giant shell walls is much longer, since their progress in the direction of the plane of the LMC is slower than the material ejected perpendicular to the plane. High velocity H i clouds will be formed by the escaping material as it recombines, and these clouds may be detectable in high resolution and high sensitivity H i kinematical studies. Of course, the kinematics of gas ejected from the LMC rapidly becomes highly complex due to the interaction with the Galaxy (Wakker & van Woerden 1997).
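A back-of-envelope check of these numbers (Python; the recombination coefficient is the standard case-B value at ~10^4 K, and the uniform-density Strömgren sphere is an idealization):

```python
import math

ALPHA_B = 2.6e-13   # case-B recombination coefficient, cm^3 s^-1 (standard value)
PC_CM = 3.086e18    # parsec in cm
YR_S = 3.156e7      # year in seconds

Q = 1e51            # ionizing photon rate, s^-1 (~100 O stars, as above)
n = 1.0             # mean gas density, cm^-3

r_s_cm = (3.0 * Q / (4.0 * math.pi * n**2 * ALPHA_B)) ** (1.0 / 3.0)
print(f"Stromgren radius ~ {r_s_cm / PC_CM:.0f} pc")   # a few hundred pc

t_esc_s = (500.0 * PC_CM) / (200.0 * 1e5)              # 500 pc at 200 km/s
print(f"escape time ~ {t_esc_s / YR_S:.1e} yr")        # ~2.4e6 yr
```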
CONCLUSIONS
In this work the kinematics of a sample region of the 30 Doradus nebula have been investigated using the MES. This intensive study has revealed high speed velocity features throughout this region. Although the kinematics are complex, general patterns are discerned at three different spatial scales. Small coherent velocity features are present throughout the region. These knots are often found to form loops and chains in the pv arrays and, at the largest scales, can form velocity features which vary slowly between red- and blue-shifted emission. It is suggested that all of these features are explicable in terms of the current understanding of the 30 Doradus nebula. Shells and giant shells formed by the winds and supernovae of massive stars form and are then disrupted in the energetic turbulent environment of the halo of 30 Doradus. The fragments of the shells retain the velocity pattern of the original shell and are observed as the small high speed knots. If this explanation is correct, then high velocity knots are likely to be found across much of the face of 30 Doradus, wherever the sizes of the giant shells have exceeded the scale-height of the LMC and led to 'blow-out'. The whole 30 Doradus nebula is flattened and viewed face-on. The high speed velocity fragments are likely to form the base of an outflowing superwind that is escaping the galaxy. This is a microcosm of the processes that are taking place in starburst galaxies such as M82, in which there are many super star clusters like 30 Doradus and whose combined output leads to the spectacular optical filaments that mark the M82 superwind.
Figure 1. Cartoon of the 30 Doradus nebula to illustrate the hierarchy of shell scale sizes. The ambient density increases towards R136.
Figure 2. 30 Doradus nebula. This figure is a cropped reproduction of ESO PR Photo 14a/02. The white box marks the region from which line profiles were obtained. The size of the region displayed is approximately 200 pc across.
Figure 3. MES slit positions marked against the background nebulosity. This region corresponds to the white box in the previous figure.
Figure 4. Position-velocity arrays of line profiles from slit block A. The vertical scale is heliocentric radial velocity, VHEL (km s −1), and the horizontal scale is in arcseconds. At the distance of the LMC, 1 ′′ = 0.27 pc.
Figure 5. Position-velocity arrays of line profiles from slit block B. The vertical scale is VHEL (km s −1) and the horizontal scale is in arcseconds. At the distance of the LMC, 1 ′′ = 0.27 pc.
Figure 6. Position-velocity arrays of line profiles from slit block C. The vertical scale is VHEL (km s −1) and the horizontal scale is in arcseconds. At the distance of the LMC, 1 ′′ = 0.27 pc.
Figure 7. Deep presentation of position-velocity arrays of line profiles from slit position C4 to highlight fainter features.
Figure 8. Deep presentation of position-velocity arrays of line profiles from slit position C6 to highlight fainter features.
ACKNOWLEDGEMENTS
JM and MB would like to thank the staff at the AAT, who provided their usual excellent service during the observing run. MPR is supported by PPARC. A King Abdulaziz City for Science and Technology 'KACST' studentship is acknowledged by ZAA. We thank the referee for comments which improved the paper.
REFERENCES
Al-Mostafa Z. A., 1999, PhD thesis, University of Manchester
Chu Y. H., Kennicutt R. C., 1994, ApJ, 425, 720
Clayton C. A., 1987, A&A, 173, 137
Garay G., Rodríguez L. F., Moran J. M., Churchwell E., 1993, ApJ, 418, 368
Kim S., Dopita M. A., Staveley-Smith L., Bessell M. S., 1999, AJ, 118, 2797
Koo B. C., McKee C. F., 1992, ApJ, 388, 93
Leitherer C., 1998, in Stellar Astrophysics for the Local Group: VIII Canary Islands Winter School of Astrophysics, Populations of Massive Stars and the Interstellar Medium, p. 527
Mac Low M. M., Norman M. L., 1993, ApJ, 407, 207
McCray R., Kafatos M., 1987, ApJ, 317, 190
McGee R. X., Newton L. M., Butler P. W., 1978, MNRAS, 183, 799
Meaburn J., 1980, MNRAS, 192, 365
Meaburn J., 1981, MNRAS, 196, 19
Meaburn J., 1984, MNRAS, 211, 521
Meaburn J., 1987, MNRAS, 229, 457
Meaburn J., 1988, MNRAS, 235, 375
Meaburn J., 1990, MNRAS, 244, 551
Meaburn J., 1991, in Haynes R., Milne D., eds, IAU Symposium 148, The Magellanic Clouds, Studies of the Large Magellanic Cloud using optical interstellar emission lines, p. 421
Meaburn J., Blundell B., Carling R., Gregory D. F., Keir D., Wynne C. G., 1984, MNRAS, 210, 463
Muxlow T. W. B., Pedlar A., Wilkinson P. N., Axon D. J., Sanders E. M., de Bruyn A. G., 1994, MNRAS, 266, 455
Norman C. A., Ikeuchi S., 1989, ApJ, 345, 372
Normandeau M., Taylor A. R., Dewdney P. E., 1996, Nature, 380, 687
Oey M. S., 1996, ApJ, 467, 666
Peck A. B., Goss W. M., Dickel H. R., et al., 1997, ApJ, 486, 107
Redman M. P., Al-Mostafa Z. A. A., Meaburn J., Bryce M., Dyson J. E., 1999, A&A, 345, 943
Savage B. D., Sembach K. R., Lu L., 1997, AJ, 113, 2158
Staveley-Smith L., Kim S., Calabretta M. R., et al., 2002, MNRAS, in press
Wakker B., Howk J. C., Chu Y. H., Bomans D., Points S. D., 1998, ApJ, 499, L87
Wakker B. P., van Woerden H., 1997, ARA&A, 35, 217
Walborn N. R., Maíz-Apellániz J., Bardá R. H., 2002, AJ, 124, 1601
Wang Q., Helfand D. J., 1991, ApJ, 370, 541
Wills K. A., Redman M. P., Muxlow T. W. B., Pedlar A., 1999, MNRAS, 309, 395
| []
|
[
"EVA-Planner: Environmental Adaptive Quadrotor Planning",
"EVA-Planner: Environmental Adaptive Quadrotor Planning"
]
| [
"Lun Quan ",
"Zhiwei Zhang ",
"Xingguang Zhong ",
"Chao Xu ",
"Fei Gao "
]
| []
| []
| The quadrotor is popularly used in challenging environments due to its superior agility and flexibility. In these scenarios, trajectory planning plays a vital role in generating safe motions to avoid obstacles while ensuring flight smoothness. Although many works on quadrotor planning have been proposed, a research gap exists in incorporating self-adaptation into a planning framework to enable a drone to automatically fly slower in denser environments and increase its speed in a safer area. In this paper, we propose an environmental adaptive planner to adjust the flight aggressiveness effectively based on the obstacle distribution and quadrotor state. Firstly, we design an environmental adaptive safety aware method to assign the priority of the surrounding obstacles according to the environmental risk level and instantaneous motion tendency. Then, we apply it in a multi-layered model predictive contouring control (Multi-MPCC) framework to generate adaptive, safe, and dynamically feasible local trajectories. Extensive simulations and real-world experiments verify the efficiency and robustness of our planning framework. Benchmark comparison also shows the superior performance of our method over another advanced environmental adaptive planning algorithm. Moreover, we release our planning framework as open-source ros-packages 1 . | 10.1109/icra48506.2021.9561759 | [
"https://arxiv.org/pdf/2011.04246v2.pdf"
]
| 226,282,070 | 2011.04246 | bf1abe856832e223a08089ea9f7d130f9451c3c3 |
EVA-Planner: Environmental Adaptive Quadrotor Planning
Lun Quan
Zhiwei Zhang
Xingguang Zhong
Chao Xu
Fei Gao
EVA-Planner: Environmental Adaptive Quadrotor Planning
The quadrotor is popularly used in challenging environments due to its superior agility and flexibility. In these scenarios, trajectory planning plays a vital role in generating safe motions to avoid obstacles while ensuring flight smoothness. Although many works on quadrotor planning have been proposed, a research gap exists in incorporating self-adaptation into a planning framework to enable a drone to automatically fly slower in denser environments and increase its speed in a safer area. In this paper, we propose an environmental adaptive planner to adjust the flight aggressiveness effectively based on the obstacle distribution and quadrotor state. Firstly, we design an environmental adaptive safety aware method to assign the priority of the surrounding obstacles according to the environmental risk level and instantaneous motion tendency. Then, we apply it in a multi-layered model predictive contouring control (Multi-MPCC) framework to generate adaptive, safe, and dynamically feasible local trajectories. Extensive simulations and real-world experiments verify the efficiency and robustness of our planning framework. Benchmark comparison also shows the superior performance of our method over another advanced environmental adaptive planning algorithm. Moreover, we release our planning framework as open-source ros-packages 1 .
I. INTRODUCTION
Nowadays, quadrotors are increasingly used in dangerous scenarios such as mine exploration, quick-response rescue, and target search [1]. These applications may be difficult and dangerous for human beings and place high demands on the autonomy and robustness of the drone.
Although there are extensive works on quadrotor motion planning, few of them consider the self-adaptation of flight aggressiveness. Imagine this situation, for a quadrotor flies in a forest, as it moves more aggressively, there may occur more deviations in the state estimation, perception, and control. These errors result in a higher probability of a crash, especially while the quadrotor navigates between multiple obstacles in a narrow space. In contrast, if the quadrotor operates in a wide-open space, a large safety margin naturally promises that the drone can fly faster. Moreover, in this scenario, the quadrotor should fly fast instead of limit its speed conservatively. Based on the above analysis, an ideal planner needs to leave sufficient planning margin by automatically adjusting its aggressiveness according to the degree of risk, (c) and (d) are the snapshots of critical moments. The curve represents the generated trajectory, and the gradient color is the planned velocity. This experiment is described in detail in Sec. V-C. Video is available at https: //www.youtube.com/watch?v=HcwBNcah0eo.
which is directly decided by the obstacle distribution and quadrotor dynamics.
To this end, we propose an environmental adaptive quadrotor planning method, named EVA (EnVironmental Adaptive)-Planner, which effectively adjusts the flight aggressiveness based on multi-layered adaptive planning. Our method takes the environmental risk level and the instantaneous motion tendency into account. In this way, the quadrotor can adaptively assign priorities to the obstacles around it and plan a safer and more efficient flight trajectory. This method is implemented with a unified planning framework extending our recent results on quadrotor model predictive contouring control [2]. We summarize our contributions as follows: 1) An environmental adaptive safety aware method is proposed, with a reasonable judgment of the danger degree of surrounding obstacles according to the environmental information and the system's motion tendency. 2) A unified planning framework for generating aggressiveness-adaptive, obstacle-free, smooth, and dynamically feasible trajectories using multi-layered model predictive contouring control.
3) Extensive simulations and real-world experiments verify the robustness of our method. The benchmark shows the efficiency of our method in generating fast and safe trajectories. The rest of the paper is organized as follows: related works are discussed in Sec. II. Our method for environmental adaptive safety aware is stated in Sec. III, and the multi-layered planning framework is described in detail in Sec. IV. Simulation and real-world experiment results are discussed in Sec. V. This paper is concluded in Sec. VI.
II. RELATED WORKS
A. Autonomous navigation algorithms
The problem of online autonomous navigation in complex 3D environments has been investigated extensively. According to the planner structure, planning algorithms can be divided into direct and hierarchical methods.
Direct methods plan trajectories on an abstract representation of the environment, which reduces the computational load. By establishing a safe flight corridor (SFC) directly on the depth map [3] or point clouds [4], trajectory planning can be carried out without building a grid map. Nanomap [5] generates rough k-d trees for collision checking. However, direct methods tend to fail in a complex environment due to the memorylessness of the map. Hierarchical methods such as [6]-[9] employ a planning framework with front-end path searching and back-end trajectory optimization, which transforms trajectory generation into a nonlinear optimization problem that trades off smoothness, obstacle avoidance, and dynamical feasibility simultaneously. However, the executed trajectory may still hit obstacles due to tracking errors caused by high-speed motion, even if the desired obstacle-free trajectory is planned.
B. Adaptive fast and safe planning
FASTER [10] plans a safe trajectory and a fast trajectory simultaneously at each replanning step and realizes safe, fast flight through a practical replanning design. However, FASTER uses mixed integer programming and can only maintain real-time planning performance when optimizing within a few polyhedra. Some works mention the importance of jointly considering the geometry of the configuration space and the system dynamics, but few have shown convincing performance for a quadrotor in dense 3D environments. A planning framework is proposed in [11] with offline precomputed safety margins relative to different system dynamics and an online generated tree of trajectories that adaptively searches for a proper planning model. However, if the quadrotor needs to fly at high speed, the large number of precomputed models prevents the proper model from being found online. A new state-dependent directional metric is designed in [12] to adaptively adjust the influence of the environment on the system according to the velocity direction of the vehicle. This metric checks collisions based on how dangerous an obstacle is. In this way, obstacles along the velocity direction are considered more likely to cause a collision than obstacles perpendicular to the velocity. However, this method treats the directions along and against the velocity as the same, and thus behaves rather conservatively. For instance, even when the vehicle is escaping from an obstacle-rich area, it lowers its speed to satisfy the collision checker.
The above planning methods only consider the influence of the closest distance between the trajectory and the obstacles when dealing with safety, which is counterintuitive because safety is also related to the system dynamics. For example, the degree of danger to the system is different when the quadrotor is flying towards or away from the obstacles, even if the distance between the quadrotor and the obstacles is the same. In order to solve this problem, an environmental adaptive safety aware method is proposed in Sec. III.
III. ENVIRONMENTAL ADAPTIVE SAFETY AWARE
As mentioned above, it is essential to adaptively adjust the safety awareness of the quadrotor according to the environmental risk level and the instantaneous motion tendency. EVA-planner uses the environmental adaptive safety aware (EASA) method to calculate a risk weight η which regulates the flight behavior of the quadrotor, jointly considering the gradient ∇c of the Euclidean signed distance field (ESDF) and the velocity v.
Inspired by [12], we realize that it is essential to assign the priority of obstacles depending on the system velocity. We therefore propose a method that adjusts the priority of obstacles more intuitively than the above method, using a sigmoid function to map the cosine of the angle between v and ∇c to the risk weight η ∈ [0, 2]. The function is defined as

$$\eta(\beta) = \frac{2}{1 + e^{\alpha\beta}}, \qquad (1)$$

$$\beta = \frac{\langle v, \nabla c \rangle}{\|v\| \, \|\nabla c\|}, \qquad (2)$$
where α ∈ R+ is the change-rate coefficient and ⟨·, ·⟩ is the dot product. As shown in Fig. 2, η is maximum when v and ∇c are opposite, and η is minimum when v and ∇c point in the same direction. We simply regard the situation β ∈ (0, 1] as safe and the situation β ∈ [−1, 0] as dangerous. The impact of α in (1) is shown in Fig. 2: as α increases, the difference in the risk weight η between dangerous and safe situations also increases. EASA reasonably maps each pair {v, ∇c} to a risk weight η. This method provides more intelligent environmental information for the planning process, reducing conservative planning and providing clear signals in dangerous situations. As shown in Fig. 3, η is large when the quadrotor flies into the corridor, which means it is in a dangerous situation and needs to fly cautiously at a slower velocity. Moreover, η is small when the quadrotor flies away from the corridor, which means it is in a safe state and can fly more aggressively. The application of EASA to trajectory generation is described in detail in Sec. IV-D.
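To make the mapping concrete, the following is a minimal sketch of the risk-weight computation (1)-(2); the function name and the example value of α are illustrative, not taken from the released implementation:

```python
import numpy as np

def risk_weight(v, grad_c, alpha=4.0):
    """EASA risk weight eta in [0, 2], cf. Eqs. (1)-(2).

    v      : velocity vector of the quadrotor
    grad_c : gradient of the ESDF at the current position
    alpha  : change-rate coefficient (illustrative value)
    """
    norm = np.linalg.norm(v) * np.linalg.norm(grad_c)
    if norm < 1e-9:                      # degenerate case: no motion or flat field
        return 1.0
    beta = np.dot(v, grad_c) / norm      # cosine of the angle between v and grad_c
    return 2.0 / (1.0 + np.exp(alpha * beta))

# flying against the gradient (towards an obstacle): beta = -1, eta -> 2 (dangerous)
print(risk_weight(np.array([1.0, 0.0]), np.array([-1.0, 0.0])))
# flying along the gradient (away from an obstacle): beta = +1, eta -> 0 (safe)
print(risk_weight(np.array([1.0, 0.0]), np.array([1.0, 0.0])))
```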
IV. MULTI-LAYERED MODEL PREDICTIVE CONTOURING CONTROL FRAMEWORK
A. Multi-layered planning framework
Thanks to the differential flatness of the quadrotor dynamics stated in [13], we treat the quadrotor as a particle model. In the planning process, the trajectories of the three axes x, y, z are decoupled, and the yaw angle ψ is planned independently, as in [14].
This paper designs a planning framework with three coarse-to-fine layers to generate receding-horizon local adaptive trajectories in real time. The framework is unified among these layers and can be applied to quadrotors with different computing capabilities. This property enables planners to deal with approximations of the system of arbitrarily high order, depending on the hardware.
In the first layer, we use a path-finding algorithm, such as the sampling-based method RRT* [15] or the search-based method A* [16], to generate a global guiding path which connects the start and end points in free space. In the middle layer, we use a low-order motion model to optimize the geometry of the guiding path, named Low-level MPC. By doing so, the computing resources required to generate a reference trajectory can be reduced. In the last layer, we generate the local trajectory by optimizing EASA, tracking error, flight aggressiveness, and dynamical feasibility with a high-order system, named High-level MPCC.
B. System representation
The same system representation is used in Sec. IV-C and Sec. IV-D with different system dimensions and orders. The corresponding parameter selections can be found in Tab. I. The time interval between adjacent state points is fixed as δt. We assume that the system input u_i is constant within δt. Therefore the kinematic relationship between two adjacent points can be expressed as
$$s_{i+1} = A_d s_i + B_d u_i, \qquad (3)$$
where A_d and B_d are the state-transfer matrices governed by the d-th-order integral model. Based on the above assumptions, the state sequence {s_i} can be represented as a mapping of the input sequence {u_i} by using the state-transition function

$$S_\mu = A U_\mu + B s_0, \qquad (4)$$

$$A = \begin{pmatrix} 0 & \cdots & \cdots & \cdots & 0 \\ B_d & 0 & \cdots & \cdots & 0 \\ A_d B_d & B_d & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ A_d^{H-1} B_d & A_d^{H-2} B_d & A_d^{H-3} B_d & \cdots & B_d \end{pmatrix}, \qquad (5)$$

$$B = \left(I, A_d, A_d^2, \cdots, A_d^H\right)^T, \qquad (6)$$
where the state vector S_µ = [s_0, s_1, ..., s_H]^T and the input vector U_µ = [u_1, u_2, ..., u_H]^T. From (4), all system states can be represented by U_µ when the initial state s_0 is known. Therefore the decision variables of the trajectory optimization problem are reduced to U_µ. Meanwhile, we construct trajectory generation as an unconstrained optimization problem by transforming the constraints into penalty terms. The reference trajectory generation and the local trajectory optimization are explained in detail in Sec. IV-C and Sec. IV-D, respectively.
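As an illustration, the sketch below builds the single-step matrices of (3) and the stacked matrices A and B of (4)-(6) for a d-th-order integrator; it is a straightforward reading of the block structure, not the authors' code:

```python
import math
import numpy as np

def transfer_matrices(d, dt):
    """Single-step A_d, B_d of the d-th-order integrator, cf. Eq. (3).

    For d = 3 this reproduces the matrices listed in Tab. I."""
    A = np.eye(d)
    for i in range(d):
        for j in range(i + 1, d):
            A[i, j] = dt ** (j - i) / math.factorial(j - i)
    B = np.array([[dt ** (d - i) / math.factorial(d - i)] for i in range(d)])
    return A, B

def lifted_matrices(d, dt, H):
    """Stacked A, B of Eqs. (4)-(6): S = A U + B s0 with S = [s_0 ... s_H]."""
    A_d, B_d = transfer_matrices(d, dt)
    A = np.zeros(((H + 1) * d, H))
    B = np.zeros(((H + 1) * d, d))
    B[:d, :] = np.eye(d)                               # s_0 block: identity, no input
    for i in range(1, H + 1):
        B[i * d:(i + 1) * d, :] = np.linalg.matrix_power(A_d, i)
        for j in range(1, i + 1):                      # block (i, j) = A_d^{i-j} B_d
            A[i * d:(i + 1) * d, j - 1:j] = np.linalg.matrix_power(A_d, i - j) @ B_d
    return A, B
```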
C. Low-level MPC
The primary purpose of this layer is to quickly optimize a geometrically continuous and obstacle-free reference trajectory. Inspired by [17], the guiding path attracts the trajectory to escape local minima, and the ESDF pushes the trajectory towards a local optimum. In this layer, we choose the velocity as the input of the reference trajectory, as shown in Tab. I. The objective function is
$$\min \; \kappa_1 J_s + \kappa_2 J_c + \kappa_3 J_u, \qquad (7)$$
where J_s is the similarity penalty on the distance between the trajectory and the guiding path, J_c is the collision cost, and J_u is the smoothness term. κ_1, κ_2, κ_3 are the weights of these penalty items, respectively:
$$J_s = \sum_{i=1}^{M} \|p_i - g_i\|^2, \qquad (8)$$

$$J_c = \sum_{i=1}^{M} F_c(c(p_i)), \qquad (9)$$

$$J_u = \sum_{i=1}^{M-1} (u_{i+1} - u_i)^2, \qquad (10)$$
where g_i is the i-th uniformly distributed point of the guiding path and c(p) is the distance of point p in the ESDF. Because the input u_i is constant within δt, the sum of squares of the differences between adjacent inputs is used to indicate the smoothness of the trajectory in (10). F_c(·) is the penalty function of the collision cost:
$$F_c(c(p)) = \begin{cases} (c(p) - c_{thr})^2, & \text{if } c(p) < c_{thr} \\ 0, & \text{if } c(p) \ge c_{thr}, \end{cases} \qquad (11)$$
where c_{thr} is the safe distance threshold. The collision cost starts to rise rapidly when the distance from the obstacles falls below this threshold. The optimized reference trajectory ρ_µ(θ) is parameterized by θ ∈ [0, Mδt], which is equal to the time parameter.
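A compact sketch of the Low-level MPC objective (7)-(11) evaluated on a discrete trajectory is given below; the `esdf` lookup is assumed to be provided by the mapping module, and the array shapes and default weights are illustrative:

```python
import numpy as np

def collision_penalty(c, c_thr):
    """F_c of Eq. (11): quadratic penalty once the ESDF distance drops below c_thr."""
    return (c - c_thr) ** 2 if c < c_thr else 0.0

def low_level_cost(P, U, G, esdf, c_thr, k1=1.0, k2=1.0, k3=1.0):
    """Objective (7) = k1*J_s (8) + k2*J_c (9) + k3*J_u (10).

    P: (M+1, dim) trajectory points, U: (M, dim) velocity inputs,
    G: (M+1, dim) guiding-path points, esdf: callable point -> distance."""
    J_s = sum(float(np.sum((p - g) ** 2)) for p, g in zip(P[1:], G[1:]))
    J_c = sum(collision_penalty(esdf(p), c_thr) for p in P[1:])
    J_u = sum(float(np.sum((U[i + 1] - U[i]) ** 2)) for i in range(len(U) - 1))
    return k1 * J_s + k2 * J_c + k3 * J_u
```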
D. High-level MPCC
After generating a reference trajectory, we adopt model predictive contouring control (MPCC) in the final layer. In this layer, we choose the jerk as the input and add the state {p_{θ,i}, v_{θ,i}, a_{θ,i}} of the reference points, which travel on the reference trajectory, as another system dimension, as shown in Tab. I.
Inspired by [2, 18], we optimize the tracking accuracy and the traveling progress simultaneously. Meanwhile, we realize environmental adaptive flight by calculating EASA, which is detailed in Sec. III. Furthermore, there is also a penalty for violating dynamical feasibility. The objective function is

$$\min \; \lambda_1 f_s + \lambda_2 f_p + \lambda_3 f_e + \lambda_4 f_c + \lambda_5 f_d, \qquad (12)$$

where f_s is the tracking error between the local trajectory and the reference trajectory, f_p indicates the progress of the reference points, f_e represents the EASA penalty item, f_c is the collision cost for obstacle avoidance, the same as (9), and f_d is the penalty for violating kinodynamic feasibility. λ_i, i = 1...5 are the weights of these items:

$$f_s = \sum_{i=1}^{N} \|p_i - p_{\theta,i}\|^2, \qquad (13)$$

$$f_p = -\delta t \sum_{i=1}^{N} v_{\theta,i}, \qquad (14)$$

$$f_e = \sum_{i=1}^{N} \eta(\beta) F_c(c(p_i)) \left(\|v_i\| - v_{thr}\right)^2, \qquad (15)$$
The above three cost items are the main determinants of the trajectory behavior. The optimization problem trades off f_s and f_p to make the trajectory as fast as possible while keeping the tracking error small enough. However, when the quadrotor flies towards obstacles, as shown in Fig. 4, η increases and makes f_e the primary determinant of the trajectory behavior, which causes the trajectory to slow down until the quadrotor is out of the dangerous area. The derivatives of (13) and (14) can be found in [18]. The derivative of (15) is
$$\frac{\partial f_e}{\partial U_\mu} = \sum_{i=1}^{N} \Bigg[ -\frac{2\alpha e^{\alpha\beta}}{(1 + e^{\alpha\beta})^2} \frac{\partial \beta}{\partial U_\mu} F_c(c(p_i)) \left(\|v_i\| - v_{thr}\right)^2 + \eta(\beta) \nabla_\mu F_c(c(p_i)) \frac{\partial p_i}{\partial U_\mu} \left(\|v_i\| - v_{thr}\right)^2 + 2\,\eta(\beta) F_c(c(p_i)) \frac{v_\mu}{\|v_i\|} \left(\|v_i\| - v_{thr}\right) \frac{\partial v_i}{\partial U_\mu} \Bigg],$$

$$\frac{\partial \beta}{\partial U_\mu} = \frac{\nabla_\mu c(p_i)\,\|v\|^2 - v_\mu \langle v, \nabla c \rangle}{\|v\|^3 \, \|\nabla c(p_i)\|} \frac{\partial v_i}{\partial U_\mu} + \frac{v_\mu \, \|\nabla c(p_i)\|^2 - \nabla_\mu c(p_i) \langle v, \nabla c \rangle}{\|v\| \, \|\nabla c(p_i)\|^3} \, \nabla^2_\mu c(p_i) \frac{\partial p_i}{\partial U_\mu}, \qquad (16)$$
where ∇²_µ c(p_i) is the second derivative obtained from the third-order interpolated ESDF, and v_{thr} is generally set to 0.1 m/s to prevent the optimized trajectory velocity from being zero. ∂p_i/∂U_µ and ∂v_i/∂U_µ are the corresponding row vectors of A in (5). The obstacle avoidance and kinodynamics constraints are
$$f_c = \sum_{i=1}^{N} F_c(c(p_i)), \qquad (17)$$

$$f_d = \sum_{i=1}^{N} \big(F_d(v_i) + F_d(a_i) + F_d(j_i) + F_d(v_{\theta,i})\big), \qquad (18)$$
where F_c(·) is the penalty function of the collision cost, the same as (11), and F_d(·) is the penalty function of kinodynamic feasibility:
$$F_d(D) = \sum_{\mu} f_d(d_\mu), \qquad (19)$$

$$f_d(d_\mu) = \begin{cases} -(d_\mu - d_{min})^3, & d_\mu < d_{min}, \\ 0, & d_{min} \le d_\mu \le d_{max}, \\ (d_\mu - d_{max})^3, & d_\mu > d_{max}, \end{cases} \qquad (20)$$
where d_{min} and d_{max} are the lower and upper bounds of each variable.
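A sketch of the cubic two-sided penalty (19)-(20) follows; the bounds used in the example are illustrative, not the values used on the vehicle:

```python
def bound_penalty(d, d_min, d_max):
    """f_d of Eq. (20): cubic penalty outside [d_min, d_max], zero inside."""
    if d < d_min:
        return -(d - d_min) ** 3
    if d > d_max:
        return (d - d_max) ** 3
    return 0.0

def feasibility_penalty(vec, d_min, d_max):
    """F_d of Eq. (19): summed over the components of v, a, j, or v_theta."""
    return sum(bound_penalty(d, d_min, d_max) for d in vec)

# penalize a velocity sample against an illustrative +-3 m/s box constraint
print(feasibility_penalty([2.5, -3.4, 0.1], -3.0, 3.0))  # only the second axis violates
```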
V. RESULTS
A. Implementation Details
Extensive simulation and real-world flight experiments are carried out to test the effectiveness and robustness of the proposed method. We use the flight platform proposed in [19] with an Intel RealSense D435 (https://www.intelrealsense.com/depth-camera-d435/) for perception and mapping. Modules including state estimation, environment perception, trajectory planning, and flight control all run in real time on the onboard computer Manifold2 (https://www.dji.com/cn/manifold-2) with an Intel i7-8550U. All simulations run on a laptop with an Intel i7-6500U.
In both simulation and real-world experiments, we generate the reference trajectory by solving (7) at a fixed interval of 2 seconds and check whether the reference trajectory collides with the newly explored environment at a frequency of 100 Hz. If a collision is detected, reference-trajectory replanning is carried out immediately. For local trajectory generation, we solve the unconstrained optimization problem (12) with prediction horizon N = 40 and time step δt = 0.05 s, so T = 2 s. The trajectory optimization problem is solved with the open-source nonlinear optimization library NLopt (https://nlopt.readthedocs.io/). The optimization times of the Low-level MPC and the High-level MPCC are each less than 10 ms.
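Organizationally, the replanning logic above amounts to two interleaved timers; a rough sketch follows, where all function names are placeholders for the actual modules (this is not the released implementation):

```python
import time

REF_REPLAN_PERIOD = 2.0    # solve the Low-level MPC (7) every 2 s
CHECK_PERIOD = 0.01        # 100 Hz collision check of the reference trajectory

def planning_loop(solve_low_level_mpc, solve_high_level_mpcc,
                  reference_collides, now=time.monotonic):
    """Skeleton of the receding-horizon replanning described above."""
    reference = solve_low_level_mpc()
    next_ref_time = now() + REF_REPLAN_PERIOD
    while True:
        if now() >= next_ref_time or reference_collides(reference):
            reference = solve_low_level_mpc()   # replan immediately on collision
            next_ref_time = now() + REF_REPLAN_PERIOD
        local_traj = solve_high_level_mpcc(reference)
        # ... hand local_traj to the tracking controller ...
        time.sleep(CHECK_PERIOD)
```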
B. Benchmark
We design a benchmark comparison to validate the effectiveness of EVA-planner. We compare EVA with SDDM [12], which also incorporates the velocity direction into the judgement of obstacles. However, this method is still conservative when the system flies away from obstacles, as shown in Fig. 5. The performance of SDDM is tuned to its best according to the parameters in [12]. The velocity limit is set to 3 m/s, and the same reference trajectories are generated. A 40 m × 5 m random forest map and a narrow gate map are used in the tests. Comparisons of the running times are shown in Tab. II. As the number of obstacles in the random forest map increases, the running-time gap between EVA and SDDM gradually widens.
The benchmark shows that EVA-planner has better adaptability to the environment and can generate a more aggressive trajectory while ensuring safety.
Fig. 5. Velocity curve comparison of EVA and SDDM in the narrow gate map (deceleration and acceleration phases marked). Our method shows a more reasonable change in velocity when passing through a narrow gate, in contrast to the zoomed-in trajectories.
C. Real-world Test
We design several real-world experiments to validate the feasibility and robustness of our method. All calculations run on the onboard computer in real time. In order to verify the characteristics of EVA-planner, we set up the quadrotor to fly through a loop placed on a straight line, as shown in Fig. 6. The velocity limit is set to 3 m/s. Composite snapshots and the executed trajectory show that the quadrotor slows down when it approaches the loop and accelerates as soon as it passes through it. The quadrotor flies at maximum velocity at all other times. This experiment verifies that EVA-planner can increase the aggressiveness of the flight trajectory while ensuring safety.
To verify the necessity of our method, we compare flight results with & without EASA in a complex indoor environment, as shown in Fig. 7. We design a challenging scenario for UAV planning in response to the limited FOV of visual cameras. The quadrotor flies through a narrow gap and immediately turns to the right, where an obstacle is placed in its path. In Fig. 7b and Fig. 7d, the trajectory without EASA is unsafely aggressive when flying through the narrow gap. This behavior leaves the quadrotor no time to avoid the sudden obstacle. By contrast, as shown in Fig. 7a and Fig. 7c, the quadrotor with EASA flies at a conservative velocity when passing the dangerous area, which gives the system enough time to deal with the sudden danger. We repeat the experiments with EASA seven times with a success rate of 100%. However, two out of three experiments without EASA fail and hit the obstacles.
In the outdoor experiment, the quadrotor flies from an open area into the bushes, then out of the bushes into another open area, as shown in Fig. 1. Although the bushes are rugged and leafy, the quadrotor can still carry out adaptive deceleration while flying into the dangerous area and adaptive acceleration while flying out of it. As validated in the attached video, we repeat ten flights, and all are successful. This experiment verifies the robustness of our algorithm in complex unknown environments.
VI. CONCLUSIONS AND FUTURE WORK
In this paper, we propose a novel environmental adaptive safety aware method based on the gradient of the ESDF and the velocity. Compared with the benchmark method, this method is more reasonable in judging the degree of danger. Therefore it can plan a more aggressive trajectory without compromising safety.
To apply this method to trajectory generation, we design a unified multi-layered planning framework to generate smooth, adaptively aggressive, safe, and dynamically feasible local trajectories. Extensive simulations and real-world experiments validate the robustness and effectiveness of our planning framework. In the future, we plan to generate environmental adaptive trajectories in complex dynamic environments and intend to challenge our quadrotor system in more extreme situations.
This work was supported by the National Natural Science Foundation of China under Grant 62003299 and Grant 62088101.
1 State Key Laboratory of Industrial Control Technology, Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027, China.
2 Huzhou Institute, Zhejiang University, Huzhou 313000, China.
3 National Engineering Research Center for Industrial Automation (Ningbo Institute), Ningbo 315000, China.
E-mail: {lunquan, cxu, fgaoaa}@zju.edu.cn
Source code: https://github.com/ZJU-FAST-Lab/EVA-planner
Fig. 1. (a) and (b) are the side and vertical views of the outdoor experiment.
Fig. 2. The polar graph of η with different v when ∇c is fixed to (0, −1) in the 2D case. The angle of the polar graph is the angle between v and ∇c, and the length of the curve along the polar axis is the magnitude of η corresponding to the velocity direction.
Fig. 3. Illustration of η changing during the planning process. The risk weight η is a scalar, but we draw it in the direction opposite to v to show its slowing effect on the quadrotor.
Fig. 4. Illustration of the EASA calculation during trajectory optimization.
Fig. 6. Composite snapshots and executed trajectory of flying through a loop.
Fig. 7. Composite snapshots and executed trajectory of flying in a complex indoor environment with & without EASA. The flight results in (a) and (c) are with EASA, while (b) and (d) are without EASA.
TABLE I
SYSTEM REPRESENTATION OF THE MULTI-LAYERED MPCC

                    | Low-level MPC                 | High-level MPCC
System dimension    | µ ∈ {x, y, z}                 | µ ∈ {x, y, z, θ}
System order        | d = 1                         | d = 3
Prediction horizon  | H = M                         | H = N
State               | s_i = p_i, i = 0, 1, ..., M   | s_i = [p_i, v_i, a_i]^T, i = 0, 1, ..., N
Input               | u_i = v_i, i = 1, 2, ..., M   | u_i = j_i, i = 1, 2, ..., N
Transform matrices  | A_1 = 1, B_1 = δt             | A_3 = [[1, δt, δt²/2], [0, 1, δt], [0, 0, 1]], B_3 = [δt³/6, δt²/2, δt]^T
TABLE II
COMPARISON OF RUNNING TIMES IN DIFFERENT MAPS

                   | Random forest map            | Gate map
Density (obs./m²)  | 0.04   0.16   0.28   0.40    |
SDDM time (s)      | 20.74  29.50  40.05  42.90   | 42.71
EVA time (s)       | 19.28  25.02  31.04  33.01   | 32.99
[1] L. Quan, L. Han, B. Zhou, S. Shen, and F. Gao, "Survey of UAV motion planning," IET Cyber-Systems and Robotics, vol. 2, no. 1, pp. 14-21, 2020.
[2] J. Ji, X. Zhou, C. Xu, and F. Gao, "CMPCC: Corridor-based model predictive contouring control for aggressive drone flight," arXiv preprint arXiv:2007.03271, 2020.
[3] B. T. Lopez and J. P. How, "Aggressive 3-D collision avoidance for high-speed navigation," in 2017 IEEE International Conference on Robotics and Automation (ICRA), 2017, pp. 5759-5765.
[4] F. Gao, W. Wu, W. Gao, and S. Shen, "Flying on point clouds: Online trajectory generation and autonomous navigation for quadrotors in cluttered environments," Journal of Field Robotics, vol. 36, no. 4, pp. 710-733, 2019. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1002/rob.21842
[5] P. R. Florence, J. Carter, J. Ware, and R. Tedrake, "NanoMap: Fast, uncertainty-aware proximity queries with lazy search over local 3D data," in 2018 IEEE International Conference on Robotics and Automation (ICRA), 2018, pp. 7631-7638.
[6] N. Ratliff, M. Zucker, J. A. Bagnell, and S. Srinivasa, "CHOMP: Gradient optimization techniques for efficient motion planning," in Proc. of the IEEE Intl. Conf. on Robot. and Autom., May 2009.
[7] H. Oleynikova, M. Burri, Z. Taylor, J. Nieto, R. Siegwart, and E. Galceran, "Continuous-time trajectory optimization for online UAV replanning," in Proc. of the IEEE/RSJ Intl. Conf. on Intell. Robots and Syst., Daejeon, Korea, Oct. 2016.
[8] F. Gao, Y. Lin, and S. Shen, "Gradient-based online safe trajectory generation for quadrotor flight in complex environments," in Proc. of the IEEE/RSJ Intl. Conf. on Intell. Robots and Syst., Sept. 2017.
[9] B. Zhou, F. Gao, J. Pan, and S. Shen, "Robust real-time UAV replanning using guided gradient-based optimization and topological paths," arXiv preprint arXiv:1912.12644, 2019.
[10] J. Tordesillas, B. T. Lopez, and J. P. How, "FASTER: Fast and safe trajectory planner for flights in unknown environments," in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019.
[11] D. Fridovich-Keil, S. Herbert, J. Fisac, S. Deglurkar, and C. Tomlin, "Planning, fast and slow: A framework for adaptive real-time safe trajectory planning," May 2018, pp. 387-394.
[12] Z. Li, Ö. Arslan, and N. Atanasov, "Fast and safe path-following control using a state-dependent directional metric," in IEEE Int. Conf. on Robotics and Automation (ICRA), 2020.
[13] D. Mellinger and V. Kumar, "Minimum snap trajectory generation and control for quadrotors," in Proc. of the IEEE Intl. Conf. on Robot. and Autom., Shanghai, China, May 2011, pp. 2520-2525.
[14] X. Zhou, Z. Wang, H. Ye, C. Xu, and F. Gao, "EGO-Planner: An ESDF-free gradient-based local planner for quadrotors," IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 478-485, 2021.
[15] S. Karaman and E. Frazzoli, "Sampling-based algorithms for optimal motion planning," The International Journal of Robotics Research, vol. 30, pp. 846-894, 2011.
[16] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to Algorithms, Third Edition, 3rd ed. The MIT Press, 2009.
[17] B. Zhou, J. Pan, F. Gao, and S. Shen, "RAPTOR: Robust and perception-aware trajectory replanning for quadrotor fast flight," 2020.
[18] A. Liniger, A. Domahidi, and M. Morari, "Optimization-based autonomous racing of 1:43 scale RC cars," Optimal Control Applications and Methods, vol. 36, pp. 628-647, Sep. 2015.
[19] F. Gao, L. Wang, B. Zhou, L. Han, J. Pan, and S. Shen, "Teach-repeat-replan: A complete and robust system for aggressive flight in complex environments," arXiv preprint, 2019.
| [
"https://github.com/ZJU-FAST-Lab/EVA-planner"
]
|
[
"Spin-5 2 resonance contributions to the pion-induced reactions for energies √ s 2.0 GeV",
"Spin-5 2 resonance contributions to the pion-induced reactions for energies √ s 2.0 GeV",
"Spin-5 2 resonance contributions to the pion-induced reactions for energies √ s 2.0 GeV",
"Spin-5 2 resonance contributions to the pion-induced reactions for energies √ s 2.0 GeV"
]
| [
"V Shklyar \nInstitut für Theoretische Physik\nUniversität Giessen\nD-35392GiessenGermany\n",
"G Penner \nInstitut für Theoretische Physik\nUniversität Giessen\nD-35392GiessenGermany\n",
"U Mosel \nInstitut für Theoretische Physik\nUniversität Giessen\nD-35392GiessenGermany\n",
"V Shklyar \nInstitut für Theoretische Physik\nUniversität Giessen\nD-35392GiessenGermany\n",
"G Penner \nInstitut für Theoretische Physik\nUniversität Giessen\nD-35392GiessenGermany\n",
"U Mosel \nInstitut für Theoretische Physik\nUniversität Giessen\nD-35392GiessenGermany\n"
]
| [
"Institut für Theoretische Physik\nUniversität Giessen\nD-35392GiessenGermany",
"Institut für Theoretische Physik\nUniversität Giessen\nD-35392GiessenGermany",
"Institut für Theoretische Physik\nUniversität Giessen\nD-35392GiessenGermany",
"Institut für Theoretische Physik\nUniversität Giessen\nD-35392GiessenGermany",
"Institut für Theoretische Physik\nUniversität Giessen\nD-35392GiessenGermany",
"Institut für Theoretische Physik\nUniversität Giessen\nD-35392GiessenGermany"
]
| []
| The spin-5 2 resonance effects are studied within the coupled channel effective Lagrangian model for baryon resonance analysis. We extend our previous hadronic calculations to incorporate the D15, F15, D35, F35 states. While the effect of the spin-5 2 resonances to the ηN , KΛ, and KΣ reactions are small, the contribution to the ωN is found to be important. The results for the 'conventional' and Pascalutsa-like spin-5 2 descriptions are discussed. PACS. 11.80.-m ,13.75.Gx,14.20.Gk,13.30.Gk 2 V. Shklyar et al.: Spin-5 2 resonance... | 10.1140/epja/i2004-10003-3 | [
"https://export.arxiv.org/pdf/nucl-th/0403064v1.pdf"
]
| 118,792,793 | nucl-th/0403064 | 6ab18fa97dc6eee5fbbbf4ce6a8a932a503d8b95 |
Spin-5/2 resonance contributions to the pion-induced reactions for energies √s ≤ 2.0 GeV
22 Mar 2004
V Shklyar
Institut für Theoretische Physik
Universität Giessen
D-35392GiessenGermany
G Penner
Institut für Theoretische Physik
Universität Giessen
D-35392GiessenGermany
U Mosel
Institut für Theoretische Physik
Universität Giessen
D-35392GiessenGermany
Received: date / Revised version: date
The spin-5/2 resonance effects are studied within the coupled-channel effective Lagrangian model for baryon resonance analysis. We extend our previous hadronic calculations to incorporate the D15, F15, D35, F35 states. While the effects of the spin-5/2 resonances on the ηN, KΛ, and KΣ reactions are small, the contribution to ωN is found to be important. The results for the 'conventional' and Pascalutsa-like spin-5/2 descriptions are discussed.
PACS. 11.80.-m, 13.75.Gx, 14.20.Gk, 13.30.Gk
Introduction
The extraction of baryon-resonance properties is one of the important tasks of modern hadron physics. Great efforts have been made in the past to obtain this information from the analysis of pion- and photon-induced reaction data. Precise knowledge of these properties is an important step towards understanding hadron structure and, finally, the strong interactions.
Some quark models (see [1] and references therein) predict that the baryon resonance spectrum may be richer than discovered so far. This is the so-called problem of 'missing' nucleon resonances. One assumes that these states are weakly coupled to pion channels and are consequently not clearly seen in the πN, 2πN, and ηN reactions, from which the experimental data most often used for baryon-resonance analyses come. To incorporate other possible final states a unitary coupled-channel model (Giessen model) has been developed which includes the γN, πN, 2πN, ηN, KΛ final states and deals with all available experimental data on pion- and photon-induced reactions [2,3]. The most recent extensions of this model include the KΣ and ωN final states as well [4,5,6], which allows for the simultaneous analysis of all hadronic and photoproduction data up to √s = 2 GeV. A shortcoming of this study is the omission of higher-spin resonances with spin J > 3/2. Since the spin-5/2 resonances have large electromagnetic couplings [7,8,9], this limited the previous analysis of the Compton scattering data to the energy region √s ≤ 1.6 GeV. Moreover, the extension to higher-spin baryon spectra becomes unavoidable for the investigation of 'hidden' or 'missing' nucleon resonances. In particular, a study of the spin-5/2 part of the baryon spectrum can shed light on the dynamics of the vector-meson (ω and ρ) production mechanisms, which is itself a very intriguing question (see [10] and references therein).
In the present paper we study the effect of spin-5/2 resonance contributions on the πN, 2πN, ηN, KΛ, KΣ, and ωN final states. Starting from the effective Lagrangian coupled-channel model [5], we extend our previous hadronic calculation [5] by including the D15, F15, D35, F35 resonances and simultaneously analysing all available pion-induced reaction data in the energy region up to 2 GeV. Due to the coupled-channel treatment this model provides a stringent test of the resonance contributions to all open final states. Similar to the spin-3/2 case in [5,6], the contributions from spin-5/2 states are investigated for two different types of spin-5/2 couplings: the 'conventional' (C) and the Pascalutsa (P) prescriptions. While the first approach dates back to the original work of Rarita and Schwinger [11] and is widely used in the literature, the latter assumes a gauge-invariant resonance coupling. Although the data quality is not yet good enough to distinguish between these two pictures, this question is challenging for an understanding of meson-baryon interactions. With this aim in mind, the present work extends our earlier multi-channel analysis based on an effective Lagrangian approach by including also the spin-5/2 resonances.
The paper is organized as follows. We start in Sec. 2 with a description of the formalism, concentrating mainly on the spin-5/2 couplings; the complete discussion of our model, including all other couplings, can be found in [5,6,12]. In Sec. 3 we discuss the results of our calculations in comparison with the previous studies [5] and finish with a summary.
The Giessen model
We solve the Bethe-Salpeter coupled-channel equation in the K-matrix approximation to extract the scattering amplitudes for the final states under consideration. The validity of the K-matrix approximation has been tested by Pearce and Jennings, who performed a fit to the elastic πN phase shifts up to 1.38 GeV with the 'smooth', Blankenbecler-Sugar, and K-matrix propagators [13]. They found no significant differences in the parameters extracted in the three cases. Also, the successful description of pion- and photon-induced reaction data [5,6] and of η-production [14] points to the applicability of this approximation for the investigation of baryon resonance spectra.
In order to decouple the equations we perform a partial-wave decomposition of the T matrix into total spin J, isospin I, and parity P = (−1)^{J±1/2}. Then the partial-wave amplitudes can be expressed in terms of an interaction potential K via the matrix equation
$$T^{I,J\pm} = \frac{K^{I,J\pm}}{1 - i K^{I,J\pm}}, \qquad (1)$$
where each element of the matrices T^{I,J±}_{fi} and K^{I,J±}_{fi} corresponds to a given initial and final state (i, f = πN, 2πN, ηN, KΛ, KΣ, ωN). The interaction potential is approximated by tree-level Feynman diagrams, which in turn are obtained from effective Lagrangians [5,12]. The T matrix (1) fulfils unitarity as long as the K matrix is hermitian. In our model the following 19 resonances are included: P33(1232), P11(1440), D13(1520), S11(1535), P33(1600), S31(1620), S11(1650), D15(1675), F15(1680), D33(1700), P11(1710), P13(1720), P31(1750), P13(1900), P33(1920), F35(1905), D35(1930), F15(2000), and D13(1950), which is denoted as D13(2080) by the PDG [7].
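Numerically, Eq. (1) is just a matrix inversion in channel space for each partial wave; a toy two-channel sketch follows (the K-matrix entries are arbitrary illustrative numbers, not fit results):

```python
import numpy as np

def unitarize(K):
    """T = K (1 - iK)^{-1} for a hermitian multichannel K matrix, cf. Eq. (1)."""
    n = K.shape[0]
    return K @ np.linalg.inv(np.eye(n) - 1j * K)

# two-channel toy example; hermiticity of K guarantees unitarity of S = 1 + 2iT
K = np.array([[0.3, 0.1],
              [0.1, -0.2]])
T = unitarize(K)
S = np.eye(2) + 2j * T
print(np.allclose(S.conj().T @ S, np.eye(2)))   # True: S is unitary
```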
The Lagrangian for the decay of a spin-5/2 resonance into a final baryon B and a (pseudo)scalar meson ϕ is chosen in the form
$$\mathcal{L}_{\frac{5}{2}\varphi BR} = \frac{g_{\varphi BR}}{m_\pi^2} \, \bar{u}^{\mu\nu}_R \, \Theta_{\mu\delta}(a) \Theta_{\nu\lambda}(a') \, \Gamma_S \, u_B \, \partial^\delta \partial^\lambda \varphi + h.c., \qquad (2)$$
with the matrix Γ_S = 𝟙 if the resonance and the final meson have identical parity and Γ_S = iγ_5 otherwise. The off-shell projector Θ_{µν}(a) is defined by
$$\Theta_{\mu\nu}(a) = g_{\mu\nu} - a \gamma_\mu \gamma_\nu, \qquad (3)$$
where a is a free off-shell parameter. Since the on-shell symmetric spin-5/2 field u^{µν}_R has to obey the Dirac equation and satisfies the conditions γ_µ u^{µν}_R = ∂_µ u^{µν}_R = g_{µν} u^{µν}_R = 0 [11], the second part of (3) only contributes for off-shell particles, giving rise to lower-spin off-shell components in (2). In general, the interaction Lagrangian (2) can have two off-shell projectors, matched with both vector indices of the resonance field tensor. However, as we will see later, a good description of the experimental data can already be achieved with a single parameter a, keeping the second one equal to zero. Thus, to keep our model as simple as possible, we use only one off-shell projector in (2).
The widths of the hadronic resonance decays as extracted from the Lagrangian (2) are

$$\Gamma^\pm\left(R_{\frac{5}{2}} \to \varphi B\right) = I \, \frac{g^2_{\varphi BR}}{30 \pi m_\pi^4} \, k_\varphi^5 \, \frac{E_B \mp m_B}{\sqrt{s}}. \qquad (4)$$
The upper sign corresponds to the decay of the resonance into a meson with identical parity, and vice versa. I is the isospin factor, and k_ϕ, E_B, and m_B are the meson momentum and the energy and mass of the final baryon, respectively.
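As a quick numerical illustration of Eq. (4), a direct transcription in Python is given below; the coupling, isospin factor, and kinematic inputs are illustrative placeholders, not the fitted values of this analysis (masses and momenta in GeV):

```python
import math

def width_5half(g, I, sqrt_s, m_B, k_phi, m_pi=0.138, same_parity=True):
    """Two-body width of a spin-5/2 resonance, Eq. (4).

    same_parity=True selects the upper sign, i.e. the (E_B - m_B) factor."""
    E_B = math.sqrt(m_B ** 2 + k_phi ** 2)          # final-baryon energy
    sign = -1.0 if same_parity else 1.0
    return I * g ** 2 / (30 * math.pi * m_pi ** 4) * k_phi ** 5 \
           * (E_B + sign * m_B) / sqrt_s

# toy evaluation at sqrt(s) = 1.675 GeV for a pi N final state
# (g, I, and k_phi are placeholder values, k_phi roughly the c.m. momentum there)
print(width_5half(g=0.04, I=3.0, sqrt_s=1.675, m_B=0.938, k_phi=0.564))
```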
The coupling of the spin-5/2 resonances to the ωN final state is chosen to be

$$\mathcal{L}_{\frac{5}{2}\omega N} = \bar{u}^{\mu\lambda}_R \Gamma_V \left( \frac{g_1}{4 m_N^2} \gamma^\xi + \frac{i g_2}{8 m_N^3} \partial^\xi_N + \frac{i g_3}{8 m_N^3} \partial^\xi_\omega \right) \left( \partial^\omega_\xi g_{\mu\nu} - \partial^\omega_\mu g_{\xi\nu} \right) u_N \, \partial^\omega_\lambda \omega^\nu + h.c., \qquad (5)$$
where the matrix Γ_V is 𝟙 (iγ_5) for positive (negative) resonance parity and ∂^µ_N (∂^µ_ω) denotes the partial derivative of the nucleon (ω-meson) field. The above Lagrangian is constructed in the same manner as the one for spin-3/2 in [5]. Similar couplings were also used to describe electromagnetic processes [10,15,16,17]. Since the different parts of (5) contribute under different kinematical conditions, we keep all three couplings as free parameters and vary them during the fit. The helicity amplitudes for the decay R → ωN are given by
$$A^{\omega N}_{\frac{3}{2}} = \frac{\sqrt{E_N \pm m_N}}{\sqrt{5 m_N}} \frac{k_\omega}{4 m_N^2} \left( -g_1 (m_N \mp m_R) + g_2 \frac{m_R E_N - m_N^2}{2 m_N} + g_3 \frac{m_\omega^2}{2 m_N^2} \right),$$

$$A^{\omega N}_{\frac{1}{2}} = \frac{\sqrt{E_N \pm m_N}}{\sqrt{10 m_N}} \frac{k_\omega}{4 m_N^2} \left( g_1 \big(m_N \pm (m_R - 2 E_N)\big) + g_2 \frac{m_R E_N - m_N^2}{2 m_N} + g_3 \frac{m_\omega^2}{2 m_N^2} \right),$$

$$A^{\omega N}_{0} = \frac{\sqrt{E_N \pm m_N}}{\sqrt{5 m_N}} \frac{k_\omega m_\omega}{4 m_N^2} \left( g_1 \pm g_2 \frac{E_N}{2 m_N} \pm g_3 \frac{m_R - E_N}{2 m_N} \right), \qquad (6)$$
with the upper (lower) signs corresponding to positive (negative) resonance parity. The lower indices stand for the helicity λ of the final ωN state, λ = λ_V − λ_N, where we use the following abbreviations: λ = 0: 0 + 1/2, λ = 1/2: 1 − 1/2, λ = 3/2: 1 + 1/2. The resonance ωN decay width Γ_ωN can be written as a sum over the three helicity amplitudes given above:
$$\Gamma_{\omega N} = \frac{2}{(2J + 1)} \frac{k_\omega m_N}{2 \pi m_R} \sum_{\lambda=0}^{3/2} \left|A^{\omega N}_\lambda\right|^2, \qquad (7)$$
where J = 5/2 for the spin-5/2 resonance decay. For practical calculations we adopt the spin-5/2 projector in the form
$$P^{\mu\nu,\rho\sigma}_{\frac{5}{2}}(q) = \frac{1}{2}\left(T^{\mu\rho} T^{\nu\sigma} + T^{\mu\sigma} T^{\nu\rho}\right) - \frac{1}{5} T^{\mu\nu} T^{\rho\sigma} + \frac{1}{10}\left(T^{\mu\lambda} \gamma_\lambda \gamma_\delta T^{\delta\rho} T^{\nu\sigma} + T^{\nu\lambda} \gamma_\lambda \gamma_\delta T^{\delta\sigma} T^{\mu\rho} + T^{\mu\lambda} \gamma_\lambda \gamma_\delta T^{\delta\sigma} T^{\nu\rho} + T^{\nu\lambda} \gamma_\lambda \gamma_\delta T^{\delta\rho} T^{\mu\sigma}\right), \qquad (8)$$

with

$$T^{\mu\nu} = -g^{\mu\nu} + \frac{q^\mu q^\nu}{m_R^2}, \qquad (9)$$
which has also been used in an analysis of KΛ photoproduction [16].
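Combining the amplitudes (6) with the width formula (7) gives the ωN partial width for a given set of couplings; the sketch below covers the positive-parity case (upper signs), using the form of A_0 as reconstructed above, with all numerical inputs illustrative:

```python
import math

def gamma_omega_N(g1, g2, g3, m_R, m_N=0.938, m_w=0.783, J=2.5):
    """Eq. (7) with the helicity amplitudes of Eq. (6), positive parity."""
    E_N = (m_R ** 2 + m_N ** 2 - m_w ** 2) / (2 * m_R)   # nucleon energy, rest frame
    k = math.sqrt(max(E_N ** 2 - m_N ** 2, 0.0))          # two-body momentum
    pre = math.sqrt(E_N + m_N) * k / (4 * m_N ** 2)
    bracket_g23 = g2 * (m_R * E_N - m_N ** 2) / (2 * m_N) + g3 * m_w ** 2 / (2 * m_N ** 2)
    A32 = pre / math.sqrt(5 * m_N) * (-g1 * (m_N - m_R) + bracket_g23)
    A12 = pre / math.sqrt(10 * m_N) * (g1 * (m_N + m_R - 2 * E_N) + bracket_g23)
    A0 = pre / math.sqrt(5 * m_N) * m_w * (
        g1 + g2 * E_N / (2 * m_N) + g3 * (m_R - E_N) / (2 * m_N))
    return 2 / (2 * J + 1) * k * m_N / (2 * math.pi * m_R) * (A32**2 + A12**2 + A0**2)

# illustrative couplings for a resonance above the omega-N threshold
print(gamma_omega_N(g1=1.0, g2=0.5, g3=0.2, m_R=1.95))
```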
As is well known, the description of particles with spin J > 1/2 leads to a number of different propagators which have non-zero off-shell lower-spin components. To control these components the off-shell projectors (3) are usually introduced. There were attempts to fix the off-shell parameters and remove the spin-1/2 contribution in the case of spin-3/2 particles [18]. However, it has been shown [19] that these contributions cannot be suppressed for any value of a. Indeed, Read [20] has demonstrated that the choice of the off-shell parameter in the coupling is closely linked to the off-shell behavior of the propagator. To overcome this problem Pascalutsa suggested gauge invariance as an additional constraint to fix the interaction Lagrangians for higher spins and remove the lower-spin components [21]. Constructing the spin-3/2 interaction for a Rarita-Schwinger field u^µ_{3/2} by only allowing couplings to the gauge-invariant field tensor U^{µν}_{3/2} = ∂^µ u^ν_{3/2} − ∂^ν u^µ_{3/2}, Pascalutsa derived an interaction which (for example) for the πN∆ coupling is
$$\mathcal{L}_{\pi N \Delta} = f_\pi \, \bar{u}_N \gamma_5 \gamma_\mu \tilde{U}^{\mu\nu} \partial_\nu \varphi + h.c., \qquad (10)$$
where Ũ^{µν} is the tensor dual to U^{µν}: Ũ^{µν} = ε^{µνλρ} U_{λρ}, and ε^{µνλρ} is the Levi-Civita tensor. The same arguments can also be applied to spin-5/2 particles. In this case the meson-baryon scattering amplitude can be obtained from the conventional amplitude by the replacement
$$\Gamma_{\mu\nu}(p', k') \, \frac{P^{\mu\nu,\rho\sigma}_{\frac{5}{2}}(q)}{\slashed{q} - m_R} \, \Gamma_{\rho\sigma}(p, k) \;\to\; \Gamma_{\mu\nu}(p', k') \, \frac{P^{\mu\nu,\rho\sigma}_{\frac{5}{2}}(q)}{\slashed{q} - m_R} \, \Gamma_{\rho\sigma}(p, k) \, \frac{q^4}{m_R^4}, \qquad (11)$$
where the Γ_{ρσ}(p, k) are the vertex functions that follow from (2) and (5) by applying the Feynman rules, and the projector P^{µν,ρσ}_{5/2}(q) is obtained from (8), (9) by the replacement q^µ q^ν / m_R² → q^µ q^ν / q².
This procedure is similar to the one used in the spin-3/2 case [21]. It has been shown for the spin-3/2 case [22] that both prescriptions are equivalent in the effective Lagrangian approach as long as additional contact interactions are taken into account when the Pascalutsa couplings are used. The differences between these descriptions have been discussed in [5,23,24], and here we perform calculations using both the 'conventional' (C) and the Pascalutsa (P) approaches. Similar to the spin-3/2 case [20], the off-shell parameters a in (3) can be linked to the coupling strengths extracted through (6). In order to take into account the internal structure of mesons and baryons, each vertex is dressed by a corresponding form factor:
$$F_p(q^2, m^2) = \frac{\Lambda^4}{\Lambda^4 + (q^2 - m^2)^2}. \qquad (12)$$
Here q is the four-momentum of the intermediate particle and Λ is a cutoff parameter. In [5] it has been shown that the form factor (12) gives systematically better results than other ones; therefore we do not use any other forms for F(q²). The cutoffs Λ in (12) are treated as free parameters and are allowed to vary during the calculation. However, we demand the same cutoffs in all channels for a given resonance spin J:
$$\Lambda^J_{\pi N} = \Lambda^J_{\pi\pi N} = \Lambda^J_{\eta N} = \ldots, \qquad J = \tfrac{1}{2}, \tfrac{3}{2}, \tfrac{5}{2}.$$
This greatly reduces the number of free parameters; i.e., for all spin-5/2 resonances there is only one cutoff Λ_{5/2} for all decay channels. To take into account the contributions of the 2πN channel in our calculations, we use the inelastic partial-wave cross section σ^{JI}_{2πN} data extracted in [25]. To this end the inelastic 2πN channel is parameterized by an effective ζN channel, where ζ is an effective isovector meson with mass m_ζ = 2m_π. Thus ζN is considered as a sum of the different (π∆, ρN, etc.) contributions to the total 2πN flux. We allow only resonance ζN couplings, since each background diagram would introduce a meaningless coupling parameter. Despite this approximation, the studies [2,3,14,5] have achieved a good description of the total partial-wave cross sections [25], and we proceed in our calculations by using the above prescription. For the R → ζN interaction the same Lagrangians are used as for the R → πN couplings, taking into account the positive parity of the ζ meson.
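For reference, the dressing (12) in code form; the cutoff value in the example is illustrative:

```python
def form_factor(q2, m2, cutoff):
    """F_p of Eq. (12); q2 and m2 in GeV^2, cutoff in GeV (consistent units)."""
    L4 = cutoff ** 4
    return L4 / (L4 + (q2 - m2) ** 2)

# on the mass shell (q2 = m2) the dressing is exactly 1, as required
print(form_factor(q2=1.5 ** 2, m2=1.5 ** 2, cutoff=1.0))   # 1.0
```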
Results and discussion
We use the same database as in [5], with additional elastic πN data for the spin-5/2 partial-wave amplitudes taken from the VPI group analysis [26]. For the 2πN channel we use the spin-5/2 partial-wave cross sections derived in [25]. We confine ourselves to the energy region m_π + m_N ≤ √s ≤ 2 GeV. The database on the ηN, KΛ, KΣ, and ωN channels incorporates all available experimental information from the pion threshold up to the 2 GeV energy region. This includes partial and differential cross sections and polarisation measurements. The references for these reactions, 34 in total, are summarized in [12].

The results presented in the following are from ongoing calculations aiming to describe the data in all channels simultaneously. The resulting χ² of our best overall hadronic fits are given in Table 1. The obtained χ²_ππ and χ²_π2π are calculated using experimental data from all πN and 2πN partial waves up to spin-5/2 except the D35 wave. We find a problem with the description of the D35 partial wave, so the resulting χ²_ππ turns out to be very large. Hence the χ²_ππ values given in Table 1 are calculated by neglecting the πN datapoints for the D35 partial wave. From Table 1 one can conclude that the conventional C-prescription leads to a better description of the data in all partial waves. Note that since the P coupling does not have 'off-shell' background, we also include the additional D13(1700) and S31(1900) resonances in the P-calculations [5,6]. Compared to the previous best hadronic fits C-p-π+ and P-p-π+ from [5,6], we obtain the same values for the cutoffs and non-resonant couplings. The only exception is Λ_{1/2} = 2.79 for the C-coupling, which is smaller than that of C-p-π+. In addition we find g_{NNω} = 4.59 (4.20) and κ_{NNω} = −0.12 (0.06) in the C (P) coupling calculations, which differ slightly from [5]. The results for the πN partial-wave amplitudes are shown in Figs. 1-2 in comparison with the C-p-π+ result from [5]. We do not show the corresponding P-p-π+ result here since it almost coincides with the new P-calculations. The main differences are found for the conventional coupling calculations in comparison with the previous study. The substantially better description in the P13 partial wave is due to the additional off-shell background generated by the spin-5/2 resonances. The same effect also improves the description of the real and imaginary high-energy tails of the P31 and S31 amplitudes, respectively. The contribution from the spin-5/2 resonances can also be seen in the D33 amplitude, which is likewise affected by the spin-5/2 off-shell components. This leads to a worsening of the imaginary part of D33 above 1.8 GeV, giving, however, an improvement in the corresponding real part.

The D15(1675), F15(1680), and F35(1905) resonances are included in our calculations. We have also found evidence for a second F15 state around 1.98 GeV, which is rated two-star by [7]. The results for the πN → 2πN partial-wave cross sections are shown in Fig. 3. We stress that the πN partial-wave inelasticities are not fitted but obtained as the sum of the individual contributions from all open channels.
The 2πN data [25] are systematically below the total inelasticity of the VPI group [26]. This can be an indication that apart from 2πN there are additional contributions from other inelastic channels. However, in the analysis of Manley and Saleski [28] as well as in the most recent study of Vrana et al. [29] the total inelasticity in the D 15 wave is entirely explained by the resonance decay to the π∆ channel. We also find no significant contributions from the ηN , KΛ, KΣ, and ωN channels to the total πN inelasticity in the present hadronic calculations. The calculated 2πN cross sections are found to be substantially above the data from [25] in all fits. Indeed, the difference between the 2πN and inelasticity data runs into 2 mb at 1.67 GeV. This flux can be absorbed by neither ηN , KΛ, KΣ, ωN channels, see Fig. 4. Thus we conclude that either the πN and 2πN data are inconsistent with each other or other open channels (as 3πN ) must be taken into account. To overcome this problem and to describe the πN and 2πN data in the D 15 partial wave the original 2πN data error bars [25] were enlarged by a factor 3. The same procedure was also used by Vrana et al. [29] and Cutkosky et al. [30] to fit the inelastic data.
In the both C and P coupling calculations the total inelasticities in the D 15 wave almost coincide with the partial-wave cross sections and therefore are not shown in Fig. 3 (left top). A good description of the inelasticity in the D 15 wave is achieved and the extracted resonance parameters are also in agreement with other findings (see next section). (2000) resonances are identified in this partial wave. The inclusion of the second resonance significantly improves the description of the πN and 2πN experimental data in the higher-energy region. Some evidence for this state was also found in earlier works [28,31]. A visible inconsistency between the inelastic VPI data and the 2πN cross section from [25] above 1.7 GeV can be seen in Fig. 3 (left bottom). The three data points at 1.7, 1.725, and 1.755 GeV have, therefore, not been included in our calculations. Finally we achieve a reasonable description for both πN and 2πN data. The C and P coupling calculations give approximately the same results. F 35 . A single resonance state F 35 (1905) was taken into account. Some other models find an additional lower-lying resonance with a mass of about 1.75 GeV [28,32,31,29]. However, we already find a good description of the elastic πN amplitudes and the 2πN cross sections by only including the single F 35 (1905) state. The inclusion of a second state with somewhat lower mass leads to a worse description of the πN and 2πN data due to the strong interference between the two nearby states. The two 2πN data points at 1.87 and 1.91 GeV, which are apparently above the total inelasticity, have not been included in the calculations.
The total inelasticity in the F 35 partial wave almost coincides with the calculated 2πN cross section and is not shown in Fig. 3. Note, that the 2πN data at 1.7 GeV are slightly below the total inelasticity from [26]. This could indicate that other inelastic channels (as 3πN ) give additional contributions to this partial wave.
There are also difficulties in the description of the 2πN low-energy tails of the D 15 and F 15 partial waves below 1.6 GeV, where the calculated cross sections are slightly below the 2πN data. The discrepancy leads to a significant rise in χ 2 π2π (cf. Table 1). The same behavior has been found in our previous calculations for the D I3 partial waves [5]. The dash-dotted line is the best hadronic fit C-p-π+ from [5]. The data are taken from [26]. Table 2. Properties of the spin-5 2 resonances considered in the present calculation. Masses and total widths Γtot are given in MeV, the decay ratios R in percent of the total width. In brackets, the sign of the coupling is given (all πN couplings are chosen to be positive). a : The coupling is presented since the resonance is below threshold. b : Decay ratio in 0.1 . The first line corresponds to C-calculation and the second one to P .
There, it has been suggested that the problem might be caused by the description of the 2πN channel in terms of an effective ζN state. Indeed, the findings of [28,29] show strong π∆ decay ratios in all three D 15 , F 15 , and F 35 partial waves. The description of the 2πN channel in terms of the ρN and π∆ channels may change the situation when taking into account the ρN and π∆ phase spaces and corresponding spectral functions. Upcoming calculations will address this question.
D 35 . A single D 35 (1930)
resonance is taken into account. However, there is no clear resonance structure in the πN data for this partial wave. The data [26] also show a total inelasticity at the 2 mb level whereas the 2πN channel was found to be negligible [25]. It has been suggested [25] that this channel could have an important inelastic 3πN contribution. Since the measured 2πN cross section is zero we have used the inelastic πN data with enlarged error bars instead of the 2πN data to pin down the 2πN D 35 contributions. Even in this case we have found difficulties in the description of the D 35 partial wave. The πN channel turns out to be strongly influenced by the uchannel nucleon and resonance contributions which give The dotted line shows our previous results C-p-π+ [5]. The contributions from the spin-5 2 states are shown by dash-dotted (C) and dash-double-dotted (P ) lines. For the data references see [5].
significant contributions to the real part of D 35 . As can be seen in Fig. 2 the C-and P -coupling calculations cannot give even a rough description of the experimental data [26]. The situation can be improved by either using a reduced nucleon cutoff Λ N or by neglecting the nucleon uchannel contribution in the interaction kernel. The latter approximation has been used in the coupled-channel approach of Lutz et al. [33]. To illustrate this point we have carried out an additional fit for the C-coupling with the reduced cutoff Λ N =0.91 taking only the πN and 2πN data into account. The calculated χ 2 are χ 2 ππ =3.63 and χ 2 π2π =7.87 where the D 35 data are also taken into account (note, that all values in Table 1 are calculated by neglecting these datapoints ). The results for the D 35 partial wave are shown in Fig. 2 by the dotted line. In all calculations for D 35 presented in Fig. 2 the D 35 (1930) mass was found to be about 2050 MeV. One sees that the calculations with a reduced nucleon cutoff lead to a better description of the D 35 data giving, however, a worse description of other πN partial-wave data. Note, that a reduction of the nucleon cutoff is required for a successful description of the lowerspin photoproduction multipoles [5,6] which also leads to a worsening in χ 2 for the πN elastic channel.
Finally, we conclude that the main features of the considered spin-5 2 partial waves except for D 35 are well reproduced. From Figs. 1-3 one can see that there is no significant difference between the conventional (8) and the Pascalutsa (11) spin- 5 2 couplings. The parameters of the spin- 5 2 resonances are presented in Table 2. We note that the total resonance widths calculated here do not necessarily coincide with the full widths at half maximum because of the energy dependence of the decay widths (4,7) and the formfactors used [5]. We do not show here the parameters of the D 35 (1930) resonance because of the problems in the D 35 (1930) partial wave. Although a good description of the experimental data is achieved some differences in the extracted resonance parameters for the C-and the P -couplings calculations exist.
We obtain a little lower mass for the D 15 (1675) as compared to that obtained by Manley and Saleski [28] and Vrana et al. [29], but in agreement with other findings [35,34]. The total width is found to be consistent with the results from [34,31,29]. In the ηN channel our calculations show a small (≈0.6%) decay fraction which is somewhat higher than the value obtained by Batinić We conclude that both fits give approximately the same results for the resonance masses and branching ratios.
The properties of the F 15 (1680) state are found to be in good agreement with the values recommended by [7]. We find a somewhat smaller branching ratio in the ηN channel as compared to that of [35]. However, the obtained value R ηN =0.1% is again in agreement with the findings of Vrana et al. [29]: ±1%. The parameters of the second F 15 (2000) resonance differ strongly in various analyses: Manley and Saleski [28] give 490 ± 310 MeV for the total decay width while other studies [36,31] find it at the level of 95 − 170 MeV. Moreover, this state has not been identified in the investigations of [29,35]. Although we find different results for Γ tot in the two independent calculations, the branching ratios are close to each other. A small decay width of about 4.3% is found for the ηN channel (C). However, since the F 15 (2000) resonance is found to be strongly inelastic with 84-88% of inelasticity absorbed by the 2πN channel, more 2πN data above 1.8 GeV (cf. Fig. 3) are needed for a reliable determination of the properties of this state.
The parameters of the F 35 (1905) state are in good agreement with [7]. Both fits give approximately the same result for the decay branching ratios.
All considered resonances have a rather small decay ratios to the ηN , KΛ, KΣ, and ωN channels. The only exception is the F 15 (2000) resonance where a small decay width to ηN has been found for the conventional coupling calculations.
In Fig. 4 the results for ηN , KΛ, KΣ, and ωN total cross sections are shown in comparison with best hadronic fit C-p-π+ from [5]. The main difference from the previous result is found in the ωN final state where a visible effect from the inclusion of the spin- 5 2 resonances is found in the C-calculations. Although the D 15 (1675) and F 15 (1680) states are below the ω production threshold, they give noticeable contributions in the C-coupling calculations. This effect is, however, less pronounced in the P -calculations where the role of D 15 (1675) and F 15 (1680) are found to be less important.
Since the hadronic ωN data include about 115 datapoints the couplings to the ωN channel are not well constrained and inclusion of photoproduction data may change the situation [5]. Looking to the ω-photoproduction reaction the new SAPHIR data may give an opportunity to distinguish between various reaction mechanisms. We are presently working on this [37].
Summary and outlook
Summary and outlook

We have performed a first investigation of pion-induced reactions on the nucleon within the effective Lagrangian coupled-channel approach including spin-5/2 resonances. To investigate the influence of additional background from the spin-3/2 and -5/2 resonances, calculations using both the conventional and the Pascalutsa higher-spin couplings have been carried out. A good description of the available experimental data has been achieved in all πN, 2πN, ηN, KΛ, KΣ, and ωN final states within both frameworks. The χ² is somewhat worse for the Pascalutsa prescription, but this is at least partly due to the absence of additional off-shell parameters in these couplings. In view of this ambiguity in the coupling, it is gratifying to see that both coupling schemes lead to similar physical results for the baryon properties. The effective Lagrangian model used in our calculations imposes stringent physical constraints on the various channels and, in particular, on the interplay of the resonance and background contributions. The latter are generated by the same Lagrangian without any new unphysical parameters. Thus any remaining discrepancy between the data and the calculation points to the necessity of improving our understanding of the meson-baryon interactions further, for example by including additional t-channel exchanges.
Apart from 2πN, we find no significant contributions from other channels to the total πN inelasticities in the spin-5/2 waves. Nevertheless, the contributions from higher-spin resonances can be important in the ω-production channel. More data on this reaction are highly desirable to establish the role of the different reaction mechanisms.
We have found evidence for the F15(2000) resonance, which is rated two-star by [7] and has not been included in the most recent resonance analysis by Vrana et al. [29]. However, more precise πN and 2πN data are necessary to identify this state more reliably in purely hadronic calculations.
For a complete description of πN scattering up to higher energies the J = 5/2 resonances are obviously needed. Compared to our previous study, we arrive at a better description in the πN, ηN, and πΣ channels for the conventional coupling calculations. Looking only at the lower partial waves, the improvement in πN is only possible due to the additional off-shell background from the spin-5/2 resonances. On the other hand, the missing background in the Pascalutsa prescription is compensated by contributions from the D15 and F15 resonances, allowing for a better description of the ηN and ωN final states.
We are proceeding with the extension of our model by performing a combined analysis of pion- and photon-induced reactions taking into account the spin-5/2 states. Moreover, the decomposition of the 2πN channel into ρN, π∆, etc. states will be the subject of further investigations.
Fig. 1. The πN → πN partial waves for I = 1/2. The solid (dashed) line corresponds to the C (P) calculations. The dash-dotted line is the best hadronic fit C-p-π+ from [5]. The data are taken from [26].
Fig. 2. The πN → πN partial waves for I = 3/2. The solid (dashed) line corresponds to the C (P) calculations. The dash-dotted line is the best hadronic fit C-p-π+ from [5]. The dotted line is the result for the D35 wave obtained with a reduced nucleon cutoff (see text). The data are taken from [26].
Fig. 3. The inelastic D15, F15, F35, and D35 waves. The solid (dashed) line corresponds to calculation C (P) for the 2πN channel. Open and filled circles represent the total inelasticity from the VPI group [26] and the 2πN data [25], respectively. The calculated inelasticities almost coincide with the calculated 2πN cross sections and are not shown here. The calculation with a reduced nucleon cutoff is shown by the dotted line.
Fig. 4. The total cross sections for the inelastic reactions. The solid (dashed) line corresponds to the C (P) result. The dotted line shows our previous results C-p-π+ [5]. The contributions from the spin-5/2 states are shown by dash-dotted (C) and dash-double-dotted (P) lines. For the data references see [5].
Table 1. χ² of the C (first line) and P (second line) fits. The D35 πN and 2πN data have not been taken into account (see text).

Fit | Total | χ²(ππ) | χ²(π2π) | χ²(πη) | χ²(πΛ) | χ²(πΣ) | χ²(πω)
C   | 2.60  | 2.60   | 7.63    | 1.37   | 2.14   | 1.83   | 1.23
P   | 3.65  | 3.80   | 10.06   | 1.75   | 2.54   | 2.93   | 1.83
et al.: 0.1±0.1 % [35], whereas Vrana et al. give another bound: ±1%.
S. Capstick and W. Roberts, Prog. Part. Nucl. Phys. 45, S241 (2000), nucl-th/0008028.
T. Feuster and U. Mosel, Phys. Rev. C 58, 457 (1998).
T. Feuster and U. Mosel, Phys. Rev. C 59, 460 (1999).
G. Penner and U. Mosel, Phys. Rev. C 65, 055202 (2002).
G. Penner and U. Mosel, Phys. Rev. C 66, 055211 (2002).
G. Penner and U. Mosel, Phys. Rev. C 66, 055212 (2002).
K. Hagiwara et al., Phys. Rev. D 66, 010001 (2002), http://pdg.lbl.gov.
R.A. Arndt, W.J. Briscoe, I.I. Strakovsky, and R.L. Workman, Phys. Rev. C 66, 055213 (2002).
D. Drechsel, O. Hanstein, S.S. Kamalov, and L. Tiator, Nucl. Phys. A645, 145 (1999); S.S. Kamalov, D. Drechsel, O. Hanstein, L. Tiator, and S.N. Yang, ibid., A684, 321c (2001).
A.I. Titov and T.-S.H. Lee, Phys. Rev. C 66, 015204 (2002); B. Kämpfer, A.I. Titov, and B.L. Reznik, PANIC 02, Osaka, Japan (2002), nucl-th/0211078.
W. Rarita and J. Schwinger, Phys. Rev. 60, 61 (1941).
G. Penner, PhD thesis, Universität Gießen, 2002, available via http://theorie.physik.uni-giessen.de.
B.C. Pearce and B.K. Jennings, Nucl. Phys. A528, 655 (1991).
C. Sauermann, B.L. Friman, and W. Nörenberg, Phys. Lett. B341, 261 (1995); C. Deutsch-Sauermann, B. Friman, and W. Nörenberg, ibid., B409, 51 (1997).
M. Zétényi and Gy. Wolf, nucl-th/0103062.
J.C. David, C. Fayard, G.H. Lamot, and B. Saghai, Phys. Rev. C 53, 2613 (1996).
B.S. Han, M.K. Cheoun, K.S. Kim, and I.-T. Cheon, Nucl. Phys. A691, 713 (2001).
L.M. Nath, B. Etemadi, and J.D. Kimel, Phys. Rev. D 3, 2153 (1971); L.M. Nath and B.K. Bhattacharyya, Z. Phys. C5, 9 (1980).
M. Benmerrouche, R.M. Davidson, and N.C. Mukhopadhyay, Phys. Rev. C 39, 2339 (1989); R.M. Davidson, N.C. Mukhopadhyay, and R. Wittman, Phys. Rev. Lett. 56, 804 (1986).
B.J. Read, Nucl. Phys. B52, 565 (1973).
V. Pascalutsa, Phys. Rev. D 58, 096002 (1998); V. Pascalutsa and R. Timmermans, Phys. Rev. C 60, 042201 (1999).
V. Pascalutsa, Phys. Lett. B503, 85 (2001).
V. Pascalutsa and J.A. Tjon, Phys. Rev. C 61, 054003 (2000).
A.D. Lahiff and I.R. Afnan, Phys. Rev. C 60, 024608 (1999).
D.M. Manley, R.A. Arndt, Y. Goradia, and V.L. Teplitz, Phys. Rev. D 30, 904 (1984).
M.M. Pavan, R.A. Arndt, I.I. Strakovsky, and R.L. Workman, Phys. Scr. T87, 62 (2000), nucl-th/9807087; R.A. Arndt, I.I. Strakovsky, R.L. Workman, and M.M. Pavan, Phys. Rev. C 52, 2120 (1995); updates available via http://gwdac.phys.gwu.edu/.
Available via http://www.uni-giessen.de/~gd1267/.
D.M. Manley and E.M. Saleski, Phys. Rev. D 45, 4002 (1992).
T.P. Vrana, S.A. Dytman, and T.-S.H. Lee, Phys. Rept. 328, 181 (2000).
R.E. Cutkosky and S. Wang, Phys. Rev. D 42, 235 (1990).
G. Höhler, F. Kaiser, R. Koch, and E. Pietarinen, Handbook of Pion-Nucleon Scattering, Landolt-Börnstein [Physics Data No. 12-1, (1979)].
R.L. Kelly and R.E. Cutkosky, Phys. Rev. D 20, 2782 (1979).
M.F.M. Lutz, Gy. Wolf, and B. Friman, Nucl. Phys. A706, 431 (2002).
R.E. Cutkosky, C.P. Forsyth, J.B. Babcock, R.L. Kelly, and R.E. Hendrick, presented at 4th Int. Conf. on Baryon Resonances, Toronto, Canada, Jul 14-16, 1980, published in Baryon 1980:19; R.E. Cutkosky, C.P. Forsyth, R.E. Hendrick, and R.L. Kelly, Phys. Rev. D 20, 2839 (1979).
M. Batinić, I. Šlaus, A. Švarc, and B.M.K. Nefkens, Phys. Rev. C 51, 2310 (1995); Erratum ibid., C 57, 1004 (1998); M. Clajus and B.M.K. Nefkens, πN Newsletter 7, 76 (1992).
R.A. Arndt, I.I. Strakovsky, R.L. Workman, and M.M. Pavan, Phys. Rev. C 52, 2120 (1995).
V. Shklyar, G. Mosel, and U. Mosel, in preparation.
| []
|
[
"A NEW SCHEME FOR APPROXIMATING THE WEAKLY EFFICIENT SOLUTION SET OF VECTOR RATIONAL OPTIMIZATION PROBLEMS",
"A NEW SCHEME FOR APPROXIMATING THE WEAKLY EFFICIENT SOLUTION SET OF VECTOR RATIONAL OPTIMIZATION PROBLEMS"
]
| [
"Feng Guo ",
"Liguo Jiao "
]
| []
| []
| In this paper, we provide a new scheme for approximating the weakly efficient solution set for a class of vector optimization problems with rational objectives over a feasible set defined by finitely many polynomial inequalities. More precisely, we present a procedure to obtain a sequence of explicit approximations of the weakly efficient solution set of the problem in question. Each approximation is the intersection of the sublevel set of a single polynomial and the feasible set. To this end, we make use of the achievement function associated with the considered problem and construct polynomial approximations of it over the feasible set from above. Remarkably, the construction can be converted to semidefinite programming problems. Several nontrivial examples are designed to illustrate the proposed new scheme. | 10.1007/s10898-023-01287-8 | [
"https://arxiv.org/pdf/2205.12863v1.pdf"
]
| 249,062,527 | 2205.12863 | e08fa01f3c135536dbfabeffdffc54742d733eb9 |
A NEW SCHEME FOR APPROXIMATING THE WEAKLY EFFICIENT SOLUTION SET OF VECTOR RATIONAL OPTIMIZATION PROBLEMS
Feng Guo
Liguo Jiao
A NEW SCHEME FOR APPROXIMATING THE WEAKLY EFFICIENT SOLUTION SET OF VECTOR RATIONAL OPTIMIZATION PROBLEMS
In this paper, we provide a new scheme for approximating the weakly efficient solution set for a class of vector optimization problems with rational objectives over a feasible set defined by finitely many polynomial inequalities. More precisely, we present a procedure to obtain a sequence of explicit approximations of the weakly efficient solution set of the problem in question. Each approximation is the intersection of the sublevel set of a single polynomial and the feasible set. To this end, we make use of the achievement function associated with the considered problem and construct polynomial approximations of it over the feasible set from above. Remarkably, the construction can be converted to semidefinite programming problems.Several nontrivial examples are designed to illustrate the proposed new scheme.Date: May 26, 2022.
Introduction
Vector optimization forms an important field of research in optimization theory; see, e.g., [4,8,9,28,35], and many practical applications in various areas, such as engineering [9], humanitarian aid [13], medical health [5] and so on. In this paper, we will be concerned with the following constrained vector rational optimization problem of the form
    Min_{R^m_+} { f(x) := ( p_1(x)/q_1(x), …, p_m(x)/q_m(x) ) : x ∈ Ω },        (VROP)
where "Min R m + " is understood with respect to the ordering non-negative orthant R m + , f : R n → R m is a rational mapping with f i = p i q i , in which p i and q i are real polynomials in the variable x = (x 1 , . . . , x n ) for each i = 1, . . . , m, and the feasible set Ω is given by Ω := {x ∈ R n : g j (x) ≥ 0, j = 1, . . . , r}, where for each j = 1, . . . , r, g j is a real polynomial in the variable x. By letting q i = 1 for all i = 1, . . . , m, our model then covers vector polynomial optimization problems [1,17,25,28,31], and by letting p i , q i be linear functions for all i = 1, . . . , m, our model also covers linear fractional vector optimization problems [16] as well.
For vector optimization, it is almost impossible to find a single point simultaneously minimizing all the objective functions. Therefore, we usually look for some "best preferred" solutions. (Note that, by replacing p_i/q_i with p_i q_i / q_i², assumption (A2) on the denominators can be weakened to q_i(x) ≠ 0 over Ω for all i = 1, …, m.)
Motivated by its extensive applications, a great deal of attention has been devoted to the development of algorithms for computing (weakly) efficient solutions to vector optimization; see [3,6,10,24,25,31,37-39] and references therein. Among them, there are mainly two different approaches for solving vector optimization, by which we mean finding its (weakly) efficient solutions. One is based on scalarization methods (e.g., [3,6,24,25,31]), which compute (weakly) efficient solutions by choosing some parameters in advance and reformulating the problem as one or several single-objective optimization problems. The other is based on descent methods; see, e.g., [10] for Newton's methods and [37-39, 42, 43] for (projected) gradient methods.
We would like to emphasize that the aforementioned methods can only find one or some particular (weakly) efficient solutions, rather than giving information about the whole set of (weakly) efficient solutions, which is apparently important for applications of vector optimization in the real world. Instead, the aim and novelty of this paper is to provide a new scheme for approximating the whole set of weakly (ε-)efficient solutions of (VROP). More precisely, we provide a procedure to obtain a sequence of explicit approximations of S_w^ε (and hence of S_w by letting ε → 0). Each approximation is the intersection of the sublevel set of a single polynomial and the feasible set Ω. As far as we know, there are few methods of this type for solving vector optimization problems in the literature.
To this end, we make use of the achievement function (c.f. [8,32,41]) associated with the problem (VROP), which is defined as
    ψ(x) := sup_{y∈Ω} min_{i=1,…,m} [f_i(x) − f_i(y)].
It can be shown that the sets S_w and S_w^ε can be written as the intersection of sublevel sets of ψ(x) and the feasible set Ω (see Section 3). As the function ψ(x) can be fairly complicated, the problem is reduced to constructing polynomial approximations of ψ(x). By rewriting the definition of ψ(x) as a parametric polynomial optimization problem, we can construct a sequence of polynomial approximations {ψ_k(x)}_{k∈N} of ψ(x) over the feasible set Ω from above by invoking the "joint+marginal" approach developed by Lasserre in [21,22]. Remarkably, the construction of {ψ_k(x)}_{k∈N} can be converted to semidefinite programming (SDP) problems. For ε ∈ R^m_+ of the form ε = (δ, …, δ) with δ > 0, the intersection, denoted by A(δ, k), of the sublevel set ψ_k(x) ≤ δ and the feasible set Ω gives inner approximations of S_w^ε. Under some conditions, we prove that vol(S_w^ε \ A(δ, k)) → 0 as k → ∞, where "vol(·)" denotes the Lebesgue volume (see Theorem 4.2). Since it holds for ε = (δ, …, δ) that S_w^ε → S_w as δ → 0 (see Proposition 3.2), we may take A(δ, k) as an approximation of S_w with sufficiently small δ > 0 and sufficiently large k ∈ N (see Corollary 4.1 and Remark 4.1).
The rest of this paper is organized as follows. Section 2 contains some preliminaries on polynomial optimization. In Section 3, we study the characterization of the weakly efficient solution set of the problem (VROP) by the associated achievement function ψ(x). In Section 4, we show how to approximate the weakly (ε-)efficient solution set of the problem (VROP), and present some nontrivial illustrating examples. Conclusions are given in Section 5.
Preliminaries
In this section, we collect some notation and preliminary results which will be used in this paper. The symbol N (resp., R, R_+, R_++) denotes the set of nonnegative integers (resp., real numbers, nonnegative real numbers, positive real numbers). For a set D in R^n, we use cl(D) and int(D) to denote the closure and interior of D, respectively. Denote by B the closed unit ball in R^n centered at the origin. For a point u ∈ R^n, dist(u, D) denotes the Euclidean distance between u and D, and ‖u‖ denotes the standard Euclidean norm of u. For α := (α_1, …, α_n) ∈ N^n, |α| = α_1 + ⋯ + α_n. For k ∈ N, denote by N^n_k = {α ∈ N^n : |α| ≤ k} and by |N^n_k| its cardinality. Denote by R[x] the ring of polynomials in x := (x_1, …, x_n) with real coefficients and by R[x]_k the set of polynomials in R[x] of degree up to k. For a polynomial f, we use deg(f) to denote its total degree. For α ∈ N^n, the notation x^α stands for the monomial x_1^{α_1} ⋯ x_n^{α_n}.
Now we recall some background on sum-of-squares representations of nonnegative (positive) polynomials over a set defined by finitely many polynomial inequalities. We say that a polynomial h ∈ R[x] is a sum of squares of polynomials if there exist polynomials h_j, j = 1, …, s, such that h = Σ_{j=1}^s h_j². The set consisting of all sum-of-squares polynomials in x is denoted by Σ²[x]. Let {h_1, …, h_s} ⊂ R[x] be a finite set of polynomials and S := {x ∈ R^n : h_j(x) ≥ 0, j = 1, …, s}.
Assumption 2.1. There exists some N ∈ R such that
    N − Σ_{i=1}^n x_i² = σ_0(x) + Σ_{j=1}^s σ_j(x) h_j(x)
for some sum-of-squares polynomials σ_j ∈ Σ²[x], j = 0, 1, …, s.
Theorem 2.1 (Putinar's Positivstellensatz [33]). Suppose that Assumption 2.1 holds. If h ∈ R[x] is positive on S, then
    h(x) = σ_0(x) + Σ_{j=1}^s σ_j(x) h_j(x),        (1)
for some sum-of-squares polynomials σ_j ∈ Σ²[x], j = 0, 1, …, s.
Note that if we fix the degrees of σ j 's in (1), then checking the above representation of h(x)
reduces to an SDP feasibility problem (c.f. [23]). The well-known Lasserre's hierarchy of SDP relaxations for polynomial optimization problems is based on Putinar's Positivstellensatz and the dual moment theory (c.f. [18,21]).
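To make the reduction concrete, the following is a minimal sketch (our illustration, not code from the paper) of how a fixed-degree instance of the representation (1) can be checked with an off-the-shelf SDP modeler. It assumes Python with cvxpy installed and hard-codes the toy data h(x) = 2 − x² on S = {x ∈ R : 1 − x² ≥ 0}; a feasible pair of Gram matrices is a Putinar certificate.

```python
import cvxpy as cp

# Check (1) for h(x) = 2 - x^2 on S = {x : 1 - x^2 >= 0} with fixed degrees:
#   h = sigma0 + sigma1 * (1 - x^2),
# sigma0 represented by a Gram matrix Q0 in the monomial basis [1, x],
# sigma1 by a nonnegative scalar q1 (a degree-0 SOS polynomial).
Q0 = cp.Variable((2, 2), PSD=True)
q1 = cp.Variable(nonneg=True)

# Match coefficients of 1, x, x^2 on both sides of the identity.
constraints = [
    Q0[0, 0] + q1 == 2,      # constant term
    2 * Q0[0, 1] == 0,       # coefficient of x
    Q0[1, 1] - q1 == -1,     # coefficient of x^2
]
cp.Problem(cp.Minimize(0), constraints).solve()
print(Q0.value, q1.value)    # e.g. sigma0 = 1, sigma1 = 1 certifies h > 0 on S
```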
A sparse version of the representation (1) is available if some sparsity pattern is satisfied by h and the h_j's. For a subset I ⊆ {1, …, n}, denote the subset of variables x_I := {x_i : i ∈ I} and by R[x_I] the polynomial ring in the variables x_I.
Assumption 2.2. There are partitions {1, …, n} = I_1 ∪ ⋯ ∪ I_l and {1, …, s} = J_1 ∪ ⋯ ∪ J_l, where the J_i, i = 1, …, l, are disjoint. The collections {I_i}_{i=1}^l and {J_i}_{i=1}^l satisfy the following:
(i) for all i ∈ {1, …, l−1}, there exists k ∈ {1, …, i} such that I_{i+1} ∩ (I_1 ∪ ⋯ ∪ I_i) ⊆ I_k;
(ii) h_j ∈ R[x_{I_i}] for each j ∈ J_i, 1 ≤ i ≤ l;
(iii) for each i = 1, …, l, there exists some N_i ∈ R such that
    N_i − Σ_{j∈I_i} x_j² = σ_{i,0} + Σ_{j∈J_i} σ_{i,j} h_j
for some sum-of-squares polynomials σ_{i,0}, σ_{i,j} ∈ Σ²[x_{I_i}], j ∈ J_i.
The following result enables us to construct sparse SDP relaxations of polynomial optimization problems, which can significantly reduce the computational cost (c.f. [19,40]).
Theorem 2.2 (Sparse version of Putinar's Positivstellensatz [12,19,40]). Suppose that Assumption 2.2 holds. If h(x) ∈ Σ_{i=1}^l R[x_{I_i}] and h is positive on S, then h(x) can be written as
    h(x) = Σ_{i=1}^l ( σ_{i,0} + Σ_{j∈J_i} σ_{i,j} h_j ),
for some sum-of-squares polynomials σ_{i,0}, σ_{i,j} ∈ Σ²[x_{I_i}], j ∈ J_i, i = 1, …, l.
Characterizing the weakly efficient solution set
In this section, we study the achievement function associated with (VROP), which can be used to characterize the weakly (ε-)efficient solution set of (VROP).
By definition of S_w, we have
    S_w = {x ∈ Ω : for all y ∈ Ω, f(y) − f(x) ∉ −R^m_++}
        = {x ∈ Ω : for all y ∈ Ω, there exists i ∈ {1, …, m} such that f_i(x) − f_i(y) ≤ 0}
        = {x ∈ Ω : for all y ∈ Ω, min_{i=1,…,m} [f_i(x) − f_i(y)] ≤ 0}
        = {x ∈ Ω : sup_{y∈Ω} min_{i=1,…,m} [f_i(x) − f_i(y)] ≤ 0}.
Let ψ : R^n → R be the function given by
    ψ(x) := sup_{y∈Ω} min_{i=1,…,m} [f_i(x) − f_i(y)].
The function ψ(x) is known as the achievement function in the area of vector optimization; see [8, Section 4.6] and [32,41]. Therefore,
    S_w = {x ∈ R^n : ψ(x) ≤ 0} ∩ Ω.
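Before developing the polynomial machinery, a crude numerical reading of this characterization can already be useful. The sketch below (our illustration; it assumes Python with numpy and a hypothetical toy instance f(x) = (x_1, x_2) on the unit disk) approximates ψ by maximizing over a finite sample of Ω and uses the sign of ψ as the membership test for S_w.

```python
import numpy as np

# Toy instance (an assumption for illustration): f(x) = (x1, x2), Omega = unit disk.
rng = np.random.default_rng(0)
Y = rng.uniform(-1.0, 1.0, size=(20000, 2))
Y = Y[np.sum(Y**2, axis=1) <= 1.0]           # finite sample of Omega

def f(x):
    return np.array([x[0], x[1]])

FY = np.apply_along_axis(f, 1, Y)            # f evaluated on the sample

def psi(x):
    # sample version of sup_{y in Omega} min_i [f_i(x) - f_i(y)]
    return np.max(np.min(f(x) - FY, axis=1))

print(psi(np.array([-0.6, -0.8])))           # ~ 0: weakly efficient point
print(psi(np.array([0.5, 0.5])))             # clearly positive: dominated point
```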
Moreover, we have the following results, which imply that the function ψ(x) is indeed a merit function (see [7,26,38,39]).
Proposition 3.1 ([32, Lemmas 3.1 and 3.2]). The achievement function ψ(x) satisfies:
(i) ψ(x) ≥ 0 for all x ∈ Ω, and hence S_w = {x ∈ Ω : ψ(x) = 0};
(ii) ψ(x) is locally Lipschitz on Ω.
Proof. (i) is clear. If the objective in (VROP) is a vector of polynomials, (ii) was proved in [32, Lemma 3.2], based on the locally Lipschitz property of polynomial functions. Note that the rational function f_i is locally Lipschitz on Ω under (A1-2). Hence, the proof of [32, Lemma 3.2] is still valid for the case studied in this paper.
So far, we know that the weakly efficient solution set S_w can be completely characterized with the help of the achievement function ψ(x). Note that ψ(x) can be fairly complicated, and computing ψ(x) by some descent methods directly might be difficult. However, as shown below in Proposition 3.3, the sublevel sets of ψ(x) have a rather close relation with the set of all weakly ε-efficient solutions, which in turn yields information about the set of all weakly efficient solutions.
Recall the definition of the set S_w^ε of all weakly ε-efficient solutions to (VROP); clearly, by definition, S_w ⊂ S_w^ε for any ε ∈ R^m_+. Conversely, consider the set-valued mapping F(·) : R^m ⇒ R^n given by F(ε) := S_w^ε for ε ∈ R^m_+. The following proposition shows that F(·) is continuous at ε̄ = 0 relative to R^m_+ in the sense of Painlevé-Kuratowski (see [34, Definition 5.4]), i.e., F(ε) → F(0) as ε → 0. For convenience, we recall the definitions of continuity (outer semicontinuity, inner semicontinuity) for set-valued mappings; see [34, Chapters 4 & 5] for more information. Given a set-valued mapping F : R^m ⇒ R^n, we denote by
    lim sup_{y→ȳ} F(y) := {x ∈ R^n : ∃ y^k → ȳ, ∃ x^k → x with x^k ∈ F(y^k)},
    lim inf_{y→ȳ} F(y) := {x ∈ R^n : ∀ y^k → ȳ, ∃ x^k → x with x^k ∈ F(y^k)},
the outer and inner limits of F at ȳ in the sense of Painlevé-Kuratowski, respectively.
Definition 3.1. A set-valued mapping F : R^m ⇒ R^n is said to be outer semicontinuous (osc) at ȳ if lim sup_{y→ȳ} F(y) ⊂ F(ȳ), and inner semicontinuous (isc) at ȳ if F(ȳ) ⊂ lim inf_{y→ȳ} F(y). It is called continuous at ȳ if F is simultaneously osc and isc at ȳ, i.e., F(y) → F(ȳ) as y → ȳ. These terms are invoked relative to X, a subset of R^m containing ȳ, if the inclusions hold in restriction to convergence y → ȳ with y ∈ X.
Definition 3.1 thus makes precise what the continuity of F(·) at ε̄ = 0 relative to R^m_+ means. Similar to [34, Proposition 5.12 and Exercise 5.13], we have the following result. For any ε ∈ R^m_+, denote ε_max := max_{i=1,…,m} {ε_i} and ε_min := min_{i=1,…,m} {ε_i}.
Proposition 3.2. For any d > 0, there exists a number δ(d) > 0 depending on d such that dist(u, S_w) < d for any u ∈ S_w^ε, i.e., S_w^ε ⊂ S_w + dB, whenever ε_max < δ(d).
Proof. Suppose that the conclusion does not hold for some d > 0. Then, for any k ∈ N, there exist ε^(k) with ε^(k)_max < 1/k and a point u^(k) ∈ S_w^{ε^(k)} such that dist(u^(k), S_w) ≥ d. As Ω is compact, without loss of generality, we can assume that there is a point u* ∈ Ω such that lim_{k→∞} u^(k) = u*. Now we show that u* ∈ S_w. To the contrary, suppose that there exists y* ∈ Ω such that f(y*) − f(u*) ∈ −R^m_++, i.e., max_{i=1,…,m} [f_i(y*) − f_i(u*)] < 0. Due to the continuity of the f_i, there exists k' ∈ N such that for each i = 1, …, m,
    max_{i=1,…,m} [f_i(y*) − f_i(u*)] + 1/k + f_i(u*) − f_i(u^(k)) < 0
holds for any k ≥ k'. Then, for each i = 1, …, m,
    f_i(y*) − f_i(u^(k)) + ε^(k)_i = f_i(y*) − f_i(u*) + f_i(u*) − f_i(u^(k)) + ε^(k)_i
        ≤ max_{i=1,…,m} [f_i(y*) − f_i(u*)] + f_i(u*) − f_i(u^(k)) + 1/k   (since ε^(k)_max < 1/k)
        < 0,
which means that f(y*) − f(u^(k)) + ε^(k) ∈ −R^m_++, i.e., u^(k) ∉ S_w^{ε^(k)}, a contradiction. Hence, u* ∈ S_w and dist(u*, S_w) = 0. However, due to the continuity of the distance function, one has
    dist(u*, S_w) = lim_{k→∞} dist(u^(k), S_w) ≥ d > 0,
a contradiction.
Furthermore, the following proposition allows us to study the set S_w^ε of all weakly ε-efficient solutions by means of sublevel sets of ψ(x).
Proposition 3.3. For any ε ∈ R^m_+, we have
    {x ∈ Ω : ψ(x) ≤ ε_min} ⊂ S_w^ε ⊂ {x ∈ Ω : ψ(x) ≤ ε_max}.        (2)
In particular, if ε_max = ε_min, then
    {x ∈ Ω : ψ(x) ≤ ε_min = ε_max} = S_w^ε.
Proof. To show the first relation in (2), suppose to the contrary that there exists u ∈ Ω such that ψ(u) ≤ ε_min but u ∉ S_w^ε. Then, there exists y' ∈ Ω such that f(y') − f(u) + ε ∈ −R^m_++, i.e., f_i(u) − f_i(y') − ε_i > 0 for each i = 1, …, m. Thus, min_{i=1,…,m} [f_i(u) − f_i(y')] > ε_min, which implies that ψ(u) > ε_min, a contradiction. Now, fix a point u ∈ S_w^ε. For any y ∈ Ω, by definition, there exists k_y ∈ {1, …, m} depending on y such that f_{k_y}(y) − f_{k_y}(u) + ε_{k_y} ≥ 0. Then,
    min_{i=1,…,m} [f_i(u) − f_i(y)] ≤ ε_{k_y} for all y ∈ Ω,
and hence
    ψ(u) = max_{y∈Ω} min_{i=1,…,m} [f_i(u) − f_i(y)] ≤ ε_max;
thus, the second relation in (2) holds. Consequently, the conclusion follows.
Approximations of the weakly (ε-)efficient solution set
In this section, we construct polynomial approximations of the achievement function ψ(x) from above and use their sublevel sets to approximate the set of all weakly (ε-)efficient solutions to (VROP). The construction of these polynomial approximations of ψ(x) is inspired by [22] and can be reduced to SDP problems. As Ω is compact, after a possible re-scaling of the g_j's, we may and will assume that ∆ := [−1, 1]^n ⊇ Ω in the rest of this paper.
4.1. Approximations of the achievement function. To construct polynomial approximations of ψ(x), we first need to compute upper and lower bounds of f_i(x), i = 1, …, m, over Ω. To this end, for each i = 1, …, m, we compute a number f_i^{lower} ∈ R satisfying
    p_i(x) − f_i^{lower} q_i(x) = σ_{i,0}(x) + Σ_{j=1}^r σ_{i,j}(x) g_j(x) + Σ_{j=1}^n σ_{i,r+j}(x)(1 − x_j²),
    σ_{i,0}, σ_{i,j} ∈ Σ²[x], j = 1, …, r+n,
    deg(σ_{i,0}) ≤ 2k_i, k_i ∈ N, deg(σ_{i,j} g_j) ≤ 2k_i, j = 1, …, r, deg(σ_{i,r+j}(1 − x_j²)) ≤ 2k_i, j = 1, …, n,        (3)
which is equivalent to an SDP feasibility problem (c.f. [23]). Under (A1-2), each p_i(x)/q_i(x) is bounded from below on Ω, and p_i(x) − f_i^{lower} q_i(x) > 0 on Ω for any f_i^{lower} < min_{x∈Ω} p_i(x)/q_i(x). Hence, by Putinar's Positivstellensatz, a number f_i^{lower} satisfying (3) always exists for k_i large enough (note that Assumption 2.1 holds due to the redundant polynomials 1 − x_j², j = 1, …, n, added in (3)). Clearly, it holds that
    f^{lower} := min_{i=1,…,m} f_i^{lower} ≤ min_{i=1,…,m, x∈Ω} p_i(x)/q_i(x).
Similarly, replacing p_i(x) − f_i^{lower} q_i(x) in (3) by f_i^{upper} q_i(x) − p_i(x), such a number f_i^{upper} exists for k_i large enough and can be computed by solving another SDP feasibility problem. Then, we have f^{upper} := max_{i=1,…,m} f_i^{upper}.
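As a sanity check on this step, the same Gram-matrix encoding can also be used to optimize the bound rather than merely test feasibility. The sketch below (our illustration; it assumes Python with cvxpy and a hypothetical univariate instance p(x) = x⁴ − 3x² + 1, q = 1, Ω = [−1, 1]) maximizes f subject to a degree-4 certificate like (3) and recovers the true minimum −1.

```python
import cvxpy as cp

# Maximize f s.t. p - f = sigma0 + sigma1*(1 - x^2), sigma0, sigma1 SOS,
# for p(x) = x^4 - 3x^2 + 1 (so f here is a certified lower bound f^lower).
Q0 = cp.Variable((3, 3), PSD=True)   # Gram of sigma0 in basis [1, x, x^2]
Q1 = cp.Variable((2, 2), PSD=True)   # Gram of sigma1 in basis [1, x]
f = cp.Variable()

constraints = [
    Q0[0, 0] + Q1[0, 0] == 1 - f,                         # x^0
    2 * Q0[0, 1] + 2 * Q1[0, 1] == 0,                     # x^1
    2 * Q0[0, 2] + Q0[1, 1] + Q1[1, 1] - Q1[0, 0] == -3,  # x^2
    2 * Q0[1, 2] - 2 * Q1[0, 1] == 0,                     # x^3
    Q0[2, 2] - Q1[1, 1] == 1,                             # x^4
]
cp.Problem(cp.Maximize(f), constraints).solve()
print(f.value)   # ~ -1.0, since p + 1 = (x^2 - 1)^2 + (1 - x^2) on [-1, 1]
```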
Now, we deal with the achievement function ψ(x) over ∆ from the viewpoint of polynomial optimization. For each x ∈ R^n, it holds that
    ψ(x) = sup_{y∈Ω} min_{i=1,…,m} [f_i(x) − f_i(y)]
         = sup_{y∈Ω} min_{i=1,…,m} [p_i(x)/q_i(x) − p_i(y)/q_i(y)]
         = sup_{y∈Ω, z∈R} { z : p_i(x)/q_i(x) − p_i(y)/q_i(y) ≥ z, i = 1, …, m }.
For any x ∈ ∆, let
    ψ̃(x) := max_{y∈R^n, z∈R} z
    s.t. p_i(x) q_i(y) − p_i(y) q_i(x) − z q_i(x) q_i(y) ≥ 0, i = 1, …, m,
         y ∈ Ω, z ∈ [f^{lower} − f^{upper}, f^{upper} − f^{lower}].        (4)
In other words, ψ̃(x) over ∆ can be seen as the optimal value function of the parametric polynomial optimization problem (4). Under (A1-2), we have the following.
Proposition 4.1. ψ̃(x) = ψ(x) for all x ∈ Ω. Hence, Propositions 3.1 and 3.3 also hold for ψ̃.
Next, we construct polynomial approximations of ψ̃ over ∆ from above by means of the SDP method proposed in [22], and use their sublevel sets to approximate the set of all weakly (ε-)efficient solutions to (VROP).
Consider the following sets
    K := {(x, y, z) ∈ R^n × R^n × R : p_i(x) q_i(y) − p_i(y) q_i(x) − z q_i(x) q_i(y) ≥ 0, i = 1, …, m,
          x ∈ ∆, y ∈ Ω, z ∈ [f^{lower} − f^{upper}, f^{upper} − f^{lower}]}
and
    K_x := {(y, z) ∈ R^n × R : (x, y, z) ∈ K}, for x ∈ ∆.
Then it is clear that K is compact and, for any x ∈ ∆, ψ̃(x) = max_{(y,z)∈K_x} z.
As proved in [22, Theorem 1], a sequence of polynomial approximations of ψ̃(x) on ∆ from above exists, mainly due to the Stone-Weierstrass theorem.
Proposition 4.2 (c.f. [22, Theorem 1]). There exists a sequence of polynomials {ψ_k ∈ R[x] : k ∈ N} such that ψ_k(x) ≥ ψ̃(x) for all x ∈ ∆, and {ψ_k}_{k∈N} converges to ψ̃ in L¹(∆), i.e., lim_{k→∞} ∫_∆ |ψ_k(x) − ψ̃(x)| dx = 0.
Let {ψ_k ∈ R[x] : k ∈ N} be as in Proposition 4.2. For any δ > 0 and k ∈ N, denote
    A(δ, k) := {x ∈ Ω : ψ_k(x) ≤ δ}.
For any δ > 0, with a slight abuse of notation, we denote S_w^δ := S_w^ε, where ε = (δ, …, δ). The following result can be derived by slightly modifying the proof of [22, Theorem 3]. It shows that we can approximate the set S_w^δ by the sequence {A(δ, k)}_{k∈N}.
Theorem 4.1. For any δ > 0, we have A(δ, k) ⊂ S_w^δ and
    vol({x ∈ Ω : ψ(x) < δ}) ≤ lim_{k→∞} vol(A(δ, k)) ≤ vol({x ∈ Ω : ψ(x) ≤ δ}) = vol(S_w^δ).        (5)
Consequently, if vol({x ∈ Ω : ψ(x) = δ}) = 0, then lim_{k→∞} vol(S_w^δ \ A(δ, k)) = 0.
Proof. By Proposition 3.3, it is clear that A(δ, k) ⊂ S_w^δ. By Proposition 4.2, ψ_k converges to ψ̃ in measure, that is, for every α > 0,
    lim_{k→∞} vol({x ∈ ∆ : |ψ_k(x) − ψ̃(x)| ≥ α}) = 0.        (6)
Consequently, for every ℓ ≥ 1, it holds that
    vol({x ∈ Ω : ψ(x) ≤ δ + ℓ^{-1}})
    = vol({x ∈ Ω : ψ̃(x) ≤ δ + ℓ^{-1}})        (by Proposition 4.1)
    = vol({x ∈ Ω : ψ̃(x) ≤ δ + ℓ^{-1}} ∩ {x ∈ Ω : ψ_k(x) > δ}) + vol({x ∈ Ω : ψ̃(x) ≤ δ + ℓ^{-1}} ∩ {x ∈ Ω : ψ_k(x) ≤ δ})
    = lim_{k→∞} vol({x ∈ Ω : ψ̃(x) ≤ δ + ℓ^{-1}} ∩ {x ∈ Ω : ψ_k(x) ≤ δ})        (by (6))
    ≤ lim_{k→∞} vol({x ∈ Ω : ψ_k(x) ≤ δ}) ≤ vol({x ∈ Ω : ψ̃(x) ≤ δ}) = vol(S_w^δ);
taking ℓ → ∞ yields (5) and the conclusion.
4.2. Computational aspects. Now we follow the scheme proposed in [22, Section 3.3] to construct a sequence of polynomials {ψ_k}_{k∈N} ⊂ R[x] as in Proposition 4.2. We denote the following m + r + 2n + 1 polynomials in R[x, y, z]:
    h_{1,1}(x, y, z) = p_1(x) q_1(y) − p_1(y) q_1(x) − z q_1(x) q_1(y), …,
    h_{1,m}(x, y, z) = p_m(x) q_m(y) − p_m(y) q_m(x) − z q_m(x) q_m(y),
    h_{2,1}(x, y, z) = g_1(y), …, h_{2,r}(x, y, z) = g_r(y),
    h_{2,r+1}(x, y, z) = 1 − y_1², …, h_{2,r+n}(x, y, z) = 1 − y_n²,
    h_{3,1}(x, y, z) = 1 − x_1², …, h_{3,n}(x, y, z) = 1 − x_n²,
    h_{4,1}(x, y, z) = (f^{upper} − f^{lower})² − z²,
and set J_1 = {1, …, m}, J_2 = {1, …, r+n}, J_3 = {1, …, n}, and J_4 = {1}. Then
    K = {(x, y, z) ∈ R^n × R^n × R : h_{i,j}(x, y, z) ≥ 0, i = 1, …, 4, j ∈ J_i}.
Let λ be the scaled Lebesgue measure on ∆, i.e., dλ(x) = dx/2^n, and let
    γ_α := ∫_∆ x^α dλ(x) = { 0, if α_i is odd for some i; ∏_{i=1}^n (α_i + 1)^{-1}, otherwise }
be the moment of λ for each α ∈ N^n. For each k ∈ N with k ≥ max{⌈deg(h_{i,j})/2⌉ : i = 1, …, 4, j ∈ J_i}, consider the following optimization problem:
    ρ*_k := inf_{φ, σ_0, σ_{i,j}} ∫_∆ φ(x) dλ(x) = Σ_{α∈N^n_{2k}} c_α γ_α
    s.t. φ(x) = Σ_{α∈N^n_{2k}} c_α x^α ∈ R[x]_{2k}, c_α ∈ R,
         φ(x) − z = σ_0 + Σ_{i=1}^4 Σ_{j∈J_i} σ_{i,j} h_{i,j},
         σ_0, σ_{i,j} ∈ Σ²[x, y, z], deg(σ_0), deg(σ_{i,j} h_{i,j}) ≤ 2k, i = 1, …, 4, j ∈ J_i,        (P_k)
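The objective of (P_k) only needs these closed-form moments γ_α; a minimal helper (our illustration, in Python) computing them reads:

```python
from math import prod

def gamma(alpha):
    """Moment of the scaled Lebesgue measure on [-1, 1]^n at the monomial x^alpha:
    0 if some alpha_i is odd, else prod_i 1/(alpha_i + 1)."""
    if any(a % 2 for a in alpha):
        return 0.0
    return prod(1.0 / (a + 1) for a in alpha)

print(gamma((0, 0)), gamma((2, 0)), gamma((2, 2)))  # 1.0 0.333... 0.111...
```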
Problem (P_k) can be reduced to an SDP problem (c.f. [18,20]). Clearly, for any (φ, σ_0, σ_{i,j}) feasible to (P_k), we have φ(x) ≥ ψ̃(x) on ∆. The following result follows directly from [22, Theorem 5], and we include here a brief proof for the sake of completeness. It shows that we can compute the sequence of polynomials {ψ_k ∈ R[x] : k ∈ N} in Proposition 4.2 by solving (P_k).
Theorem 4.2. We have lim_{k→∞} ρ*_k = ∫_∆ ψ̃(x) dλ(x). Consequently, let (ψ_k, σ_0^{(k)}, σ_{i,j}^{(k)}) be a nearly optimal solution to (P_k), e.g., ∫_∆ ψ_k dλ(x) ≤ ρ*_k + 1/k; then ψ_k(x) ≥ ψ̃(x) on ∆ and
    lim_{k→∞} ∫_∆ |ψ_k(x) − ψ̃(x)| dλ(x) = 0.
Proof. We only need to prove that lim_{k→∞} ρ*_k = ∫_∆ ψ̃(x) dλ(x). Consider the following infinite-dimensional linear program:
    ρ* := inf_φ ∫_∆ φ(x) dλ(x) = Σ_α c_α γ_α
    s.t. φ(x) = Σ_{α∈N^n} c_α x^α ∈ R[x], c_α ∈ R,
         φ(x) − z ≥ 0 for all (x, y, z) ∈ K.
It is clear that ∆ and K are compact and K_x is nonempty for every x ∈ ∆. Then, by [22, Corollary 2.6], it holds that ρ* = ∫_∆ ψ̃(x) dλ(x). Let (φ_ℓ)_{ℓ∈N} be a minimizing sequence of the above problem. For any ℓ ∈ N, let φ̂_ℓ(x) = φ_ℓ(x) + 1/ℓ; then we have φ̂_ℓ(x) − z ≥ 1/ℓ > 0 on K. Notice that
    2n + (f^{upper} − f^{lower})² − Σ_{i=1}^n (x_i² + y_i²) − z² = Σ_{j=r+1}^{r+n} h_{2,j} + Σ_{j=1}^n h_{3,j} + h_{4,1},
that is, Assumption 2.1 holds for the defining polynomials of K. Therefore, by Putinar's Positivstellensatz (Theorem 2.1), there exist k_ℓ ∈ N and σ_0^{(ℓ)}, σ_{i,j}^{(ℓ)} ∈ Σ²[x, y, z] such that (φ̂_ℓ, σ_0^{(ℓ)}, σ_{i,j}^{(ℓ)}) is a feasible solution to (P_{k_ℓ}). Note that ρ* ≤ ρ*_k holds for any k ∈ N. Then, it implies that
    ∫_∆ ψ̃(x) dλ(x) = ρ* ≤ ρ*_{k_ℓ} ≤ ∫_∆ φ_ℓ(x) dλ(x) + 1/ℓ ↓ ρ* = ∫_∆ ψ̃(x) dλ(x).
As ρ*_k is monotone, we have lim_{k→∞} ρ*_k = ∫_∆ ψ̃(x) dλ(x).
Next, we propose a sparse version of the SDP problem (P_k) by exploiting its sparsity pattern, which reduces the computational cost at the order k. Add a redundant polynomial
    h_{1,m+1}(x, y, z) := 2n + (f^{upper} − f^{lower})² − Σ_{i=1}^n (x_i² + y_i²) − z²        (7)
to the description of K and reset J_1 = {1, …, m+1}. Denote the following subsets of variables: I_1 = {x, y, z}, I_2 = {y}, I_3 = {x}, and I_4 = {z}. For i = 1, …, 4, denote by R[I_i] the ring of real polynomials in the variables in I_i. Then, the following conditions hold: (i) for each i = 1, 2, 3, there exists some s ≤ i such that I_{i+1} ∩ (I_1 ∪ ⋯ ∪ I_i) ⊆ I_s; (ii) for each i = 1, …, 4 and each j ∈ J_i, h_{i,j} ∈ R[I_i]; (iii) the polynomial Σ_α c_α x^α − z in (P_k) is the difference of two polynomials in R[I_3] and R[I_4], respectively. Then, by the sparse version of Putinar's Positivstellensatz (Theorem 2.2), we can construct a sparse version of (P_k) as
    ρ̃*_k := inf_{φ, σ_{i,0}, σ_{i,j}} ∫_∆ φ(x) dλ(x) = Σ_{α∈N^n_{2k}} c_α γ_α
    s.t. φ(x) = Σ_{α∈N^n_{2k}} c_α x^α ∈ R[x]_{2k}, c_α ∈ R,
         φ(x) − z = Σ_{i=1}^4 ( σ_{i,0} + Σ_{j∈J_i} σ_{i,j} h_{i,j} ),
         σ_{i,0}, σ_{i,j} ∈ Σ²[I_i], deg(σ_{i,0}), deg(σ_{i,j} h_{i,j}) ≤ 2k, i = 1, …, 4, j ∈ J_i.        (SP_k)
Theorem 4.3. The statements for (P_k) in Theorem 4.2 also hold for (SP_k).
Proof. Let φ̂_ℓ be the polynomial in the proof of Theorem 4.2. Note that Assumption 2.2 holds after adding the redundant polynomial h_{1,m+1}. Then, by Theorem 2.2, there exist k̂_ℓ ∈ N and σ_{i,0}^{(ℓ)}, σ_{i,j}^{(ℓ)} ∈ Σ²[I_i], i = 1, …, 4, j ∈ J_i, such that (φ̂_ℓ, σ_{i,0}^{(ℓ)}, σ_{i,j}^{(ℓ)}) is a feasible solution to (SP_{k̂_ℓ}). Hence, the conclusion follows from the proof of Theorem 4.2.
4.3. Comparisons with existing SDP relaxation methods. Now, we compare our method with the recent existing work in [31] and [29]. All three methods can deal with vector (nonlinear) polynomial optimization problems by SDP relaxations, without convexity assumptions on the involved functions. For convenience, we assume that all objectives f_i in (VROP) are polynomials, i.e., q_i(x) = 1, i = 1, …, m.
To get weakly efficient solutions to (VROP), Nie and Yang [31] used the linear scalarization and the Chebyshev scalarization techniques to scalarize (VROP) to a single-objective polynomial optimization problem and solve it by the SDP relaxation method proposed in [30]. Precisely, for a given nonzero weighting parameter w := (w_1, …, w_m) ∈ R^m, the linear scalarization scalarizes the problem (VROP) to
    min w_1 f_1(x) + ⋯ + w_m f_m(x) s.t. x ∈ Ω,        (8)
and the Chebyshev scalarization scalarizes the problem (VROP) to
    min_{x∈Ω} max_{1≤i≤m} w_i (f_i(x) − f*_i),        (9)
where each f*_i is the goal which the decision maker wants to achieve for the objective f_i. In general, by the scalarizations (8) and (9), we can only find one or some particular (weakly) efficient solutions for a given weight w. Moreover, a serious drawback of linear scalarization is that it cannot provide a solution among sunken parts of the Pareto frontier due to the "duality gap" of nonconvex cases (see Example 4.3). Instead, the sets {A(δ, k)} computed by our method can approximate the whole set of weakly efficient solutions in some sense under certain conditions. The representation of A(δ, k) as the intersection of the sublevel set of a single polynomial and the feasible set is also more desirable in some applications; for example, it can be used in optimization problems with Pareto constraints (c.f. [14]), where a Pareto constraint can be replaced by the polynomial inequality ψ_k(x) ≤ δ with small δ > 0 and large k ∈ N (see Example 4.3).
On the other hand, Magron et al. [29] studied the problem (VROP) with m = 2. Rather than computing the weakly efficient solutions, they presented a method to approximate as closely as desired the Pareto curve, which is the image of the objective functions over the set of weakly efficient solutions. To this end, they also considered the scalarizations (8) and (9), as well as the parametric sublevel set approximation method, which is inspired by [11] and amounts to solving the following parametric problem
    min_{x∈Ω} f_2(x) s.t. f_1(x) ≤ w,        (10)
with a parameter w ∈ [min_{x∈Ω} f_1(x), max_{x∈Ω} f_1(x)]. By treating w in (8), (9) and (10) as a parameter and employing the "joint+marginal" approach proposed in [20], they associated with each scalarization problem a hierarchy of SDP relaxations and obtained an approximation of the Pareto curve by solving an inverse problem (for (8) and (9)) or by building a polynomial underestimator (for (10)). Again, compared with the approximate Pareto curve obtained in [29], it is more convenient to apply our explicit approximation A(δ, k) of the weakly efficient solution set to optimization problems with Pareto constraints. Moreover, when using the scalarization problems (8) and (9), the approach in [29] requires that for almost all values of the parameter w, these parametric problems have unique minimizers.
4.4. Examples. To illustrate the performance of the set A(δ, k) in approximating S_w, we illustrate the corresponding images of f(Ω) and f(A(δ, k)). To this end, we choose a square containing Ω. For each point u on a uniform discrete grid inside the square, we check if u ∈ Ω (resp., u ∈ A(δ, k)). If so, we have (f_1(u), f_2(u)) ∈ f(Ω) (resp., (f_1(u), f_2(u)) ∈ f(A(δ, k))) and we plot the point (f_1(u), f_2(u)) in grey (resp., in red) in the image plane.
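A direct transcription of this plotting procedure (our illustration; it assumes Python with numpy and matplotlib, plugs in the data of Example 4.2 below as a stand-in, and leaves the ψ_k-based test for A(δ, k) as a placeholder) could look as follows.

```python
import numpy as np
import matplotlib.pyplot as plt

def in_Omega(u):                       # unit-disk feasible set of Example 4.2
    return u[0]**2 + u[1]**2 <= 1.0

def f(u):                              # the two objectives of Example 4.2
    return (u[0], (u[1]**2 - 2.0*u[0]*u[1] + 1.0) / (u[1]**2 + 1.0))

grid = np.linspace(-1.0, 1.0, 300)     # uniform grid on a square containing Omega
pts = [f((a, b)) for a in grid for b in grid if in_Omega((a, b))]
t1, t2 = zip(*pts)
plt.scatter(t1, t2, s=1, c="0.7")      # grey image f(Omega); for f(A(delta, k)),
plt.xlabel("f1(x)")                    # additionally test psi_k(u) <= delta and
plt.ylabel("f2(x)")                    # plot those points in red
plt.show()
```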
Example 4.1. Consider the problem
    Min_{R³_+} { (x_1, x_2, x_1² + x_2²) : x ∈ Ω_1 := {x ∈ R² : x_1² + x_2² ≤ 1} }.
Clearly, the set of all weakly efficient solutions to this problem is
    S_w = {x ∈ R² : x_1 ≤ 0, x_2 ≤ 0, x_1² + x_2² ≤ 1}.
For any δ > 0, by considering the four quadrants of R² one by one, it is easy to check by definition that the set S_w^δ consists of the following four sets:
    {x ∈ R² : x_1 ≥ δ, x_2 ≥ δ, x_1² + x_2² ≤ δ},
    {x ∈ R² : x_1 ≤ δ, x_2 ≥ δ, x_2² + 2δx_1 − δ − δ² ≤ 0, x_1² + x_2² ≤ 1},
    {x ∈ R² : x_1 ≤ δ, x_2 ≤ δ, x_1² + x_2² ≤ 1},
    {x ∈ R² : x_1 ≥ δ, x_2 ≤ δ, x_1² + 2δx_2 − δ − δ² ≤ 0, x_1² + x_2² ≤ 1}.
For δ = 0.1, we show the set S_w^δ and its approximations A(δ, k), k = 2, 3, 4, in Figure 1.
Example 4.2. To illustrate how the set A(δ, k) behaves in approximating the set of weakly efficient solutions S_w as δ → 0 and k → ∞, we consider the problem
    Min_{R²_+} { (x_1, (x_2² − 2x_1x_2 + 1)/(x_2² + 1)) : x ∈ Ω_2 := {x ∈ R² : 1 − x_1² − x_2² ≥ 0} }.
We plot the images f(A(δ, k)) with δ = 0.1, 0.05, 0.02 and k = 3, 4, 5, as well as f(Ω_2), in Figure 2.
Example 4.3. Consider the problem
    Min_{R²_+} { ((√2/2)(−x_1 + x_2), (√2/2)(x_1 + x_2)) : x ∈ Ω_3 := {x ∈ R² : g(x) := x_2²(1 − x_1²) − (x_1² + 2x_2 − 1)² ≥ 0} }.
In fact, the equality g(x) = 0 defines the so-called bicorn curve, as shown in Figure 3(a). Hence, the feasible set Ω_3 of this problem is the region enclosed by the bicorn curve, and the image f(Ω_3) is obtained by rotating Ω_3 clockwise by 45° (Figure 3(b)). It is clear that the weakly efficient solution set S_w consists of the points on the shorter path connecting the two singular points of the bicorn curve. As discussed in subsection 4.3, the linear scalarization (8) can only enable us to compute two points in S_w, namely, the two singular points of the bicorn curve. By our method, we compute the approximation A(0.01, 4) and show it in Figure 3(a); it is the intersection of Ω_3 and the area under the red curve defined by ψ_4(x) = 0.01. The image f(A(0.01, 4)) is illustrated in Figure 3(b), which shows that we can obtain good approximations of S_w, including the ones corresponding to the sunken part of the Pareto curve.
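To see the scalarization limitation numerically, the following sketch (our illustration; it assumes Python with numpy and scipy, uses a local NLP solver rather than the SDP relaxations of [31], and picks the hypothetical weight w = (1/2, 1/2)) solves the weighted-sum problem (8) on this instance; for this w the objective reduces to minimizing x_2 over Ω_3, so only a singular endpoint of the bicorn is found, never the sunken part.

```python
import numpy as np
from scipy.optimize import minimize

def g(x):  # bicorn constraint g(x) >= 0
    return x[1]**2 * (1.0 - x[0]**2) - (x[0]**2 + 2.0*x[1] - 1.0)**2

w = np.array([0.5, 0.5])
c = np.sqrt(2.0) / 2.0
obj = lambda x: w @ np.array([c * (-x[0] + x[1]), c * (x[0] + x[1])])

res = minimize(obj, x0=np.array([0.0, 0.5]),
               constraints=[{"type": "ineq", "fun": g}])
print(res.x)   # converges to a singular point (+-1, 0) of the bicorn curve
```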
Next, we consider the following optimization problem with a Pareto constraint:
    min x_1² + (x_2 − 1)² s.t. (x_1, x_2) ∈ S_w,
which is to compute the square of the Euclidean distance between the point (0, 1) and the curve S_w. It is easy to see that the unique minimizer of the above problem is (0, 1/3) and the minimum is 4/9 ≈ 0.444. With the approximation A(0.01, 4) of S_w, we consider the polynomial optimization problem
    min x_1² + (x_2 − 1)² s.t. x ∈ Ω_3, ψ_4(x) ≤ 0.01.
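The reported minimizer can be double-checked numerically. Solving g(x) = 0 as a quadratic in x_2 gives, for the lower arc, x_2 = (1 − a)(2 − √(1 − a))/(3 + a) with a = x_1² (this algebraic step is ours, not spelled out in the paper); scanning this arc (a sketch in Python with numpy) reproduces the minimizer (0, 1/3) and the minimum 4/9:

```python
import numpy as np

x1 = np.linspace(-1.0, 1.0, 200001)
a = x1**2
x2 = (1.0 - a) * (2.0 - np.sqrt(1.0 - a)) / (3.0 + a)   # lower arc of the bicorn
d2 = x1**2 + (x2 - 1.0)**2                              # squared distance to (0, 1)
i = int(np.argmin(d2))
print(x1[i], x2[i], d2[i])   # ~ 0.0, 0.3333, 0.4444 = 4/9
```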
We solve the latter problem by Lasserre's hierarchy of SDP relaxations (c.f. [18,21]) with the software GloptiPoly [15], and get the certified minimizer (−0.0000, 0.3473) and minimum 0.4260.
Example 4.4. Consider the problem
    Min_{R²_+} { (−x_1², x_1⁴ + x_2²) : x ∈ Ω_4 := {x ∈ R² : 1 − x_1² − x_2² ≥ 0} }.
It is easy to see that the set of weakly efficient solutions is S_w = [−1, 1] × {0}, and the image f(S_w) (the Pareto curve) is the curve
    {(t_1, t_2) ∈ R² : t_2 = t_1², t_1 ∈ [−1, 0]}
in the objective plane, where t_1 = −x_1² and t_2 = x_1⁴ + x_2². Clearly, for every point (t_1, t_2) ∈ f(S_w), there are two weakly efficient solutions (−√(−t_1), 0) and (√(−t_1), 0). Therefore, this problem does not satisfy the assumptions of the approach proposed in [29] when using the scalarizations (8) and (9). By our method, we compute the set A(0.005, 5), which is the intersection of the unit disk and the area enclosed by the red curve defined by ψ_5(x) = 0.005 in Figure 4(a). The images f(Ω_4) and f(A(0.005, 5)) are shown in Figure 4(b), which illustrates that we can approximate the set of weakly efficient solutions as closely as possible.
Conclusions
In this paper, we provide a new scheme for approximating the set of all weakly (ε-)efficient solutions to the problem (VROP). The procedure mainly relies on the achievement function associated with (VROP) and the "joint+marginal" approach proposed by Lasserre [22]. The obtained results seem new in the area of vector optimization with polynomial structures, in the sense that we approximate the whole set of weakly (ε-)efficient solutions to the problem (VROP).
Moreover, the obtained results also significantly develop the recent achievements in [6,24,25] for vector polynomial optimization problems from convex settings to nonconvex settings.
Corollary 4.1. The following assertions are true.
(i) For any d > 0, there exists δ(d) > 0 depending on d such that A(δ, k) ⊂ S_w + dB holds for any δ < δ(d) and any k ∈ N.
(ii) For any d > 0 and any δ > 0 with vol({x ∈ Ω : ψ(x) = δ}) = 0, there exists k(d, δ) ∈ N depending on δ and d such that S_w ∩ cl(int(Ω \ S_w)) ⊂ A(δ, k) + dB holds for any k > k(d, δ).
Proof. (i) Since A(δ, k) ⊂ S_w^δ for any k ∈ N by Theorem 4.1, the existence of δ(d) is a direct consequence of Proposition 3.2.
(ii) Let u ∈ S_w ∩ cl(int(Ω \ S_w)), assumed nonempty; then ψ̃(u) = 0 by Propositions 3.1(i) and 4.1, and there exists a sequence {u^(l)}_{l∈N} ⊂ int(Ω \ S_w) such that lim_{l→∞} u^(l) = u. Fix the numbers d, δ > 0. By the continuity of ψ̃ on Ω (Proposition 4.1), there exists l_0 ∈ N depending on d and δ such that ψ̃(u^(l_0)) < δ and ‖u^(l_0) − u‖ < d. As u^(l_0) ∈ int(Ω \ S_w), by the continuity of ψ̃ again, there is a neighborhood O^(l_0) ⊂ Ω of u^(l_0) such that ψ̃(x) < δ and ‖x − u‖ < d for all x ∈ O^(l_0). Proposition 3.3 implies that O^(l_0) ⊂ S_w^δ. Then, we show that there exists k(d, δ) ∈ N such that for any k > k(d, δ), it holds that A(δ, k) ∩ O^(l_0) ≠ ∅, which means that u ∈ A(δ, k) + dB and the conclusion follows. To the contrary, suppose that such k(d, δ) does not exist; then there is a subsequence {A(δ, k_j)}_{j∈N} with k_j → ∞ such that A(δ, k_j) ∩ O^(l_0) = ∅ for all k_j. Then vol(S_w^δ \ A(δ, k_j)) ≥ vol(O^(l_0)) > 0 for all k_j. As vol({x ∈ Ω : ψ(x) = δ}) = 0, this contradicts the conclusion of Theorem 4.1.
Remark 4.1. From Corollary 4.1 and its proof, we can see the following.
(i) If S_w ∩ cl(int(Ω \ S_w)) ≠ ∅, then for any δ > 0, A(δ, k) ≠ ∅ for k large enough. In fact, we have O^(l_0) ⊂ {x ∈ Ω : ψ(x) < δ} for the neighborhood O^(l_0) in the proof of Corollary 4.1; then (5) implies that A(δ, k) ≠ ∅ for k large enough.
(ii) Suppose there is a sequence {δ_i}_{i∈N} with δ_i ↓ 0 such that vol({x ∈ Ω : ψ(x) = δ_i}) = 0 holds for all i and S_w = S_w ∩ cl(int(Ω \ S_w)). Then Corollary 4.1 (i) and (ii) indicate that the whole set of the weakly efficient solutions of (VROP) can be approximated arbitrarily well by A(δ, k) with sufficiently small δ > 0 and sufficiently large k ∈ N.
Figure 1. The set S_w^δ and its approximations A(δ, k) with δ = 0.1 and k = 2, 3, 4, in Example 4.1.
Figure 2. The images f(A(δ, k)) (in red) and f(Ω_2) (in gray) in Example 4.2.
Figure 3. (a) The bicorn curve and the curve defined by ψ_4(x) = 0.01; (b) the images f(A(0.01, 4)) (in red) and f(Ω_3) (in gray) in Example 4.3.
Figure 4. (a) The set S_w and the curve defined by ψ_5(x) = 0.005; (b) the images f(A(0.005, 5)) (in red) and f(Ω_4) (in gray) in Example 4.4.
[1] V. Blanco, J. Puerto, and S. E. H. B. Ali. A semidefinite programming approach for solving multiobjective linear programming. Journal of Global Optimization, 58(3):465-480, 2014.
[2] J. M. Borwein. On the existence of Pareto efficient points. Mathematics of Operations Research, 8(1):64-73, 1983.
[3] R. S. Burachik, C. Y. Kaya, and M. M. Rizvi. A new scalarization technique and new algorithms to generate Pareto fronts. SIAM Journal on Optimization, 27(2):1010-1034, 2017.
[4] V. Chankong and Y. Y. Haimes. Multiobjective Decision Making: Theory and Methodology. North-Holland, Amsterdam, 1983.
[5] W. Chen, J. Unkelbach, A. Trofimov, T. Madden, H. Kooy, T. Bortfeld, and D. Craft. Including robustness in multi-criteria optimization for intensity modulated proton therapy. Physics in Medicine and Biology, 57(3):591-608, 2012.
[6] T. D. Chuong. Second-order cone programming relaxations for a class of multiobjective convex polynomial problems. Annals of Operations Research, 311(2):1017-1033, 2022.
[7] J. Dutta, P. Kesarwani, and S. Gupta. Gap functions and error bounds for nonsmooth convex vector optimization problem. Optimization, 66(11):1807-1836, 2017.
[8] M. Ehrgott. Multicriteria Optimization (2nd ed.). Springer, Berlin, 2005.
[9] H. Eschenauer and J. Koski. Multicriteria Design Optimization. Springer, Berlin, 1990.
[10] J. Fliege, L. M. G. Drummond, and B. F. Svaiter. Newton's method for multiobjective optimization. SIAM Journal on Optimization, 20(2):602-626, 2009.
[11] B. L. Gorissen and D. den Hertog. Approximating the Pareto set of multiobjective linear programs via robust optimization. Operations Research Letters, 40(5):319-324, 2012.
[12] D. Grimm, T. Netzer, and M. Schweighofer. A note on the representation of positive polynomials with structured sparsity. Archiv der Mathematik, 89(5):399-403, 2007.
[13] W. J. Gutjahr and P. C. Nolz. Multicriteria optimization in humanitarian aid. European Journal of Operational Research, 252(2):351-366, 2016.
[14] S. T. Hackman and U. Passy. Maximizing a linear fractional function on a Pareto efficient frontier. Journal of Optimization Theory and Applications, 113(1):83-103, 2002.
[15] D. Henrion, J. B. Lasserre, and J. Löfberg. GloptiPoly 3: moments, optimization and semidefinite programming. Optimization Methods and Software, 24(4-5):761-779, 2009.
[16] N. T. T. Huong, J.-C. Yao, and N. D. Yen. Geoffrion's proper efficiency in linear fractional vector optimization with unbounded constraint sets. Journal of Global Optimization, 78(3):545-562, 2020.
[17] D. S. Kim, T.-S. Pham, and N. V. Tuyen. On the existence of Pareto solutions for polynomial vector optimization problems. Mathematical Programming Ser. A, 177(1-2):321-341, 2019.
[18] J. B. Lasserre. Global optimization with polynomials and the problem of moments. SIAM Journal on Optimization, 11(3):796-817, 2001.
[19] J. B. Lasserre. Convergent SDP-relaxations in polynomial optimization with sparsity. SIAM Journal on Optimization, 17(3):822-843, 2006.
[20] J. B. Lasserre. A "joint+marginal" approach to parametric polynomial optimization. SIAM Journal on Optimization, 20(4):1995-2022, 2010.
[21] J. B. Lasserre. Moments, Positive Polynomials and Their Applications. Imperial College Press, London, 2010.
[22] J. B. Lasserre. Tractable approximations of sets defined with quantifiers. Mathematical Programming Ser. B, 151(1):507-527, 2015.
[23] M. Laurent. Sums of squares, moment matrices and optimization over polynomials. In Emerging Applications of Algebraic Geometry, volume 149 of The IMA Volumes in Mathematics and its Applications, pages 157-270. Springer, New York, NY, 2009.
[24] J. H. Lee and L. G. Jiao. Solving fractional multicriteria optimization problems with sum of squares convex polynomial data. Journal of Optimization Theory and Applications, 176(2):428-455, 2018.
[25] J. H. Lee, N. Sisarat, and L. G. Jiao. Multi-objective convex polynomial optimization and semidefinite programming relaxations. Journal of Global Optimization, 80(1):117-138, 2021.
[26] C. G. Liu, K. F. Ng, and W. H. Yang. Merit functions in vector optimization. Mathematical Programming, 119:215-237, 2009.
[27] J. Löfberg. YALMIP: a toolbox for modeling and optimization in MATLAB. In 2004 IEEE International Conference on Robotics and Automation (IEEE Cat. No.04CH37508), pages 284-289, 2004.
[28] D. T. Luc. Multiobjective Linear Programming: An Introduction. Springer International Publishing, Switzerland, 2016.
[29] V. Magron, D. Henrion, and J. B. Lasserre. Approximating Pareto curves using semidefinite relaxations. Operations Research Letters, 42(6-7):432-437, 2014.
[30] J. Nie. Tight relaxations for polynomial optimization and Lagrange multiplier expressions. Mathematical Programming, 178(1):1-37, 2019.
[31] J. Nie and Z. Yang. The multi-objective polynomial optimization. 2021. arXiv:2108.04336.
[32] T.-S. Pham, X. D. T. Hà, and J.-C. Yao. The global weak sharp minima with explicit exponents in polynomial vector optimization problems. Positivity, 22(1):219-244, 2018.
[33] M. Putinar. Positive polynomials on compact semi-algebraic sets. Indiana University Mathematics Journal, 42(3):969-984, 1993.
[34] R. T. Rockafellar and R. J.-B. Wets. Variational Analysis. Springer, Berlin, Heidelberg, 1998.
[35] Y. Sawaragi, H. Nakayama, and T. Tanino. Theory of Multiobjective Optimization. Academic Press, Inc., Orlando, FL, 1985.
[36] J. F. Sturm. Using SeDuMi 1.02, a Matlab toolbox for optimization over symmetric cones. Optimization Methods and Software, 11(1-4):625-653, 1999.
[37] H. Tanabe, E. H. Fukuda, and N. Yamashita. Proximal gradient methods for multiobjective optimization and their applications. Computational Optimization and Applications, 72(2):339-361, 2019.
[38] H. Tanabe, E. H. Fukuda, and N. Yamashita. New merit functions and error bounds for non-convex multiobjective optimization. 2020. arXiv:2010.09333.
[39] H. Tanabe, E. H. Fukuda, and N. Yamashita. Convergence rates analysis of multiobjective proximal gradient methods. Optimization Letters, 2022. https://doi.org/10.1007/s11590-022-01877-7.
[40] H. Waki, S. Kim, M. Kojima, and M. Muramatsu. Sums of squares and semidefinite program relaxations for polynomial optimization problems with structured sparsity. SIAM Journal on Optimization, 17(1):218-242, 2006.
[41] A. P. Wierzbicki. On the completeness and constructiveness of parametric characterizations to vector optimization problems. OR Spektrum, 8(2):73-87, 1986.
[42] X. P. Zhao, M. A. Köbis, Y. H. Yao, and J.-C. Yao. A projected subgradient method for nondifferentiable quasiconvex multiobjective optimization problems. Journal of Optimization Theory and Applications, 190(1):82-107, 2021.
[43] X. P. Zhao and J.-C. Yao. Linear convergence of a nonmonotone projected gradient method for multiobjective optimization. Journal of Global Optimization, 82(3):577-594, 2022.
(Feng Guo) School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, Liaoning Province, China. Email address: [email protected]
(Liguo Jiao) Academy for Advanced Interdisciplinary Studies, Northeast Normal University, Changchun 130024, Jilin Province, China. Email address: [email protected]; [email protected]
| []
|
[
"Fabrication of binary FeSe superconducting wires by diffusion process",
"Fabrication of binary FeSe superconducting wires by diffusion process"
]
| [
"Toshinori Ozaki \nNational Institute for Materials Science\n1-2-1 Sengen305-0047TsukubaIbarakiJapan\n\nJST\nTransformative Research-project on Iron Pnictides\n1-2-1 Sengen305-0047TsukubaIbarakiJapan\n",
"Keita Deguchi \nNational Institute for Materials Science\n1-2-1 Sengen305-0047TsukubaIbarakiJapan\n\nUniversity of Tsukuba\n1-1-1Tennnodai305-0047TsukubaIbarakiJapan\n\nJST\nTransformative Research-project on Iron Pnictides\n1-2-1 Sengen305-0047TsukubaIbarakiJapan\n",
"Yoshikazu Mizuguchi \nNational Institute for Materials Science\n1-2-1 Sengen305-0047TsukubaIbarakiJapan\n\nUniversity of Tsukuba\n1-1-1Tennnodai305-0047TsukubaIbarakiJapan\n\nJST\nTransformative Research-project on Iron Pnictides\n1-2-1 Sengen305-0047TsukubaIbarakiJapan\n",
"Yasuna Kawasaki \nNational Institute for Materials Science\n1-2-1 Sengen305-0047TsukubaIbarakiJapan\n\nUniversity of Tsukuba\n1-1-1Tennnodai305-0047TsukubaIbarakiJapan\n\nJST\nTransformative Research-project on Iron Pnictides\n1-2-1 Sengen305-0047TsukubaIbarakiJapan\n",
"Takayoshi Tanaka \nNational Institute for Materials Science\n1-2-1 Sengen305-0047TsukubaIbarakiJapan\n",
"Takahide Yamaguchi \nNational Institute for Materials Science\n1-2-1 Sengen305-0047TsukubaIbarakiJapan\n\nJST\nTransformative Research-project on Iron Pnictides\n1-2-1 Sengen305-0047TsukubaIbarakiJapan\n",
"Hiroaki Kumakura \nNational Institute for Materials Science\n1-2-1 Sengen305-0047TsukubaIbarakiJapan\n\nUniversity of Tsukuba\n1-1-1Tennnodai305-0047TsukubaIbarakiJapan\n\nJST\nTransformative Research-project on Iron Pnictides\n1-2-1 Sengen305-0047TsukubaIbarakiJapan\n",
"Yoshihiko Takano \nNational Institute for Materials Science\n1-2-1 Sengen305-0047TsukubaIbarakiJapan\n\nUniversity of Tsukuba\n1-1-1Tennnodai305-0047TsukubaIbarakiJapan\n\nJST\nTransformative Research-project on Iron Pnictides\n1-2-1 Sengen305-0047TsukubaIbarakiJapan\n"
]
| [
"National Institute for Materials Science\n1-2-1 Sengen305-0047TsukubaIbarakiJapan",
"JST\nTransformative Research-project on Iron Pnictides\n1-2-1 Sengen305-0047TsukubaIbarakiJapan",
"National Institute for Materials Science\n1-2-1 Sengen305-0047TsukubaIbarakiJapan",
"University of Tsukuba\n1-1-1Tennnodai305-0047TsukubaIbarakiJapan",
"JST\nTransformative Research-project on Iron Pnictides\n1-2-1 Sengen305-0047TsukubaIbarakiJapan",
"National Institute for Materials Science\n1-2-1 Sengen305-0047TsukubaIbarakiJapan",
"University of Tsukuba\n1-1-1Tennnodai305-0047TsukubaIbarakiJapan",
"JST\nTransformative Research-project on Iron Pnictides\n1-2-1 Sengen305-0047TsukubaIbarakiJapan",
"National Institute for Materials Science\n1-2-1 Sengen305-0047TsukubaIbarakiJapan",
"University of Tsukuba\n1-1-1Tennnodai305-0047TsukubaIbarakiJapan",
"JST\nTransformative Research-project on Iron Pnictides\n1-2-1 Sengen305-0047TsukubaIbarakiJapan",
"National Institute for Materials Science\n1-2-1 Sengen305-0047TsukubaIbarakiJapan",
"National Institute for Materials Science\n1-2-1 Sengen305-0047TsukubaIbarakiJapan",
"JST\nTransformative Research-project on Iron Pnictides\n1-2-1 Sengen305-0047TsukubaIbarakiJapan",
"National Institute for Materials Science\n1-2-1 Sengen305-0047TsukubaIbarakiJapan",
"University of Tsukuba\n1-1-1Tennnodai305-0047TsukubaIbarakiJapan",
"JST\nTransformative Research-project on Iron Pnictides\n1-2-1 Sengen305-0047TsukubaIbarakiJapan",
"National Institute for Materials Science\n1-2-1 Sengen305-0047TsukubaIbarakiJapan",
"University of Tsukuba\n1-1-1Tennnodai305-0047TsukubaIbarakiJapan",
"JST\nTransformative Research-project on Iron Pnictides\n1-2-1 Sengen305-0047TsukubaIbarakiJapan"
]
| []
| We report successful fabrication of multi- and mono-core FeSe wires with high transport critical current density J c using a simple in-situ Fe-diffusion process based on the powder-in-tube (Fe-diffusion PIT) method. The seven-core wire showed transport J c of as high as 1027 A/cm 2 at 4.2 K. The superconducting transition temperature T c zero was observed at 10.5 K in the wire samples, which is about 2 K higher than that of bulk FeSe. The Fe-diffusion PIT method is suitable for fabricating multi-core wires of the binary FeSe superconductors with superior properties. | 10.1063/1.4726243 | [
"https://export.arxiv.org/pdf/1103.3602v3.pdf"
]
| 119,278,890 | 1103.3602 | dce2aec0b0a5b229af241445259e2d3e41763244 |
Fabrication of binary FeSe superconducting wires by diffusion process
Toshinori Ozaki
National Institute for Materials Science
1-2-1 Sengen305-0047TsukubaIbarakiJapan
JST
Transformative Research-project on Iron Pnictides
1-2-1 Sengen305-0047TsukubaIbarakiJapan
Keita Deguchi
National Institute for Materials Science
1-2-1 Sengen305-0047TsukubaIbarakiJapan
University of Tsukuba
1-1-1Tennnodai305-0047TsukubaIbarakiJapan
JST
Transformative Research-project on Iron Pnictides
1-2-1 Sengen305-0047TsukubaIbarakiJapan
Yoshikazu Mizuguchi
National Institute for Materials Science
1-2-1 Sengen305-0047TsukubaIbarakiJapan
University of Tsukuba
1-1-1Tennnodai305-0047TsukubaIbarakiJapan
JST
Transformative Research-project on Iron Pnictides
1-2-1 Sengen305-0047TsukubaIbarakiJapan
Yasuna Kawasaki
National Institute for Materials Science
1-2-1 Sengen305-0047TsukubaIbarakiJapan
University of Tsukuba
1-1-1Tennnodai305-0047TsukubaIbarakiJapan
JST
Transformative Research-project on Iron Pnictides
1-2-1 Sengen305-0047TsukubaIbarakiJapan
Takayoshi Tanaka
National Institute for Materials Science
1-2-1 Sengen305-0047TsukubaIbarakiJapan
Takahide Yamaguchi
National Institute for Materials Science
1-2-1 Sengen305-0047TsukubaIbarakiJapan
JST
Transformative Research-project on Iron Pnictides
1-2-1 Sengen305-0047TsukubaIbarakiJapan
Hiroaki Kumakura
National Institute for Materials Science
1-2-1 Sengen305-0047TsukubaIbarakiJapan
University of Tsukuba
1-1-1Tennnodai305-0047TsukubaIbarakiJapan
JST
Transformative Research-project on Iron Pnictides
1-2-1 Sengen305-0047TsukubaIbarakiJapan
Yoshihiko Takano
National Institute for Materials Science
1-2-1 Sengen305-0047TsukubaIbarakiJapan
University of Tsukuba
1-1-1Tennnodai305-0047TsukubaIbarakiJapan
JST
Transformative Research-project on Iron Pnictides
1-2-1 Sengen305-0047TsukubaIbarakiJapan
Fabrication of binary FeSe superconducting wires by diffusion process
We report successful fabrication of multi- and mono-core FeSe wires with high transport critical current density J c using a simple in-situ Fe-diffusion process based on the powder-in-tube (Fe-diffusion PIT) method. The seven-core wire showed transport J c of as high as 1027 A/cm 2 at 4.2 K. The superconducting transition temperature T c zero was observed at 10.5 K in the wire samples, which is about 2 K higher than that of bulk FeSe. The Fe-diffusion PIT method is suitable for fabricating multi-core wires of the binary FeSe superconductors with superior properties.
I. INTRODUCTION
Since the discovery of superconductivity in LaFeAsO 1-x F x 1, several types of iron-based superconductors with layered structures have been discovered [2][3][4][5]. Among these iron-based superconductors, tetragonal FeSe, with a transition temperature of T c zero ~8 K and T c onset ~12 K, has the simplest structure (PbO-type) and a binary composition, consisting of a stack of FeSe layers along the c-axis 4,6. The starting materials for FeSe are less toxic than those of the As-based compounds, making it a potential candidate for practical applications. It is also known that the T c of Fe chalcogenides is quite sensitive to compressive strain; the T c of FeSe is increased up to 37 K under high pressure 7,8. Wires of Fe chalcogenides, whose T c increases under applied compressive strain, would therefore be particularly advantageous for wire applications.
Several attempts at wire fabrication for practical applications have been carried out for Fe-based compounds 9-13. We succeeded in observing transport J c for FeSe 1-x Te x superconducting wire 9,13. In contrast, there have been few reports on the fabrication of FeSe wires showing transport J c 12. Here we report the observation of transport J c in multi- and mono-core wires of FeSe fabricated using a simple in-situ Fe-diffusion powder-in-tube (Fe-diffusion PIT) method. Unlike the bulk synthesis process 4,6, this method involves only one thermal treatment. The interesting aspect of this process is that the Fe sheath plays the role of not only the sheath but also the raw material for synthesizing the superconducting phase. For synthesizing the FeSe phase, the effective diffusion distance of Fe needs to be shortened so that Fe reacts with Se before it evaporates. The multi-core wires prepared by this method could be more efficient for obtaining higher J c, because the diffusion distances of Fe in the individual cores are further shortened, leading to a better possibility for the reaction without the escape of Se. The observed transport J c for the seven-core wire was 1027 A/cm 2 at 4.2 K. Furthermore, our process produced a T c zero of 10.5 K, which is about 2 K higher than that of bulk samples.
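The diffusion-distance argument can be made semi-quantitative with a textbook estimate: Fe must travel roughly the core radius within the anneal time, i.e. r is at most of order sqrt(D t). A minimal sketch follows; the diffusivity values are purely illustrative placeholders, since the paper quotes no diffusivity for Fe at the reaction temperature.

# Order-of-magnitude sketch of the diffusion-distance argument above.
# The diffusivities below are illustrative placeholders, not measured values.
import math

t = 2 * 3600.0                 # heat-treatment time (2 hours), in seconds
for D in (1e-9, 1e-8, 1e-7):   # assumed Fe diffusivities, cm^2/s
    length_mm = math.sqrt(D * t) * 10.0   # diffusion length sqrt(D*t), in mm
    print(f"D = {D:.0e} cm^2/s -> diffusion length ~ {length_mm:.2f} mm")

On these assumptions, cores a few tenths of a millimetre across are fully reached even at the lower diffusivities, whereas a larger mono-core demands a correspondingly larger D t, which is the advantage claimed for the multi-core geometry.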
II. EXPERIMENTAL
The Se powder was packed into pure Fe tubes with a length of 48 mm. The inner and outer diameters of the Fe tubes were 3.5 and 6.2 mm, respectively. The tubes were rolled into a rectangular rod of ~2.5 mm in size using groove rolling. After rolling, they were drawn into a wire of 1.1 mm in diameter using a wire-drawing die. These wires were cut into pieces of ~5 cm in length. Some of these pieces were used as samples of mono-core wires. The seven-core wires were produced by packing seven unsintered pieces of the mono-core wires into another Fe tube. The seven-core composites were drawn down to a final diameter of 2.0 mm. The seven-core wires were also cut into ~5-cm-long pieces. These mono- and seven-core wires were sealed inside a quartz tube with argon gas. The sealed wires were placed in a furnace heated at 800°C, held at this temperature for 2 hours, and then taken out of the furnace to quench them.
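As a rough check on the geometry just described, the sketch below estimates the final core sizes. It assumes uniform plastic deformation during rolling and drawing, and that the restacking tube has the same 3.5/6.2 mm bore as the first (the paper gives no dimensions for the second tube); both assumptions are ours, not the authors'.

import math

def area(d_mm):
    """Cross-sectional area of a round wire, in mm^2."""
    return math.pi * (d_mm / 2.0) ** 2

od0, id0 = 6.2, 3.5                   # Fe tube outer/inner diameters (mm)
core_frac = area(id0) / area(od0)     # Se-filled fraction of the cross section

mono_d = 1.1                          # final mono-core wire diameter (mm)
mono_core_d = mono_d * math.sqrt(core_frac)

scale = 2.0 / 6.2                     # overall reduction of the restacked billet
fil_d = mono_d * scale                # each filament after drawing to 2.0 mm
fil_core_d = fil_d * math.sqrt(core_frac)

print(f"core area fraction ~ {core_frac:.2f}")          # ~0.32
print(f"mono-core wire: core ~ {mono_core_d:.2f} mm")   # ~0.62 mm
print(f"seven-core wire: filament ~ {fil_d:.2f} mm, core ~ {fil_core_d:.2f} mm")

On these assumptions, each core in the seven-core wire is roughly three times smaller than the mono-core, consistent with the shorter Fe diffusion distance invoked in the introduction.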
The microstructure of these wires was investigated with a scanning electron microscope (SEM) and x-ray diffraction (XRD). The surface mapping analysis of the wire was carried out using energy dispersive x-ray spectrometry (EDX). Transport critical currents (I c) were measured for 4-cm-long wires by a standard four-probe resistive method in liquid helium (4.2 K) and in applied magnetic fields. The magnetic field was applied perpendicularly to the wire axis. The criterion for the I c definition was 1 μV/cm. The J c was obtained by dividing I c by the cross-sectional area of the FeSe core excluding the hole, which was measured by optical microscope.
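The J c evaluation just described amounts to J c = I c / A core, with A core the annular FeSe area excluding the central hole. A minimal sketch follows; the current and core dimensions are hypothetical, since the measured values are not tabulated in the text.

import math

def jc_a_per_cm2(ic_a, core_od_mm, hole_d_mm):
    """Transport J_c (A/cm^2) for an annular core of outer diameter
    core_od_mm (mm) with a central hole of diameter hole_d_mm (mm)."""
    area_mm2 = math.pi / 4.0 * (core_od_mm**2 - hole_d_mm**2)
    return ic_a / (area_mm2 * 1e-2)   # 1 mm^2 = 1e-2 cm^2

# Hypothetical example: I_c = 1 A through a 0.6 mm core with a 0.3 mm hole
# gives a few hundred A/cm^2, the order of magnitude reported below.
print(f"{jc_a_per_cm2(1.0, 0.6, 0.3):.0f} A/cm^2")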
III. RESULTS AND DISCUSSION

Figures 1(a) and 1(b) show respectively the polished transverse cross sections of the FeSe mono- and seven-core wires after heat treatment at 800°C for 2 hours. The FeSe layer was observed on the inside wall of the Fe sheath, and a hole was formed at the center of each core where the Se powder was filled before the heat treatment.

Figure 2 shows the SEM image and elemental mapping images for the polished longitudinal cross section of the mono-core FeSe wire. As can be seen in the SEM image, the FeSe layer is dense and monolithic, with no reaction layer between the superconducting core and the sheath, suggesting a good connection between them. The elemental mapping analysis showed that the Fe distribution is homogeneous in the superconducting phase, which indicates that the Fe sheath reliably supplied Fe for synthesizing the superconducting phase of FeSe. The dispersion of Se is also homogeneous. These results indicate that the FeSe phase inside the Fe sheath was synthesized as intended by the Fe-diffusion PIT method.

Figure 3 shows the XRD pattern of the reacted layer obtained from the mono-core wire. The main peaks were well indexed on the basis of the tetragonal PbO-type structure with the space group P4/nmm, and the minor peaks were identified as iron oxide and a hexagonal phase. The lattice constants were calculated to be a = 3.37689(9) Å and c = 5.5023(32) Å. The obtained lattice parameter c of the wire is slightly smaller than the value of 5.520(1) Å for bulk FeSe 14, indicating a compressive lattice strain along the c-axis. This compressive strain may result from the quenching of the wire in the heat-treatment process.
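The tetragonal indexing quoted above can be checked directly: for a tetragonal cell, 1/d^2 = (h^2 + k^2)/a^2 + l^2/c^2, and Bragg's law then gives the expected peak positions. The sketch below uses the refined lattice constants; the (hkl) list and the Cu K-alpha wavelength are our illustrative assumptions, as the paper does not state the x-ray source.

import math

a, c = 3.37689, 5.5023        # refined lattice constants, in angstroms
wavelength = 1.5406           # Cu K-alpha1 wavelength in angstroms (assumed)

def d_spacing(h, k, l):
    """Interplanar spacing of (hkl) for a tetragonal lattice."""
    return 1.0 / math.sqrt((h**2 + k**2) / a**2 + l**2 / c**2)

for hkl in [(0, 0, 1), (1, 0, 1), (1, 1, 0), (1, 0, 2), (1, 1, 1)]:
    d = d_spacing(*hkl)
    two_theta = 2.0 * math.degrees(math.asin(wavelength / (2.0 * d)))
    print(f"{hkl}: d = {d:.3f} A, 2theta = {two_theta:.2f} deg")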
The temperature dependence of resistivity for the mono-core FeSe wires under different applied magnetic fields is shown in Fig. 4. Interestingly, the resistivity at 0 T began to decrease at 12.3 K and drops to zero at 10.5 K. This T c zero is ~2 K higher than that of FeSe reported in bulk samples 4,6,15. A similar effect was reported in FeSe 0.5 Te 0.5 films 16,17 and in Fe 1.03 Se synthesized by a flux method 18. It was also reported in our previous work 19 that the T c of iron-based superconductors is strongly correlated with the anion height from the Fe layer. Given these facts, the enhancement of T c zero in the FeSe wire should be related to the shrinkage of the c-axis, arising from a compressive strain. The ρ(T) curves are shifted to lower temperatures with increasing magnetic fields without noticeable broadening compared to the zero-field case. The transition width ∆T, defined by the 90% and 10% points on ρ(T), is less than 2 K. This behavior is similar to that of low-temperature superconductors with small anisotropy 20,21. The inset of Fig. 4 shows the temperature dependence of the upper critical field (μ 0 H c2) and the irreversibility field (μ 0 H irr), determined using criteria of 90% and 10% drops of the normal-state resistivity. The μ 0 H irr line is very close to the μ 0 H c2 line. Linear extrapolation of the μ 0 H c2 (T) and μ 0 H irr (T) data suggests μ 0 H c2 (0) ~ 32 T and μ 0 H irr (0) ~ 23 T.

The transport J c as a function of magnetic field for the FeSe wires at 4.2 K is presented in Fig. 5. We succeeded in observing transport J c for both the multi- and mono-core FeSe wires. The mono-core wire showed a transport J c of 350 A/cm 2 at 4.2 K. Furthermore, the transport J c for the seven-core wire reached as high as 1027 A/cm 2 at 4.2 K, an enhancement by a factor of ~10 compared to the previous report for an FeSe wire 12. This high J c value would result from enhanced grain connectivity due to the higher sintering temperature used in the present synthesis process. The J c of the FeSe wires gradually decreased with increasing magnetic field up to 12 T, indicating that FeSe wires have clear advantages for wire applications under high magnetic fields. Our result demonstrates that the Fe-diffusion PIT method is highly effective for fabricating multi-core FeSe wires. We expect that much higher J c could be realized by further improving the grain-boundary conductivity and reducing the amounts of inclusions such as the iron-oxide and hexagonal phases.
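A minimal sketch of the 90%/10% analysis used above — locating the temperatures where ρ(T) passes through 90% and 10% of its normal-state value at each field and extrapolating H(T) linearly to T = 0 — is given below on synthetic transition curves; all numbers are illustrative, not the measured data.

import numpy as np

def crossing_temp(T, rho, level):
    """Temperature at which rho(T) rises through level * rho_normal."""
    target = level * rho[-1]             # rho[-1] approximates the normal state
    idx = int(np.argmax(rho >= target))  # first point at or above the target
    T0, T1, r0, r1 = T[idx - 1], T[idx], rho[idx - 1], rho[idx]
    return T0 + (target - r0) * (T1 - T0) / (r1 - r0)  # linear interpolation

# Synthetic rho(T) curves at a few fields (arbitrary units, illustrative only)
T = np.linspace(2.0, 14.0, 400)
fields = np.array([0.0, 1.0, 3.0, 5.0, 7.0])           # mu0*H in tesla
curves = [np.clip((T - (11.5 - 0.35 * B)) / 1.5, 0.0, 1.0) for B in fields]

Tc90 = np.array([crossing_temp(T, r, 0.9) for r in curves])  # -> mu0*Hc2(T)
Tc10 = np.array([crossing_temp(T, r, 0.1) for r in curves])  # -> mu0*Hirr(T)

# Linear extrapolation of H versus T down to T = 0 for both criteria
print(f"mu0*Hc2(0) ~ {np.polyval(np.polyfit(Tc90, fields, 1), 0.0):.0f} T")
print(f"mu0*Hirr(0) ~ {np.polyval(np.polyfit(Tc10, fields, 1), 0.0):.0f} T")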
IV. CONCLUSION
We fabricated seven- and mono-core wires of FeSe using the Fe-diffusion PIT method with an Fe sheath. The Fe-diffusion PIT method is the simplest of all the wire-fabrication processes. We have succeeded in synthesizing a high-quality FeSe superconducting phase inside the Fe sheath. The seven-core superconducting wires showed a transport J c as high as 1027 A/cm 2. The relatively high T c zero of 10.5 K obtained for the FeSe wire might be attributed to the shrinkage of the c-axis.
These results show that the Fe-diffusion PIT method is suitable for fabricating iron-based superconducting multi-core wires with higher J c and T c.

ACKNOWLEDGMENTS

This work was supported in part by the Japan Society for the Promotion of Science (JSPS) through Grants-in-Aid for JSPS Fellows and the 'Funding Program for World-Leading Innovative R&D on Science and Technology (FIRST) Program'.
Fig. 1 Cross-section view of (a) mono- and (b) seven-core wires of FeSe after heat treatment, showing the Fe sheath, the FeSe layer, and the void space in the center.

Fig. 2 SEM image and elemental mapping on the longitudinal cross section of the mono-core FeSe wire fabricated by the in-situ Fe-diffusion PIT method.

Fig. 3 XRD pattern of the FeSe superconducting wire fabricated by the in-situ Fe-diffusion PIT method.

Fig. 4 Temperature dependence of resistivity for the mono-core wires fabricated by the in-situ Fe-diffusion PIT method under magnetic fields up to 7 T. The inset shows the temperature dependence of μ 0 H c2 and μ 0 H irr determined from the 90% and 10% points on the resistive transition curve.

Fig. 5 Magnetic field dependence of transport J c at liquid helium temperature (4.2 K) for mono- and seven-core FeSe wires fabricated by the in-situ Fe-diffusion PIT method. The magnetic field was applied perpendicular to the wire axis.
1. Y. Kamihara, T. Watanabe, M. Hirano, and H. Hosono, J. Am. Chem. Soc. 130, 3296 (2008).
2. M. Rotter, M. Tegel, and D. Johrendt, Phys. Rev. Lett. 101, 107006 (2008).
3. X. C. Wang, Q. Q. Liu, Y. X. Lv, W. B. Gao, L. X. Yang, R. C. Yu, F. Y. Li, and C. Q. Jin, Solid State Commun. 148, 538 (2008).
4. F. C. Hsu, J. Y. Luo, K. W. Yeh, T. K. Chen, T. W. Huang, P. M. Wu, Y. C. Lee, Y. L. Huang, Y. Y. Chu, D. C. Yan, and M. K. Wu, Proc. Natl. Acad. Sci. U.S.A. 105, 14262 (2008).
5. H. Ogino, S. Sato, K. Kishio, J. Shimoyama, T. Tohei, and Y. Ikuhara, Appl. Phys. Lett. 97, 072506 (2010).
6. Y. Mizuguchi and Y. Takano, J. Phys. Soc. Jpn. 79, 102001 (2010).
7. Y. Mizuguchi, F. Tomioka, S. Tsuda, T. Yamaguchi, and Y. Takano, Appl. Phys. Lett. 93, 152505 (2008).
8. S. Margadonna, Y. Takabayashi, Y. Ohishi, Y. Mizuguchi, Y. Takano, T. Kagayama, T. Nakagawa, M. Takata, and K. Prassides, Phys. Rev. B 80, 064506 (2009).
9. Y. Mizuguchi, K. Deguchi, S. Tsuda, T. Yamaguchi, H. Takeya, H. Kumakura, and Y. Takano, Appl. Phys. Express 2, 083004 (2009).
10. L. Wang, Y. Qi, D. Wang, Z. Gao, X. Zhang, Z. Zhang, C. Wang, and Y. Ma, Supercond. Sci. Technol. 23, 075005 (2010).
11. K. Togano, A. Matsumoto, and H. Kumakura, Appl. Phys. Express 4, 043101 (2011).
12. Z. Gao, Y. Qi, L. Wang, D. Wang, X. Zhang, C. Yao, and Y. Ma, Supercond. Sci. Technol. 24, 062022 (2011).
13. T. Ozaki, K. Deguchi, Y. Mizuguchi, H. Kumakura, and Y. Takano, IEEE Trans. Appl. Supercond. 21, 2858 (2011).
14. Y. Mizuguchi, F. Tomioka, S. Tsuda, T. Yamaguchi, and Y. Takano, J. Phys. Soc. Jpn. 78, 074712 (2009).
15. T. M. McQueen, Q. Huang, V. Ksenofontov, C. Felser, Q. Xu, H. Zandbergen, Y. S. Hor, J. Allred, A. J. Williams, D. Qu, J. Checkelsky, N. P. Ong, and R. J. Cava, Phys. Rev. B 79, 014522 (2009).
16. W. Si, Z.-W. Lin, Q. Jie, W.-G. Yin, J. Zhou, G. Gu, P. D. Johnson, and Q. Li, Appl. Phys. Lett. 95, 052504 (2009).
17. E. Bellingeri, I. Pallecchi, R. Buzio, A. Gerbi, D. Marre, M. R. Cimberle, M. Tropeano, M. Putti, A. Palenzona, and C. Ferdeghini, Appl. Phys. Lett. 96, 102512 (2010).
18. J. Ge, S. Cao, S. Yuan, B. Kang, and J. Zhang, J. Appl. Phys. 108, 053903 (2010).
19. Y. Mizuguchi, K. Hara, K. Deguchi, S. Tsuda, T. Yamaguchi, K. Takeda, H. Kotegawa, H. Tou, and Y. Takano, Supercond. Sci. Technol. 23, 054013 (2010).
20. A. Godeke, M. C. Fischer, A. A. Squitieri, P. J. Lee, and D. C. Larbalestier, J. Appl. Phys. 97, 093909 (2005).
21. H. Kumakura, H. Kitaguchi, A. Matsumoto, H. Yamada, M. Hirakawa, and K. Tachikawa, Supercond. Sci. Technol. 18, 147 (2005).
| []
|